Archive of ‘evaluation’ category
Exemplary Course Program Rubric
Course Design addresses elements of instructional design. For the purpose of this rubric, course design includes such elements as structure of the course, learning objectives, organization of content, and instructional strategies.
Interaction and Collaboration
Interaction denotes communication between and among learners and instructors, synchronously or asynchronously. Collaboration is a subset of interaction and refers specifically to those activities in which groups work interdependently toward a shared result. This differs from group activities that can be completed by students working independently of one another and then combining the results, much as one might assemble a jigsaw puzzle from pieces worked out separately. A learning community is defined here as the sense of belonging to a group, rather than each student perceiving himself or herself as studying independently.
Assessment focuses on instructional activities designed to measure progress toward learning outcomes, provide feedback to students and instructors, and/or enable grading or evaluation. This section addresses the quality and type of student assessments within the course.
Learner Support addresses the support resources made available to students taking the course. Such resources may be accessible within or external to the course environment. Learner support resources address a variety of student services.
more on online teaching in this IMS blog
more on rubrics in this IMS blog
Discussion on the EDUCAUSE Blended and Online Learning Group’s listserv
develop anonymous mid-course student evaluations that allow students to reflect on the course and their progress, and that inform the instructor about what is or is not working in the course.
- What is working well for you in the course?
- What is not working well for you in the course?
- What is helping you learn?
- What is hindering your learning?
- What suggestions do you have to make the course better for you, your peers, or the instructor?
Katie Linder, Research Director, Extended Campus, Oregon State University, 4943 The Valley Library, Corvallis, Oregon 97331. Phone 541-737-4629 | Fax 541-737-2734 | Email: firstname.lastname@example.org
At the University of Illinois, we have been using Informal Early Feedback as a way to gather feedback from our students to help improve courses before the end of the term. Here are a couple of links to our site. The first is the main page on what IEF is, and the second is the question bank we offer to faculty. This is a starting point for them; then we meet with those who want to work on tweaking the questions for their specific needs.
* About IEF: https://citl.illinois.edu/citl-101/measurement-evaluation/teaching-evaluation/ief
* Question Bank: https://citl.illinois.edu/citl-101/measurement-evaluation/teaching-evaluation/ief/ief-question-bank
If you have any questions at all, don’t hesitate to ask.
Sol Roberts-Lieb Associate Director, Center for Innovation in Teaching and Learning Pedagogy Strategy Team and Industry Liaison UNIVERSITY OF ILLINOIS AT URBANA-CHAMPAIGN
more on student evaluations in this IMS blog
Lessons From Finland: What Educators Can Learn About Leadership
Humanities need convincing data to demonstrate their value, says expert
Humanities scholars have always been good at conveying the importance of their work through stories, writes Paula Krebs for Inside Higher Ed, but they have been less successful at using data to do so. This need not be the case, adds Krebs, who recounts a meeting with faculty members, local employers, and public humanities representatives to discuss how to better measure the impact of a humanities education on graduates. Krebs offers a list of recommendations and concrete program changes, such as interviewing employers about their experiences with hiring graduates, that might help humanities programs better prepare students for postgraduate life.
Academica Group <email@example.com>
Adding Good Data to Good Stories
a list of the skills that we think graduates have cultivated in their humanities education:
- Critical thinking
- Communications skills
- Writing skills, with style
- Organizational skills
- Listening skills
- Cultural competencies, intercultural sensitivity and an understanding of cultural and historical context, including on global topics
- Empathy/emotional intelligence
- Qualitative analysis
- People skills
- Ethical reasoning
- Intellectual curiosity
As part of our list, we also agreed that graduates should have the ability to:
- Meet deadlines
- Construct complex arguments
- Provide attention to detail and nuance (close reading)
- Ask the big questions about meaning, purpose, the human condition
- Communicate in more than one language
- Understand differences in genre (mode of communication)
- Identify each audience and communicate appropriately with it
- Be comfortable dealing with gray areas
- Think abstractly beyond an immediate case
- Appreciate differences and conflicting perspectives
- Identify problems as well as solve them
- Read between the lines
- Receive and respond to feedback
Then we asked what we think our graduates should be able to do but perhaps can’t — or not as a result of anything we’ve taught them, anyway. The employers were especially valuable here, highlighting the ability to:
- Use new media, technologies and social media
- Work with the aesthetics of communication, such as design
- Perform a visual presentation and analysis
- Identify, translate and apply skills from course work
- Perform data analysis and quantitative research
- Be comfortable with numbers
- Work well in groups, as leader and as collaborator
- Take risks
- Identify processes and structures
- Write and speak from a variety of rhetorical positions or voices
- Support an argument
- Identify an audience, research it and know how to address it
- Know how to locate one’s own values in relation to a task one has been asked to perform
Rubrics: An Undervalued Teaching Tool
Stephanie Almagno, PhD
Here are five different ways to apply the same rubric in your classroom.
1. A Rubric for Thinking (Invention Activity)
2. A Rubric for Peer Feedback (Drafting Activity)
3. A Rubric for Teacher Feedback (Revision Activity)
4. A Rubric for Mini-Lessons (Data Indicate a Teachable Moment)
5. A Rubric for Making Grades Visible (Student Investment in Grading)
How often have we heard that students believe grades to be arbitrary or capricious? Repeated use of a single rubric is good for both students and instructors. Switching roles between author and editor results in students’ increased familiarity with the process and the components of good writing. Over the course of the semester, students will synthesize the rubric’s components into effective communication. The instructor, too, will shift from “sage on the stage” to “guide on the side,” answering fewer questions (and answering the same question fewer times). In other words, students will gain greater independence as writers and thinkers. And this is good for all of us.
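The idea of one rubric reused across invention, peer feedback, teacher feedback, and grading can be made concrete as a data structure. The sketch below is not from Almagno's article; the criterion names, weights, and four-level scale are hypothetical examples of how a single writing rubric might be encoded once and then reused both to generate feedback language and to make grades visible.

```python
# A rubric as plain data: criteria, weights, and labeled performance
# levels. The criterion names and weights here are hypothetical.
RUBRIC = {
    "thesis":       {"weight": 0.4, "levels": ["missing", "emerging", "developing", "strong"]},
    "organization": {"weight": 0.3, "levels": ["missing", "emerging", "developing", "strong"]},
    "mechanics":    {"weight": 0.3, "levels": ["missing", "emerging", "developing", "strong"]},
}

def score(ratings: dict) -> float:
    """Turn per-criterion level indexes (0..3) into a 0-100 score."""
    top = len(next(iter(RUBRIC.values()))["levels"]) - 1  # highest level index
    return round(100 * sum(
        RUBRIC[c]["weight"] * level / top for c, level in ratings.items()
    ), 1)

def describe(ratings: dict) -> list:
    """Feedback lines a peer reviewer or instructor could hand back."""
    return [f"{c}: {RUBRIC[c]['levels'][lvl]}" for c, lvl in ratings.items()]
```

Because both the grade and the feedback come from the same structure, a student rated with `score({"thesis": 3, "organization": 2, "mechanics": 2})` sees exactly which criteria produced the result, which is the transparency the article argues for.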
For more detailed information, go to the full version of the article: http://www.facultyfocus.com/articles/effective-teaching-strategies/rubrics-an-undervalued-teaching-tool/
More on rubrics in this blog
For what it’s worth, here’s something I used ‘long ago’ on rubrics:
Links to information about rubrics:
The folks at TeacherVision.com weigh in on rubrics.
How to create a Rubric
The Chicago Public Schools page on writing rubrics from scratch
The Rubric Bank
The Chicago Schools again with a list of rubrics for various subject areas
Rubrics Resources – Westfield (MA) Public Schools
A links page to many other sources about using rubrics to improve instruction.
Kathy Schrock’s Guide for Educators – Assessment Rubrics
Kathy Schrock’s links listing for rubrics – examples and about them
Rubric How-To’s – MidLink’s Teacher Resource Room
Caroline McCullen’s (a multimedia teacher) page about rubrics with links to other sources on the topic
Rubrics by Bernie Dodge
The Master details how rubrics and WebQuests dovetail nicely.
An example of a web-based tool that can generate rubrics at the click of a button.
TeAch-nology.com’s Teacher Rubric Makers
Yet another example of a web-based tool that promises to generate rubrics.
A defense of student evaluations: They’re biased, misleading, and extremely useful.
The answer requires us to think about power. If you look hard at the structure of academia, you will see a lot of teachers who, in one way or another, lack power: adjuncts and term hires (a large population, and growing); untenured faculty (especially in universities like mine); faculty, even tenured faculty, in schools where budget cuts loom; graduate students, always and everywhere. You might see evaluations as instruments by which students, or administrators, exercise power over those vulnerable employees. But if you are a student, and especially if you are a student who cares what grades you get or who needs recommendations, then teachers, for you—even adjuncts and graduate teaching assistants—hold power.
Chairmen and deans also need to know when classroom teaching fails: when a professor makes catastrophically wrong assumptions about what students already know, for example, or when students find a professor incomprehensible thanks to her thick Scottish accent. My note: indeed, provided that chairmen and deans KNOW what they are doing and are NOT using evaluations to exercise their own power.
Student Course Evaluations Get An ‘F’ : NPR Ed : NPR
Philip Stark is the chairman of the statistics department at the University of California, Berkeley. “I’ve been teaching at Berkeley since 1988, and the reliance on teaching evaluations has always bothered me,” he says.
Stark is the co-author of “An Evaluation of Course Evaluations,” a new paper that explains some of the reasons why.
Michele Pellizzari, an economics professor at the University of Geneva in Switzerland, makes a more serious claim: that course evaluations may in fact measure, and thus motivate, the opposite of good teaching. Here is what he found: the better the professors were, as measured by their students’ grades in later classes, the lower their ratings from students.
“Show me your stuff,” Stark says. “Syllabi, handouts, exams, video recordings of class, samples of students’ work. Let me know how your students do when they graduate. That seems like a much more holistic appraisal than simply asking students what they think.”
Here are some opinions from the comments section:
Formative assessments are only good if you use them to alter your teaching or for students to adjust their learning. Too often, I’ve seen exit tickets used and nothing is done with the results.
Please consider other IMS blog postings on assessment
Communicating: Students convey information, describe process, and express ideas in accurate, engaging, and understandable ways.
Researching: Students identify and access a variety of resources through which they retrieve and organize data they have determined to be authentic and potentially relevant to their task.
Thinking Critically: Students use structured methods to weigh the relevance and impact of their decisions and actions against desired outcomes and adjust accordingly.
Thinking Creatively: Students comprehend and employ principles of creative and productive problem solving to understand and mitigate real-world problems.
Keep in mind, however, that standards don’t prepare students for anything. They are a framework of expectations and educational objectives. Without the organization and processes to achieve them, they are worthless.
Significance: An instructionally useful assessment measures students’ attainment of a worthwhile curricular aim—for instance, a high-level cognitive skill or a substantial body of important knowledge.
Teachability: An instructionally useful assessment measures something teachable. Teachability means that most teachers, if they deliver reasonably effective instruction aimed at the assessment’s targets, can get most of their students to master what the test measures.
Describability: A useful assessment provides or is directly based on sufficiently clear descriptions of the skills and knowledge it measures so that teachers can design properly focused instructional activities.
Reportability: An instructionally useful assessment yields results at a specific enough level to inform teachers about the effectiveness of the instruction they provide.
Nonintrusiveness: In clear recognition that testing time takes away from teaching time, an instructionally useful assessment shouldn’t take too long to administer—it should not intrude excessively on instructional activities.
How Open Badges Could Really Work In Education
Higher education institutions are abuzz with the concept of Open Badges. The concept was presented to SCSU CETL some two years ago, but it gained no traction on the SCSU campus. The presentation to the SCSU CETL included the assertion that “Some advocates have suggested that badges representing learning and skills acquired outside the classroom, or even in Massive Open Online Courses (MOOCs), will soon supplant diplomas and course credits.”
“For higher education institutions interested in keeping pace, establishing a digital ecosystem around badges to recognize college learning, skill development and achievement is less a threat and more an opportunity. Used properly, Open Badge systems help motivate, connect, articulate and make transparent the learning that happens inside and outside classrooms during a student’s college years.”
Educational programs that use learning design to attach badges to educational experiences according to defined outcomes can streamline credit recognition.
The badge ecosystem isn’t just a web-enabled transcript, CV, and work portfolio rolled together. It’s also a way to structure the process of education itself. Students will be able to customize learning goals within the larger curricular framework, integrate continuing peer and faculty feedback about their progress toward achieving those goals, and tailor the way badges and the metadata within them are displayed to the outside world.
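The “metadata within them” mentioned above is concrete: each issued badge carries a machine-readable assertion describing who earned what, from whom, and how it can be verified. The sketch below follows the general shape of an Open Badges assertion; all URLs and the recipient address are placeholders, not real endpoints from the article.

```python
import json

# Sketch of the metadata carried inside an issued badge, following the
# general shape of an Open Badges "assertion". Every URL and the
# recipient identity below is a placeholder.
assertion = {
    "@context": "https://w3id.org/openbadges/v2",
    "type": "Assertion",
    "id": "https://example.edu/badges/assertions/42",
    "recipient": {
        "type": "email",
        "hashed": False,
        "identity": "student@example.edu",
    },
    "badge": "https://example.edu/badges/classes/first-year-writing",
    "verification": {"type": "hosted"},
    "issuedOn": "2016-05-01T00:00:00Z",
}

print(json.dumps(assertion, indent=2))
```

Because the assertion is plain JSON hosted at a stable URL, an employer or another institution can fetch and verify it without going through the issuing campus, which is what makes the ecosystem more than a web-enabled transcript.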
A Digital Badge Initiative in First-Year Writing Courses
The initiative uses a WordPress theme coupled with the BadgeOS plugin, a free program that enables issuing credit in the form of digital badges. The badges themselves were developed with Credly, a free online service that allows users to create, customize, store, and issue achievement-based digital badges. In total, the only cost of developing the program has been the domain-hosting fee.