Archive of ‘evaluation’ category
Most professors we hear from want to assess their students at higher levels of Bloom’s Taxonomy; they tell us that if their current assessments kept students at the lowest level, they would not feel rewarded as educators.
However, assessment is by far the most labour-intensive part of teaching. Assessment plans and rubrics must be prepped. Test questions must be written. Every student needs a mark, personalized feedback, and a road-map for improvement. The larger the class, the more work for the instructor. Add in formative assessments like weekly assignments and exercises that precipitate subtle, ongoing tweaks to the syllabus, and it’s easy to see why many faculty opt to stick with what they know: an accumulation of easy-to-grade summative assessments that almost inevitably rely upon memorization and the most basic understanding of concepts.
Curation Activities can be one of the most effective teaching strategies for helping students compare what they’re learning in the classroom with real-world examples and gain insight into how the two relate.
Curation Activities can apply to all disciplines, such as Business, Arts, or Sciences.
When students explain what they’ve learned to other students, they help consolidate and strengthen connections to those concepts while simultaneously engaging in active learning. Find more project ideas here.
By actively engaging with their classmates and applying their own evaluative skills to the feedback they deliver to their peers, students develop lifelong critical thinking and creative analysis skills. Additionally, peer assessment has been shown to get students faster feedback from diverse sources, to increase metacognition, independence, and self-reflection, and to improve student learning. These are all important skills that provide value far beyond the classroom. More details on the benefits of peer assessment here.
more on curation in this IMS blog
Microcredentials and Digital Badges in Higher Education
November 27 – 29, 2018 Savannah, GA
Badging programs are rapidly gaining momentum in higher education – join us to learn how to get your badging efforts off the ground.
Key Considerations: Assessment of Competencies
During this session, you will learn how to ask the right questions and evaluate if badges are a good fit within your unique institutional context, including determining ROI on badging efforts. You’ll also learn how to assess the competencies behind digital badges.
Key Technology Considerations
This session will allow for greater understanding of Open Badges standards, the variety of technology software and platforms, and the portability of badges. We will also explore emerging trends in the digital badging space and discuss campus considerations.
Key Financial Considerations
During this hour, we will take a closer look at answering key financial questions surrounding badges:
- What does the business model look like behind existing institutional badging initiatives?
- Are these money-makers for an institution? Is there revenue potential?
- Where does funding for these efforts come from?
Partnering with Industry
Badging can be a catalyst for partnerships between higher education and industry. In this session, you will have the opportunity to learn more about strategies for collaborating with industry in the development of badges and how badges align with employer expectations.
Branding and Marketing Badges
Now that we have a better idea of the “why” and “what” of badges, how do we market their value to external and internal stakeholders? You’ll see examples of how other institutions are designing and marketing their badges.
Alongside your peers and our expert instructors, you will have the opportunity to brainstorm ideas, get feedback, ask questions, and get answers.
Next Steps and the Road Ahead: Where Badging in Higher Ed is Going
More and more institutions are getting into the badging game. We’ll use this time to engage in forward thinking about the far-reaching considerations in the world of badging and to discuss where badging in higher education is headed.
more on microcredentialing in this IMS blog
Meaningful Grading: A Guide for Faculty in the Arts
Natasha Haugnes, Hoag Holmgren, and Martin Springborg
Martin’s own LinkedIn post: https://www.linkedin.com/feed/update/activity:6425014893287657472/
College and university faculty in the arts (visual, studio, language, music, design, and others) regularly grade and assess undergraduate student work but often with little guidance or support. As a result, many arts faculty, especially new faculty, adjunct faculty, and graduate student instructors, feel bewildered and must “reinvent the wheel” when grappling with the challenges and responsibilities of grading and assessing student work.
Meaningful Grading: A Guide for Faculty in the Arts enables faculty to create and implement effective assessment methodologies—research based and field tested—in traditional and online classrooms. In doing so, the book reveals how the daunting challenges of grading in the arts can be turned into opportunities for deeper student learning, increased student engagement, and an enlivened pedagogy.
Exemplary Course Program Rubric
If you have problems with the link above, try this one:
Course Design addresses elements of instructional design. For the purpose of this rubric, course design includes such elements as structure of the course, learning objectives, organization of content, and instructional strategies.
Interaction and Collaboration
Interaction denotes communication between and among learners and instructors, synchronously or asynchronously. Collaboration is a subset of interaction and refers specifically to activities in which groups work interdependently toward a shared result. This differs from group activities that can be completed by students working independently of one another and then combining the results, much as one might assemble a jigsaw puzzle from sections worked out separately. A learning community is defined here as the sense of belonging to a group, rather than each student perceiving himself or herself as studying independently.
Assessment focuses on instructional activities designed to measure progress toward learning outcomes, provide feedback to students and instructors, and/or enable grading or evaluation. This section addresses the quality and type of student assessments within the course.
Learner Support addresses the support resources made available to students taking the course. Such resources may be accessible within or external to the course environment. Learner support resources address a variety of student services.
more on online teaching in this IMS blog
more on rubrics in this IMS blog
Discussion on the EDUCAUSE Blended and Online Learning Group’s listserv
Develop anonymous mid-course student evaluations that allow students to reflect on the course and their progress, and that inform the instructor about what is and is not working in the course.
- What is working well for you in the course?
- What is not working well for you in the course?
- What is helping you learn?
- What is hindering your learning?
- What suggestions do you have to make the course better for you, your peers, or the instructor?
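As a sketch only (not part of the listserv discussion), the questions above could be collected and summarized anonymously with a few lines of code — the point being that responses are grouped by question, with no identifying order or names retained:

```python
# Minimal sketch of tallying anonymous mid-course feedback.
# The questions come from the list above; the sample responses are hypothetical.
QUESTIONS = [
    "What is working well for you in the course?",
    "What is not working well for you in the course?",
    "What is helping you learn?",
    "What is hindering your learning?",
    "What suggestions do you have to make the course better?",
]

def summarize(responses):
    """Group free-text answers by question; nothing about the student is kept."""
    summary = {q: [] for q in QUESTIONS}
    for answer_set in responses:          # one dict per anonymous submission
        for question, answer in answer_set.items():
            if answer.strip():            # skip blank answers
                summary[question].append(answer.strip())
    return summary

# Example: two anonymous submissions
responses = [
    {QUESTIONS[0]: "The weekly quizzes", QUESTIONS[1]: "Pacing is too fast"},
    {QUESTIONS[0]: "Group discussions", QUESTIONS[3]: "Unclear rubrics"},
]
result = summarize(responses)
print(len(result[QUESTIONS[0]]))  # prints 2
```

Instructors using a survey tool rather than a script get the same effect; the essential design choice is that answers are aggregated per question before anyone reads them.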
Katie Linder, Research Director, Extended Campus, Oregon State University
4943 The Valley Library, Corvallis, Oregon 97331
Phone: 541-737-4629 | Fax: 541-737-2734 | Email: email@example.com
At the University of Illinois, we have been using Informal Early Feedback (IEF) as a way to gather information from our students and help improve courses before the end of the term. Here are a couple of links to our site. The first is the main page on what IEF is, and the second is the question bank we offer to faculty. This is a starting point for them; we then meet with those who want to work on tweaking the questions for their specific needs.
* About IEF: https://citl.illinois.edu/citl-101/measurement-evaluation/teaching-evaluation/ief
* Question Bank: https://citl.illinois.edu/citl-101/measurement-evaluation/teaching-evaluation/ief/ief-question-bank
If you have any questions at all, don’t hesitate to ask.
Sol Roberts-Lieb Associate Director, Center for Innovation in Teaching and Learning Pedagogy Strategy Team and Industry Liaison UNIVERSITY OF ILLINOIS AT URBANA-CHAMPAIGN
more on student evaluations in this IMS blog:
Lessons From Finland: What Educators Can Learn About Leadership
Humanities need convincing data to demonstrate their value, says expert
Humanities scholars have always been good at conveying the importance of their work through stories, writes Paula Krebs for Inside Higher Ed, but they have been less successful at using data to do so. This need not be the case, adds Krebs, who recounts a meeting with faculty members, local employers, and public humanities representatives to discuss how to better measure the impact of a humanities education on graduates. Krebs offers a list of recommendations and concrete program changes, such as interviewing employers about their experiences with hiring graduates, that might help humanities programs better prepare students for postgraduate life.
Academica Group <firstname.lastname@example.org>
Adding Good Data to Good Stories
a list of the skills that we think graduates have cultivated in their humanities education:
- Critical thinking
- Communications skills
- Writing skills, with style
- Organizational skills
- Listening skills
- Cultural competencies, intercultural sensitivity and an understanding of cultural and historical context, including on global topics
- Empathy/emotional intelligence
- Qualitative analysis
- People skills
- Ethical reasoning
- Intellectual curiosity
As part of our list, we also agreed that graduates should have the ability to:
- Meet deadlines
- Construct complex arguments
- Provide attention to detail and nuance (close reading)
- Ask the big questions about meaning, purpose, the human condition
- Communicate in more than one language
- Understand differences in genre (mode of communication)
- Identify each audience and communicate appropriately to it
- Be comfortable dealing with gray areas
- Think abstractly beyond an immediate case
- Appreciate differences and conflicting perspectives
- Identify problems as well as solve them
- Read between the lines
- Receive and respond to feedback
Then we asked what we think our graduates should be able to do but perhaps can’t — or not as a result of anything we’ve taught them, anyway. The employers were especially valuable here, highlighting the ability to:
- Use new media, technologies and social media
- Work with the aesthetics of communication, such as design
- Perform a visual presentation and analysis
- Identify, translate and apply skills from course work
- Perform data analysis and quantitative research
- Be comfortable with numbers
- Work well in groups, as leader and as collaborator
- Take risks
- Identify processes and structures
- Write and speak from a variety of rhetorical positions or voices
- Support an argument
- Identify an audience, research it and know how to address it
- Know how to locate one’s own values in relation to a task one has been asked to perform
Rubrics: An Undervalued Teaching Tool
Stephanie Almagno, PhD
Here are five different ways to apply the same rubric in your classroom.
1. A Rubric for Thinking (Invention Activity)
2. A Rubric for Peer Feedback (Drafting Activity)
3. A Rubric for Teacher Feedback (Revision Activity)
4. A Rubric for Mini-Lessons (Data Indicate a Teachable Moment)
5. A Rubric for Making Grades Visible (Student Investment in Grading)
How often have we heard that students believe grades to be arbitrary or capricious? Repeated use of a single rubric is good for both students and instructors. Switching roles between author and editor results in students’ increased familiarity with the process and the components of good writing. Over the course of the semester, students will synthesize the rubric’s components into effective communication. The instructor, too, will shift from “sage on the stage” to “guide on the side,” answering fewer questions (and answering the same question fewer times). In other words, students will gain greater independence as writers and thinkers. And this is good for all of us.
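To make the “single rubric, many uses” idea concrete, here is a small sketch (with hypothetical criteria, not drawn from Almagno’s article) of a rubric represented as data, so the same instrument can drive self-assessment, peer feedback, and a visible, non-arbitrary grade:

```python
# A hypothetical four-criterion writing rubric, each criterion scored 1-4.
RUBRIC = {
    "thesis":       "Clear, arguable claim",
    "evidence":     "Relevant, well-integrated support",
    "organization": "Logical flow between paragraphs",
    "mechanics":    "Grammar, spelling, citation format",
}

def score_draft(scores, max_per_criterion=4):
    """Turn per-criterion scores into a transparent percentage grade."""
    missing = set(RUBRIC) - set(scores)
    if missing:
        raise ValueError(f"Unscored criteria: {sorted(missing)}")
    total = sum(scores[c] for c in RUBRIC)
    return round(100 * total / (max_per_criterion * len(RUBRIC)), 1)

# The same rubric works whether the scorer is the author, a peer, or the instructor:
peer_scores = {"thesis": 3, "evidence": 2, "organization": 4, "mechanics": 3}
print(score_draft(peer_scores))  # prints 75.0
```

Because every score maps to a named criterion, students can see exactly where a grade came from and where to focus revision, which is the “visible grading” point above.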
For more detailed information, go to the full version of the article: http://www.facultyfocus.com/articles/effective-teaching-strategies/rubrics-an-undervalued-teaching-tool/
More on rubrics in this blog
For what it’s worth, here’s something I used ‘long ago’ on rubrics:
Links to information about rubrics:
The folks at TeacherVision.com weigh in on rubrics.
How to create a Rubric
The Chicago Public Schools page on writing rubrics from scratch
The Rubric Bank
The Chicago Schools again with a list of rubrics for various subject areas
Rubrics Resources – Westfield (MA) Public Schools
A links page to many other sources about using rubrics to improve instruction.
Kathy Schrock’s Guide for Educators – Assessment Rubrics
Kathy Schrock’s links listing for rubrics – examples and about them
Rubric How-To’s – MidLink’s Teacher Resource Room
Caroline McCullen’s (a multimedia teacher) page about rubrics with links to other sources on the topic
Rubrics by Bernie Dodge
The Master details how rubrics and WebQuests dovetail nicely.
An example of a web-based tool that can generate rubrics at the click of a button.
TeAch-nology.com’s Teacher Rubric Makers
Yet another example of a web-based tool that promises to generate rubrics.
A defense of student evaluations: They’re biased, misleading, and extremely useful.
The answer requires us to think about power. If you look hard at the structure of academia, you will see a lot of teachers who, in one way or another, lack power: adjuncts and term hires (a large population, and growing); untenured faculty (especially in universities like mine); faculty, even tenured faculty, in schools where budget cuts loom; graduate students, always and everywhere. You might see evaluations as instruments by which students, or administrators, exercise power over those vulnerable employees. But if you are a student, and especially if you are a student who cares what grades you get or who needs recommendations, then teachers, for you—even adjuncts and graduate teaching assistants—hold power.
Chairmen and deans also need to know when classroom teaching fails: when a professor makes catastrophically wrong assumptions as to what students already know, for example, or when students find a professor incomprehensible thanks to her thick Scottish accent. My note: indeed, when chairmen and deans KNOW what they are doing and are NOT using evaluations for their own power.
Student Course Evaluations Get An ‘F’ : NPR Ed : NPR
Philip Stark is the chairman of the statistics department at the University of California, Berkeley. “I’ve been teaching at Berkeley since 1988, and the reliance on teaching evaluations has always bothered me,” he says.
Stark is the co-author of “An Evaluation of Course Evaluations,” a new paper that explains some of the reasons why.
Michele Pellizzari, an economics professor at the University of Geneva in Switzerland, has a more serious claim: that course evaluations may in fact measure, and thus motivate, the opposite of good teaching. Here’s what he found. The better the professors were, as measured by their students’ grades in later classes, the lower their ratings from students.
“Show me your stuff,” Stark says. “Syllabi, handouts, exams, video recordings of class, samples of students’ work. Let me know how your students do when they graduate. That seems like a much more holistic appraisal than simply asking students what they think.”
Please find the link to the PDF file.
Here some opinions from the comments section:
Formative assessments are only good if instructors use them to alter their teaching or students use them to adjust their learning. Too often, I’ve seen exit tickets used and nothing done with the results.
Please consider other IMS blog postings on assessment