Humanities need convincing data to demonstrate their value, says expert
Humanities scholars have always been good at conveying the importance of their work through stories, writes Paula Krebs for Inside Higher Ed, but they have been less successful at using data to do so. This need not be the case, adds Krebs, who recounts a meeting with faculty members, local employers, and public humanities representatives to discuss how to better measure the impact of a humanities education on graduates. Krebs offers a list of recommendations and concrete program changes, such as interviewing employers about their experiences with hiring graduates, that might help humanities programs better prepare students for postgraduate life.
A list of the skills that we think graduates have cultivated in their humanities education:
Critical thinking
Communications skills
Writing skills, with style
Organizational skills
Listening skills
Flexibility
Creativity
Cultural competencies, intercultural sensitivity and an understanding of cultural and historical context, including on global topics
Empathy/emotional intelligence
Qualitative analysis
People skills
Ethical reasoning
Intellectual curiosity
As part of our list, we also agreed that graduates should have the ability to:
Meet deadlines
Construct complex arguments
Provide attention to detail and nuance (close reading)
Ask the big questions about meaning, purpose, the human condition
Communicate in more than one language
Understand differences in genre (mode of communication)
Identify audiences and communicate appropriately to each
Be comfortable dealing with gray areas
Think abstractly beyond an immediate case
Appreciate differences and conflicting perspectives
Identify problems as well as solve them
Read between the lines
Receive and respond to feedback
Then we asked what we think our graduates should be able to do but perhaps can’t — or not as a result of anything we’ve taught them, anyway. The employers were especially valuable here, highlighting the ability to:
Use new media, technologies and social media
Work with the aesthetics of communication, such as design
Perform a visual presentation and analysis
Identify, translate and apply skills from course work
Perform data analysis and quantitative research
Be comfortable with numbers
Work well in groups, as leader and as collaborator
Take risks
Identify processes and structures
Write and speak from a variety of rhetorical positions or voices
Support an argument
Identify an audience, research it and know how to address it
Know how to locate one’s own values in relation to a task one has been asked to perform
Purpose: draft a document for the provost to plan for Charting the Future goal 3.12, “develop a comprehensive strategy to increase awareness and development of e-textbooks and open educational resources (OERs)”
\\STCLOUDSTATE\HuskyNet\DeptFiles\LRS\ETextbooks
SCSU goal: to reduce the cost of textbooks as an affordable learning initiative. Amount of reduction is undetermined
According to Bossaller and Kammer (2014), it might be worth considering that SCSU (MnSCU?) must first implement e-textbooks in courses by using publisher materials and then by using “in-house” production. At this point, SCSU does NOT have an aligned policy for integrating e-texts in courses across campus. Lack of such experience might make a strategy for adoption of e-textbooks much more complex and difficult to implement
Stats are colored in green for convenience. Stats regarding the increase in textbook costs are reprinted from author to author: e.g., Acker (2011, p. 42). Murray and Pérez (2011, pp. 49 (bottom)–50 (top)) report stats from 2009 and projections for 2013 regarding e-textbook adoption. The same authors (p. 50, second paragraph) report good stats regarding textbooks’ price increase: US$1,122 per year for textbooks in 2010.
Wimmer et al. (2014) present a lucid graphic of the structure of the publishing process (see bottom of this blog entry for citation and permalink).
Wimmer et al. (2014) discuss copyright and permissions, which is of interest for this research (p. 85)
Regarding in-house creation of e-textbooks, see (Distance education, e-learning, education and training, 2015). It very much follows the example of SUNY, which Keith was laying out: a team of faculty charged with creating the e-textbook for mass consumption.
Besides the SUNY model Keith is envisioning for MnSCU (comparable), there is the option of clustering OER sources: e.g., NASTA as per Horejsi (2013), CourseSmart, FlatWorld Knowledge (Murray and Pérez, 2011), etc.
Hamedi & Ezaleila (2015) present an entire e-textbook program. The article has been ordered through ILL. Same with Joseph (2015).
Open Educational Resources in Acker (2015, pp. 44-47). Also in Murray and Pérez (2011, p. 51).
Also in the ICWL 2014 conference proceedings (Cao et al., 2014; full citation below): OpenDSA
Different models of pricing are also in Acker (2015, p. 48). Keith touched on that.
Students learn equally well from e-textbooks as from paper ones: Taylor (2011)
My note: there is no good definition of “e-textbook” that captures the range of complexity an e-textbook on campus might involve.
Considering Wimmer et al.’s (2014) account of their campus experience in publishing an e-textbook, a textbook may involve an LMS (Canvas) and a blog (WordPress). Per my proposal during the F2F meeting, and following Rachel’s suggestion about distinguishing the different types of e-textbooks, here is an outline of an e-textbook definition:
*******************
Working definition of e-textbook for the purposes of SCSU:
An e-textbook is a compilation of textual, multimedia, and interactive material which can be viewed on various electronic devices. An e-textbook can be: 1. purchased from a publisher; 2. compiled in HTML format on faculty or group web space; 3. compiled in the content module of an LMS (BB, D2L, Canvas, Moodle, etc.); 4. compiled in an LMS (BB, D2L, Canvas, Moodle, etc.) and including all interactive materials, e.g. hyperlinks to MediaSpace multimedia, quizzes, etc.; 5. compiled in special apps, such as iBooks Author, eCub, or Sigil.
*******************
e-book
(Electronic-BOOK) The electronic counterpart of a printed book, which can be viewed on a desktop computer, laptop, smartphone, tablet or e-book reader (e-reader). When traveling, a huge number of e-books can be stored in portable units, dramatically reducing weight and volume compared to paper. Electronic bookmarks make referencing easier, and e-book readers may allow the user to annotate pages.
Although fiction and non-fiction books come in e-book formats, technical material is especially suited for e-book delivery because it can be searched. In addition, programming code examples can be copied, which is why CD-ROMs that contained examples or the entire text were often packaged inside technical paper books.
E-Book Formats
Wimmer, Morrow, & Weber: Collaboration in eTextbook Publishing
There are several e-book formats on the market, including EPUB, Mobipocket (PRC, MOBI), eReader (PDB), Kindle (AZW, KF8) and Apple iBook (EPUB variation). Many e-readers also accept generic formats, including Adobe PDF and plain text (TXT).
According to a United States Government report, textbook prices have increased at over twice the rate of inflation in the last couple of decades. According to another report, the average student spends between $700 and $1,000 per year on textbooks while the cost of e-textbooks can be as much as 50% lower than paper textbooks.
According to the Oxford dictionary, an electronic book or e-book is “an electronic version of a printed book that can be read on a computer or handheld device designed specifically for this purpose.” An e-textbook is defined as an e-book used for instructional or educational purposes and often includes features such as bookmarking, searching, highlighting, and note-taking as well as built-in dictionaries and pronunciation guides, embedded video clips, embedded hyperlinks, and animated graphics.
E-textbooks have moved from occasional usage to a mainstream technology on college campuses. According to the Association of American Publishers, sales of e-books hit over $90 million; this is up over 200% when compared to the same month the previous year. When the cost of textbooks and the availability of formats are considered, the use of an e-textbook in the classroom may be the reasonable choice.
—————–
A digital textbook is a digital book or e-book intended to serve as the text for a class. Digital textbooks may also be known as e-textbooks or e-texts. Digital textbooks are a major component of technology-based education reform. They may serve as the texts for a traditional face-to-face class, an online course or degree.
The concepts of open access and open source support the idea of open textbooks, digital textbooks that are free (gratis) and easy to distribute, modify and update. https://en.wikipedia.org/wiki/Digital_textbook
—————-
Exploring Students’ E-Textbook Practices in Higher Education
Authors: Aimee Denoyelles, John Raible, and Ryan Seilhamer. Published: Monday, July 6, 2015. Instructional Designers, University of Central Florida
According to the United States Government Accountability Office, prices have increased 82 percent from 2002 to 2012. This cost sometimes drives students to delay or avoid purchasing textbooks. Digital materials such as e-textbooks may offer a more cost-effective alternative. Also, the expectation for digital materials is gaining strength in the K–12 sector. For example, Florida school districts set a goal to spend at least half of classroom material funding on digital materials by the 2015–2016 school year. Given that 81 percent of first-time-in-college (FTIC) undergraduate students hailed from a Florida public high school during the fall 2014 semester at the University of Central Florida (UCF), it is important to anticipate student expectations of digital materials. Finally, the availability of digital materials has risen exponentially with the incredible popularity of mobile devices.
Key Issues
Despite the advantages that e-textbooks offer, such as interactive features and accessibility on mobile devices, several barriers to implementation exist in higher education, namely the non-standardization of platforms, limited use by students, and the unclear role of the instructor in adoption.
a survey questionnaire in 2012 that explored basic usage and attitudes regarding e-textbooks.
—————————–
Bossaller, J., & Kammer, J. (2014). Faculty Views on eTextbooks: A Narrative Study. College Teaching, 62(2), 68-75. doi:10.1080/87567555.2014.885877
This qualitative study gives insight into the experiences instructors have when working with publishers to integrate electronic content and technology into their courses.
Baek, E., & Monaghan, J. (2013). Journey to Textbook Affordability: An Investigation of Students’ Use of eTextbooks at Multiple Campuses. International Review Of Research In Open And Distance Learning, 14(3), 1-26.
The Advisory Committee on Student Financial Assistance (2007) reported that textbook prices represent a significant barrier to students’ access to textbooks. The report concluded that textbooks cost between $700 and $1,000 per year; textbook prices have risen much faster than other commodities; and college aid fails to cover textbook expenses. Textbook costs are equivalent to 26% of tuition costs for an average four-year public university student and 72% of tuition costs for an average community college student. In fact, the California State Auditor (2008) reported that textbook costs grew more rapidly than student fees in academic year 2007–08.
The creation of an interactive e-book called “Practical Clinical Chemistry: Core Concepts” was accomplished using the Apple Macintosh platform and the iBooks Author software. Digital content, including videos, was developed for the project and embedded within the final package. In order to limit the size of the final files, some content was uploaded onto YouTube so that the user could access it via the internet.
The e-book, 200MB in size, was uploaded onto the Apple iTunes site and made available in 51 countries via the iBooks store. This prototype is the first interactive digital textbook available in clinical chemistry and contains “4-dimensional” content including digital images, videos, interactive presentations, and real-time data generation, as well as review questions with instant feedback and assessment.
Hamedi, M., & Ezaleila, S. (2015). Digital Textbook Program in Malaysia: Lessons from South Korea. Publishing Research Quarterly, 31(4), 244-257. doi:10.1007/s12109-015-9425-4
Joseph, R. (2015). Higher Education Book Publishing-from Print to Digital: A Review of the Literature. Publishing Research Quarterly, 31(4), 264-274. doi:10.1007/s12109-015-9429-0
Taylor, A. K. (2011). Students Learn Equally Well From Digital as From Paperbound Texts. Teaching Of Psychology, 38(4), 278-281. doi:10.1177/0098628311421330
Much of the research related to digital texts has focused on technical aspects of readability (see Dillon, 1992, for a review) and limitations of digital media for note-taking, underlining, or highlighting text (Brown, 2001). However, the important—and unanswered—question from a teaching perspective is, “Can students learn as well from digital texts as from paperbound textbooks?” Few published studies have addressed this question directly, and even fewer studies have examined this question among college students.
Murray, M. C., & Pérez, J. (2011). E-Textbooks Are Coming: Are We Ready? Issues In Informing Science & Information Technology, 8, 49-60.
Pilot projects that can help build institutional expertise
Address how and where insights gained from pilot projects will be collected and made available
People resources (e.g., instructional designers) that will be needed to assist instructors to use this technology
ICWL (Conference) (13th: 2014: Tallinn, Estonia), & Cao, Y. (2014). New horizons in web based learning: ICWL 2014 international workshops, SPeL, PRASAE, IWMPL, OBIE, and KMEL, FET, Tallinn, Estonia, August 14-17, 2014, revised selected papers. Cham: Springer.
++++++++++++++++++++
MnSCU will buy SoftChalk as its content authoring tool. Here is a promo from SoftChalk (my bold):
NEW SoftChalk Create 10 and SoftChalk Cloud eBook publishing features will arrive on April 25th! Come check out the latest enhancements at our upcoming webinars!
Sleek Designer Headers and Callout Boxes – Add some new pizazz to your SoftChalk lessons!
Three New Quiz Types – Test your students’ understanding with Sentence Completion, Multiple Blanks and Feedback Questions.
Polished New QuizPopper and Activity displays – With an enhanced interface for instructors and students.
Accessibility enhancements – Make your lessons available to everyone with even more accessibility enhancements.
NEW SoftChalk Cloud eBook creation and publishing – Includes a totally re-vamped, easier eBook creation and management. New SoftChalk eReader apps available for free download in the iOS, Android, Chromebook and Windows app stores. (Cloud Only)
Are any faculty really going digital? Which content distributors will thrive? What are the implementation concerns? And when will going digital really happen?
Instruction and Liaison Librarian, University of Northern Iowa
games and gamification. The semantics are important; using the right terms can be crucial in the next several years.
gamification for the enthusiasm. credit course with buffet. the peer-to-peer is very important
gaming types
affordability; ease of use; speed to create.
assessment. if you want heavy-duty, SPSS-kind of assessment, use Polldaddy or Poll Everywhere.
Kahoot supports only YouTube; it does not allow uploading your own video or using Kaltura (AKA MediaSpace). text versus multimedia
Kahoot is replacing VoiceThread in K12; use the wave
Kahoot allows sharing quizzes and surveys
Kahoot is not about assessment or drilling knowledge; it is a conversation starter. Why do we read an article? There is no shame in a wrong answer.
the carrot: when they reach the 1000 points, they can leave the class
Kahoot music can be turned off; answers are limited in length, like in Twitter
Quizlet
screenshot their final score and reach 80%
Gravity is hard; start with Scatter. auditory output
drill game
Teach Challenge.
1st day is Kahoot, second day is Team Challenge and test
embed across the curriculum
gaming toolkit for campus
what to take home: have students facing students from a different library
In the age of Big Data, there is an abundance of free or cheap data sources available to libraries about their users’ behavior across the many components that make up their web presence. Data from vendors, data from Google Analytics or other third-party tracking software, and data from user testing are all things libraries have access to at little or no cost. However, just like many students can become overloaded when they do not know how to navigate the many information sources available to them, many libraries can become overloaded by the continuous stream of data pouring in from these sources. This session will aim to help librarians understand 1) what sorts of data their library already has (or easily could have) access to about how their users use their various web tools, 2) what that data can and cannot tell them, and 3) how to use the datasets they are collecting in a holistic manner to help them make design decisions. The presentation will feature examples from the presenters’ own experience of incorporating user data in decisions related to design the Bethel University Libraries’ web presence.
data tools: user testing, Google Analytics, click tracker, vendor data
user testing, free, no visualization, cross-domain, easy to use, requires scripts
qualitative q/s : why people do what they do and how will users think about your content
3 versions: variables: options on book search and order/wording of the sections in the articles tab
Findings: big difference between tabs versus single page. Little difference between single-page options. Take-aways: it won’t tell you how to fix the problem; be empathetic about how the user is using the page
Would like to do in the future: FAQ and Chat. Problem: low use. Question: how to get it used (see PPT details)
Crazy Egg – click tracker. Not a free tool; lowest tier is less than $10/month.
see PPT for details
interaction with the pages: clicks and scrolling
scroll analytics
not easy to use, steep learning curve
“blob”: Google Analytics recognizes the three different domains that are clicked through as one.
vendor data: Springshare
chat and FAQ
Libguides
questions:
is there a dashboard tool that can combine all these tools?
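No single dashboard was named in the session, but the idea behind the question can be sketched in a few lines: pull page-level counts out of each tool's export and line them up by page. Everything here is an assumption on my part: the CSV shape, the tool names, and the `merge_page_stats` helper are all hypothetical, since the real exports from Google Analytics, Crazy Egg, and Springshare differ.

```python
import csv
import io
from collections import defaultdict

def merge_page_stats(*sources):
    """Merge page-level counts from several tool exports into one table.

    Each source is a (tool_name, csv_text) pair; each CSV is assumed
    to have 'page' and 'count' columns (a hypothetical export format).
    """
    merged = defaultdict(dict)
    for tool, text in sources:
        for row in csv.DictReader(io.StringIO(text)):
            merged[row["page"]][tool] = int(row["count"])
    return dict(merged)

# Hypothetical exports from two of the tools mentioned in the session
ga = "page,count\n/home,120\n/search,80\n"
springshare = "page,count\n/home,95\n/guides,40\n"
stats = merge_page_stats(("analytics", ga), ("springshare", springshare))
print(stats["/home"])  # counts for /home from both tools side by side
```

Even a crude merge like this answers the “one dashboard” question partially: pages that appear in only one tool’s data stand out immediately.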
Optimal Workshop: Reframer, but it is more about qualitative data.
how long does it take to build this? about two years in general, but in the last 6 months focused.
Here is a preliminary plan. We will not follow it strictly; it is just an idea of the topics we would like to cover. Should there be points of interest, please feel free to contribute before and during the session.
Keeping in mind the ED 610 Learning Goals and Objectives, namely:
Understand and demonstrate how to write a literature review in the field of C&I research
Understand the related research methods, in both quantitative and qualitative perspectives, from the explored research articles
Understand how to use search engines to find meaningful articles
Interpret and think critically about C&I research articles
let’s review our search and research skills:
How do we search?
Google and Google Scholar (more focused, peer reviewed, academic content)
SCSU Library search, Google, professional organizations (NASSP), stacks of magazines, CSU library info, but you need to know what all of the options mean on that page
+++++++++++++
PICO framework to structure a question:
Population, Patient, Problem
Intervention
Comparison
Outcome
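As an illustration only (the class and the boolean pattern below are my own sketch, not part of the PICO framework itself), the four facets above can be mapped mechanically onto a database search string:

```python
from dataclasses import dataclass

@dataclass
class PICOQuestion:
    population: str    # Population / Patient / Problem
    intervention: str  # Intervention
    comparison: str    # Comparison
    outcome: str       # Outcome

    def search_string(self) -> str:
        """Turn the four PICO facets into a boolean search string."""
        return (f'"{self.population}" AND ("{self.intervention}" OR '
                f'"{self.comparison}") AND "{self.outcome}"')

# A hypothetical C&I research question structured with PICO
q = PICOQuestion("college students", "e-textbooks", "print textbooks",
                 "reading comprehension")
print(q.search_string())
```

The point of the structure is exactly this mechanical step: once the question is split into facets, the search terms fall out of it.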
Subject Guides
Please locate the Education (Elementary), Education (Secondary), Educational Administration and Leadership (Doctoral), and Educational Administration and Leadership (Masters) subject guides
at the LRS web page: http://lrts.stcloudstate.edu/library/default.asp
Look for “Research Assistance” and scroll to
Educational Administration and Leadership or any of the four links related to education http://research.stcloudstate.edu/rqs.phtml?subject_id=122
Electronic Journals & the DOI System
What is a DOI? A Digital Object Identifier (DOI) is assigned to electronic journal articles (and selected other online content) to specifically and permanently identify and access that article. Most of the standard academic citation formats now require the inclusion of DOIs within a citation when available.
How to find a DOI: Most current academic journal articles include a DOI (usually listed on the first page of the article). Most library databases list a DOI with the record for recent academic journal articles. Most non-academic articles (including magazine and newspaper articles) as well as many older academic journal articles do not have a DOI. Crossref.org provides a DOI Lookup service that will search for a DOI based on citation information (author’s last name, journal name, article title, etc.).
How to access an article via a DOI: Use the CSU Stanislaus Library DOI Look-up for options provided by the library, including access to the full-text via the publisher’s site or a library database service when available. Other, general DOI look-up systems (CrossRef & DOI.org) usually link to the article’s “homepage” on the publisher’s site (which usually include a free abstract but full-text access is restricted to subscribers).
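To make the two look-ups above concrete, here is a small sketch. The doi.org resolver pattern is standard; Crossref’s REST API is real, but the particular query parameters I pass are only my assumption about a reasonable citation-based lookup, and both helper names are mine:

```python
import urllib.parse

def doi_resolver_url(doi: str) -> str:
    """Any DOI can be resolved by prefixing it with the doi.org service."""
    return "https://doi.org/" + doi

def crossref_lookup_url(author: str, title: str) -> str:
    """Build a Crossref REST API query URL from citation fragments
    (parameter choice is an illustrative sketch, not the only option)."""
    params = {"query.author": author,
              "query.bibliographic": title,
              "rows": "1"}
    return "https://api.crossref.org/works?" + urllib.parse.urlencode(params)

# Resolve the DOI of the Malone (2007) article cited below
print(doi_resolver_url("10.1080/13504620701581612"))
# Or look the DOI up from citation information, as Crossref.org does
print(crossref_lookup_url("Malone", "The bubble-wrap generation"))
```

The first URL lands on the publisher’s page for the article; the second returns JSON from which the DOI of the best match can be read.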
Ways to find research specific to doctoral student needs (ie: Ways to find dissertations, peer reviewed research sources, research-related information, etc.)
Understand the responsibilities of authorship including copyright, intellectual property, and discipline-based expectations
Malone, K. (2007). The bubble‐wrap generation: children growing up in walled gardens. Environmental Education Research, 13(4), 513–527. http://doi.org/10.1080/13504620701581612 http://www.tandfonline.com/doi/abs/10.1080/13504620701581612
Exploring some of the changes in childhood environmental behaviours, the author examines children-parent relationships, in particular the phenomenon of ‘bubble‐wrapping’ children to appease the anxieties of some middle-class parents.
Ivanova, A., & Ivanova, G. (2009). Net-generation Learning Style: A Challenge for Higher Education. In Proceedings of the International Conference on Computer Systems and Technologies and Workshop for PhD Students in Computing (pp. 72:1–72:6). New York, NY, USA: ACM. http://doi.org/10.1145/1731740.1731818 http://dl.acm.org/citation.cfm?id=1731818
Lynch, K., & Hogan, J. (2012). How Irish Political Parties are Using Social Networking Sites to Reach Generation Z: an Insight into a New Online Social Network in a Small Democracy. Irish Communication Review, 13. Retrieved from http://arrow.dit.ie/cgi/viewcontent.cgi?article=1124&context=buschmarart
Parker, K., Czech, D., Burdette, T., Stewart, J., Biber, D., Easton, L., … McDaniel, T. (2012). The Preferred Coaching Styles of Generation Z Athletes: A Qualitative Study. Journal of Coaching Education, 5(2), 5–97.
Greydanus, D. E., & Greydanus, M. M. (2012). Internet use, misuse, and addiction in adolescents: current issues and challenges. International Journal of Adolescent Medicine and Health, 24(4), 283–289. http://doi.org/10.1515/ijamh.2012.041
7th Qualitative and Quantitative Methods in Libraries International Conference (QQML2015) 26-29 May 2015, IUT-Descartes University, Paris, France
Dear Colleagues and Friends,
It is our pleasure to invite you to Paris (IUT-Descartes University) for the 7th Qualitative and Quantitative Methods in Libraries International Conference (QQML2015, http://www.isast.org), which is organized under the umbrella of ISAST (International Society for the Advancement of Science and Technology).
This is the seventh year of the conference, which brings together different disciplines in library and information science; it is a multi-disciplinary conference that covers Library and Information Science topics in conjunction with other disciplines (e.g. innovation and economics, management and marketing, statistics and data analysis, information technology, human resources, museums, archives, special librarianship, etc.).
The conference invites special and contributed sessions, oral communications, workshops and posters.
Target Group
The target group and the audience are library and archives professionals in a more general sense: professors, researchers, students, administrators, stakeholders, librarians, technologists, museum scientists, archivists, decision makers and managers.
Main topics
The emphasis is given to models and initiatives that run under budget restrictions, such as information management and innovation, crisis management, long-term access, synergies and partnership, the open access movement, and technological development.
The conference will consider, but not be limited to, the following indicative themes:
1.Information and Knowledge Management
2.Synergies, Organizational Models and Information Systems
3.Open Data, Open Access, Analysis and Applications
You may send proposals for Special Sessions (4-6 papers) or Workshops (more than 2 sessions) including the title and a brief description at: secretar@isast.org or via the electronic submission at the web page: http://www.isast.org/abstractsubmission.html
You may also send Abstracts/Papers to be included in the proposed sessions, to new sessions or as contributed papers at the web page: http://www.isast.org/abstractsubmission.html
Contributions may be realized through one of the following ways
a. structured abstracts (not exceeding 500 words) and presentation;
b. full papers (not exceeding 7,000 words);
c. posters (not exceeding 2,500 words);
In all the above cases at least one of the authors ought to be registered for the conference.
Abstracts and full papers should be submitted electronically within the timetable provided in the web page: http://www.isast.org/.
The abstracts and full papers should be in compliance with the author guidelines: http://www.isast.org/
All abstracts will be published in the Conference Book of Abstracts and on the website of the Conference. The papers of the conference will be published on the website of the conference, with the permission of the author(s).
Student submissions
Professors and Supervisors are encouraged to organize conference sessions of Postgraduate theses and dissertations.
Please direct any questions regarding the QQML 2015 Conference and Student Research Presentations to: the secretariat of the conference at: secretar@isast.org
Important dates:
First call of proposals: 29th of September 2014
Deadline of abstracts submitted: 20 December 2014
Reviewer’s response: in 3 weeks after submission
Early registration: 30th of March 2015
Paper and Presentation Slides: 1st of May 2015
Conference dates: 26-29 May 2015
Paper contributors have the opportunity to be published in the QQML e-Journal, which continues to retain the right of first choice; in addition, they have the chance to be published in other scientific journals.
The QQML e-Journal is included in EBSCOhost and DOAJ (Directory of Open Access Journals).
Submissions of abstracts to special or contributed sessions can be sent directly to the conference secretariat at secretar@isast.org. Please refer to the Session Number, as listed on the conference website, to help the secretariat classify the submissions.
For more information and Abstract/Paper submission and Special Session Proposals please visit the conference website at: http://www.isast.org or contact the secretary of the conference at: secretar@isast.org
Looking forward to welcoming you in Paris,
With our best regards,
On behalf of the Conference Committee
Dr. Anthi Katsirikou, Conference Co-Chair
University of Piraeus Library Director
Head, European Documentation Center
Board Member of the Greek Association of Librarians and Information Professionals
Google’s chief executive has expressed concern that we don’t trust big companies with our data – but may be dismayed at Facebook’s latest venture into manipulation
The field of learning analytics isn’t just about advancing the understanding of learning. It’s also being applied in efforts to try to influence and predict student behavior.
Learning analytics has yet to demonstrate its big beneficial breakthrough, its “penicillin,” in the words of Reich. Nor has there been a big ethical failure to creep lots of people out.
“There’s a difference,” Pistilli says, “between what we can do and what we should do.”
Higher Education institutions use course evaluations for a variety of purposes. They factor in retention analysis for adjuncts, tenure approval or rejection for full-time professors, even in salary bonuses and raises. But, are the results of course evaluations an objective measure of high quality scholarship in the classroom?
Associate Professor of Molecular Biology at Winston-Salem State University
I feel they measure student satisfaction, more like a customer service survey, than they do teaching effectiveness. Teachers whom students think are easy get higher scores than tough ones, though the students may have learned less from the former.
How can you measure teachers’ effectiveness? That is, how much students learn? If there is a method to measure how much we learn, I would appreciate learning it.
From what I recall, the research indicates that student evaluations have some value as a proxy and rough indicator of teacher effectiveness. We would expect that bad teachers will often get bad ratings, and good teachers will often get good ratings. Ratings for individual teachers should always be put in context, IMHO, for precisely the reasons that Daniel outlines.
Aggregated ratings for teachers in departments or institutions can even out some of these factors, especially if you combine consideration with other indicators, such as progress rates. The hardest indicators however are drop-out rates and completion rates. When students vote with their feet this can flag significant problems. We have to bear in mind that students often drop out for personal reasons, but if your college’s drop-out rate is higher than your peers, this is worth investigating.
Technical educator looking for a new opportunity or career direction
I agree with what Michael says – to a point. Unfortunately student evaluations have also been used as a venue for disgruntled students, acting alone or in concert – a popularity contest of sorts. Even more unfortunately college administrations (especially for-profits) tend to rate Instructor effectiveness on the basis of student evaluations.
IMHO, student evaluation questions need to be carefully crafted to be as objective as possible and to eliminate the possibility of responses of an unprofessional nature. To clarify: a question like “Would you recommend this teacher to other students?” has the greatest potential for counter-productivity.
2013-2015 Peter Lang Publishing, Inc. (New York) Founding Book Series Editor: Higher Education Theory, Policy, & Praxis
This is not a Cartesian question in that the answer is neither yes nor no; it’s not about flipping a coin. One element that may make it more likely that student achievement is a result of teacher effectiveness is the comparison of cumulative or summative student achievement against incoming achievement levels. Another variable is the extent to which individual students are sufficiently resourced (such as having enough food, safety, shelter, sleep, learning materials) to benefit from the teacher’s beneficence.
Overall, I think students are the best judge of a teacher’s effective pedagogy methods. Although there may be students with different learning difficulties (as there usually is in a class), their understanding of the concepts/principles and application of the subject matter in exam questions, etc. depends on how the teacher imparts such knowledge in a rather simplified and easy manner to enhance analytical and critical thinking in them. Of course, there are students too who give a bad review of a teacher’s teaching mode out of spite just because the said teacher has reprimanded him/her in class for being late, for example, or for even being rude. In such a case, it would not be a true reflection of the teacher’s method of teaching. A teacher tries his/her best to educate and inculcate values by imparting the required knowledge and ensuring a 2-way teaching-learning process. It is the students who will be the best judge to evaluate and assess the success of the efforts undertaken by the teacher because it is they who are supposed to benefit at the end of the teaching exercise.
In some cases, I think evaluations (and negative ones in particular) can offer a good perspective on the course, especially if an instructor is willing to review them with an open mind. Of course, there are always the students who nitpick and, as Rina said, use the eval as a chance to vent. But when an entire class complains about how an instructor has handled a course (as I once saw happen with a tutoring student whose fellow classmates were in agreement about the problems in the course), I think it should be taken seriously. But I also agree with Daniel about how evaluations should be viewed like a customer service survey for student satisfaction. Evals are only useful up to a point.
I definitely agree about the way evaluations are worded, though, to make sure that it’s easier to recognize the useful information and weed out the whining.
I am a director of studies, and my continuing-education students evaluate teaching effectiveness. Because I am in an ISO process, I must take those measurements into account. This can be difficult when the number of students does not reach the level required for the sample to be statistically valid, but I still believe in the utility of such measurements. The hard job for me is discussing the results with a teacher who falls below the required score.
Senior Tutor – CeTTL – Student Learning & Digital/Technology Coach (U of W – Faculty of Education)
I’m currently ‘filling in’ as the administrator in our Teaching Development Unit – Appraisals, and I have come to appreciate that the evaluation tool of choice is only that – a tool. How the tool is used – the objective for collecting ‘teaching effectiveness’ information, the question types developed to gain insight, and how that information is then acted upon to inform future teaching and learning – will in many ways denote the quality of the teaching itself!
Student voice is not just about keeping our jobs, ‘bums on seats’, or ‘talking with their feet’ (all part of it, of course) but should be about whether or not we really care about learning. Student voice in the form of evaluating teachers’ effectiveness is critical if we want our teaching to model learning that effects positive change – Thomas More’s educational utopia comes to mind…
Consultant and Professor of International Education
Alas, I think they are weak indicators of teaching effectiveness, yet they are often used as the most important indicators of it. And in the pursuit of a high response rate, they are too often given on the last day of class, when they cannot measure anything significant – before the learning has “sunk in.” Ask better questions, and ask them after students have had a chance to reflect on the learning.
Lecturer (Teaching and Learning), and Belly Dance teacher
I’m just wrapping up a very large project at my university that looked at the policy, processes, systems, and instrument for collecting student feedback (I’m taking a break from writing the report to write this comment). One thing that has struck me very clearly is that we need to reconceptualise SETs. DeVellis, in Scale Development, notes that a scale generally has higher validity if respondents are asked about their own experiences.
Yet here we are asking students not only to comment on, but to evaluate, their teachers. What we really want students to do in class is concentrate on their learning – not on what the teacher is doing. If they are focussing on what the teacher is doing, then something is not going right. The way we ask now seems even crazier when we consider that the most sophisticated conception of teaching is helping students learn. So why aren’t we asking students about their learning?
The standard format has something to do with it – it’s extremely difficult to ask interesting questions on learning when the wording must align with a 5 point Likert response scale. Despite our best efforts, I do not believe it is possible to prepare a truly student centred and learning centred questionnaire using this format.
An alternate format I came across and really liked is the Modified PLEQ (Devlin 2002, An Improved Questionnaire for Gathering Student Perceptions of Teaching and Learning), but no commercial evaluation software (which we are required to purchase) can handle it. A few overarching questions set the scene for the nature of the class, but the general question format goes: In [choose from drop-down list] my learning was [helped/hindered] when [fill in the blank] because [fill in the blank]. The drop-down list would include options such as lectures, seminars/tutorials, a private study situation, preparing essays, labs, field trips, etc. After completing one question the student has the option to fill in another … and another … and another … for as long as they want.
Think about what information we could actually get on student learning if we started asking like this! No teacher ratings, all learning. The only numbers that would emerge would be the #helped and the #hindered.
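As a minimal sketch of how responses in this format could be tallied, assuming hypothetical field names (the actual Devlin 2002 instrument may structure its data differently):

```python
# Each Modified-PLEQ-style response records a context, whether learning
# was helped or hindered, and two free-text blanks. Field names here are
# illustrative assumptions, not the published instrument's schema.
from collections import Counter

CONTEXTS = ["lectures", "seminars/tutorials", "private study",
            "preparing essays", "labs", "field trips"]

responses = [
    {"context": "lectures", "effect": "helped",
     "what": "worked examples", "because": "I could follow each step"},
    {"context": "lectures", "effect": "hindered",
     "what": "the pace", "because": "slides moved too fast for notes"},
    {"context": "labs", "effect": "helped",
     "what": "pair work", "because": "we could puzzle it out together"},
]

# The only numbers that emerge: #helped and #hindered, per context.
tally = Counter((r["context"], r["effect"]) for r in responses)

for context in CONTEXTS:
    helped = tally[(context, "helped")]
    hindered = tally[(context, "hindered")]
    if helped or hindered:
        print(f"{context}: {helped} helped, {hindered} hindered")
```

The free-text “what” and “because” fields stay qualitative; nothing in the tally rates the teacher.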
Keep in mind “Goodhart’s Law” – When a measure becomes a target, it ceases to be a good measure.
For example, if youth unemployment figures become the main measure, governments may be tempted to go for the low-hanging fruit and the short term (e.g. a work-for-the-dole stick to steer unemployed people into study or the army).
I totally agree with most of the comments here. I find student evaluations virtually meaningless as measures of a teacher’s effectiveness. They are measures of student perception, NOT of learning. Yet university administrators – deans, department chairs – persist in using them to evaluate faculty performance in the classroom, to the point where many instructors have had their careers torn apart. It’s an absolute disgrace! But no one seems to care – that’s the sick thing about it!
Satisfaction cannot be simply correlated with teaching quality. The evidence is that students are most “satisfied” with courses that support a surface learning approach – what the student “needs to know” to pass the course. Where material and delivery is challenging, this generates less crowd approval but, conversely, is more likely to be “good teaching” as this supports deep learning.
Our challenge is to achieve deep learning and still generate rave satisfaction reviews. If any reader has the magic recipe, I would be pleased to learn of it.
Maybe it is about time we started calling it what it is and got Michelin to develop the star rating system for our universities.
Nevertheless I appreciate everyone’s thoughtful comments. Muvaffak, I agree with you about the importance and also the difficulty of measuring student learning. Cathryn, thank you for taking a break from your project to give us an overview.
My story: the best professor and mentor in my life (I spent a total of 21 years as a student in higher education), the professor from whom I learned indispensable and enduring habits of thought that have become more important with each passing year, was one whom the other graduate students in my first term told me–almost unanimously– to avoid at all costs.
Former Provost and Vice Chancellor for Academic Affairs at Winston Salem State University & President of HigherEd SC.
I am not sure that course evaluations based on one snapshot measure “teacher effectiveness”. For various reasons, some ineffective teachers get good ratings by pandering to the lowest level of intellectual laziness. However, consistently looking at comments and some other measures may yield indicators of teachers who are unprepared, do not provide feedback, do not adhere to a syllabus of record, and do not respect students in general. Part of that depends on how the questions are crafted.
I believe that a self-evaluation by the instructor over the course of a semester could yield invaluable information. Using a camera and other devices, ask instructors to take snapshots of the teaching and learning in their classroom over a period of time and then ask for a self-evaluation. For the novice teacher, that material could be reviewed by senior faculty to help the junior faculty member improve his or her delivery. Many instructors are experts in their field but lack exposure to different methods of instructional delivery. I would like to see a scale that measures an instructor’s range, using lecture as the base of instruction and moving up through problem-based learning, service learning, and undergraduate research, gauging the different approaches (pedagogy, andragogy, heutagogy, paragogy, etc.) that engage students in active learning.
I wanted to piggyback on Cathryn’s comment above and align myself with how many of you seem to feel about student evaluations. The quantitative part of student evals is problematic, for all of the reasons mentioned already. But the open-ended feedback that is (usually) part of student evaluations is where I believe some real value can be gained, both for administrative purposes and for instructor development.
When allowed to speak freely, what are students saying? Are they lamenting a particular aspect of the course/instructor? Is that one area coloring their response across all questions? These are all important considerations, and provide a much richer source of information for all involved.
Sadly, the quantitative data is what most folks gravitate to, simply because it’s standardized and “easy”. I don’t believe that student evaluations are a complete waste of time, but I do think that we tend to focus on the wrong information. And, of course, this ignores the issues of timing and participation rates that are probably another conversation altogether!
‘What the Student Does: teaching for enhanced learning’ by John Biggs in Higher Education Research & Development, Vol. 18, No. 1, 1999.
“The deep approach refers to activities that are appropriate to handling the task so that an appropriate outcome is achieved. The surface approach is therefore to be discouraged, the deep approach encouraged – and that is my working definition of good teaching. Learning is thus a way of interacting with the world. As we learn, our conceptions of phenomena change, and we see the world differently. The acquisition of information in itself does not bring about such a change, but the way we structure that information and think with it does. Thus, education is about conceptual change, not just the acquisition of information.” (p. 60)
This is the approach higher education is trying to adopt at the moment, as far as I’m aware.
My Human Resource students will focus on this issue in a class debate “Should student evaluation data significantly impact faculty tenure and promotion decisions?” One side will argue “yes, it provides credible data that should be one of the most important elements” and the other group will argue against this based on much of what has been said above. They will say student evaluations are basically a popularity contest and faculty may actually be dumbing down their classes in order to get higher ratings.
To what extent is student data used in faculty tenure and promotion decisions at your institutions?
Associate Professor at Institute of Education, IIUM
Cindy, it is used in promotion decisions at my university, but it counts for only a small percentage of the total points. Yet this issue is still a thorny one for some faculty.
How open are we? Is learning about the delivery of a subject only, or about building soft skills as well? If we as teachers are facilitating learning in a conducive manner, would it not lead to at least an average TE score, and thus indicate our teaching effectiveness at a base level? A qualitative approach would indeed be far better if we intend to accomplish the actual purpose of TE, i.e. reflection for continual improvement. More and more classrooms are becoming learner-centred, and to accomplish this the learners’ say is vital.
Some students using these as platforms for personal whims need not be a major concern, since TE scores are averaged out. Last but not least, TEs are like dynamite and must be handled by experts. They are one means of assessing the gaps, if any, between teaching and learning strategies. They must not be used for performance evaluation; if they are, then all the other factors – such as the number of students, absenteeism, and pass rates (indeed HD and D rates) over a minimum of three terms – must be included alongside.
Teaching colleague at Ben Gurion University of the Negev
I implement a semester-long self-evaluation process in all my mathematics courses. Students get 3 points (out of 100) for anonymously filling in an online questionnaire every week. They rate (1–5) their personal class experience (I was bored – I was fascinated; I understood nothing – I understood everything; the tutorial sessions didn’t/did help), report whether they visited the lecturer’s/TA’s office hours, and state how many hours of self-learning they spent that week. They can also add verbal comments.
I started this 10 years ago when I built a new special course, to help me “hear” the students (80–100 in each class) and to better adjust myself and the content to my new students. I used to publish a weekly response to the verbal comments, accepting some and rejecting others, while making sure to explain and justify every decision of mine.
Not only did it help me improve my teaching and the course, but it turned out that it actually created a very solid perception of me as a caring teacher. I always was a very caring teacher (some of my colleagues accuse me of being over-caring…), but it seems that “forcing” my students to give feedback throughout the semester brought that out into the open.
I still use semester-long feedback in all my courses, and I consider both the quantitative and the qualitative responses. It helps me see that the majority of students understand me in class. I ignore those who choose “I understood nothing” – obviously, if they indeed understood nothing they would not have kept coming to class… (they can instead choose “I didn’t participate” or “I don’t want to answer”).
I ignore all verbal comments that aim to “punish” me, and I change things when I think the students are right.
Finally, being a math lecturer for non-major students is extremely hard, both academically and emotionally. Most students are not willing to do what is needed in order to understand the abstract/complicated concepts and processes.
Only a few (“courageous”) students will attribute their lack of understanding to the fact that they did not attend all classes, or that they weren’t really focused on learning (probably spending a lot of time on Facebook during class…), or that they didn’t go over class notes at home and come to office hours when they didn’t understand something, etc.
I am encouraged by the fact that about two thirds of the students who attend classes report “understood enough” or above (3–5) all semester long. This is especially important because only 40–50% of students fill in the formal end-of-semester SE, and you can guess how the majority of them rate my performance. Students fill in the SE before the final exam, but (again) you can guess how two midterms with about 24% failure rates influence their evaluation of my teaching.
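A rough sketch of how weekly self-report ratings like these might be aggregated, treating the non-responses described above separately. The function name and the “understood enough” threshold of 3 are my own assumptions, not the commenter’s actual instrument:

```python
# Hypothetical weekly records: each rating is 1-5, or None for
# "I didn't participate" / "I don't want to answer".
weekly_ratings = {
    "week 1": [4, 5, 3, None, 2],
    "week 2": [5, 4, 4, 1, None],
    "week 3": [3, 3, 5, 4, 4],
}

def share_understood_enough(ratings, threshold=3):
    """Fraction of actual responses at 'understood enough' (3) or above."""
    answered = [r for r in ratings if r is not None]
    if not answered:
        return None  # nobody answered this week
    return sum(r >= threshold for r in answered) / len(answered)

for week, ratings in weekly_ratings.items():
    share = share_understood_enough(ratings)
    print(f"{week}: {share:.0%} understood enough or better")
```

Tracking this share week by week, rather than waiting for a single end-of-semester score, is what lets an instructor adjust mid-course.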
I think it’s important to avoid defensive responses to the question. Most participants have assumed that we are talking about individual teachers being assessed through questionnaires, and I share everyone’s reservations about that. I entirely agree that deep learning is what we need to go for, but given the huge amounts of public money that are poured into our institutions, we need to have some way of evaluating whether what we are doing is effective or whether it isn’t.
I’m not impressed by institutions that are obsessed only with evaluation by numbers. However, there is some merit in monitoring aggregated statistics over time and detecting statistically significant variations. If average satisfaction rates in Engineering have gone down every year for five years shouldn’t we try and find out why? If satisfaction rates in Architecture have gone up every year for five years wouldn’t it be interesting to know if they have been doing something to bring that about that might be worthwhile? It might turn out to be a statistical artifact, but we need to inquire into it, and bring the same arts of critical inquiry to bear on the evidence that we use in our scholarship and research.
But I always encourage faculties and institutions to supplement this by actually getting groups of students together and talking to them about their student experience as well. Qualitative responses can be more valuable than quantitative surveys. We might actually learn something!
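The kind of longitudinal monitoring described above can be sketched simply. The yearly means below are invented purely for illustration, and a real analysis would test for statistical significance rather than just eyeballing the slope sign:

```python
# Sketch: detecting sustained drift in aggregated satisfaction scores.

def slope(years, means):
    """Ordinary least-squares slope of mean satisfaction vs. year."""
    n = len(years)
    mx = sum(years) / n
    my = sum(means) / n
    num = sum((x - mx) * (y - my) for x, y in zip(years, means))
    den = sum((x - mx) ** 2 for x in years)
    return num / den

years = [2010, 2011, 2012, 2013, 2014]
engineering = [3.9, 3.8, 3.6, 3.5, 3.3]   # drifting down: find out why
architecture = [3.4, 3.5, 3.7, 3.8, 4.0]  # drifting up: what changed?

print(f"Engineering slope:  {slope(years, engineering):+.2f}/year")
print(f"Architecture slope: {slope(years, architecture):+.2f}/year")
```

A slope is only a prompt for the qualitative inquiry the comment recommends – a trigger to go and talk to students, not a verdict on a faculty.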
Associate Professor at UNESP – São Paulo State University
Like everyone here, I also think that these evaluation forms do not truly measure teaching effectiveness. That is quite a hard thing to evaluate, since the effect of the learning will be felt several years later, as graduates perform their job duties.
Besides that, some observations made by students are interesting for our own growth. I usually get these through informal talks with the class or even some students.
In another direction, some of the previous comments address deep/surface learning, basically stating that deep learning is the right way to go. I have to disagree with this for some of the content that has to be taught. In my case (teaching computer science majors) it is important, for example, that every student have surface knowledge of operating systems design, but those who are going to work as database analysts do not need to know the deep concepts involved (and the same is true of database concepts for a network analyst). So surface learning also has its place in professional formation.
Senior Consultant and Lecturer at university of nicosia
The usefulness of student evaluations, like that of all similar surveys, is closely linked to the particular questions respondents are asked to answer. There are objective, factual questions such as “Does he start class on time?” or “Does he speak clearly?”, and very personal ones such as “Does he give fair grades?” The effectiveness of a teacher could be more appropriately linked to suitably phrased questions such as “Has he motivated you to learn?” and “How much have you benefited from the course?” The responses to these questions could be further assessed by comparing the final grades given in that particular course with the class’s performance in the other courses they have taken during that semester. So, to assess teacher effectiveness, one needs to ask relevant questions and perform the appropriate evaluations.
Michael has an excellent point that some accountability of institutions and programs is appropriate, and that aggregated data or qualitative results can be useful in assessing whether the teaching in a particular program is accomplishing what it sets out to do. Many outcomes studies are set up to measure the learning in an aggregated way.
We may want to remember that our present conventions of teaching evaluation had their roots in the 1970s (in California, if I remember correctly), partly as a response to a system in which faculty, both individually and collectively, were accountable to no one. I recall my student days when a professor in a large public research institution would consider it an intrusion and a personal affront to be asked to supply a course syllabus.
As the air continues to leak out of the USA’s higher education bubble, as the enrollments drop and the number of empty seats rises, it seems inevitable that institutions will feel the pressure to offer anything to make the students perceive their experience as positive. It may be too hard to make learning–often one of the most uncomfortable experiences in life–the priority. Faculty respond defensively because we are continually put in the position of defending ourselves, often by poorly-designed quantitative instruments that address every kind of feel-good hotel concierge aspect of classroom management while overlooking learning.
The evaluation of faculty by students is welcome, and the resulting statistics can be examined with a certain degree of objectivity. An instructor who is strict with students may be ranked low despite being an asset to the department; a ‘free-lance’ teacher may be placed higher despite being a poor teacher. At any rate, it is the HoD’s duty to observe the quality of all teachers, and his objective evaluation is final. Parents’ feedback should also be taken. Teaching is a multi-dimensional task, and student evaluation is just one coordinate.
Associate Professor at University of Wisconsin, Stevens Point
Student evaluations are a terrible tool for measuring teacher effectiveness. They do measure student satisfaction, and to some extent they measure student *perception* of teacher effectiveness. But the effectiveness of a teaching method or of an instructor is poorly correlated with student satisfaction: while there are positive linkages between the two concepts, students are generally MORE satisfied by an easy course that makes them feel good than by a hard course that makes them really think and work (and learn).
Students like things that are flashy, and things that are easy more than they like things that require a lot of work or things that force them to rethink their core values. Certainly there are students who value a challenge, but even those students may not recognize which teacher gave them a better course.
Student evaluations can be used effectively to help identify very poor teaching, but they are useless for distinguishing between adequate and good teaching practices.
Former Administrative Vice-Rector at Universidad Nacional de San Cristóbal de Huamanga
César S. Granados
Retired Professor from The National University of San Cristóbal de Huamanga
Ayacucho, PERÚ
Since teaching effectiveness is a function of teacher competencies, an effective teacher is able to use existing competencies to achieve the desired student results; but student performance depends mainly on the student’s own commitment to achieving those competencies.
The student evaluations I’ve seen are more like customer satisfaction surveys, and in this respect, there is less helpful information for the instructor to improve his or her craft and instead more feedback about whether or not the student liked the experience. Shouldn’t their learning and/or improving skills be at least as important? I’m not arguing that these concepts are mutually exclusive, but the evaluations are often written to privilege one over the other.
There are other problems. Using the same evaluation tool for very different kinds of courses (lecture versus workshop, for instance) doesn’t make a lot of sense. Evaluation language is often vague and puzzling in what it rewards (one evaluation form asks “Was the instructor enthusiastic?” Would an instructor bursting with smiles and enthusiasm but who is disorganized and otherwise less effective be privileged over one who is low-key but nonetheless covers the material effectively?). The “halo effect” can distort findings, where, among other things, more attractive instructors can get higher marks.
Given how many times I’ve heard from students about someone being their favorite instructor because he or she was easy, I question the criteria students may use when evaluating. Instructors are also told that evaluations are for their benefit to improve teaching ability, but then chairs and administrators use them in promotion and hiring decisions.
I think that if the evaluation tool is sound, it can be useful in helping instructors. But, lastly, I think of my own experiences as a student, when I may have disliked or even resented some instructors because they challenged me or pushed me out of my comfort zone to learn new skills or paradigms. I may have evaluated them poorly at the time, only to learn a few years later, with greater maturity, that they not only taught me well but taught me something invaluable, perhaps more so than the instructors I liked. In this respect, it would be fairer to those instructors for me to fill out an evaluation a few years later, to accurately describe their teaching.
Adjunct Professor of Writing at Niagara University
Wow, there are so many valid points raised; so many considerations. In general, I tend to agree with those who believe it gauges student satisfaction more than learning, though there is a correlation between the two. After 13 years as an adjunct at a relatively small, private college, I have found that engagement really is what many students long for. It seems far less about the final grades earned and more about the tools they’ve acquired. It should be mentioned that I teach developmental level composition, and while almost no student earns an A, most feel they have learned much:)
Former director, center for the advancement of teaching at Tel Aviv University
Student ratings of instruction (SRIs) do not measure teaching effectiveness but rather student satisfaction with instruction (as some previous comments on this list suggest). However, there is substantial research evidence for the relationship between SRIs and some agreed-upon measures of good teaching and of student learning. This research is summarized in much detail in my recent book:
Student Ratings of Instruction: A Practical Approach to Designing, Operating, and Reporting (220 pp.) https://www.createspace.com/4065544
ISBN-13:978-1481054331
Associate Professor at Notre Dame University – Louaize
Evaluation, in all its forms, is a complex exercise that requires both knowledge and skill. Further, evaluation is best achieved through a variety of instruments. We know all of this as teachers. The question is how knowledgeable our students are regarding the teaching/learning process, and, moreover, how knowledgeable our administrators are in translating information collected from questionnaires (some of which are of questionable validity) into plausible data-based decisions. I agree that students should have a say in how their courses are conducted. But to use their feedback, quantitatively, to evaluate university professors… I fear that I must hold a very skeptical stance toward such evaluation.
Quite an interesting topic, and I’m reminded of the ancient proverb, “Parts is not parts.” OK, maybe that was McDonalds. This conversation would make a very thoughtful manuscript.
Courses is not courses. Which course will be more popular, “Contemporary Music” or “General Chemistry?”
Search any university using the keywords “really easy course [university].” Those who teach these courses are experts at what they do, and what they do is valuable; however, the workload for the student is minimal.
The major issues: (1) popularity is inversely proportional to workload; and (2) the composition of the questions appearing on course and professor evaluations (CAPEs).
“What grade do you expect in this class? Instructor explains course material well? Lectures hold your attention?”
If Sally gets to listen to Nickelback in one class and then learn quantum mechanics the next period, which course does one suppose best held her attention?
A person about to receive a C- in General Chemistry is probably receiving that C- because s/he was never able to understand the material for lack of striving, and probably hates the subject. That person is very likely to have never visited the professor during office hours for help. Logically one might expect low approval ratings from such a scenario.
A person about to receive an A in General Chemistry is getting that A because s/he worked his/her tail off. S/he was able to comprehend mostly everything the professor said, and most probably liked the course. Even more, s/he probably visited the professor during office hours several times for feedback.
One might argue that the laws of statistics will work in favor of reality, however that’s untrue when only 20% of students respond to CAPEs. Those who respond either love the professor or hate the professor. There’s usually no middle ground. Add this to internet anonymity, and the problem is compounded. I am aware of multiple studies conducted by universities indicating high correlation between written CAPEs and electronic CAPEs, however I’d like to bring up one point.
Think of the last time you raised your voice to a customer service rep on the phone. Would you have raised your voice to that rep in person?
There’s not enough space to comment on all the variables involved in CAPE numerical responses. As of last term I stopped paying attention to the numbers and focused exclusively on the comments. There’s a lot of truth in most of the comments.
I would like to see the following experiment performed. Take a group of 10,000 students. Record their CAPE responses prior to receiving their final grade. Three weeks later, have them re-CAPE. One year later, have them re-CAPE again. Two years. Three years. Finally, have them re-CAPE after getting a job.
Many students don’t know what a professor did for them until semesters or years down the road. They’re likely to realize how good a teacher the professor was through their performance in future courses in the same subject requiring cumulative mastery.
Do I think student evaluations measure teaching effectiveness? CAPEs is not CAPEs.
Owner of AREND.co, a professional learning community for educators
No, it does not. Classroom effectiveness should be measured by students’ results, the teacher’s attitude towards students, and the quality of the teacher’s preparation. I worked with a man who told a story about the different hats and learning, and I thought that was a new way of looking at learning. To my utmost shock, a colleague who sat in told me that the man had done it exactly the same way – same jokes, etc. – when he took the course five years ago. Really – nothing changed, no new technology, no new insights, no learning over a period of five years, nothing? And he is rated very highly – head of a new wing. Who rated him? How? And why did it not affect his teaching at all?
Chief Executive at Institut Sains @ Teknologi Darul Takzim ( INSTEDT)
If we are looking for quality, we have to get information about our performance in the lecture room. Six elements are normally considered: (1) the teaching plan and lecture content; (2) teaching delivery; (3) fair and systematic evaluation of students’ work; (4) whether the teaching follows the semester plan; (5) whether the lecturer follows the timetable and is always on time for lecture hours; and lastly (6) the relationship between lecturer and students.
Do we need to be reminded that educators were students at one time or the other? So why not have students evaluate the performance of a teacher? After all, the students are contributing to their own investment in what is significant for survival; and whether it is effective towards career development to attain their full potential as a human sentient being towards the greater good of humanity; anything else falls short of human progress in a tiny rotating planet cycling through the solar system with destination unknown! Welcome to the ‘Twilight Zone.”
Would you rather educate a student to make a wise decision to accept 10 gallons of water in a desert? Or accept a $1 million check that further creates mirages and illusory dreams of success?
I think what my students say about me is important. I’m most interested in the comments they make and have used these to pilot other ideas or adjust my approach.
I’ve had to learn to not beat myself up about a few bad comments or get carried away with a few good ones.
I also use the assessment results to see if the adjustments made have had the intended impact. I use the VLE logs as well to see how engaged the students are with the materials and what tools they use and when.
I find the balance keeps me grounded. I want my students to do well and have fun. The dashboard on your car has multiple measures. Why should teaching be different? Like the car I listen for strange noises and look out the window to make sure I’m still on the road.
I think that most student evaluations are only reaction measures, not true evaluations of learning outcomes or teaching effectiveness, and evaluations are often tainted if the student gets a lower mark than anticipated.
I think these types of evaluation are only indicative; they should not really be used to measure teacher or teaching effectiveness, and should not be allowed to affect teachers' careers.
I note Stephen's point about multiple measures. Unfortunately, most evaluations are quick and dirty, and certainly do not provide multiple measures.
No, students' evaluations cannot fully measure teaching effectiveness.
However, for the relationship to be mutually beneficial, you have to accept their judgement on the matter. Unfortunately, a unique teacher for all categories (types) of students does not exist in our dynamic world.
Professor, Executive Dean, Faculty of Health, Federation University Australia
Student evaluations are merely popularity contests. They tempt academics to 'dumb down' the content in order to be liked and evaluated positively. This is a dangerous and slippery slope that can result in graduates being ill-prepared for the professions and industries they seek to enter.
PRINCE 2 Registered Practitioner at Higher Colleges of Technology
In my opinion, the student-teacher evaluations are measuring popularity, as others suggested, but the problem is also that some of the questions and intentions of the assessment are not fulfilled because the wrong questions are asked. I have never seen in these instruments a question asking students about their expectations of the teacher and the course as such. To me that is more important than asking whether the student likes the teaching style, which students cannot judge anyway. Teachers who give a test shortly before the evaluation are likely to get lower ratings than those who give tests soon after the evaluation.
I agree with other contributors. The evaluations are akin to a satisfaction survey. Personally, if, for example, I stay at a hotel, I only fill in the satisfaction survey if something is wrong. If the service is as I expect, I don't bother with the survey.
I also feel that students rate courses or modules on a popularity basis. A module on a course may be enjoyable, or fun, but not necessarily better taught than a less entertaining subject.
Unfortunately, everyone seems to think that student evaluations are the main criterion by which to judge a course.
First of all, it would help if we stopped referring to them as "student" or "course" evaluations. Students are not qualified to evaluate; that is what administrators are paid to do. However, students are qualified to provide feedback to instructors and administrators about their perceptions of what occurred in the class and of how much they believe they learned. How can that not be valuable information, especially for developmental purposes about how to teach more effectively? Evaluation is not an event that happens at the end of a course; it is an ongoing process that requires multiple indicators of effectiveness (e.g., student ratings of the course, peer evaluations, administrator evaluations, course design, student products). By triangulating that combination of evidence, administrators and faculty can then make informed judgments and evaluate.
The student/teacher relationship around the subject matter is a 'triangle.' The character of the triangle has a lot to do with a student's reception of the material and the teacher.
The Student:
The well-prepared student and the intrinsically motivated student can more readily thrive in the relationship. If s/he is thriving s/he may be more inclined to rate the teacher highly. The poorly prepared student or the student who requires motivation from ‘outside’ is much less likely to thrive and more likely to rate a teacher poorly.
The Teacher:
The well-prepared teacher and the intrinsically motivated teacher can more readily thrive in the relationship. If s/he is thriving students may be more inclined to rate the teacher highly. The poorly prepared teacher or the teacher who requires motivation from ‘outside’ is much less likely to thrive and more likely to achieve poor teacher ratings.
The Subject Matter:
The content and form of the subject matter are crucial, especially in their relation to the student and teacher.
Student evaluations do not measure teaching effectiveness. I have been told I walk on water and am the worst teacher ever. The major difference was the level of student participation. The more they participated the better I was.
What I use them for is a learning tool. I take the comments apart looking for snippets that I can use to improve my teaching.
I have been involved in a portfolio program for the past two years. One consistent finding is that the better the measured outcomes, the worse the student reviews.
Former Provost and Vice Chancellor for Academic Affairs at Winston Salem State University & President of HigherEd SC.
Steve,
Have you ever been part of a tenure or promotion committee evaluation process? In my 35 years of experience, faculty members do not operate on the ideal, smooth, linear trajectory you have described. On the contrary, committees partition evaluations into categories and look at student course evaluations as the evidence of an instructor's ability to teach. However, faculty can choose which evaluations they submit and which comments they want to include as part of the record. I have never seen "negative comments" presented as evidence of "ineffective teaching." A five-point scale is used, and whenever the score falls below 3.50, it becomes a great concern for our colleagues!
There are many other ways of assessing faculty through the peer group. There can be a weekly seminar at which faculty members are expected to present, with other faculty members and students as the audience. This measures how much interest a faculty member has in their chosen areas. The Chair (HoD) can also talk to selected students (chosen to represent the highly motivated, the average, and those who take it easy) and reach a decision on tenure track. As I said earlier, student evaluations can be one of many aspects. In my own experience, evaluation by other (senior) faculty is often detrimental to the progress of junior faculty. One might ask whether the HoD, as the senior-most person, is impartial; but one thing is clear: the occupant of the Chair should have some 'vision' and transcend discrimination and partisan feelings. In India we say: "(Sar)Panch me Parameshwar rahtha hai," meaning: in the position of the judge, God dwells. Think of Becket and King Henry II. As archbishop, Thomas Becket was a completely changed person, fully submerged in the divine order. So the Chair is supreme. Student evaluation is just one aspect.
Amazing how things work…I’m actually in the process of framing out a research project related to this very question. Does anyone have any suggestions for specific papers I should look at i.e. literature related to the topic?
With respect to your question, I believe the answer depends on the questions that get asked.
The school-derived questionnaires nearly always ask the wrong questions, for one.
I’ve always thought students should wait some years (3-20) before providing feedback, because the final day of class is too recent to do a good assessment.
Open University Coursework Consultant, Research Methods
I’m quite late to the topic here, and much of what I think has been said by others. There is a difference between the qualitative and quantitative aspects of student evaluations – I am always fascinated to find out what my students (and peers, of course, though that is a different topic) do/do not think I am doing well so I can learn and adapt my teaching. For this reason, I prefer a more continuous student evaluation than the questionnaire at the end of the course – if I need to adapt to a particular group, I need the information sooner rather than later.
However, the quantitative side means nothing unless it is tied back to hard data on how the students did in their assessments – an unpopular teacher can still be a *good* teacher of the subject at hand! And the subject matter counts a lot – merely teaching an unpopular but compulsory subject (public law, for instance!) tends to make the teacher initially unpopular in the minds of students – a type of shooting the messenger.
Teaching isn’t a beauty contest – these metrics need to be used in the right way, and combined with other data if they are to say anything about the teaching.
I wrote a paper about this issue a few years ago. Briefly, the thrust of my argument is that student opinions should not be used as the basis for evaluating teaching effectiveness because these aggregated opinions are invalid measures of quality teaching, provide no empirical evidence in this regard, are incomparable across different courses and different faculty members, promote faculty gaming and competition, tend to distract all participants and observers from the learning mission of the university, and ensure the sub-optimization and further decline of the higher education system. Using student opinions to evaluate, compare, and subsequently rank faculty members represents a severe form of a problem Deming referred to as a deadly disease of Western-style management. The theme of the alternative approach is that learning on a program-wide basis should be the primary consideration in the evaluation of teaching effectiveness. Emphasis should shift from student opinion surveys to the development and assessment of program-wide learning outcomes. To achieve this shift in emphasis, the university performance measurement system needs to be redesigned to motivate faculty members to become part of an integrated learning development and assessment team, rather than a group of independent contractors competing for individual rewards.
Martin, J. R. 1998. Evaluating faculty based on student opinions: Problems, implications and recommendations from Deming’s theory of management perspective. Issues in Accounting Education (November): 1079-1094. http://maaw.info/ArticleSummaries/ArtSumMartinSet98.htm
Just to add my own two cents (two and a half Canadian cents at this point), I think students have much of value to tell us about their experience in our courses and classes, information that we can use to improve their learning and become more effective teachers. They are also able to inform academic administrators of the degree to which teachers fulfill their basic duties and perform the elementary tasks they are assigned. They have far less to tell us about the value of what they’re learning to their future, their professions … and they are perhaps not the best qualified to identify effective learning and teaching techniques and methods. Those sorts of things are better assessed by knowledgeable, expert professional and academic peers.
Member of Academic Committees of some Universities & Retd. Prof., Dept. of Botany, University of Rajasthan, Jaipur
The student rating system may not necessarily be a reliable method of assessing teaching effectiveness, because it depends upon individual grasping/understanding power, intelligence, and study tendency. A teacher may do his/her job well, but how many students understand it well? That is reflected invariably in the marks they obtain.
How does it help faculty? High-end lecture capture. For collaboration between two experts, they can use the green screen and the background.
How are decisions made? Are faculty involved? This center is a one-time deal, with money spent on production. Innovative technology for $40K; it might be more. No time to survey people on what they want. There are other technologies that people can try out and then expand on.
A bunch of smart boards, but not sure if people are using them. Software and apps are only here at the CETL, not on the rest of the campus. People will try but get stuck with that technology only.
The web page and LinkedIn are the social media they are using.
The CETL is housing people with different bosses. The closest collaboration is between technology and the CETL, not research yet. The D2L specialist and hardware people are coming to the CETL. The StarID conversion is hosted in the CETL. The library had to give up space to the CETL, which, as at SCSU, is problematic.
Assessment certificate. Sustainability and budget.
Summer money for class redesign. A cohort of people who can focus on that, with flipped classroom, study abroad, etc., as themes.
The new provost wants decisions to be data-driven. Is there an office like institutional research? They use only quantitative data but are thinking about qualitative interviews.