
survey for mobiles

https://smaudience.surveymonkey.com/webinar-google-mobile-surveys.html

Join Mario Callegaro, Senior Survey Research Scientist at Google UK, and one of our own survey research scientists, Sarah Cho, on February 24 at 10 am PT / 1 pm ET for our webinar, Market research surveys gone mobile: Optimizing for better results.

Mario Callegaro

Senior Survey Research Scientist

Quantitative Marketing Team, Google UK

 

Sarah Cho

Survey Research Scientist

SurveyMonkey

My notes from the webinar:

Surveys uncover the WHY behind Big Data.

Why mobile matters: tablet and smartphone penetration is around 60-80% in Europe. According to Pew, in the US smartphone penetration is 68% and tablet penetration 45%.

Faster reaction times but longer questionnaire completion times on smartphones = device effects.

Survey design device vs. survey-taking device: when there is a mismatch, questions arise.

Five strategies for handling mobile phone respondents:

1. Do nothing. (SurveyMonkey: do all surveys have to be mobile optimized? No, so make sure you think about the context in which you are sending the survey out.)
2. Discourage the use of mobile phones for answering.
3. Optimize the web questionnaire for mobile browsers.
4. Offer a mobile app.

Design considerations for multi-device surveys. Two “actors”: the survey designer and the survey platform.

Confounds when interpreting findings across devices: use a homogeneous population (e.g., students).

Difference between mouse and fingers as input devices.

What about tablets? As long as Flash is not used, a tablet is very much the same as a laptop/desktop. Phablets (the iPhone-style growth of the screen).

mobile survey design tips (Sarah)

Multiple choice: OK to use, but keep wording short and format response options vertically instead of horizontally.

Open-ended question type: hard to type on a phone (but no word on voice recognition?).

logo

Multimedia: keep images clear; avoid video where possible (bandwidth constraints); if video is needed, use YouTube, so every device can play it, versus Flash, JavaScript, etc.

Testing and length: as usual.

URL: as short as possible; consider a QR code.
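A QR code pointing at the survey URL takes only a couple of lines to produce. Below is a minimal sketch in Python, assuming the third-party qrcode package (pip install qrcode[pil]); the survey URL is hypothetical:

```python
# Minimal sketch: turn a (short) survey URL into a scannable QR code.
# Assumes the third-party "qrcode" package; the URL is hypothetical.
import qrcode

survey_url = "https://example.com/s/mobile"  # keep the URL as short as possible

img = qrcode.make(survey_url)  # returns a PIL image of the QR code
img.save("survey_qr.png")      # embed in flyers, slides, or email invitations
```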

growth of survey taking on mobile devices

 

 

handbook of mobile learning

Routledge. (n.d.). Handbook of Mobile Learning (Hardback). Retrieved May 27, 2015, from http://www.routledge.com/books/details/9780415503693/

Crompton, H. (2013). A historical overview of mobile learning: Toward learner-centered education. Retrieved June 2, 2015, from https://www.academia.edu/5601076/A_historical_overview_of_mobile_learning_Toward_learner-centered_education

Crompton, Muilenburg and Berge’s definition for m-learning is “learning across multiple contexts, through social and content interactions, using personal electronic devices.”
The “context” in this definition encompasses m-learning that is formal, self-directed, and spontaneous, as well as learning that is context-aware and context-neutral.
Therefore, m-learning can occur inside or outside the classroom: participating in a formal lesson on a mobile device; self-directed, as a person determines his or her own approach to satisfy a learning goal; or spontaneous, as a person can use the device to look up something that has just prompted an interest (Crompton, 2013, p. 83). (Gaming article, Tallinn)

Constructivist learning in the 1980s – Following Piaget's (1929), Bruner's (1996), and Jonassen's (1999) educational philosophies, constructivists proffer that knowledge acquisition develops through interactions with the environment (p. 85). “The computer was no longer a conduit for the presentation of information: it was a tool for the active manipulation of that information” (Naismith, Lonsdale, Vavoula, & Sharples, 2004, p. 12).

Constructionist learning in the 1980s – Constructionism differed from constructivism in that Papert (1980) posited an additional component: students learn best when they are actively involved in constructing social objects. The tutee position: teaching the computer to perform tasks.

Problem-based learning in the 1990s – In PBL, students often worked in small groups of five or six to pool knowledge and resources to solve problems. This launched the sociocultural revolution, focusing on learning in out-of-school contexts and the acquisition of knowledge through social interaction.

Socio-constructivist learning in the 1990s – Socio-constructivists believe that social and individual processes are interdependent in the co-construction of knowledge (Sullivan Palincsar, 1998; Vygotsky, 1978).

pp. 96-97. Keegan (2002) believed that e-learning was distance learning which has been converted to e-learning through the use of technologies such as the WWW. An open question was which electronic media and tools constituted e-learning: e.g., did it matter if the learning took place through a networked technology, or was it simply learning with an electronic device?

pp. 99-100. Traxler (2011) described five ways in which m-learning offers new learning opportunities: 1. contingent learning, allowing learners to respond and react to the environment and changing experiences; 2. situated learning, in which learning takes place in surroundings applicable to the learning; 3. authentic learning; …

Diel, W. (2013). M-Learning as a subfield of open and distance education. In: Berge and Muilenburg (Eds.). Handbook of Mobile Learning.

  1. (p. 15) Historical context in relation to the field of distance education (embedded librarian)
  2. (p. 16) Definition of independent study (workshop on m-learning and distance education)
  3. (p. 17) Theory of transactional distance (Moore)

Cochrane, T. (2013). A Summary and Critique of M-Learning Research and Practice. In: Berge and Muilenburg (Eds.). Handbook of Mobile Learning.
(Galin class, workshop)

p. 24

According to Cook and Sharples (2010), the development of m-learning research has been characterized by three general phases: a focus upon devices, a focus on learning outside the classroom, and a focus on the mobility of the learner.

p. 25

Early m-learning studies focused upon content delivery for small-screen devices and the PDA capabilities of mobile devices, rather than leveraging the potential of mobile devices for collaborative learning, as recommended by Hoppe, Joiner, Milrad, and Sharples.

p. 26 Large-scale m-learning projects: several larger m-learning projects have tended to focus on specific groups of learners rather than developing pedagogical strategies for the integration of m-learning with tertiary education in general.

p. 27

m-learning research funding

In comparison, m-learning research projects in countries with smaller population sizes, such as Australia and New Zealand, are typically funded on a shoestring budget.

p. 28

M-learning research methodologies

M-learning research has been predominantly characterized by short-term case studies focused upon the implementation of rapidly changing technologies with early adopters, but with little evaluation, reflection, or emphasis on mainstream tertiary-education integration.

 

p. 29 Identifying the gaps in m-learning research

 

Lack of explicit underlying pedagogical theory; lack of transferable design frameworks.

 

Cochrane, T. (2011). mLearning: Why? What? Where? How? Proceedings ascilite 2011 Hobart: Full Paper 250. http://www.ascilite.org/conferences/hobart11/downloads/papers/Cochrane-full.pdf
Exploring mobile learning success factors: http://files.eric.ed.gov/fulltext/EJ893351.pdf
https://prezi.com/kr94rajmvk9u/mlearning/
https://thomcochrane.wikispaces.com/MLearning+Praxis

Pachler, N., Bachmair, B., and Cook, J. (2013). A Sociocultural Ecological Frame for Mobile Learning. In: Berge and Muilenburg (Eds.). Handbook of Mobile Learning.
(Tom video studio)

p. 35 A line of argumentation that defines mobile devices such as mobile phones as cultural resources. Mobile cultural resources emerge within what we call a “mobile complex,” which consists of specific structures, agency, and cultural practices.

p. 36 Pedagogy looks for learning in the context of identity formation of learners within a wider societal context. However, at the beginning of the twenty-first century, an economy-oriented service function of learning, driven by targets and international comparisons, has started to occupy education systems and the schools within them. Dunning (2000) describes the lengthy transformation process from natural assets (land, unskilled labor) to tangible assets (machinery) to intangible created assets such as knowledge and information of all kinds. Araya and Peters (2010) describe the development of the last 20 years in terms of phases: from the post-industrial economy to the information economy to the digital economy to the knowledge economy to the creative economy. Cultural ecology can refer to the debate about natural resources; we argue for a critical debate about the new cultural resources, namely mobile devices and their services. For us, the focus must not be on the exploitation of mobile devices and services for learning, but instead on the assimilation of learning with mobiles in informal contexts of everyday life into formal education.

p. 37

Ecology comes into being if there exists a reciprocity between perceiver and environment. Translated to m-learning processes, this means that there is a reciprocity between the mobile devices in the activity context of everyday life and formal learning.

p. 45

Rather than focusing on the acquisition of knowledge in relation to externally defined notions of relevance, increasingly in a market-oriented system the individual faces the challenge of shaping his/her knowledge out of his/her own sense of his/her world. Information is material which is selected by individuals to be transformed by them into knowledge to solve a problem in the life-world.

Crompton, H. (2013). Mobile Learning: New Approach, New Theory. In: Berge and Muilenburg (Eds.). Handbook of Mobile Learning.

p. 47 As philosophies and practice move toward learner-centered pedagogies, technology, in a parallel move, is now able to provide new affordances to the learner, such as learning that is personalized, contextualized, and unrestricted by temporal and spatial constraints.

The necessity for m-learning to have a theory of its own, describing exactly what makes m-learning unique from conventional, tethered electronic learning and traditional learning.

p. 48 Definition and devices. Four central constructs: learning pedagogies, technological devices, context, and social interactions.

“learning across multiple contexts, through social and content interactions, using personal electronic devices.”

It is difficult, and ill-advised, to determine specifically which devices should be included in a definition of m-learning, as technologies are constantly being invented or redesigned. (My note: against the notion that since D2L is a MnSCU-mandated tool, it must be the one and only.) One should consider m-learning as the utilization of electronic devices that are easily transported and used anytime and anywhere.

p. 49 E-learning does not have to be networked learning; therefore, e-learning activities could be used in the classroom setting, as they often are.

Why m-learning needs a different theory beyond e-learning: conventional e-learning is tethered, in that students are anchored to one place while learning. What sets m-learning apart from conventional e-learning is the very lack of those spatial and temporal constraints; learning has portability, ubiquitous access, and social connectivity.

p. 50 Dominant terms for m-learning should include spontaneous, intimate, situated, connected, informal, and personal, whereas conventional e-learning terms should include computer, multimedia, interactive, hyperlinked, and media-rich environment.

p. 51 Criteria for m-learning
A second consideration is that one must be cognizant of the substantial amount of learning taking place beyond the academic and workplace settings.

p. 52 Proposed theories

Activity theory: Vygotsky and Engestroem

Conversation theory: Pask (1975), a cybernetic and dialectic framework for how knowledge is constructed. Laurillard (2007): although conversation is common to all forms of learning, m-learning can build in more opportunities for students to have ownership and control over what they are learning through digitally facilitated, location-specific activities.

p. 53 Multiple theories

p. 54 Context is a central construct of mobile learning. Traxler (2011) described the role of context in m-learning as “context in the wider context,” as the notion of context becomes progressively richer. This theme fits with Naismith et al.'s situated theory, which describes m-learning activities as promoting authentic context and culture.

p. 55 Connectivity
Unlike e-learning, the learner is not anchored to a set place; this links to Vygotsky's sociocultural approach.
Learning happens within various social groups and locations, providing a diverse range of connected learning experiences. Furthermore, connectivity is without temporal restraints, such as the schedules of educators.

p. 55 Time
m-learning as “learning dispersed in time”

p. 55 Personalization
(My note: student-centered learning.)

Moura, A., Carvalho, A. (2013). Framework For Mobile Learning Integration Into Educational Contexts. In: Berge and Muilenburg (Eds.). Handbook of Mobile Learning.

p. 58 The framework is based on a constructivist approach, activity theory, and the attention, relevance, confidence, satisfaction (ARCS) model: http://www.arcsmodel.com/#!
http://torreytrust.com/images/ITH_Trust.pdf

To set a didactic model that can be applied to m-learning requires looking at the characteristics of specific devices.

https://www.researchgate.net/profile/Nadire_Cavus/publication/235912545_Basic_elements_and_characteristics_of_mobile_learning/links/02e7e526c1c0647142000000.pdf
https://eleed.campussource.de/archive/9/3704

Instructional Design

7 Things You Should Know About Developments in Instructional Design

http://www.educause.edu/library/resources/7-things-you-should-know-about-developments-instructional-design

Please read the entire EDUCAUSE article here: eli7120

discussion of IMS with faculty:

  • pedagogical theories
  • learning outcome
  • design activities
  • students’ multimedia assignments, which lead to online resources
  • collaboration with other departments for the students projects
  • moving the class to online environment (even if kept hybrid)

What is it?

the complexity of the learning environment is turning instructional design into a more dynamic activity, responding to changing educational models and expectations. Flipped classrooms, makerspaces, and competency-based learning are changing how instructors work with students, how students work with course content, and how mastery is verified. Mobile computing, cloud computing, and data-rich repositories have altered ideas about where and how learning takes place.

How does it work?

One consequence of these changes is that designers can find themselves filling a variety of roles. Today’s instructional designer might work with subject-matter experts, coders, graphic designers, and others. Moreover, the work of an instructional designer increasingly continues throughout the duration of a course rather than taking place upfront.

Who’s doing it?

The responsibility for designing instruction traditionally fell to the instructor of a course, and in many cases it continues to do so. Given the expanding role and landscape of technology—as well as the growing body of knowledge about learning and about educational activities and assessments— dedicated instructional designers are increasingly common and often take a stronger role.

Why is it significant?

The focus on student-centered learning, for example, has spurred the creation of complex integrated learning environments that comprise multiple instructional modules. Competency-based learning allows students to progress at their own pace and finish assignments, courses, and degree plans as time and skills permit. Data provided by analytics systems can help instructional designers predict which pedagogical approaches might be most effective and tailor learning experiences accordingly. The use of mobile learning continues to grow, enabling new kinds of learning experiences.

What are the downsides?

Given the range of competencies needed for the position, finding and hiring instructional designers who fit well into particular institutional cultures can be challenging. To the extent that instructors hand over greater amounts of the design process to instructional designers, some of those instructors will feel that they are giving up control, which, in some cases, might appear to be simply the latest threat to faculty authority and autonomy. My note: and this is why SCSU Academic Technology is led by faculty, not IT staff.

Where is it going?

In some contexts, instructional designers might work more directly with students, teaching them lifelong learning skills. Students might begin coursework by choosing from a menu of options, creating their own path through content, making choices about learning options, being more hands-on, and selecting best approaches for demonstrating mastery. Educational models that feature adaptive and personalized learning will increasingly be a focus of instructional design. My note: SCSU CETL does not understand instructional design tendencies AT ALL. Instead of grooming faculty to assume the leadership role and fill the demand for instructional design, it isolates and downgrades (keeping it traditional and old-fashioned) instructional design to basic technical tasks done by IT staff.

What are the implications for teaching and learning?

By helping align educational activities with a growing understanding of the conditions, tools, and techniques that enable better learning, instructional designers can help higher education take full advantage of new and emerging models of education. Instructional designers bring a cross-disciplinary approach to their work, showing faculty how learning activities used in particular subject areas might be effective in others. In this way, instructional designers can cultivate a measure of consistency across courses and disciplines in how educational strategies and techniques are incorporated. Designers can also facilitate the creation of inclusive learning environments that offer choices to students with varying strengths and preferences.

More on instructional design in this IMS blog:

https://blog.stcloudstate.edu/ims/2014/10/13/instructional-design/

LMS and embedded librarianship

Tumbleson, B. E., & Burke, J. J. (2013). Embedding librarianship in learning management systems: A how-to-do-it manual for librarians. Neal-Schuman, an imprint of the American Library Association.

Embedding librarianship in learning management systems:

https://scsu.mplus.mnpals.net/vufind/Record/007650037

see also:

Kvenild, C., & Calkins, K. (2011). Embedded Librarians: Moving Beyond One-Shot Instruction – Books / Professional Development – Books for Academic Librarians – ALA Store. ACRL. Retrieved from http://www.alastore.ala.org/detail.aspx?ID=3413

p. 20 Embedding Academic and Research Libraries in the Curriculum: 2014-nmc-horizon-report-library-EN

xi. The authors are convinced that LMS embedded librarianship is becoming the primary and most productive method for connecting with college and university students, who are increasingly mobile.

xii. Reference librarians engage the individual, listen, discover what is wanted, and seek to point the stakeholder in profitable directions.
Instruction librarians, in contrast, step into the classroom and attempt to lead a group of students in new ways of searching for wanted information.
Sometimes the instruction librarian even designs curriculum and teaches a credit course to guide information seekers in the ways of finding, evaluating, and using information published in various formats.
Librarians also work in systems, emerging technologies, and digital initiatives in order to provide infrastructure or improve access to collections and services for end users through the library website, discovery layers, etc. Although these arenas seemingly differ, librarians work as one.

xiii. Working as an LMS embedded librarian is both a proactive approach to library instruction using available technologies and a way of enabling a 24/7 presence.

p. 1 Embeddedness involves more than just gaining perspective. It also allows the outsider to become part of the group through shared learning experiences and goals. p. 3 Embedded librarianship in the LMS is all about being as close as possible to where students are receiving their assignments and gaining instruction and advice from faculty members. p. 6 When embedded librarians provide ready access to scholarly electronic collections, research databases, and Web 2.0 tools and tutorials, the research experience becomes less frustrating and more focused for students. Undergraduates associate this familiar online environment with the academic world.

p. 7 Describes embedding a reference librarian, which LRS reference librarians do, in “partnership with the professor.” However, there is room for “Research Consultations” (p. 8). While “One-Shot Library Instruction Sessions” and “Information Literacy Credit Courses” are addressed (pp. 8-9), the content of these sessions remains the old-fashioned lecturing type of delivering information.

pp. 10-11. The manuscript points out clearly the weaknesses of using a library Web site. The authors fail to see that the efforts of academic librarians must go beyond the Web page and seek to ease information access by integrating the power of social media with the static information residing on the library Web page.

p. 12 What becomes disturbingly clear is that faculty focus on the mechanics of the research paper over the research process. Although students are using libraries, 70% avoid librarians. Urging academic librarians to “take an active role and initiate the dialogue with faculty to close a divide that may be growing between them and faculty and between them and students.”
Four research contexts with which undergraduates struggle: big picture, language, situational context, and information gathering.

p. 15 ACRL Standards One and Three: librarians might engage students who rely on their smartphones, while keeping in mind that “[s]tudents who retrieve information on their smartphones may also have trouble understanding or evaluating how the information on their phone is ‘produced, organized, and disseminated’ (Standard One).
Standard One by its definition seems obsolete. If information is formatted for desktops, it will be confusing on smartphones. And by that, it is not meant to adjust the screen size, but to change the information delivery from old-fashioned lecturing to more constructivist forms, e.g. http://web.stcloudstate.edu/pmiltenoff/bi/

p. 15 As for Standard Two, which deals with effective search strategies, the LMS embedded librarian must go beyond Boolean operators and controlled vocabulary, since emerging technologies incorporate new means of searching. As unsuccessfully explained to me for about two years now at LRS: hashtag search, LinkedIn groups, QR codes, voice recognition, etc.

p. 16. Standard Five. ethical and legal use of information.

p. 23 Pearson announced OpenClass in 2011 to compete with BB, Moodle, Angel, D2L, WebCT, Sakai, and others.
p. 24 Common features: content, email, discussion board, synchronous chat, and conferencing tools (Wimba and Elluminate for BB).

p. 31 Information and resources which librarians could share via the LMS:
– post links to databases and other resources within the course: library web site, LibGuides, or other subject-related course guides
– information on research concepts can be placed in a similar fashion: brief explanations of key information literacy topics (e.g., the difference between scholarly and popular periodical articles, choosing or narrowing research topics, avoiding plagiarism, citing sources properly within the required citation style, understanding the merits of different types of sources (articles, books, websites, etc.))
– pertinent advice for students on approaching the assignment and how to find the needed information
– tutorials on using databases or planning searches, step-by-step screencasts navigating searches in databases, a video tour of the library

p. 33 The embedded librarian being copied on the blanket emails from instructor to students.
The librarian monitors the discussion board.

p. 35 Examples: students place specific questions on the discussion board and are assured the librarian will reply by a certain time.
Instead of F2F instruction, create a D2L module which can be placed in any course: videos, docs, links to databases, links to citation tools, etc., plus a quiz which faculty can use to assess the students.

p. 36 A discussion forum just for the embedded librarian: for the students, but faculty are encouraged to monitor it and provide content- or assignment-specific input.
Video tutorials and searching tips.
Contact information: email, phone, active IM chat, information on the library's open hours.

p. 37 Questions to consider:
What is the status of the embedded librarian: T2, grad assistant?

p. 41 Pilot program: a small-scale trial which is run to discover and correct potential problems before full implementation.
One or two faculty members, with faculty from a single department.
The pilot at Valdosta State U was a drop-in informational session with the hope of serving the information literacy needs of distance and online students, whereas at George Washington U a librarian contacted a distance education faculty member to request embedding in his upcoming online Master's course.
p. 43 When librarians sense that current public services are not being fully utilized, it may signal that a new approach is needed.
Pilots permit tinkering; they are all about risk-taking to enhance delivery.

p. 57 Marketing LMS embedded librarianship

… library collections, services, and facilities, because faculty may be uncertain how the service benefits their classroom teaching and learning outcomes.
my note per:
“It is incumbent upon librarians to promote this new mode of information literacy instruction.” It is so passé. In times when digital humanities is discussed and faculty across campus delve into digital humanities, which de facto absorbs digital literacy, it is shortsighted for academic librarians to still limit themselves to “information literacy,” considering that lip service is paid to librarians being the leaders in the digital humanities movement. If academic librarians want to market themselves, they have to think broadly and start with topics which ARE of interest to the campus faculty (digital humanities included) and then “push” their agenda (information literacy). One of the reasons why academic libraries are sinking into oblivion is because they are sunk already in 1990-ish practices (information literacy) and miss the “hip” trends which are of interest to faculty and students. The authors (also paying lip service to the 21st-century necessities) remain imprisoned in archaic content. In times when multi (meta) literacies are discussed as the goal for library instruction, they push for more arduous marketing of limited content. Indeed, marketing is needed, but the best marketing is delivering modern and user-sought content.
The stigma of “academic librarians keep doing what they know well, just do it better.” Lip service to change and life-long learning. But the truth is that the commitment to “information literacy” versus the necessity to provide multi (meta) literacies instruction (Reframing Information Literacy as a metaliteracy) is minimizing the entire idea of academic librarians reinventing themselves in the 21st century.
Here is more: NRNT-New Roles for New Times

p. 58 According to the Burke and Tumbleson national LMS embedded librarianship survey, 280 participants yielded the following data regarding embedded librarianship:

  • traditional F2F LMS courses – 69%
  • online courses – 70%
  • hybrid courses – 54%
  • undergraduate LMS courses 61%
  • graduate LMS courses 42%

Of those respondents in 2011, 18% had the initiative started for four or more years, which places the program in 2007. Thus, SCSU is almost a decade behind.

p. 58 promotional methods:

  • word of mouth
  • personal invitation by librarians
  • email by librarians
  • library brochures
  • library blogs

Four years later, the LRS reference librarians' report https://magic.piktochart.com/output/5704744-libsmart-stats-1415 makes no mention of online courses, let alone embedded librarianship.

My note: a library blog was offered numerous times to the LRS librarians and, consequently, to the LRS dean, but it was brushed away, as were the proposals for a modern institutional social media approach (social media at LRS does not favor proficiency in social media but rather sees social media as a learning ground for novices, as per the 11:45 AM visit to the LRS social media meeting of May 6, 2015). The advantages of a blog over a static HTML page were explained at length, but it was visible that the advantages are not understood, as the difference between Web 2.0 tools (such as social media) and Web 1.0 tools (such as static web pages) is not understood. The consensus among LRS staff and faculty is to keep projecting Web 1.0 ideas on Web 2.0 tools (e.g., using Facebook as a replacement for Adobe Dreamweaver: instead of learning how to create static HTML pages to broadcast static information, use Facebook for fast and dirty announcement of static information). It is flabbergasting to have the offer of a blog to replace Web 1.0 rejected in times when the corporate world promotes live-streaming (http://www.socialmediaexaminer.com/live-streaming-video-for-business/) as a way to promote services (academic librarians can deliver their content live).

p. 59 Marketing 2.0, in the information age, is consumer-oriented. Marketing 3.0, in the values-driven era, touches the human spirit (Kotler, Kartajaya, and Setiawan 2010, 6).
The four Ps: products and services, place, price, and promotion. Libraries should consider two more Ps: positioning and politics.

Mathews (2009): library advertising should focus on the lifestyle of students; academic library advertising to students today needs to be “tangible, experiential, relatable, measurable, sharable and surprising.” Leboff (2011, p. 400) agrees with Mathews: the battle in the marketplace is no longer for transactions, it is for attention. Formerly: billboards, magazines, newspapers, radio, TV, direct calls. Today: emphasize conversation, authenticity, values, establishing credibility, and demonstrating expertise and knowledge by supplying good content, to enhance reputation (Leboff, 2011, 134). Translated for the embedded librarian: Google only goes so far; students want answers to their personal research dilemmas and questions. Being a credentialed information specialist with years of experience is no longer enough to win over an admiring following. The embedded librarian must be seen as open and honest in his interactions with students.
p. 60 Becoming attractive to end users is the essential message in advertising LMS embedded librarianship. That attractiveness relies upon two elements: being noticed and imparting values (Leboff, 2011, 99).

p. 61 connecting with faculty

p. 62 reaching students

  • attending synchronous chat sessions
  • watching a digital tutorial
  • posting a question in a discussion board
  • using an instant messaging widget

Be careful not to overload students with too much information; don't make contact too frequently, or you will be perceived as an annoyance and an intruder.

p. 65 Contemporary publicity and advertising incorporates storytelling; testimonials differ from stories.

p. 66 No-cost marketing: social media.

Low-cost marketing: print materials, fliers, bookmarks, posters, floor plans, newsletters, giveaways (pens, magnets, USB drives), events (orientations, workshops, contests, film viewings), campus media, digital media (library web page, blogs, podcasts, social networking sites).

p. 69 Instructional Content and Instructional Design
p. 70 ADDIE Model


Analysis: the requirements for the given course and its assignments.
Ask instructors about their expectations from students vis-à-vis research or information literacy activities;
what students already know about the library related to their assignments;
which are the essential resources for this course;
whether this is a hybrid or online course and what the options are for the librarian to interact with the students;
the due date for the research assignment and the timeline for completing it;
when research tips or any other librarian help can be inserted.

Get a copy of the syllabus or any other assignment document.

p. 72 Discuss the course with the faculty member. Analyze the instructional needs of the course. Analyze students' needs. Create a list of goals, e.g.: how to find, navigate, and use the PsycINFO database; how to create citations in APA format; be able to identify scholarly sources and differentiate them from popular sources; know other subject-related databases to search; be able to create a bibliography and use in-text citations in APA format.

p. 74 Design (ADDIE)
The embedded component is a course within a course: add pre-developed IL components to the broader content of the course. Multiple means of contact information for the librarians and/or other library staff; links to databases; a link to citation guidance and/or a tutorial on APA citations; information on how to distinguish scholarly and popular sources; links to other databases; information and guidance on bibliographic and in-text citations in APA, whether through a link, content written within the course, a tutorial, or a combination; a forum or discussion board topic to take questions; an F2F library instruction session with students.
p. 76 Decide which resources to focus on and which skills to teach and reinforce. Focus on key resources.

p. 77 Development (ADDIE)
– building content: the “landing” page at LRS is the subject guides page; resources integrated into the assignment pages; video tutorials and screencasts

– finding existing content: Google searches of, e.g., “library handout narrowing topic” or “library quiz evaluating sources,” “avoiding plagiarism,” scholarly vs. popular periodicals, etc.

– writing narrative content (p. 85)

p. 87 Evaluation (ADDIE)

Formative: to change what the embedded librarian offers, to improve his/her services to students for the remainder of the course.
Summative: at the end of the course.

p. 89 Online, F2F, and Hybrid Courses

p. 97 Assessing the impact of the embedded librarian:
what is the purpose of the assessment; who is the audience; what will it focus on; what resources are available.
p. 98 Surveys of faculty; of students; analysis of student research assignments; focus groups of students and faculty.

p. 100 Assessment methods. pp. 103-4 survey template:
https://www.ets.org/iskills/about
https://www.projectsails.org/ (paid)
http://www.trails-9.org/
http://www.library.ualberta.ca/augustana/infolit/wassail/
p. 106 Gathering LMS stats. Usability testing.
Examples: pp. 108-9, U of FL: pre-survey and post-survey of students' perceptions of library skills, discussion forum analysis, and an interview with the instructor.

p. 122 create an LMS module for reuse (standardized template)
p. 123 subject and course LibGuides, digital tutorials, PPTs,
research mind maps, charts, logs, or rubrics
http://creately.com/blog/wp-content/uploads/2012/12/Research-Proposal-mind-map-example.png
http://www.library.arizona.edu/help/tutorials/mindMap/sample.php  (excellent)
or paper-based if needed: Concept Map Worksheet
Productivity Tools for Graduate Students: MindMapping http://libguides.gatech.edu/c.php

rubrics:
http://www.cornellcollege.edu/LIBRARY/faculty/focusing-on-assignments/tools-for-assessment/research-paper-rubric.shtml
http://gvsu.edu/library/instruction/research-guidance-rubric-for-assignment-design-4.htm
Creating Effective Information Literacy Assignments http://www.lib.jmu.edu/instruction/assignments.aspx

course handouts
guides on research concepts http://library.olivet.edu/subject-guides/english/college-writing-ii/research-concepts/
http://louisville.libguides.com/c.php
Popular versus scholarly: http://www.library.arizona.edu/help/tutorials/scholarly/guide.html

list of frequently asked q/s:
blog posts
banks of reference q/s

p. 124. Resistance or Receptivity

p. 133 getting admin access to LMS for the librarians.

p. 136 mobile students, dominance of born-digital resources

 

 

 

———————-

Summey, T., & Valenti, S. (2013). But we don't have an instructional designer: Designing online library instruction using ISD techniques. Journal of Library & Information Services in Distance Learning. Available from Scopus®, Ipswich, MA. Accessed May 11, 2015.
http://login.libproxy.stcloudstate.edu/login?qurl=http%3a%2f%2fsearch.ebscohost.com%2flogin.aspx%3fdirect%3dtrue%26db%3dedselc%26AN%3dedselc.2-52.0-84869866367%26site%3deds-live%26scope%3dsite

instructional designer library instruction using ISD techniques

Shank, J. (2006). The blended librarian: A job announcement analysis of the newly emerging position of instructional design librarian. College And Research Libraries, 67(6), 515-524.
http://login.libproxy.stcloudstate.edu/login?qurl=http%3a%2f%2fsearch.ebscohost.com%2flogin.aspx%3fdirect%3dtrue%26db%3dedselc%26AN%3dedselc.2-52.0-33845291135%26site%3deds-live%26scope%3dsite

The Blended Librarian_ A Job Announcement Analysis of the Newly Emerging Position of Instructional Design Librarian

Macklin, A. (2003). Theory into practice: Applying David Jonassen’s work in instructional design to instruction programs in academic libraries. College And Research Libraries, 64(6), 494-500.
http://login.libproxy.stcloudstate.edu/login?qurl=http%3a%2f%2fsearch.ebscohost.com%2flogin.aspx%3fdirect%3dtrue%26db%3dedselc%26AN%3dedselc.2-52.0-7044266019%26site%3deds-live%26scope%3dsite

Theory into Practice_ Applying David Jonassen_s Work in Instructional Design to Instruction Programs in Academic Libraries

Walster, D. (1995). Using Instructional Design Theories in Library and Information Science Education. Journal of Education for Library and Information Science, (3). 239.
http://login.libproxy.stcloudstate.edu/login?qurl=http%3a%2f%2fsearch.ebscohost.com%2flogin.aspx%3fdirect%3dtrue%26db%3dedsjsr%26AN%3dedsjsr.10.2307.40323743%26site%3deds-live%26scope%3dsite

Using Instructional Design Theories in Library and Information Science Education

Mackey, T., & Jacobson, T. (2011). Reframing information literacy as a metaliteracy. College and Research Libraries, 72(1), 62-78.
http://login.libproxy.stcloudstate.edu/login?qurl=http%3a%2f%2fsearch.ebscohost.com%2flogin.aspx%3fdirect%3dtrue%26db%3dedselc%26AN%3dedselc.2-52.0-79955018169%26site%3deds-live%26scope%3dsite

Reframing Information Literacy as a metaliteracy

Nichols, J. (2009). The 3 directions: Situated information literacy. College And Research Libraries, 70(6), 515-530.
http://login.libproxy.stcloudstate.edu/login?qurl=http%3a%2f%2fsearch.ebscohost.com%2flogin.aspx%3fdirect%3dtrue%26db%3dedselc%26AN%3dedselc.2-52.0-73949087581%26site%3deds-live%26scope%3dsite

The 3 Directions_ Situated literacy

 

—————

Journal of Library & Information Services in Distance Learning (J Libr Inform Serv Dist Learn)

https://www.researchgate.net/journal/1533-290X_Journal_of_Library_Information_Services_in_Distance_Learning

http://conference.acrl.org/

http://www.loex.org/conferences.php

http://www.ala.org/lita/about/igs/distance/lit-igdl

————

https://magic.piktochart.com/output/5704744-libsmart-stats-1415

5 Tips for eLearning Voice Recording

http://elearningindustry.com/5-tips-for-elearning-voice-recording

These are the top five most frequent eLearning voice recording situations that I've come across:

  1. A.C.R.O.N.Y.M.S.
    Is this pronounced A-C-R-O-N-Y-M-S or ‘acronyms’? Is it read as letters or read as a word? A lot of scripts do have acronyms related to company or industry jargon. Define this in the script to avoid confusion and save re-records! You can use ALL CAPS but that may not be enough. Periods or dashes between letters (A-C-R-O-N-Y-M-S) generally indicate the word to be read as individual letters. But to be safe, put explanation notes in the margin or at the top of the script defining correct pronunciation, to reduce risk.
  2. Audio file – technical specifications
    If you hire a voice talent to record for you, usually you ask for either mp3 or wav audio files back from her. But are you also specifying the bit depth? 16-bit resolution is the gold standard. If you get 24-bit, your audio may sound garbled, but only after it's embedded into your program. Save time and trouble upfront by stating your audio tech specs (see the sketch after this list)!
  3. Attitude or Point Of View
    What kind of attitude do you want to hear in the voice recording? Think about the end listener. What will pique their interest and attentiveness more? By taking the small amount of time to define the “who is talking” and “to whom”, you can help the person recording to provide a POV (point of view) with the right attitude. Plus, it’s a great way to provide impact and underscore the project for the client. This is a gem – often unused! For example, is this a co-worker talking to her peers or (differently) is she showing a new person the ropes? Is this an SME (subject matter expert) sharing expert information? To whom – Top management or research engineers? If your project is required information, like an annual safety review or similar, it can often be very dry material. Taking a couple minutes to think about the role of who delivers such information can energize dry material. Some more general examples of attitude can be: Strong and Authoritative. Caring and Conversational. Casual like a co-worker. Blue collar vs white collar.
  4. Proximity
    Another gem of a different color! A voice recording can be done further or closer to the microphone. We call that ‘proximity’. This can change or impact the way a listener responds. Compare whispering vs talking at a cubicle vs presenting to a room of people. Changing ‘proximity’ can create poignant moments that listeners will notice. Let your clients know about this technique as well. Used sparingly = high impact!
  5. Script Writing flow – or Writing with listening in mind
    After all the information is written, review the script for a flow of words that, when read aloud, are easy to comprehend and will engage the listener. This may be hard to find time for, depending on your client’s budget – but it is one of those quality elements that can win you a client’s loyalty. When I see a line or two in a script that I think can be phrased to flow more conversationally, I might offer it as an alternate.
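On tip 2 (bit depth): if a talent delivers 24-bit WAV files, they can be re-encoded to the 16-bit gold standard before embedding. Here is a minimal sketch in Python, assuming the third-party soundfile package (pip install soundfile); the file names are hypothetical:

```python
# Minimal sketch: re-encode a 24-bit WAV delivery as 16-bit PCM.
# Assumes the third-party "soundfile" package; file names are hypothetical.
import soundfile as sf

data, samplerate = sf.read("narration_24bit.wav")  # decoded to float arrays
sf.write("narration_16bit.wav", data, samplerate, subtype="PCM_16")

print(sf.info("narration_16bit.wav"))  # verify: subtype should be PCM_16
```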

Academic Libraries and Social Media – bibliography

  1. Zohoorian-Fooladi, N., & Abrizah, A. A. (2014). Academic librarians and their social media presence: a story of motivations and deterrents. Information Development, 30(2), 159-171.
    http://login.libproxy.stcloudstate.edu/login?qurl=http%3a%2f%2fsearch.ebscohost.com%2flogin.aspx%3fdirect%3dtrue%26db%3dllf%26AN%3d95801671%26site%3deds-live%26scope%3dsite
    Librarians also believed that social media tools are suitable not only to communicate with users but also
    to facilitate the interaction of librarians with each other by creating librarian groups. (p. 169)
  2. Collins, G., & Quan-Haase, A. (2014). Are Social Media Ubiquitous in Academic Libraries? A Longitudinal Study of Adoption and Usage Patterns. Journal of Web Librarianship, 8(1), 48-68. doi:10.1080/19322909.2014.873663
    http://login.libproxy.stcloudstate.edu/login?qurl=http%3a%2f%2fsearch.ebscohost.com%2flogin.aspx%3fdirect%3dtrue%26db%3drzh%26AN%3d2012514657%26site%3deds-live%26scope%3dsite
  3. Reynolds, L. M., Smith, S. E., & D’Silva, M. U. (2013). The Search for Elusive Social Media Data: An Evolving Librarian-Faculty Collaboration. Journal of Academic Librarianship, 39(5), 378-384. doi:10.1016/j.acalib.2013.02.007
    http://login.libproxy.stcloudstate.edu/login?qurl=http%3a%2f%2fsearch.ebscohost.com%2flogin.aspx%3fdirect%3dtrue%26db%3daph%26AN%3d91105305%26site%3deds-live%26scope%3dsite
  4. Chawner, B., & Oliver, G. (2013). A survey of New Zealand academic reference librarians: Current and future skills and competencies. Australian Academic & Research Libraries, 44(1), 29-39. doi:10.1080/00048623.2013.773865
    http://login.libproxy.stcloudstate.edu/login?qurl=http%3a%2f%2fsearch.ebscohost.com%2flogin.aspx%3fdirect%3dtrue%26db%3daph%26AN%3d94604489%26site%3deds-live%26scope%3dsite
  5. Lilburn, J. (2012). Commercial Social Media and the Erosion of the Commons: Implications for Academic Libraries. Portal: Libraries and the Academy, 12(2), 139-153.
    http://login.libproxy.stcloudstate.edu/login?qurl=http%3a%2f%2fsearch.ebscohost.com%2flogin.aspx%3fdirect%3dtrue%26db%3deric%26AN%3dEJ975615%26site%3deds-live%26scope%3dsite
    The general consensus emerging to date is that the Web 2.0 applications now widely used in academic libraries have given librarians new tools for interacting with users, promoting services, publicizing events and teaching information literacy skills. We are, by now, well versed in the language of Web 2.0. The 2.0 tools – wikis, blogs, microblogs, social networking sites, social bookmarking sites, video or photo sharing sites, to name just a few – are said to be open, user-centered, and to increase user engagement, interaction, collaboration, and participation. Web 2.0 is said to “empower creativity, to democratize media production, and to celebrate the individual while also relishing the power of collaboration and social networks.”4 All of this is in contrast with what is now viewed as the static, less interactive, less empowering pre-Web 2.0 online environment. (p. 140)
    Taking into account the social, political, economic, and ethical issues associated with Web 2.0, other scholars raise questions about the generally accepted understanding of the benefits of Web 2.0. p. 141
  6. The decision to integrate commercial social media into existing library services seems almost inevitable, if not compulsory. Yet, research that considers the short- and long-term implications of this decision remains lacking. As discussed in the sections above, where and how institutions choose to establish a social media presence is significant. It confers meaning. Likewise, the absence of a presence can also confer meaning, and future … (p. 149)
  7. Nicholas, D., Watkinson, A., Rowlands, I., & Jubb, M. (2011). Social Media, Academic Research and the Role of University Libraries. Journal of Academic Librarianship, 37(5), 373-375. doi:10.1016/j.acalib.2011.06.023
    http://login.libproxy.stcloudstate.edu/login?qurl=http%3a%2f%2fsearch.ebscohost.com%2flogin.aspx%3fdirect%3dtrue%26db%3dedselc%26AN%3dedselc.2-52.0-80052271818%26site%3deds-live%26scope%3dsite
  8. Brown, K., Lastres, S., & Murray, J. (2013). Social Media Strategies and Your Library. Information Outlook, 17(2), 22-24.
    http://login.libproxy.stcloudstate.edu/login?qurl=http%3a%2f%2fsearch.ebscohost.com%2flogin.aspx%3fdirect%3dtrue%26db%3dkeh%26AN%3d89594021%26site%3deds-live%26scope%3dsite
    Establishing an open leadership relationship with these stakeholders necessitates practicing five rules of open leadership: (1) respecting the power that your patrons and employees have in their relationship with you and others, (2) sharing content constantly to assist in building trust, (3) nurturing curiosity and humility in yourself as well as in others, (4) holding openness accountable, and (5) forgiving the failures of others and yourself. The budding relationships that will flourish as a result of applying these rules will reward each party involved.
    Whether you intend it or not, your organization’s leaders are part of your audience. As a result, you must know your organization’s policies and practices (in addition to its people) if you hope to succeed with social media. My note: so, if one defines a very narrow[sided] policy, then the entire social media enterprise is….
    Third, be a leader and a follower. My note: not a Web 1.0 – type of control freak, where content must come ONLY from you and be vetoed by you
    All library staff have their own login accounts and are expected to contribute to and review …
  9. Dority Baker, M. L. (2013). Using Buttons to Better Manage Online Presence: How One Academic Institution Harnessed the Power of Flair. Journal of Web Librarianship, 7(3), 322-332. doi:10.1080/19322909.2013.789333
    http://login.libproxy.stcloudstate.edu/login?qurl=http%3a%2f%2fsearch.ebscohost.com%2flogin.aspx%3fdirect%3dtrue%26db%3dlxh%26AN%3d90169755%26site%3deds-live%26scope%3dsite
    This project was a partnership between the Law College Communications Department, Law College Administration, and the Law Library, involving law faculty, staff, and librarians.
  10. Van Wyk, J. (2009). Engaging academia through Library 2.0 tools: a case study: Education Library, University of Pretoria.
    http://login.libproxy.stcloudstate.edu/login?qurl=http%3a%2f%2fsearch.ebscohost.com%2flogin.aspx%3fdirect%3dtrue%26db%3dedsoai%26AN%3dedsoai.805419868%26site%3deds-live%26scope%3dsite
  11. Paul, J., Baker, H. M., & Cochran, J. (2012). Effect of online social networking on student academic performance. Computers in Human Behavior, 28(6), 2117-2127. doi:10.1016/j.chb.2012.06.016
    http://login.libproxy.stcloudstate.edu/login?qurl=http%3a%2f%2fsearch.ebscohost.com%2flogin.aspx%3fdirect%3dtrue%26db%3dkeh%26AN%3d79561025%26site%3deds-live%26scope%3dsite
    #SocialMedia: students place a higher value on the technologies their instructors use effectively in the classroom; a negative impact of social media usage on academic performance; rather CONSERVATIVE conclusions.
    Students should be made aware of the detrimental impact of online social networking on their potential academic performance. In addition to recommending changes in social networking related behavior based on our study results, findings with regard to relationships between academic performance and factors such as academic competence, time management skills, attention span, etc., suggest the need for academic institutions and faculty to put adequate emphasis on improving the student’s ability to manage time efficiently and to develop better study strategies. This could be achieved via workshops and seminars that familiarize and train students to use new and intuitive tools such as online calendars, reminders, etc. For example, online calendars are accessible on many devices and can be set up to send a text message or email reminder of events or due dates. There are also online applications that can help students organize assignments and tasks on a day-to-day basis. Further, such workshops could be a requirement of admission to academic programs. In light of our results on the relationship between attention span and academic performance, instructors could use mandatory policies disallowing use of phones and computers unless required for course purposes. My note: I completely disagree with this decision: it can be argued that instructors must make their content delivery more engaging and thus electronic devices will not be used for distraction.
  12. Mangan, K. (2012). Social Networks for Academics Proliferate, Despite Some Doubts. Chronicle of Higher Education, 58(35), A20.
    http://eds.b.ebscohost.com/eds/detail?vid=5&sid=bbba2c7a-28a6-4d56-8926-d21572248ded%40sessionmgr114&hid=115&bdata=JnNpdGU9ZWRzLWxpdmUmc2NvcGU9c2l0ZQ%3d%3d#db=f5h&AN=75230216
    Academia.edu
    While Mendeley’s users tend to have scientific backgrounds, Zotero offers similar technical tools for researchers in other disciplines, including many in the humanities. The free system helps researchers collect, organize, share, and cite research sources.
    “After six years of running Zotero, it’s not clear that there is a whole lot of social value to academic social networks,” says Sean Takats, the site’s director, who is an assistant professor of history at George Mason University. “Everyone uses Twitter, which is an easy way to pop up on other people’s radar screens without having to formally join a network.
  13. Beech, M. (2014). Key Issue – How to share and discuss your research successfully online. Insights: The UKSG Journal, 27(1), 92-95. doi:10.1629/2048-7754.142
    http://login.libproxy.stcloudstate.edu/login?qurl=http%3a%2f%2fsearch.ebscohost.com%2flogin.aspx%3fdirect%3dtrue%26db%3dlxh%26AN%3d94772771%26site%3deds-live%26scope%3dsite
    Discusses the dissemination of academic research over the internet and presents five tenets to engage the audience online. Comments on targeting an audience for the research and suggests the online social networks Twitter, LinkedIn, and ResearchGate as venues. Talks about the need to relate the work to the target audience and examines the use of storytelling and blogs. Mentions engaging in online discussions and talks about open access research.

Libraries social media from James Neal

Social media in libraries from Ecobibl Marianne

Social Media, Libraries, and Web 2.0: How American Libraries are Using New Tools for Public Relations and to Attract New Users from Curtis Rogers

Social Media usage in libraries in Europe – survey findings from EBSCO Information Services

Using Social Media in Canadian Academic Libraries, a 2010 CARL ABRC Libraries Survey from Dean Giustini

Social media adoption, policy and development by Daniel Hooker from Dean Giustini

Do student evaluations measure teaching effectiveness?

Question from an Assistant Professor in MIS:

Higher Education institutions use course evaluations for a variety of purposes. They factor into retention analysis for adjuncts, tenure approval or rejection for full-time professors, even salary bonuses and raises. But are the results of course evaluations an objective measure of high-quality scholarship in the classroom?

—————————-

  • Daniel Williams

    Associate Professor of Molecular Biology at Winston-Salem State University

    I feel they measure student satisfaction, more like a customer service survey, than they do teaching effectiveness. Teachers students think are easy get higher scores than tough ones, though the students may have learned less from the former.


  • Muvaffak GOZAYDIN

    Founder at Global Digital University

    How can you measure teachers’ effectiveness?
    That is, how much students learn?
    If there is a method to measure how much we learn, I would appreciate learning it.


  • Michael Tomlinson

    Senior Director at TEQSA

    From what I recall, the research indicates that student evaluations have some value as a proxy and rough indicator of teacher effectiveness. We would expect that bad teachers will often get bad ratings, and good teachers will often get good ratings. Ratings for individual teachers should always be put in context, IMHO, for precisely the reasons that Daniel outlines.

    Aggregated ratings for teachers in departments or institutions can even out some of these factors, especially if you combine them with other indicators, such as progress rates. The hardest indicators, however, are drop-out rates and completion rates. When students vote with their feet, this can flag significant problems. We have to bear in mind that students often drop out for personal reasons, but if your college’s drop-out rate is higher than your peers’, this is worth investigating.


  • Rina Sahay

    Technical educator looking for a new opportunity or career direction

    I agree with what Michael says – to a point. Unfortunately, student evaluations have also been used as an outlet for disgruntled students, acting alone or in concert – a popularity contest of sorts. Even more unfortunately, college administrations (especially at for-profits) tend to rate instructor effectiveness on the basis of student evaluations.

    IMHO, student evaluation questions need to be carefully crafted to be as objective as possible and to eliminate the possibility of responses of an unprofessional nature. To clarify: a question like "Would you recommend this teacher to other students?" has the greatest potential for counter-productivity.

  • Robert

    Robert Whipple

    Chair, English Department at Creighton University

    No.

  • Dr. Virginia

    Dr. Virginia Stead, Ed.D.

    2013-2015 Peter Lang Publishing, Inc. (New York) Founding Book Series Editor: Higher Education Theory, Policy, & Praxis

    This is not a Cartesian question, in that the answer is neither yes nor no; it's not about flipping a coin. One element that may make it more likely that student achievement reflects teacher effectiveness is comparing cumulative or summative student achievement against incoming achievement levels. Another variable is the extent to which individual students are sufficiently resourced (having enough food, safety, shelter, sleep, and learning materials) to benefit from the teacher's beneficence.

  • Barbara

    Barbara Celia

    Assistant Clinical Professor at Drexel University

    It depends on how the evaluation tool is developed. Overall, however, I do not believe they are effective in measuring teaching effectiveness.

  • Sri

    Sri Yogamalar

    Lecturer at MUSC, Malaysia

    Overall, I think students are the best judges of a teacher's pedagogy. Although there may be students with different learning difficulties (as there usually are in a class), their understanding of the concepts and principles, and their application of the subject matter in exam questions, depend on how the teacher imparts that knowledge in a simplified, accessible manner that builds analytical and critical thinking. Of course, there are also students who give a bad review of a teacher's teaching out of spite, just because the teacher reprimanded them in class for being late, for example, or for being rude; in such a case, the review is not a true reflection of the teacher's method of teaching. A teacher tries his or her best to educate and to inculcate values by imparting the required knowledge and ensuring a two-way teaching-learning process. The students are the best judges of the success of those efforts, because it is they who are supposed to benefit at the end of the teaching exercise.

  • Paul S

    Paul S Hickman

    Member of the Council of Trustees & Distinguished Mentor at Warnborough College, Ireland & UK

    No! No!

  • Bonnie

    Bonnie Fox

    Higher Education Copywriter

    In some cases, I think evaluations (and negative ones in particular) can offer a good perspective on the course, especially if an instructor is willing to review them with an open mind. Of course, there are always students who nitpick and, as Rina said, use the eval as a chance to vent. But when an entire class complains about how an instructor has handled a course (as I once saw happen with a tutoring student whose fellow classmates all agreed about the problems in the course), I think it should be taken seriously. I also agree with Daniel that evaluations should be viewed as customer-service surveys of student satisfaction; evals are only useful up to a point.

    I definitely agree, though, that evaluations should be worded so that it is easier to recognize the useful information and weed out the whining.

  • Pierre

    Pierre HENON

    university teacher (professeur agrege)

    I am a director of studies, and my students in continuing education evaluate teaching effectiveness. Because I am in an ISO process, I must take those measurements into account. It can be very difficult, because the number of students sometimes does not reach the level required for the sample to be valid (in a statistical sense). But in the meantime, I believe in the utility of such measurements. The hard job for me is discussing the results with a teacher who falls under the required score.
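As a back-of-envelope illustration of the sample-size problem Pierre raises, the uncertainty around a mean rating shrinks only with the square root of the class size. A minimal sketch, assuming a spread of 1.0 points on a 5-point scale (the function name and all numbers are illustrative, not data from his courses):

```python
import math

def margin_of_error(std_dev, n, z=1.96):
    """Approximate 95% margin of error for the mean of n ratings."""
    return z * std_dev / math.sqrt(n)

# Assumed spread of 1.0 points on a 5-point scale.
for n in (8, 30, 80):
    print(f"n={n:3d}: mean rating +/- {margin_of_error(1.0, n):.2f}")
# n=  8: +/- 0.69
# n= 30: +/- 0.36
# n= 80: +/- 0.22
```

With eight respondents the 95% band is roughly plus or minus 0.7 points, so two teachers scoring 3.4 and 3.9 cannot be meaningfully ranked – exactly the validity worry he describes.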

  • Maria

    Maria Persson

    Senior Tutor – CeTTL – Student Learning & Digital/Technology Coach (U of W – Faculty of Education)

    I'm currently 'filling in' as the administrator in our Teaching Development Unit – Appraisals, and I have come to appreciate that the evaluation tool of choice is only that – a tool. How the tool is used in terms of the objective for collecting 'teaching effectiveness' information, the question types developed to gain insight, and then how that information is acted upon to inform future teaching and learning will in many ways denote the quality of the teaching itself!

    Student voice is not just about keeping our jobs, 'bums on seats' or 'talking with their feet' (all part of it, of course) but should be about whether or not we really care about learning. Student voice in the form of evaluating teachers' effectiveness is critically important if we want our teaching to model learning that effects positive change – Thomas More's educational utopia comes to mind…

  • David

    David Shallenberger

    Consultant and Professor of International Education

    Alas, I think they are weak indicators of teaching effectiveness, yet they are often used as the most important indicators of it. And in the pursuit of a high response rate, they are too often given on the last day of class, when they cannot measure anything significant – before the learning has "sunk in." Ask better questions, and ask them after students have had a chance to reflect on the learning.

  • Cathryn

    Cathryn McCormack

    Lecturer (Teaching and Learning), and Belly Dance teacher

    I'm just wrapping up a very large project at my university that looked at policy, processes, systems and the instrument for collecting student feedback (I'm taking a break from writing the report to write this comment). One thing that has struck me very clearly is that we need to reconceptualise SETs. DeVellis, in Scale Development, notes that a scale generally has higher validity if respondents are asked about their own experiences.

    Yet here we are asking students not only to comment on, but to evaluate, their teachers. What we really want students to do in class is concentrate on their learning – not on what the teacher is doing. If they are focussing on what the teacher is doing, then something is not going right. The way we ask now seems even crazier when we consider that the most sophisticated conception of teaching is helping students learn. So why aren't we asking students about their learning?

    The standard format has something to do with it – it's extremely difficult to ask interesting questions about learning when the wording must align with a 5-point Likert response scale. Despite our best efforts, I do not believe it is possible to prepare a truly student-centred and learning-centred questionnaire in this format.

    An alternate format I came across and really liked is the Modified PLEQ (Devlin, 2002, An Improved Questionnaire for Gathering Student Perceptions of Teaching and Learning), but no commercial evaluation software (which we are required to purchase) can do it. A few overarching questions set the scene for the nature of the class, but the general question format goes: In [choose from drop-down list] my learning was [helped/hindered] when [fill in the blank] because [fill in the blank]. The drop-down list would include options such as lectures, seminars/tutorials, private study, preparing essays, labs, field trips, etc. After completing one question the student has the option to fill in another … and another … and another … for as long as they want.

    Think about what information we could actually get on student learning if we started asking like this! No teacher ratings, all learning. The only numbers that would emerge would be the #helped and the #hindered.
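A minimal sketch of how the helped/hindered format described above could be represented and tallied, assuming invented field names and drop-down options (Devlin's instrument specifies neither this data model nor any code):

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical drop-down options; the actual PLEQ list may differ.
CONTEXTS = ["lectures", "seminars/tutorials", "private study",
            "preparing essays", "labs", "field trips"]

@dataclass
class PLEQResponse:
    """One statement of the form: 'In <context> my learning was
    <helped|hindered> when <event> because <reason>.'"""
    context: str   # one of CONTEXTS
    helped: bool   # True = helped, False = hindered
    when: str      # free text: what happened
    because: str   # free text: why it mattered

def tally(responses):
    """Count #helped and #hindered, overall and per context."""
    overall = Counter()
    per_context = {c: Counter() for c in CONTEXTS}
    for r in responses:
        key = "helped" if r.helped else "hindered"
        overall[key] += 1
        per_context[r.context][key] += 1
    return overall, per_context

# A student may file as many statements as they like.
responses = [
    PLEQResponse("lectures", True, "worked examples were shown live",
                 "I could follow each step"),
    PLEQResponse("labs", False, "instructions assumed prior experience",
                 "I spent the session catching up"),
]
overall, per_context = tally(responses)
print(overall)               # Counter({'helped': 1, 'hindered': 1})
print(per_context["labs"])   # Counter({'hindered': 1})
```

No rating scale appears anywhere; the free-text fields carry the substance, and the two counters are the only numbers that emerge.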

  • Hans

    Hans Tilstra

    Senior Coordinator, Learning and Teaching

    Keep in mind “Goodhart’s Law” – When a measure becomes a target, it ceases to be a good measure.

    For example, if youth unemployment figures become the main measure, governments may be tempted to go for the low-hanging fruit and the short term (e.g., a work-for-the-dole stick to steer unemployed people into study or the army).

  • robert

    robert easterbrook

    Education Management Professional

    Nope.

  • John

    John Stanbury

    Professor at Singapore Institute of Management

    I totally agree with most of the comments here. I find student evaluations to be virtually meaningless as measures of a teacher's effectiveness. They are measures of student perception, NOT of learning. Yet university administrators – deans, department chairs – persist in using them to evaluate faculty performance in the classroom, to the point where many instructors have had their careers torn apart. It's an absolute disgrace! But no one seems to care! That's the sick thing about it!

  • Simon

    Simon Young

    Programme Coordinator, Pharmacy

    Satisfaction cannot be simply correlated with teaching quality. The evidence is that students are most “satisfied” with courses that support a surface learning approach – what the student “needs to know” to pass the course. Where material and delivery is challenging, this generates less crowd approval but, conversely, is more likely to be “good teaching” as this supports deep learning.

    Our challenge is to achieve deep learning and still generate rave satisfaction reviews. If any reader has the magic recipe, I would be pleased to learn of it.

  • Laura

    Laura Gabiger

    Professor at Johnson & Wales University

    Top Contributor

    Maybe it is about time we started calling it what it is and got Michelin to develop the star rating system for our universities.

    Nevertheless I appreciate everyone’s thoughtful comments. Muvaffak, I agree with you about the importance and also the difficulty of measuring student learning. Cathryn, thank you for taking a break from your project to give us an overview.

    My story: the best professor and mentor in my life (I spent a total of 21 years as a student in higher education), the professor from whom I learned indispensable and enduring habits of thought that have become more important with each passing year, was one whom the other graduate students in my first term told me – almost unanimously – to avoid at all costs.

  • Dr. Pedro L.

    Dr. Pedro L. Martinez

    Former Provost and Vice Chancellor for Academic Affairs at Winston Salem State University & President of HigherEd SC.

    I am not sure that course evaluations based on one snapshot measure "teacher effectiveness". For various reasons, some ineffective teachers get good ratings by pandering to the lowest level of intellectual laziness. However, consistently looking at comments and some other measures may yield indicators of teachers who are unprepared, do not provide feedback, do not adhere to the syllabus of record, and do not respect students in general. Part of that information depends on how the questions are crafted.

    I believe that a self-evaluation by the instructor over the course of a semester could yield invaluable information. Using a camera and other devices, ask the instructor to take snapshots of the teaching and learning in the classroom over a period of time and then ask for a self-evaluation. For the novice teacher, that material could be reviewed by senior faculty to help the junior faculty member improve his or her delivery. Many instructors are experts in their field but lack exposure to different methods of instructional delivery. I would like to see a taxonomy or scale that measures the instructor's ability, using lecture as the base of instruction and moving up through problem-based learning, service learning, and undergraduate research, gauging the different pedagogies (pedagogy, andragogy, heutagogy, paragogy, etc.) that engage students in active learning.

  • Steve

    Steve Charlier

    Assistant Professor at Quinnipiac University

    I wanted to piggyback on Cathryn's comment above, and align myself with how many of you seem to feel about student evaluations. The quantitative part of student evals is problematic, for all of the reasons mentioned already. But the open-ended feedback that is (usually) part of student evaluations is where I believe some real value can be gained, both for administrative purposes and for instructor development.

    When allowed to speak freely, what are students saying? Are they lamenting a particular aspect of the course or instructor? Is that one area coloring their responses across all questions? These are all important considerations, and they provide a much richer source of information for all involved.

    Sadly, the quantitative data is what most folks gravitate to, simply because it's standardized and "easy". I don't believe that student evaluations are a complete waste of time, but I do think that we tend to focus on the wrong information. And, of course, this ignores the issues of timing and participation rates, which are probably another conversation altogether!

  • robert

    robert easterbrook

    Education Management Professional

    ‘What the Student Does: teaching for enhanced learning’ by John Biggs in Higher Education Research & Development, Vol. 18, No. 1, 1999.

    “The deep approach refers to activities that are appropriate to handling the task so that an appropriate outcome is achieved. The surface approach is therefore to be discouraged, the deep approach encouraged – and that is my working definition of good teaching. Learning is thus a way of interacting with the world. As we learn, our conceptions of phenomena change, and we see the world differently. The acquisition of information in itself does not bring about such a change, but the way we structure that information and think with it does. Thus, education is about conceptual change, not just the acquisition of information.” (p. 60)

    This is the approach higher education is trying to adapt to at the moment, as far as I'm aware.

  • Cindy

    Cindy Kenkel

    Northwest Missouri State University

    My Human Resource students will focus on this issue in a class debate “Should student evaluation data significantly impact faculty tenure and promotion decisions?” One side will argue “yes, it provides credible data that should be one of the most important elements” and the other group will argue against this based on much of what has been said above. They will say student evaluations are basically a popularity contest and faculty may actually be dumbing down their classes in order to get higher ratings.

    To what extent is student data used in faculty tenure and promotion decisions at your institutions?

  • yasir

    yasir hayat

    Faculty member at Institute of Management Sciences, Peshawar

    NO

  • joe

    joe othman

    Associate Professor at Institute of Education, IIUM

    Agree with Pierre: when the number of students responding is not what was expected, then what?

  • joe

    joe othman

    Associate Professor at Institute of Education, IIUM

    Cindy: it is used in promotion decisions at my university, but it carries only a small percentage of the total points. Yet this issue is still a thorny one for some faculty.

  • Sonu

    Sonu Sarda

    Lecturer at University of Southern Queensland

    How open are we? Is learning only about the delivery of a subject, or about building soft skills as well? If we as teachers facilitate learning in a conducive manner, would it not lead to at least an average TE, and thus indicate our teaching effectiveness at a base level? Indeed, a qualitative approach would be far better if we intend to accomplish the actual purpose of TE, i.e., reflection for continual improvement. More and more classrooms are becoming learner-centred, and to accomplish this the learners' 'say' is vital.
    Some students using these as platforms for personal whims should not be a great concern, since the TEs are averaged out. Last but not least, TEs are like dynamite and must be handled by experts. They are one means of assessing the gaps, if any, between teaching and learning strategies. They should not be used for performance evaluation; if they are, then all the other factors – such as the number of students, absenteeism, and pass rates (indeed HD and D rates) over a minimum of three terms – must also be included alongside.

  • Dvora

    Dvora Perets

    Teaching colleague at Ben Gurion University of the Negev

    I implement a semester-long self-evaluation process in all my mathematics courses. Students get 3 points (out of 100) for anonymously filling in an online questionnaire every week. They rate (1-5) their personal class experience (I was bored – I was fascinated; I understood nothing – I understood everything; the tutorial sessions didn't/did help; I visited the lecturer's/TA's office hours; I spent X hours on self-learning this week). They can also add verbal comments.
    I started it 10 years ago, when I built a new special course, to help me "hear" the students (80-100 in each class) and to better adjust myself and the content to my new students. I used to publish a weekly response to the verbal comments, accepting some and rejecting others, while making sure to explain and justify every decision of mine.
    Not only did it help me improve my teaching and the course, it turned out that it actually created a very solid perception of me as a caring teacher. I always was a very caring teacher (some of my colleagues accuse me of being over-caring…), but it seems that "forcing" my students to give feedback throughout the semester brought it out into the open.

    I still use semester-long feedback in all my courses, and I consider both quantitative and qualitative responses. It helps me see that the majority of students understand me in class. I ignore those who choose "I understood nothing" – obviously, if they were indeed understanding nothing they would not have come to class… (they can choose "I didn't participate" or "I don't want to answer").
    I ignore all verbal comments that aim to "punish" me, and I change things when I think students are right.
    Finally, being a math lecturer for non-major students is extremely hard, both academically and emotionally. Most students are not willing to do what is needed in order to understand the abstract, complicated concepts and processes.
    Only a few ("courageous") students will attribute their lack of understanding to the fact that they did not attend all classes, or that they weren't really focused on learning (probably spending a lot of time on Facebook during class), or that they didn't go over class notes at home and come to office hours when they didn't understand something, etc.
    I am encouraged by the fact that about 2/3 of the students who attend classes state they "understood enough" or above (3-5) all semester long. This is especially important as only 40-50% of the students fill in the formal end-of-semester SE, and I bet you can guess how the majority of them will rate my performance. Students fill in the SE before the final exam, but (again) you can guess how two midterms with about 24% failures will influence their evaluation of my teaching.
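A rough sketch of the weekly bookkeeping a process like this implies, assuming invented opt-out strings and sample data (the questionnaire above is described only as 1-5 ratings plus opt-out choices and verbal comments):

```python
def weekly_average(ratings):
    """Mean of the numeric 1-5 answers for one week; opt-out strings
    such as "I didn't participate" are simply skipped."""
    numeric = [r for r in ratings if isinstance(r, int)]
    return sum(numeric) / len(numeric) if numeric else None

# Invented sample: one list of anonymous answers per week.
weeks = {
    1: [4, 5, 3, "I didn't participate", 4],
    2: [3, 4, 4, 5, "I don't want to answer"],
}
for week in sorted(weeks):
    avg = weekly_average(weeks[week])
    if avg is None:
        print(f"week {week}: no numeric responses")
    else:
        print(f"week {week}: mean 'I understood' rating {avg:.2f}")
```

Tracking the weekly means alongside the verbal comments is what makes mid-semester adjustment possible, rather than a single end-of-term snapshot.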

  • Michael

    Michael Tomlinson

    Senior Director at TEQSA

    I think it’s important to avoid defensive responses to the question. Most participants have assumed that we are talking about individual teachers being assessed through questionnaires, and I share everyone’s reservations about that. I entirely agree that deep learning is what we need to go for, but given the huge amounts of public money that are poured into our institutions, we need to have some way of evaluating whether what we are doing is effective or whether it isn’t.

    I'm not impressed by institutions that are obsessed only with evaluation by numbers. However, there is some merit in monitoring aggregated statistics over time and detecting statistically significant variations. If average satisfaction rates in Engineering have gone down every year for five years, shouldn't we try to find out why? If satisfaction rates in Architecture have gone up every year for five years, wouldn't it be interesting to know whether they have been doing something worthwhile to bring that about? It might turn out to be a statistical artifact, but we need to inquire into it, and bring the same arts of critical inquiry to bear on the evidence that we use in our scholarship and research.

    But I always encourage faculties and institutions to supplement this by actually getting groups of students together and talking to them about their student experience as well. Qualitative responses can be more valuable than quantitative surveys. We might actually learn something!
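A toy version of the monitoring described above, assuming invented data and an invented function name; it only flags a sustained run of year-on-year declines, and a real analysis would also have to test whether the variation is statistically significant rather than an artifact:

```python
def sustained_decline(yearly_means, run_length=5):
    """True if the mean fell at every year-to-year step across the
    last `run_length` years (e.g. five straight years of decline)."""
    tail = yearly_means[-run_length:]
    if len(tail) < run_length:
        return False
    return all(later < earlier for earlier, later in zip(tail, tail[1:]))

engineering  = [4.1, 4.0, 3.8, 3.7, 3.5]   # hypothetical 5-year series
architecture = [3.6, 3.7, 3.9, 4.0, 4.2]

print(sustained_decline(engineering))    # True  -> worth finding out why
print(sustained_decline(architecture))   # False -> it rose; also worth asking what changed
```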

  • Aleardo

    Aleardo Manacero

    Associate Professor at UNESP – São Paulo State University

    Like everyone here, I also think that these evaluation forms do not truly measure teaching effectiveness. It is quite hard to evaluate, since the effect of learning will be felt several years later, while students perform their job duties.

    Besides that, some observations made by students are useful for our own growth. I usually get these through informal talks with the class or with individual students.

    In another direction, some of the previous comments address deep/surface learning, basically stating that deep learning is the right way to go. I have to disagree with this for some of the content that has to be taught. In my case (teaching computer science majors) it is important, for example, that every student has a surface knowledge of operating systems design, but those who are going to work as database analysts do not need to know the deep concepts involved (and the same is true of database concepts for a network analyst…). So surface learning also has its place in professional formation.

  • George

    George Christodoulides

    Senior Consultant and Lecturer at university of nicosia

    The usefulness of student evaluations, like that of all similar surveys, is closely linked to the particular questions respondents are asked to answer. There are objective, factual questions such as "Does he start class on time?" or "Does he speak clearly?", and very personal questions such as "Does he give fair grades?". The effectiveness of a teacher could be more appropriately linked to suitably phrased questions such as "Has he motivated you to learn?" and "How much have you benefited from the course?". The responses to these questions could also be assessed further by comparing the final grades given in that particular course with the performance of the class in the other courses they have taken during that semester. So, for assessing teacher effectiveness, one needs to ask relevant questions and perform the appropriate evaluations.

  • Laura

    Laura Gabiger

    Professor at Johnson & Wales University

    Top Contributor

    Michael has an excellent point that some accountability of institutions and programs is appropriate, and that aggregated data or qualitative results can be useful in assessing whether the teaching in a particular program is accomplishing what it sets out to do. Many outcomes studies are set up to measure the learning in an aggregated way.

    We may want to remember that our present conventions of teaching evaluation had their roots in the 1970s (in California, if I remember correctly), partly as a response to a system in which faculty, both individually and collectively, were accountable to no one. I recall my student days when a professor in a large public research institution would consider it an intrusion and a personal affront to be asked to supply a course syllabus.

    As the air continues to leak out of the USA's higher education bubble, as enrollments drop and the number of empty seats rises, it seems inevitable that institutions will feel pressure to offer anything that makes students perceive their experience as positive. It may be too hard to make learning – often one of the most uncomfortable experiences in life – the priority. Faculty respond defensively because we are continually put in the position of defending ourselves, often against poorly designed quantitative instruments that address every feel-good, hotel-concierge aspect of classroom management while overlooking learning.

  • Sethuraman

    Sethuraman Jambunatha

    Dean (I & E) at Vinayaka Mission

    The evaluation of faculty by students is welcome. The statistics can be examined with a certain degree of objectivity. An instructor who is strict with his or her students may be ranked low despite being an asset to the department; a 'free-lance' teacher may be placed higher despite being a poor teacher. At any rate, it is the HoD's duty to observe the quality of all teachers, and his objective evaluation is final. Parents' feedback should also be taken. Teaching is a multi-dimensional task, and student evaluation is just one coordinate.

  • Edwin

    Edwin Herman

    Associate Professor at University of Wisconsin, Stevens Point

    Student evaluations are a terrible tool for measuring teacher effectiveness. They do measure student satisfaction, and to some extent they measure student *perception* of teacher effectiveness. But the effectiveness of a teaching method or an instructor is poorly correlated with student satisfaction: while there are positive linkages between the two concepts, students are generally MORE satisfied by an easy course that makes them feel good than by a hard course that makes them really think and work (and learn).

    Students like things that are flashy and things that are easy more than they like things that require a lot of work or force them to rethink their core values. Certainly there are students who value a challenge, but even those students may not recognize which teacher gave them the better course.

    Student evaluations can be used effectively to help identify very poor teaching. But they are useless for distinguishing between adequate and good teaching practices.

  • Cesar

    Cesar Granados

    ex Vicerrector Administrativo en Universidad Nacional de San Cristóbal de Huamanga

    César S. Granados
    Retired Professor from The National University of San Cristóbal de Huamanga
    Ayacucho, PERÚ

    Since teaching effectiveness is a function of teacher competencies, an effective teacher is able to use existing competencies to achieve the desired student results; but students' performance depends mainly on their commitment to achieving those competencies.

  • Steve

    Steve Kaczmarek

    Professor at Columbus State Community College

    The student evaluations I’ve seen are more like customer satisfaction surveys, and in this respect, there is less helpful information for the instructor to improve his or her craft and instead more feedback about whether or not the student liked the experience. Shouldn’t their learning and/or improving skills be at least as important? I’m not arguing that these concepts are mutually exclusive, but the evaluations are often written to privilege one over the other.

    There are other problems. Using the same evaluation tool for very different kinds of courses (lecture versus workshop, for instance) doesn’t make a lot of sense. Evaluation language is often vague and puzzling in what it rewards (one evaluation form asks “Was the instructor enthusiastic?” Would an instructor bursting with smiles and enthusiasm but who is disorganized and otherwise less effective be privileged over one who is low-key but nonetheless covers the material effectively?). The “halo effect” can distort findings, where, among other things, more attractive instructors can get higher marks.

    Given how many times I’ve heard from students about someone being their favorite instructor because he or she was easy, I question the criteria students may use when evaluating. Instructors are also told that evaluations are for their benefit to improve teaching ability, but then chairs and administrators use them in promotion and hiring decisions.

    I think if the evaluation tool is sound, it can be useful in helping instructors. But, lastly, I think of my own experiences as a student, where I may have disliked or even resented some instructors because they challenged me or pushed me out of my comfort zone to learn new skills or paradigms. I may have evaluated them poorly at the time, only to learn a few years later, with greater maturity, that they not only taught me well but taught me something invaluable, perhaps more so than the instructors I liked. In this respect, it would be fairer to those instructors for me to fill out an evaluation a few years later to accurately describe their teaching.

  • Diane

    Diane Halm

    Adjunct Professor of Writing at Niagara University

    Wow, there are so many valid points raised, so many considerations. In general, I tend to agree with those who believe it gauges student satisfaction more than learning, though there is a correlation between the two. After 13 years as an adjunct at a relatively small private college, I have found that engagement really is what many students long for. It seems far less about the final grades earned and more about the tools they've acquired. It should be mentioned that I teach developmental-level composition, and while almost no student earns an A, most feel they have learned much :)

  • Nira

    Nira Hativa

    Former director, center for the advancement of teaching at Tel Aviv University

    Student ratings of instruction (SRIs) do not measure teaching effectiveness but rather student satisfaction with instruction (as some previous comments on this list suggest). However, there is substantial research evidence for relationships between SRIs and some agreed-upon measures of good teaching and of student learning. This research is summarized in much detail in my recent book:
    Student Ratings of Instruction: A Practical Approach to Designing, Operating, and Reporting (220 pp.) https://www.createspace.com/4065544
    ISBN-13: 978-1481054331

  • robert

    robert easterbrook

    Education Management Professional

    Learning is not about what the teacher does, it is about what the learner does.

    Do not confuse the two.

    Learning is what the learner does with what the teacher teaches.

    If you think that learning is all about what the teacher does, then the SRI will mislead and deceive.

  • Sami

    Sami Samra

    Associate Professor at Notre Dame University – Louaize

    Evaluation, in all its forms, is a complex exercise that requires both knowledge and skill. Further, evaluation is best achieved through a variety of instruments. We know all of this as teachers. The question is how knowledgeable our students are regarding the teaching/learning process, and, more to the point, how knowledgeable our administrators are in translating information collected from questionnaires (some of them of questionable validity) into plausible data-based decisions. I agree that students should have a say in how their courses are conducted. But to use their feedback, quantitatively, to evaluate university professors… I fear that I must hold a very skeptical stance towards such evaluation.

  • Top Contributor

    Quite an interesting topic, and I'm reminded of the ancient proverb, "Parts is not parts." OK, maybe that was McDonald's. This conversation would make a very thoughtful manuscript.

    Courses is not courses. Which course will be more popular, "Contemporary Music" or "General Chemistry"?

    Search any university using the keywords "really easy course [university]." Those who teach these courses are experts at what they do, and what they do is valuable; however, the workload for the student is minimal.

    The major issues: (1) popularity is inversely proportional to workload; and (2) the composition of the questions appearing on course and professor evaluations (CAPEs).

    "What grade do you expect in this class? Does the instructor explain course material well? Do lectures hold your attention?"

    If Sally gets to listen to Nickelback in one class and then next period learn quantum mechanics, which course does one suppose best held her attention?

    A person about to receive a C- in General Chemistry is probably receiving that C- because s/he was never able to understand the material for lack of striving, and probably hates the subject. That person has very likely never visited the professor during office hours for help. Logically, one might expect low approval ratings from such a scenario.

    A person about to receive an A in General Chemistry is getting that A because s/he worked his/her tail off. S/he was able to comprehend most of what the professor said, and most probably liked the course. Even more, s/he probably visited the professor during office hours several times for feedback.

    One might argue that the laws of statistics will work in favor of reality; however, that's untrue when only 20% of students respond to CAPEs. Those who respond either love the professor or hate the professor; there's usually no middle ground. Add internet anonymity, and the problem is compounded. I am aware of multiple studies conducted by universities indicating a high correlation between written CAPEs and electronic CAPEs, but I'd like to bring up one point.

    Think of the last time you raised your voice to a customer service rep on the phone. Would you have raised your voice to that rep in person?

    There’s not enough space to comment on all the variables involved in CAPE numerical responses. As of last term I stopped paying attention to the numbers and focused exclusively on the comments. There’s a lot of truth in most of the comments.

    I would like to see the following experiment performed. Take a group of 10,000 students. Record their CAPE responses prior to receiving their final grade. Three weeks later, have them re-CAPE. One year later, have them re-CAPE again. Two years. Three years. Finally, have them re-CAPE after getting a job.

    Many students don't know what a professor did for them until semesters or years down the road. They're likely to realize how good a teacher the professor was by their performance in future courses in the same subject requiring cumulative mastery.

    Do I think student evaluations measure teaching effectiveness? CAPEs is not CAPEs.
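The 20%-response-rate point is a nonresponse-bias argument, and a toy simulation can show the distortion; every number below (the opinion distribution and the response propensities) is invented for illustration:

```python
import random
random.seed(1)

def respond_prob(opinion):
    """Invented propensities: the dissatisfied respond most often,
    the delighted sometimes, the indifferent middle hardly ever."""
    if opinion < 2.5:
        return 0.60
    if opinion > 4.5:
        return 0.35
    return 0.05

# 500 students with true opinions on a 1-5 scale (clipped normal).
population = [min(5.0, max(1.0, random.gauss(3.5, 0.8))) for _ in range(500)]
respondents = [x for x in population if random.random() < respond_prob(x)]

true_mean = sum(population) / len(population)
observed_mean = sum(respondents) / len(respondents)
print(f"true mean {true_mean:.2f}, CAPE mean {observed_mean:.2f}, "
      f"response rate {len(respondents) / len(population):.0%}")
# With these invented propensities, the small polarized sample
# lands the observed mean well below the true mean.
```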

  • Anne

    Anne Gardner

    Senior Lecturer at University of Technology Sydney

    No, of course they don’t.

  • Christa

    Christa van Staden

    Owner of AREND.co, a professional learning community for educators

    No, it does not. Effectiveness in the classroom should be measured by students' results, the teacher's attitude towards students, and the quality of the teacher's preparation. I worked with a man who told a story about the different hats and learning, and I thought that was a new way of looking at learning. To my utmost shock, a colleague, who sat in because he felt he had to say something, told me that the man had done it exactly the same – same jokes, etc. – when he did the course five years ago. For real: nothing changed, no new technology, no new insights; no learning happened over a period of five years, nothing? And he is rated very highly – head of a new wing. Who rated him? How? And why did it not affect his teaching at all?

  • Mat Jizat

    Mat Jizat Abdol

    Chief Executive at Institut Sains @ Teknologi Darul Takzim ( INSTEDT)

    If we are looking for quality, we have to get information about our performance in the lecture room. There are six elements normally in practice: 1. the teaching plan for the lecture contents; 2. teaching delivery; 3. fair and systematic evaluation of students' work; 4. whether the teaching follows the semester plan; 5. whether the lecturer follows the timetable and is always on time for lecture hours; and lastly, 6. the relationship between lecturer and students.

  • orlando

    orlando mcallister

    Department Head – Communications/Mathematics

    Do we need to be reminded that educators were students at one time or another? So why not have students evaluate the performance of a teacher? After all, the students are contributing to their own investment in what is significant for survival, and in whether it is effective for career development – attaining their full potential as sentient human beings for the greater good of humanity. Anything else falls short of human progress on a tiny rotating planet cycling through the solar system, destination unknown! Welcome to the 'Twilight Zone.'

    Would you rather educate a student to make the wise decision to accept 10 gallons of water in a desert, or to accept a $1 million check that further creates mirages and illusory dreams of success?

  • Stephen

    Stephen Robertson

    Lecturer at Edinburgh Napier University

    I think what my students say about me is important. I’m most interested in the comments they make and have used these to pilot other ideas or adjust my approach.

    I’ve had to learn to not beat myself up about a few bad comments or get carried away with a few good ones.

    I also use the assessment results to see if the adjustments made have had the intended impact. I use the VLE logs as well to see how engaged the students are with the materials and what tools they use and when.

    I find the balance keeps me grounded. I want my students to do well and have fun. The dashboard on your car has multiple measures; why should teaching be different? Like the car, I listen for strange noises and look out the window to make sure I'm still on the road.

  • Allan

    Allan Sheppard

    Lecturer/tutor/PhD student at Griffith University

    I think that most student evaluations are only reaction measures, not true evaluations of learning outcomes or teaching effectiveness – and evaluations are often tainted if the student gets a lower mark than anticipated.
    I think these types of evaluation are only indicative – they should not really be used to measure teacher/teaching effectiveness, and should not be allowed to affect teachers' careers.
    I note Stephen's point about multiple measures – unfortunately, most evaluations are quick and dirty, and certainly do not provide multiple measures.

  • Allan

    Allan Sheppard

    Lecturer/tutor/PhD student at Griffith University

    Interestingly, most student evaluations are anonymous – so students can say what they like and never have to face scrutiny.

  • Olga

    Olga Kuznetsova

    No, students' evaluations cannot fully measure teaching effectiveness.
    However, for the relationship to be mutually beneficial, you have to accept their judgement on the matter. Unfortunately, a unique teacher for all categories (types) of students does not exist in our dynamic world.

  • Penny

    Penny Paliadelis

    Professor, Executive Dean, Faculty of Health, Federation University Australia

    Student evaluations are merely popularity contests; they tempt academics to 'dumb down' content in order to be liked and evaluated positively. This is a dangerous and slippery slope that can result in graduates being ill-prepared for the professions and industries they seek to enter.

  • Robson

    Robson Chiambiro (MBA, MSc, MEd.)

    PRINCE 2 Registered Practitioner at Higher Colleges of Technology

    In my opinion, student evaluations of teachers measure popularity, as others have suggested, but the problem is that some of the questions and intentions of the assessment are not fulfilled because the wrong questions are used. I have never seen in these instruments a question asking students about their expectations of the teacher and of the course as such. To me, that is more important than asking whether the student likes the teaching style, which students cannot judge anyway. Teachers who give a test shortly before the evaluation are likely to get lower ratings than those who give tests soon after it.

  • Chris

    Chris Garbett

    Principal Lecturer Leeds Metropolitan University

    I agree with other contributors. The evaluations are akin to a satisfaction survey. Personally, if I stay at a hotel, for example, I only fill in the satisfaction survey if something is wrong. If the service is as I expect, I don't bother with the survey.

    I also feel that students rate courses or modules on a popularity basis. A module may be enjoyable, or fun, but not necessarily better taught than another with less entertaining subject matter.

    Unfortunately, everyone seems to treat student evaluations as the main criterion by which to judge a course.

  • Steve

    Steve Benton

    Senior Research Officer, The IDEA Center

    First of all, it would help if we stopped referring to them as "student" or "course" evaluations. Students are not qualified to evaluate; that is what administrators are paid to do. However, students are qualified to provide feedback to instructors and administrators about their perceptions of what occurred in the class and of how much they believe they learned. How can that not be valuable information, especially for developmental purposes, about how to teach more effectively? Evaluation is not an event that happens at the end of a course – it is an ongoing process that requires multiple indicators of effectiveness (e.g., student ratings of the course, peer evaluations, administrator evaluations, course design, student products). By triangulating that combination of evidence, administrators and faculty can make informed judgments and evaluate.

  • Eytan

    Eytan Fichman

    Lecturer at Hanoi Architectural University

    The student/teacher relationship around the subject matter is a 'triangle.' The character of the triangle has a lot to do with a student's reception of the material and of the teacher.

    The Student:
    The well-prepared student and the intrinsically motivated student can more readily thrive in the relationship. If s/he is thriving s/he may be more inclined to rate the teacher highly. The poorly prepared student or the student who requires motivation from ‘outside’ is much less likely to thrive and more likely to rate a teacher poorly.

    The Teacher:
    The well-prepared teacher and the intrinsically motivated teacher can more readily thrive in the relationship. If s/he is thriving students may be more inclined to rate the teacher highly. The poorly prepared teacher or the teacher who requires motivation from ‘outside’ is much less likely to thrive and more likely to achieve poor teacher ratings.

    The Subject Matter:
    The content and form of the subject matter are crucial, especially in their relation to the student and teacher.

  • Daniel

    Daniel Goeckner

    Education Professional

    Student evaluations do not measure teaching effectiveness. I have been told both that I walk on water and that I am the worst teacher ever. The major difference was the level of student participation: the more they participated, the better I was.

    What I use them for is as a learning tool. I take the comments apart, looking for snippets that I can use to improve my teaching.

    I have been involved in a portfolio program for the past two years. One consistent pattern is that the better the measured outcomes, the worse the student reviews.

    • Dr. Pedro L.

      Dr. Pedro L. Martinez

      Former Provost and Vice Chancellor for Academic Affairs at Winston Salem State University & President of HigherEd SC.

      Steve,
      Have you ever been part of a tenure or promotion committee evaluation process? In my 35 years of experience, faculty members do not operate in the ideal, smooth, linear trajectory you describe. On the contrary, committees partition evaluations into categories and look at student course evaluations as the evidence of an instructor's ability to teach. However, faculty can choose which evaluations they submit and what comments they include as part of the record. I have never seen "negative comments" submitted as evidence of "ineffective teaching". The five-point scale is used, and whenever a score falls below 3.50, it becomes a great concern for our colleagues!

    • Sethuraman

      Sethuraman Jambunatha

      Dean (I & E) at Vinayaka Mission

      There are other ways for the peer group to assess faculty. There can be a weekly seminar at which faculty members are expected to present, with other faculty members and students as the audience; this measures how much interest a faculty member has in chosen areas. The Chair (HoD) can talk to selected students (chosen to represent the highly motivated, the average, and the take-it-easy) and reach a decision for the tenure track. As I said earlier, student evaluation can be one of many aspects. In my own experience, evaluation by other (senior) faculty is often detrimental to the progress of junior faculty. But one thing is clear: the occupant of the Chair should have some 'vision' and transcend discrimination and partisan feelings. In India we say "(Sar)Panch me Parameshwar rahtha hai", meaning: in the position of judge, God dwells. Think of Becket and King Henry II: as archbishop, Thomas Becket was a completely changed person, fully submerged in the divine order. So the Chair is supreme. Student evaluation is just one aspect.

    • Susan Wright

      Susan Wright

      Assistant Professor at Clarkson University

      Amazing how things work… I’m actually in the process of framing out a research project related to this very question. Does anyone have any suggestions for specific papers I should look at, i.e., literature related to the topic?

      With respect to your question, I believe the answer depends on the questions that get asked.

    • Sarah Lowengard

      Sarah Lowengard

      Researcher, Writer, Editor, Consultant (history, technology, art, sciences)

      I fall on the “no” side too.

      The school-derived questionnaires nearly always ask the wrong questions, for one.

      I’ve always thought students should wait some years (3-20) before providing feedback, because the final day of class is too recent to do a good assessment.

    • Jeremy

      Jeremy Wickins

      Open University Coursework Consultant, Research Methods

      I’m quite late to the topic here, and much of what I think has been said by others. There is a difference between the qualitative and quantitative aspects of student evaluations – I am always fascinated to find out what my students (and peers, of course, though that is a different topic) do/do not think I am doing well so I can learn and adapt my teaching. For this reason, I prefer a more continuous student evaluation than the questionnaire at the end of the course – if I need to adapt to a particular group, I need the information sooner rather than later.

      However, the quantitative side means nothing unless it is tied back to hard data on how the students did in their assessments – an unpopular teacher can still be a *good* teacher of the subject at hand! And the subject matter counts a lot – merely teaching an unpopular but compulsory subject (public law, for instance!) tends to make the teacher initially unpopular in the minds of students – a type of shooting the messenger.

      Teaching isn’t a beauty contest – these metrics need to be used in the right way, and combined with other data if they are to say anything about the teaching.

    • Dr. James R. Martin

      Dr. James R. Martin

      Professor Emeritus

      I wrote a paper about this issue a few years ago. Briefly, the thrust of my argument is that student opinions should not be used as the basis for evaluating teaching effectiveness: these aggregated opinions are invalid measures of quality teaching, provide no empirical evidence in this regard, are incomparable across different courses and different faculty members, promote faculty gaming and competition, tend to distract all participants and observers from the learning mission of the university, and ensure the sub-optimization and further decline of the higher education system. Using student opinions to evaluate, compare and subsequently rank faculty members represents a severe form of a problem Deming referred to as a deadly disease of Western-style management. The theme of the alternative approach is that learning on a program-wide basis should be the primary consideration in the evaluation of teaching effectiveness. Emphasis should shift from student opinion surveys to the development and assessment of program-wide learning outcomes. To achieve this shift in emphasis, the university performance measurement system needs to be redesigned to motivate faculty members to become part of an integrated learning development and assessment team, rather than a group of independent contractors competing for individual rewards.

      Martin, J. R. 1998. Evaluating faculty based on student opinions: Problems, implications and recommendations from Deming’s theory of management perspective. Issues in Accounting Education (November): 1079-1094. http://maaw.info/ArticleSummaries/ArtSumMartinSet98.htm

    • Joseph Lennox, Ph.D.

      There appears to be general agreement that the answer to the proposed question is “No.”

      The next logical step in the discussion would appear to be, “How would you effectively measure teacher effectiveness?”

      With large enrollment classes, one avenue is here:

      http://www.insidehighered.com/views/2013/10/11/way-produce-more-information-about-instructors-effectiveness-essay

      So, how should teacher effectiveness be measured?

    • Ron Melchers

      Ron Melchers

      Professor of Criminology, University of Ottawa

      To inform this discussion, I would highly recommend this research review done for the Higher Education Quality Council of Ontario. It’s a pretty balanced and well-informed treatment of student course (and teacher) evaluations: http://www.heqco.ca/SiteCollectionDocuments/Student%20Course%20Evaluations_Research,%20Models%20and%20Trends.pdf

    • Ron Melchers

      Ron Melchers

      Professor of Criminology, University of Ottawa

      Just to add my own two cents (two and a half Canadian cents at this point), I think students have much of value to tell us about their experience in our courses and classes, information that we can use to improve their learning and become more effective teachers. They are also able to inform academic administrators of the degree to which teachers fulfill their basic duties and perform the elementary tasks they are assigned. They have far less to tell us about the value of what they’re learning to their future, their professions … and they are perhaps not the best qualified to identify effective learning and teaching techniques and methods. Those sorts of things are better assessed by knowledgeable, expert professional and academic peers.

    • Barbara

      Barbara Celia

      Assistant Clinical Professor at Drexel University

      Thank you, Ron. A great deal of info but worth reading and analyzing.

    • Prof. Ravindra Kumar

      Prof. Ravindra Kumar Raghuvanshi

      Member of Academic committees of some Universities & Retd.Prof.,Dept.of Botany,University of Rajasthan,Jaipur.

      The student rating system may not necessarily be a reliable method of assessing teaching effectiveness, because it depends upon each student’s grasping power, intelligence, and study habits. A teacher may do his or her job well, but how many students understand it well? That is invariably reflected in the marks they obtain.

Tablets (iPads) in the Classroom

From: Perry Bratcher [mailto:bratcher@nku.edu]
Sent: Wednesday, November 06, 2013 9:01 AM
To: ‘lita-l@ala.org’
Cc: Michael Providenti; Michael Wells; Millie Mclemore; Perry Bratcher; Stephen Moon
Subject: [lita-l] RE: Classroom iPads

All – Thanks to each of you for your responses to my email regarding classroom use of iPads (see email at the bottom).  Listed below is a summary of the comments I received.  I cut/pasted and reconfigured these comments for this email, so some may be taken out of context.  NOTE: My systems staff have adamantly opposed using the Microsoft Surface.  We have a campus “tech bar” where students/staff can check out new devices for experimentation.  My staff said that the Surface doesn’t work in our particular situation for a variety of reasons, and they prefer the iPad tablet option (if we go the tablet route).

Before deciding on implementation of PCs vs. laptops vs. tablets for use in a classroom setting, one needs to consider the motivation for doing so.  Space? Portability? Availability of apps?  Is there demand for using personal devices for research, etc.?  What type of portable device should be used (iPad, Microsoft Surface, etc.)?

Pros for using iPad/tablets:

  • Keep a few in there to provide examples of how to search on mobile devices.
  • The number and variety of apps out there. Great education apps exist that do not exist elsewhere online or on other platforms (Android or Windows).
  • The iPad is flexible and allows you to regain the floor space you lose with computers, and it gives the user privacy.
  • If set up correctly, the devices can be erased when they are returned so any private data is wiped.
  • Users can download additional apps, even purchase apps if you allow them.
  • They hold a charge much longer than any laptop or ChromeBook on the market.
  • Apple sold 94% of its iPads into education – the reason being that it’s a great education and research tool.
  • Another advantage that I can see is boot-up time. The iPad is instantly on and connected to the network. Perhaps this is most applicable to last-minute library instruction or ad hoc group research?  However, if I had the choice, I would equip a classroom with MacBook Air SSDs.
  • Understand how they need to be configured and the tools needed to do so. I created a kit for this not long ago for public libraries: http://www.macprofessionals.com/new-library-ipad-checkout-solution/  Thank you, Chris Ross, Macprofessionals.
  • UVA has been using iPads for instruction for about 2 years.  They have been very pleased with the results.
  • Our electronic classroom is very small, so we purchased 30 iPads over a year ago to allow teaching in our larger meeting room. There are definitely distinct advantages: flexibility, mobility, lack of technical infrastructure needed (wires, ports, etc.), and the myriad possibilities of apps.


Cons for using iPad/tablets:

  • Most mobile devices have not become “workhorse” devices yet, so much of the students’ research will still need to be done on a computer.
  • We haven’t seen any advantage to having them either – but our librarians use them sporadically for instruction.
  • Charging, syncing, configuring, Apple IDs, erasing, cases, restrictions, printing, presenting, etc. For example, if you want to present with these, you will need an Apple TV or an adapter. If you want to print, you will need AirPrint-supported printers or software. If you want to configure and erase them, you will need a Mac.
  • The challenge I have found is trying to use an inherently personal device in the typical one-shot classroom environment. There are lots of things you need to consider. How will they access the wireless? What about taking notes? What about apps that require a login? And much more.
  • Someone on staff must be equipped and have the time to manage them.
  • We have a pool of 30 loan laptops; recently we have supplemented this with 11 loan iPads. The iPads have generally been very popular but wouldn’t work as a substitute for laptops. As many have mentioned, when it comes to getting real work done they are inferior to laptops, and people have commented as such.
  • As a complement to laptops, though, they are great – they are more portable, and our nursing students love being able to carry them around and quickly access medical apps, take notes, check calculations, etc. I definitely see them as a valuable resource, but if it’s an either/or proposition then I would come down on the side of laptops.
  • My personal opinion is that it’s not a bad idea as a supplement to existing systems, but I’d be wary of replacing more flexible systems with more limited ones, and am particularly wary of committing to one operating system/vendor (particularly one that tends to charge half-again to twice as much as its competitors with only limited advantages).
  • In a classroom setting (e.g., an instruction room) I see little advantage to tablets; their sole advantage, from what I can figure out, is their portability.  Why force people onto a limited device if it is only going to be in one room anyway?


The MOOC Is Dead! Long Live Open Learning!

http://diyubook.com/2013/07/the-mooc-is-dead-long-live-open-learning/

We’re at a curious point in the hype cycle of educational innovation, where the hottest concept of the past year–Massive Open Online Courses, or MOOCs–is simultaneously being discovered by the mainstream media, even as the education-focused press is declaring them dead. “More Proof MOOCs are Hot,” and “MOOCs Embraced By Top Universities,” said the Wall Street Journal and USA Today last week upon the announcement that Coursera had received a $43 million round of funding to expand its offerings;
“Beyond MOOC Hype” was the nearly simultaneous headline in Inside Higher Ed.

Can MOOCs really be growing and dying at the same time?

The best way to resolve these contradictory signals is probably to accept that the MOOC, itself still an evolving innovation, is little more than a rhetorical catchall for a set of anxieties around teaching, learning, funding and connecting higher education to the digital world. This is a moment of cultural transition. Access to higher education is strained. The prices just keep rising. Questions about relevance are growing. The idea of millions of students from around the world learning from the world’s most famous professors at very small marginal cost, using the latest in artificial intelligence and high-bandwidth communications, is a captivating one that has drawn tens of millions in venture capital. Partnerships between MOOC platforms and public institutions like SUNY and the University of California to create self-paced blended courses and multiple paths to degrees look like a sensible next step for the MOOC, but they are far from that revolutionary future. Separate ideas like blended learning and plain old online delivery seem to be blurring with and overtaking the MOOC – even Blackboard is using the term.

The time seems to be ripe for a reconsideration of the “Massive” impact of “Online” and “Open” learning. The Reclaim Open Learning initiative is a growing community of teachers, researchers and learners in higher education dedicated to this reconsideration. Supporters include the MIT Media Lab and the MacArthur Foundation-supported Digital Media and Learning Research Hub. I am honored to be associated with the project as a documentarian and beater of the drum.

Entries are currently open for our Innovation Contest, offering a $2,000 incentive to teachers or students who have projects to transform higher education in a direction that is connected and creative; that is open as in open content and open as in open access; that is participatory; and that takes advantage of some of the forms and practices of the MOOC without being beholden to the narrow mainstream MOOC format (referring instead to some of the earlier iterations of student-created, distributed MOOCs created by Dave Cormier, George Siemens, Stephen Downes and others).

Current entries include a platform to facilitate peer-to-peer language learning, a Skype-based open-access seminar with guests from around the world, and a student-created course in educational technology. Go here to add your entry! The deadline is August 2. Our judges include Cathy Davidson (HASTAC), Joi Ito (MIT), and Paul Kim (Stanford).

Reclaim Open Learning earlier sponsored a hackathon at the MIT Media Lab. This fall, September 27 and 28, our judges and contest winners will join us at a series of conversations and demo days to Reclaim Open Learning at the University of California, Irvine. If you’re interested in continuing the conversation, join us there or check us out online.

July 18, 2013

More Doodling Makes For Better Learning

Doodling is often seen as a sign of distraction. If you’re doodling, you’re not paying attention. If you’re drawing, you’re not taking notes. You’re not listening. You’re not learning.

http://blogs.kqed.org/mindshift/2011/09/more-doodling-makes-for-better-learning/

But research published in the latest edition of the journal Science challenges the anti-doodling stance. It contends that not only can doodling help students learn, but that drawing is an important tool for scientific discovery.
