Eyal, L. (2012). Digital Assessment Literacy — the Core Role of the Teacher in a Digital Environment. Journal of Educational Technology & Society, 15(2), 37-49.
Common to all is a view of the level of literacy as a measure of the quality of human capital of a society or a particular area. Literacy develops in interaction with the environment (Vygotsky, 1987).
Digital assessment literacy refers to the role of the teacher as an assessor in a technology-rich environment.
Learning Management Systems (LMS) benefits and limitations
Measurement allows a quantitative description of a particular characteristic of an individual, expressed in numbers.
The combination of assessment and measurement provides a thorough and accurate picture, based upon which practical conclusions can be drawn (Wagner, 1997). A test is a systematic process in which an aspect of student behavior is quantitatively evaluated (Suen & Parkes, 2002).
For several decades this system of assessment has been criticized for a variety of reasons, including the separation between the teaching-learning process and the evaluation process, the relatively low level of thinking required, and the quantitative reporting of results, which does not contribute to students’ progress. In the last decade, the central argument against the testing system has been that its predictive power is limited to the field and context in which students are tested, and that it does not predict students’ problem-solving ability, teamwork, good work habits, or honesty.
Teachers mistakenly believe that repeating lessons will improve students’ achievements.
To evaluate how well the goals were achieved, objective measurement methods are employed (Black, et al., 2004).
Eshet-Alkalai (2004) offered a detailed conceptual framework for the term ‘digital literacy’ that includes: photo-visual thinking; reproduction thinking; branching thinking; information thinking; and socio-emotional thinking.
Eshet-Alkalai, Y. (2004). Digital literacy: A conceptual framework for survival skills in the digital era. Journal of Educational Multimedia and Hypermedia, 13(1), 93–106.
Eshet-Alkalai, Y., & Chajut, E. (2009). Changes Over Time in Digital Literacy. Cyberpsychology & Behavior, 12(6), 713-715. doi:10.1089/cpb.2008.0264
two major patterns of change over time: (a) closing the gap between younger and older participants in the tasks that emphasize proficiency and technical control and (b) widening the gap between younger and older participants in tasks that emphasize creativity and critical thinking. Based on the comparison with the matched control groups, we suggest that experience with technology, and not age, accounts for the observed lifelong changes in digital literacy skills.
Eshet-Alkalai, Y., & Soffer, O. (2012). Guest Editorial – Navigating in the Digital Era: Digital Literacy: Socio-Cultural and Educational Aspects. Journal of Educational Technology & Society, 15(2), 1.
a wide range of technological, cognitive and social competences—collectively termed “Digital Literacy.” Users thus must become “digitally literate” in order to cope effectively with the complex sociological, cognitive and pedagogical challenges these technologies pose. These skills include, for example, the ability to operate computers and navigate the net effectively, to cope with large volumes of information, to evaluate the reliability of information, and to critically assess what seem to be natural (and not ideologically biased) technological tools. In a different way from the spirit of modern print, learners construct and consume knowledge in non-linear environments. They need to learn, collaborate and solve problems effectively in virtual (non face-to-face) learning environments, and to communicate effectively in technology-mediated social participation environments.
It is important to note: digital literacy, then, is not limited simply to computer and Internet operation and orientation. It also relates to a variety of epistemological and ethical issues that arise due to the unique characteristics of digital technologies and that often overlap with trends related to the post-modern and post-structural era. These include questions regarding the authority of knowledge, intellectual property and ownership, copyright, authenticity and plagiarism. Furthermore, issues such as self-representation, virtual group dynamics, and on-line addiction also arise.
More on digital literacy in this IMS blog
NISO Virtual Conference:
Justifying the Library: Using Assessment to Justify Library Investments
April 20, 11:00am – 5:00pm EST – Learn more and register at: http://www.niso.org/news/events/2016/virtual_conference/apr20_virtualconf/
Assessment exercises for institutional libraries are frequently a double-edged sword; they’re as readily used to justify cuts as they are to bolster budgets. This NISO virtual conference provides expert insights into how data gathered in the normal course of activities can be leveraged to demonstrate value to the parent institution. Data represent the raw material for building your case. What data are available? What is their quality? What is the appropriate context for persuasively presenting that data to deans, provosts and other administrators? This virtual conference will address the very hot topic of library assessment in the context of a changing educational environment and features a complete roster of expert speakers, including:
- Steven J. Bell, Associate University Librarian, Temple University
- Nancy Turner, Assessment and Organizational Performance Librarian, Temple University
- Jocelyn Wilk, University Archivist, Columbia University
- Elisabeth Brown, Director of Assessment & Scholarly Communications Librarian, SUNY-Binghamton
- Ken Varnum, Senior Program Manager for Discovery, Delivery, & Learning Analytics, University of Michigan
- Jan Fransen, Service Lead for Researcher and Discovery Systems, University of Minnesota
- Kristi Holmes, Director, Galter Health Sciences Library, Northwestern University
- Starr Hoffman, Head, Planning & Assessment, University of Nevada – Las Vegas
- Carl Grant, Chief Technology Officer and Associate University Librarian for Knowledge Services, University of Oklahoma
The preliminary agenda and pricing information for this event may be found at:
As a bonus, register for the virtual conference and receive an automatic registration for the follow-up training webinar, Making Assessment Work: Using ORCIDS to Improve Your Institutional Assessments, on Thursday, April 28!
Instructors for that session are Alice Meadows (ORCID), Christopher Erdmann (Harvard University) and Merle Rosenzweig (University of Michigan).
For more information about this event, please contact Jill O’Neill (firstname.lastname@example.org).
Other questions for NISO? Get in touch at:
3600 Clipper Mill Road
Baltimore, MD 21211-1948
More on assessment in this IMS blog:
analytics in education
Could Rubric-Based Grading Be the Assessment of the Future?
I use rubrics and see their positive sides, and I appreciate the structure they bring to assessment. But this article also makes me see the danger of rubrics being applied as a harness, another debacle no different from NCLB and testing scores, which have plagued this nation’s education over the last two decades. The same “standardizing” as in Quality Matters, which can bring some clarity and structure, but can also stifle any creativity that steers “out of the norm.” A walk on such a path opens the door to another educational assembly line, where adjunct and hourly for-hire instructors teach pre-made content and assess with rubrics in a fast-food manner.
A consortium of 59 universities and community colleges in nine states is working to develop a rubric-based assessment system that would allow them to measure these crucial skills within ongoing coursework that students produce.
written communication, critical thinking and quantitative literacy. The faculty worked together to write rubrics (called Valid Assessment of Learning in Undergraduate Education or VALUE rubrics) that laid out what a progression of these skills looks like.
“These rubrics are designed to be cross-disciplinary,” explained Bonnie Orcutt.
Parents and teachers are pushing back against blunt assessment instruments like standardized tests, and are looking for a way to hold schools accountable that doesn’t mean taking time away from class work.
Please see the link to the PDF file.
Here are some opinions from the comments section:
Formative assessments are only good if you use them to alter your teaching or if students use them to adjust their learning. Too often, I’ve seen exit tickets used and nothing done with the results.
Please consider other IMS blog postings on assessment
Communicating: Students convey information, describe process, and express ideas in accurate, engaging, and understandable ways.
Researching: Students identify and access a variety of resources through which they retrieve and organize data they have determined to be authentic and potentially relevant to their task.
Thinking Critically: Students use structured methods to weigh the relevance and impact of their decisions and actions against desired outcomes and adjust accordingly.
Thinking Creatively: Students comprehend and employ principles of creative and productive problem solving to understand and mitigate real-world problems.
Keep in mind, however, that standards don’t prepare students for anything. They are a framework of expectations and educational objectives. Without the organization and processes to achieve them, they are worthless.
Significance: An instructionally useful assessment measures students’ attainment of a worthwhile curricular aim—for instance, a high-level cognitive skill or a substantial body of important knowledge.
Teachability: An instructionally useful assessment measures something teachable. Teachability means that most teachers, if they deliver reasonably effective instruction aimed at the assessment’s targets, can get most of their students to master what the test measures.
Describability: A useful assessment provides or is directly based on sufficiently clear descriptions of the skills and knowledge it measures so that teachers can design properly focused instructional activities.
Reportability: An instructionally useful assessment yields results at a specific enough level to inform teachers about the effectiveness of the instruction they provide.
Nonintrusiveness: In clear recognition that testing time takes away from teaching time, an instructionally useful assessment shouldn’t take too long to administer—it should not intrude excessively on instructional activities.
Three Good Tools for Building Flipped Lessons That Include Assessment Tools
eduCanon is a free service for creating, assigning, and tracking your students’ progress on flipped lessons. eduCanon allows teachers to build flipped lessons using YouTube and Vimeo videos, create questions about the videos, then assign lessons to their students. Teachers can track the progress of their students within eduCanon.
Teachem is a service that uses the TED-Ed model of creating lessons based on video. On Teachem, teachers can build courses composed of a series of videos hosted on YouTube. Teachers can write questions and comments in “flashcards” that are tied to specific parts of each video and are displayed alongside it. Students can take notes while watching the videos using the Teachem SmartNote system.
Knowmia is a website and a free iPad app for creating, sharing, and viewing video lessons. One of the best features of Knowmia is a tool that they call the Assignment Wizard. The Knowmia Assignment Wizard allows teachers to design assignments that their students have to complete after watching a video. Students can check their own Knowmia accounts to see the assignments that their teachers have distributed. To aid teachers in assessing their students, Knowmia offers an automatic scoring option. Knowmia’s automatic scoring function works for multiple choice questions and numeric questions.