Today, despite major advances in ways to measure learning, we still don’t have common definitions for project-based learning or performance assessment.
In the absence of agreed-upon definitions for this evolving field, Education Week reporters developed a glossary of key terms.
Proficiency-based or competency-based learning: These terms are interchangeable. They refer to the practice of allowing students to progress in their learning as they master a set of standards or competencies. Students can advance at different rates. Typically, there is an attempt to build students’ ownership and understanding of their learning goals and often a focus on “personalizing” students’ learning based on their needs and interests.
Project-based learning: Students learn through an extended project, which may have a number of checkpoints or assessments along the way. Key features are inquiry, exploration, the extended duration of the project, and iteration (requiring students to revise and reflect, for example). A subset of project-based learning is problem-based learning, which focuses on a specific challenge for which students must find a solution.
Standards-based grading: This refers to the practice of giving students nuanced and detailed descriptions of their performance against specific criteria or standards, not on a bell curve. It can stand alone or exist alongside traditional letter grading.
Performance assessment: This assessment measures how well students apply their knowledge, skills, and abilities to authentic problems. The key feature is that it requires the student to produce something, such as a report, experiment, or performance, which is scored against specific criteria.
Portfolio: This assessment consists of a body of student work collected over an extended period, from a few weeks to a year or more. This work can be produced in response to a test prompt or assignment but is often simply drawn from everyday classroom tasks. Frequently, portfolios also contain an element of student reflection.
Exhibition: A type of performance assessment that requires a public presentation, as in the sciences or performing arts. Other fields can also require an exhibition component. Students might be required, for instance, to justify their position in an oral presentation or debate.
Performance task: A piece of work students are asked to do to show how well they apply their knowledge, skills, or abilities—from writing an essay to diagnosing and fixing a broken circuit. A performance assessment typically consists of several performance tasks. Performance tasks also may be included in traditional multiple-choice tests.
Overview of crisis-management serious games (SGs) focused on their learners’ assessment and evaluation capabilities. This synthesis can help researchers and game creators by highlighting the main criteria and techniques for learner assessment and evaluation. The described benefits and limitations of each technique may ease the choice of the most appropriate way to evaluate a particular SG.
Open Discussion: Instruments and Methods for Formative Assessment, held at the invitation of teachers from the Plovdiv region.
Where: СУ „Димитър Матевски“ (https://goo.gl/maps/rojNjE3dk4s) and online
When: May 2, 2018, 2 PM local time (Bulgaria)
Who: teachers and faculty
How: use the hashtag #BGtechEd for backchanneling and sharing your ideas
Intro (5 min). Who are we: please share a short introduction about your professional interests in the comment section under this blog post: http://web.stcloudstate.edu/pmiltenoff/faculty/
Reality check (before we turn to technology):
Who is our audience (whom do we teach and train)? http://blog.stcloudstate.edu/ims/2018/04/21/in-memoriam-avicii/ http://blog.stcloudstate.edu/ims/2018/04/17/edtech-implementation-fails/
Why does the application of technology in education fail?
Understanding Purpose
Insufficient Modeling of Best Practices
Bad First Impressions
Real-World Usability Challenges
The Right Data to Track Progress
Share your thoughts on these failure points.
Topic 1: A comparison of Kahoot, EdPuzzle, and Apester (1 to 1.5 hours)
Short demonstration and discussion of the methodology of use; sharing experience of when and how colleagues from Bulgaria, the USA, and other countries use these tools.
Ideas and experience exchange.
Comparison to other tools (e.g., the flipped-classroom advantage over Kahoot; differences from and similarities to EdPuzzle).
Account creation and building of learning objects.
Apester (https://app.apester.com/) can be played asynchronously (though restricted in time). Kahoot is a simultaneous game. EdPuzzle, like Apester, can be asynchronous, but like Kahoot it requires an account, whereas Apester can be played by anyone.
The proliferation of mobile devices and the adoption of learning applications in higher education simplifies formative assessment. Professors can, for example, quickly create a multi-modal performance that requires students to write, draw, read, and watch video within the same assessment. Other tools allow for automatic grade responses, question-embedded documents, and video-based discussion.
Multi-Modal Assessments – create multiple-choice and open-ended items that are distributed digitally and assessed automatically. Student responses can be viewed instantaneously and downloaded to a spreadsheet for later use.
Formative (http://www.goformative.com) allows professors to upload charts or graphic organizers that students can draw on with a stylus. Formative also allows professors to upload document “worksheets” which can then be augmented with multiple-choice and open-ended questions.
Nearpod (http://www.nearpod.com) allows professors to upload their digital presentations and create digital quizzes to accompany them. Nearpod also allows professors to share three-dimensional field trips and models to help communicate ideas.
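As an illustrative sketch of the automatic scoring and spreadsheet export these multi-modal tools perform (the answer key, student names, and data layout below are hypothetical, not any platform’s actual format):

```python
import csv
import io

# Hypothetical answer key and student responses (illustrative data only).
answer_key = {"q1": "b", "q2": "d", "q3": "a"}
responses = {
    "Ana":   {"q1": "b", "q2": "c", "q3": "a"},
    "Boris": {"q1": "b", "q2": "d", "q3": "a"},
}

def score(student_answers, key):
    """Count answers matching the key; open-ended items would need a rubric."""
    return sum(1 for q, correct in key.items() if student_answers.get(q) == correct)

# Build a spreadsheet-ready CSV table of scores.
buffer = io.StringIO()
writer = csv.writer(buffer)
writer.writerow(["student", "score", "out_of"])
for name, answers in responses.items():
    writer.writerow([name, score(answers, answer_key), len(answer_key)])

print(buffer.getvalue())
```

The CSV output can be opened directly in a spreadsheet for later use, mirroring the “downloaded to a spreadsheet” workflow described above.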
Video-Based Assessments – Question-embedded videos are an outstanding way to improve student engagement in blended or flipped instructional contexts. Using these tools allows professors to identify if the videos they use or create are being viewed by students.
Edpuzzle (https://edpuzzle.com) and Playposit (http://www.playposit.com) are two leaders in this application category. A second type of video-based assessment allows professors to sustain discussion-board-like conversation with brief videos.
Flipgrid (http://www.flipgrid.com), for example, allows professors to pose a video question to which students may respond with their own video responses.
Quizzing Assessments – Tools that use closed-ended questions to provide a quick check of student understanding are also available.
Quizzing tools such as Kahoot (http://www.kahoot.com) are relatively quick and convenient to use as a wrap-up to instruction or a review of concepts taught.
Integration of technology should be aligned with sound formative assessment design. Formative assessment is most valuable when it addresses student understanding and progress toward competencies or standards, and when it indicates concepts that need further attention for mastery. Additionally, formative assessment provides the instructor with valuable information on gaps in students’ learning, which can prompt instructional changes or additional coverage of key concepts. Tech tools can make the creation, administration, and grading of formative assessment more efficient and can enhance the reliability of assessments when used consistently in the classroom. Selecting a tool that effectively addresses your assessment needs and complements your teaching style is critical.
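The “gaps in students’ learning” that formative results can reveal are often found through simple item analysis. A minimal sketch, assuming responses have been exported as per-question correct/incorrect flags (the data and threshold are made up):

```python
# Minimal item analysis over exported formative-assessment results.
# The data layout is an assumption; real tools export varying formats.
results = {
    "q1": [1, 1, 1, 0, 1],   # 1 = correct, 0 = incorrect, one entry per student
    "q2": [0, 1, 0, 0, 1],
    "q3": [1, 1, 1, 1, 1],
}

def difficulty(item_scores):
    """Proportion of students answering correctly (classical item difficulty)."""
    return sum(item_scores) / len(item_scores)

# Flag items below a chosen mastery threshold for reteaching.
THRESHOLD = 0.7
gaps = [q for q, scores in results.items() if difficulty(scores) < THRESHOLD]
print(gaps)  # items that need further attention
```

Items falling below the threshold point to concepts that may need additional coverage before mastery.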
Another consideration in rebranding assessment would be to emphasize that assessment “draws from multiple sources” of information. This would encourage faculty to think about assessment not as a means of judgment but as a process of evidence gathering. It underscores the idea that to demonstrate effective learning and instruction, we must collect multiple pieces of evidence. As a result, faculty will be more likely to plan and implement multiple and varied assessment methodologies, which in turn will yield more evidence and stronger validity of inferences about the extent of student learning.
Common to all is a view of the level of literacy as a measure of the quality of human capital of a society or a particular area. Literacy develops in interaction with the environment (Vygotsky, 1987).
Digital assessment literacy refers to the role of the teacher as an assessor in a technology-rich environment.
Learning Management Systems (LMS): benefits and limitations
Measurement allows a quantitative description of a particular characteristic of an individual, expressed in numbers.
The combination of assessment and measurement provides a thorough and accurate picture, based upon which practical conclusions can be drawn (Wagner, 1997). A test is a systematic process in which an aspect of student behavior is quantitatively evaluated (Suen & Parkes, 2002).
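A toy example of the quantitative side of measurement: converting raw test scores into comparable figures such as percent correct and a standardized z-score (the scores and point maximum below are invented for illustration):

```python
import statistics

# Illustrative class scores on a 20-point test (hypothetical data).
raw_scores = [12, 15, 9, 18, 14]
MAX_POINTS = 20

def percent(score, max_points=MAX_POINTS):
    """Express a raw score as percent correct."""
    return 100 * score / max_points

def z_score(score, scores):
    """Standardize a score against the group mean and standard deviation."""
    mean = statistics.mean(scores)
    sd = statistics.pstdev(scores)  # population SD over the whole class
    return (score - mean) / sd

print(percent(15))  # 75.0
print(round(z_score(15, raw_scores), 2))
```

The percent score describes performance against the test itself, while the z-score describes performance relative to the group, two distinct quantitative pictures of the same result.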
For several decades this system of assessment has been criticized for a variety of reasons, including the separation between the teaching-learning process and the evaluation process, the relatively low level of thinking required, and the quantitative reporting of results, which does not contribute to students’ progress. In the last decade, the central argument against the testing system has been that its predictive power is limited to the field and context in which students are tested: tests do not predict students’ problem-solving ability, teamwork, good work habits, or honesty.
Teachers mistakenly believe that repeating lessons will improve students’ achievement.
To evaluate how well the goals were achieved, objective measurement methods are employed (Black, et al., 2004).
Eshet- Alkalai (2004) offered a detailed conceptual framework for the term ‘digital literacy’ that includes: photo-visual thinking; reproduction thinking; branching thinking; information thinking; and socio-emotional thinking.
Eshet-Alkalai, Y. (2004). Digital literacy: A conceptual framework for survival skills in the digital era. Journal of Educational Multimedia and Hypermedia, 13(1), 93–106.
Eshet-Alkalai, Y., & Chajut, E. (2009). Changes Over Time in Digital Literacy. Cyberpsychology & Behavior, 12(6), 713-715. doi:10.1089/cpb.2008.0264
The authors identify two major patterns of change over time: (a) a closing of the gap between younger and older participants in tasks that emphasize proficiency and technical control, and (b) a widening of the gap between younger and older participants in tasks that emphasize creativity and critical thinking. Based on the comparison with matched control groups, they suggest that experience with technology, not age, accounts for the observed lifelong changes in digital literacy skills.
Eshet-Alkalai, Y., & Soffer, O. (2012). Guest Editorial – Navigating in the Digital Era: Digital Literacy: Socio-Cultural and Educational Aspects. Journal Of Educational Technology & Society, 15(2), 1.
A wide range of technological, cognitive, and social competences, collectively termed “digital literacy,” is required. Users must become “digitally literate” to cope effectively with the complex sociological, cognitive, and pedagogical challenges these technologies pose. These skills include, for example, the ability to operate computers and navigate the net effectively, to cope with large volumes of information, to evaluate the reliability of information, and to critically assess technological tools that appear natural rather than ideologically biased. Unlike in the era of modern print, learners construct and consume knowledge in non-linear environments. They need to learn, collaborate, and solve problems effectively in virtual (non-face-to-face) learning environments, and to communicate effectively in technology-mediated social participation environments.
It is important to note that digital literacy is not limited to computer and Internet operation and orientation. It also relates to a variety of epistemological and ethical issues that arise from the unique characteristics of digital technologies and that often overlap with trends of the post-modern and post-structural era. These include questions regarding the authority of knowledge, intellectual property and ownership, copyright, authenticity, and plagiarism. Issues such as self-representation, virtual group dynamics, and online addiction also arise.
Assessment exercises for institutional libraries are frequently a double-edged sword; they’re as readily used to justify cuts as they are to bolster budgets. This NISO virtual conference provides expert insights into how data gathered in the normal course of activities can be leveraged to demonstrate value to the parent institution. Data represent the raw material for building your case. What data are available? What is their quality? What is the appropriate context for persuasively presenting those data to deans, provosts, and other administrators? This virtual conference will address the very hot topic of library assessment in the context of a changing educational environment and features a complete roster of expert speakers, including:
Steven J. Bell, Associate University Librarian, Temple University
Nancy Turner, Assessment and Organizational Performance Librarian, Temple University
Jocelyn Wilk, University Archivist, Columbia University
Elisabeth Brown, Director of Assessment & Scholarly Communications Librarian, SUNY-Binghamton
Ken Varnum, Senior Program Manager for Discovery, Delivery, & Learning Analytics, University of Michigan
Jan Fransen, Service Lead for Researcher and Discovery Systems, University of Minnesota
Kristi Holmes, Director, Galter Health Sciences Library, Northwestern University
Starr Hoffman, Head, Planning & Assessment, University of Nevada – Las Vegas
Carl Grant, Chief Technology Officer and Associate University Librarian for Knowledge Services, University of Oklahoma
The preliminary agenda and pricing information for this event may be found at:
As a bonus, register for the virtual conference and receive an automatic registration for the follow-up training webinar, Making Assessment Work: Using ORCIDS to Improve Your Institutional Assessments, on Thursday, April 28!