Whether we use synchronous or asynchronous online sessions, and whether we call it distance or virtual learning, we are all challenged to provide meaningful educational experiences at a distance as the education world grapples with the impact of Covid-19.
Jones, C., Watkins, F., Williams, J., Lambros, A., Callahan, K., Lawlor, J., … Atkinson, H. (2019). A 360-degree assessment of teaching effectiveness using a structured-videorecorded observed teaching exercise for faculty development. Medical Education Online, 24(1), 1596708. https://doi.org/10.1080/10872981.2019.1596708
The goal: enable faculty to receive a detailed 360-degree assessment of their teaching.
The faculty in Gerontology and Geriatric Medicine at Wake Forest School of Medicine (WFSM) saw an opportunity to incorporate a focused teaching practicum for faculty within a multiple-specialty faculty development program. 360-degree assessments combine feedback from subordinates, colleagues, and superiors. 360-degree feedback has been considered an essential tool in transformational leadership because the diversity of viewpoints represented helps the evaluation process avoid bias, yet it is rarely applied to teaching assessments. Specifically, we designed a teaching practicum using a Videorecorded Observed Teaching Exercise (VOTE) to provide self-, peer-, and learner assessments of teaching.
Our design of videorecorded microteaching sessions embedded into a faculty development program presents a feasible, well-received model to provide faculty development in teaching and a robust 360-degree assessment of teaching skills.
Two strengths of our program are that it is feasible and reproducible.
In addition, costs for these sessions were low. VOTE video capture costs ranged from $45 to $90 per session, depending on the audiovisual capacity of the room used for recording. Costs included an audiovisual technician who performed the room setup and videorecording; however, a handheld videorecorder or mobile device could also be used for these sessions.
Today, despite major advances in ways to measure learning, we still don’t have common definitions for project-based learning or performance assessment.
In the absence of agreed-upon definitions for this evolving field, Education Week reporters developed a glossary:
Proficiency-based or competency-based learning: These terms are interchangeable. They refer to the practice of allowing students to progress in their learning as they master a set of standards or competencies. Students can advance at different rates. Typically, there is an attempt to build students’ ownership and understanding of their learning goals and often a focus on “personalizing” students’ learning based on their needs and interests.
Project-based learning: Students learn through an extended project, which may have a number of checkpoints or assessments along the way. Key features are inquiry, exploration, the extended duration of the project, and iteration (requiring students to revise and reflect, for example). A subset of project-based learning is problem-based learning, which focuses on a specific challenge for which students must find a solution.
Standards-based grading: This refers to the practice of giving students nuanced and detailed descriptions of their performance against specific criteria or standards, not on a bell curve. It can stand alone or exist alongside traditional letter grading.
Performance assessment: This assessment measures how well students apply their knowledge, skills, and abilities to authentic problems. The key feature is that it requires the student to produce something, such as a report, experiment, or performance, which is scored against specific criteria.
Portfolio: This assessment consists of a body of student work collected over an extended period, from a few weeks to a year or more. This work can be produced in response to a test prompt or assignment but is often simply drawn from everyday classroom tasks. Frequently, portfolios also contain an element of student reflection.
Exhibition: A type of performance assessment that requires a public presentation, as in the sciences or performing arts. Other fields can also require an exhibition component. Students might be required, for instance, to justify their position in an oral presentation or debate.
Performance task: A piece of work students are asked to do to show how well they apply their knowledge, skills, or abilities—from writing an essay to diagnosing and fixing a broken circuit. A performance assessment typically consists of several performance tasks. Performance tasks also may be included in traditional multiple-choice tests.
Overview of crisis-management serious games (SG) focused on their learners’ assessment and evaluation capabilities. This synthesis can help researchers and game creators by highlighting the main criteria and techniques for learner assessment and evaluation. The described benefits and limitations of each technique may facilitate the choice of the most adequate way to evaluate a particular SG.
Open Discussion: Instruments and Methods for Formative Assessment, by invitation of teachers from the Plovdiv region | Тема: Инструменти и методи за актуални училищни занятия
Where | Къде: СУ „Димитър Матевски“ https://goo.gl/maps/rojNjE3dk4s and online (виртуално)
When | Кога: 2 май 2018, 14 часа | May 2, 2018, 2 PM local time (Bulgaria)
Who | Кой: преподаватели и педагози | teachers and faculty
How | Как: използвайте “обратна връзка” за споделяне на вашите идеи | use the following hashtag for backchanneling: #BGtechEd
Intro | Представяне – 5 мин.
Who are we (please share a short intro about your professional interests) | Кои сме ние: споделете накратко професионалните си интереси (използвайте “comment” section под този блог) http://web.stcloudstate.edu/pmiltenoff/faculty/
Reality Check (before we do tech) | минута за откровение (преди да започнем с технологии):
who is our audience | кого учим/обучаваме? https://blog.stcloudstate.edu/ims/2018/04/21/in-memoriam-avicii/ https://blog.stcloudstate.edu/ims/2018/04/17/edtech-implementation-fails/
why technology application fails | защо се проваля използването на технологии в обучението?
Understanding Purpose | какъв е смисълът
Insufficient Modeling of Best Practices | недостатъчен или несподелен опит
Bad First Impressions | лоши първи впечатления
Real-World Usability Challenges | ежедневни проблеми
The Right Data to Track Progress | кои данни определят успеха
Share your thoughts for the fails | Сподели твоите мисли за провала
Тема 1. Сравняване на Kahoot, EdPuzzle и Apester – 1–1½ часа продължителност | Topic 1: A comparison of Kahoot, Apester, and EdPuzzle (1–1.5 hours)
Дискусия, относно методиката на използване. Споделяне на опит кога и как го използват колегите от България и САЩ (други страни?).
Short demonstration and discussion of the methodology of use; sharing experience of when and how colleagues in Bulgaria, the USA, and possibly other countries use these tools.
Споделяне на опит | ideas and experience exchange.
Comparison to other tools (e.g., the flipped-classroom advantage over Kahoot; differences from and similarities to EdPuzzle) | съпоставяне с други инструменти: например, обърната класна стая – предимство пред Кахут; разлики и прилики с ЕдПъзил и т.н.
Създаване на акаунт | account creation and building of learning objects
Comparison to other tools (e.g., the flipped-classroom advantage over Kahoot; differences from and similarities to Apester) | съпоставяне с други инструменти: например, обърната класна стая – предимство пред Кахут; разлики и прилики с Ейпстър и т.н.
Apester (https://app.apester.com/) can be played asynchronously (though within a time limit), whereas Kahoot is a synchronous game. EdPuzzle, like Apester, can be used asynchronously, but like Kahoot it requires an account, whereas Apester can be played by anyone.
The proliferation of mobile devices and the adoption of learning applications in higher education simplify formative assessment. Professors can, for example, quickly create a multi-modal performance assessment that requires students to write, draw, read, and watch video within the same assessment. Other tools allow for automatic grading of responses, question-embedded documents, and video-based discussion.
Multi-Modal Assessments – create multiple-choice and open-ended items that are distributed digitally and assessed automatically. Student responses can be viewed instantaneously and downloaded to a spreadsheet for later use.
Formative (http://www.goformative.com) allows professors to upload charts or graphic organizers that students can draw on with a stylus. Formative also allows professors to upload document “worksheets” which can then be augmented with multiple-choice and open-ended questions.
Nearpod (http://www.nearpod.com) allows professors to upload their digital presentations and create digital quizzes to accompany them. Nearpod also allows professors to share three-dimensional field trips and models to help communicate ideas.
Video-Based Assessments – Question-embedded videos are an outstanding way to improve student engagement in blended or flipped instructional contexts. These tools allow professors to identify whether the videos they use or create are actually being viewed by students.
Playposit (http://www.playposit.com) is a leader in this application category. A second type of video-based assessment allows professors to sustain discussion-board-style conversations through brief videos.
Flipgrid (http://www.flipgrid.com), for example, allows professors to post a video question to which students may respond with videos of their own.
Quizzing Assessments – Tools that use closed-ended questions to provide a quick check of student understanding are also available.
Kahoot (http://www.kahoot.com) is relatively quick and convenient to use as a wrap-up to instruction or a review of concepts taught.
Integration of technology should be aligned with sound formative assessment design. Formative assessment is most valuable when it addresses student understanding and progress toward competencies or standards, and when it indicates concepts that need further attention for mastery. Additionally, formative assessment provides the instructor with valuable information on gaps in students’ learning, which can inform instructional changes or additional coverage of key concepts. Tech tools can make the creation, administration, and grading of formative assessments more efficient and can enhance their reliability when used consistently in the classroom. Selecting a tool that effectively addresses your assessment needs and complements your teaching style is critical.
Another consideration in rebranding assessment would be to emphasize that assessment “draws from multiple sources” of information. This, in turn, would encourage faculty to think about assessment not as a means of judgment but as a process of evidence gathering. In fact, it underscores the idea that to truly demonstrate effective learning and instruction, we must collect multiple pieces of evidence. As a result, faculty will be more likely to plan and implement multiple and varied assessment methods, which will lead to the collection of more evidence and stronger validity of inferences about the extent of student learning.