Q: The students had to write 3 learning goals in D2L self-assessment that they had for the semester, so they typed those in, and then clicked “send” or “complete” or whatever, and their response just went into the ether? There has to be a way to retrieve the information. Any ideas?
The info is gone. It doesn’t get collected anywhere; that’s not the nature of the tool. You wanted a Survey.
The Self-Assessment tool is meant for students to record information temporarily and review it themselves; if the instructor needs to get hold of that information, the instructor should use the Survey tool instead.
Example for a good use of self-assessment:
math70-feedbacktest – Preview
After you set up the following equation so that it could be solved by using the Quadratic Formula, in this case with polynomial terms on the left-hand side, what are the values of a, b, and c?
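A worked instance makes the expected answer concrete. The equation itself is not reproduced in the preview, so the one below is assumed for illustration:

```latex
% Assumed example equation (not from the original item): 3x^2 - 2x = 5
3x^2 - 2x = 5 \;\Longrightarrow\; 3x^2 - 2x - 5 = 0,
\qquad a = 3,\quad b = -2,\quad c = -5.
% Applying the Quadratic Formula:
x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}
  = \frac{2 \pm \sqrt{4 + 60}}{6}
  = \frac{2 \pm 8}{6},
\qquad x = \tfrac{5}{3} \ \text{or}\ x = -1.
```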
Flower Darby, from Northern Arizona University, and Heather Garcia, from Foothill College, presented an eye-catching poster at the Educause Learning Initiative conference this year with the title, “Multiple-choice quizzes don’t work.”
One solution, says Garcia, is for professors to give “more authentic” assignments, like project-based work and other things that students would be more likely to see in a professional environment.
She and her colleague argue that there is a way to assign project-based or other rich assessments without spending late nights holding a red pen.
One approach they recommend is called “specifications grading,” in which professors set a clear rubric for what students need to achieve and then score each submission as either meeting those specifications or not. “It allows faculty to really streamline their grading time.”
The approach was popularized by Linda B. Nilson, who wrote an entire book about it and regularly gives workshops on it. The book’s subtitle lays out its promise: “Restoring Rigor, Motivating Students, and Saving Faculty Time.”
For instance, in a math problem involving adding large numbers, a professor could make one of the choices the number that the student would get if they forgot to carry. If professors notice that several students mark that answer, it may be time to go over that concept again. “Even if I’ve got a class of 275, I can learn a lot about what they know and don’t know, and let that guide what I do the next day,” he says.
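The “forgot to carry” distractor can be generated mechanically. A minimal sketch (the function name and numbers are mine, not from the article):

```python
def add_without_carrying(a: int, b: int) -> int:
    """Add two non-negative integers digit by digit, dropping every carry --
    the answer a student gets if they forget to carry."""
    result, place = 0, 1
    while a or b:
        result += ((a % 10 + b % 10) % 10) * place  # keep only the units digit
        a, b, place = a // 10, b // 10, place * 10
    return result

correct = 475 + 267                       # 742
distractor = add_without_carrying(475, 267)  # 632: 5+7->2, 7+6->3, 4+2->6
```

If many students in the class pick 632, the instructor knows the carrying step, specifically, needs to be retaught.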
Since the early days of online instruction, the response of many new instructors has been to figure out how to transfer elements of their face-to-face class into the online format. In response, education technology companies have been quick to create products that attempt to replicate in-person teaching. Some examples include learning management systems, lecture capture tools, and early online meeting systems.
Online proctoring systems, such as ProctorU or Proctorio, replicate a practice that isn’t effective even in person. Exams are only good for a few things: managing faculty workload and assessing low-level skills and content knowledge. What they aren’t good at is demonstrating student learning or mastery of a topic. As authors Rena Palloff and Keith Pratt discuss in their book “Assessing the Online Learner: Resources and Strategies for Faculty,” online exams typically measure skills that require memorization of facts, whereas learning objectives are often written around one’s ability to create, evaluate, and analyze course material.
Authentic assessments, rather than multiple-choice or other online exams, are one alternative worth exploring. For example, in a chemistry course, students could make a video of themselves working through a set of problems and explaining the process. This would allow instructors to better understand students’ thinking and identify the areas where they are struggling. Another example could be a psychology course in which students curate and evaluate a set of resources on a given topic to demonstrate their ability to find and critically analyze online information. (See Bryan Alexander’s take on video assignments here: http://blog.stcloudstate.edu/ims?s=bryan+alexander+video+assignments)
Launched in 2000 as a project of the OECD, the PISA is administered every three years to nationally representative samples of students in each OECD country and in a growing number of partner countries and subnational units such as Shanghai. The 74 education systems that participated in the latest PISA study, conducted during 2009, represented more than 85% of the global economy and included virtually all of the United States’ major trading partners, making it a particularly useful source of information on U.S. students’ relative standing.
The United States’ historical advantage in terms of educational attainment has long since eroded, however. U.S. high-school graduation rates peaked in 1970 at roughly 80% and have declined slightly since, a trend often masked in official statistics by the growing number of students receiving alternative credentials, such as a General Educational Development (GED) certificate.
In many respects the U.S. higher education system remains the envy of the world. Despite recent concerns about rapidly increasing costs, declining degree completion rates, and the quality of instruction available to undergraduate students, U.S. universities continue to dominate world rankings of research productivity. The 2011 Academic Ranking of World Universities, an annual publication of Shanghai Jiao Tong University, placed eight U.S. universities within the global top 10, 17 within the top 20, and 151 within the top 500. A 2008 RAND study commissioned by the U.S. Department of Defense found that 63% of the world’s most highly cited academic papers in science and technology were produced by researchers based in the United States. Moreover, the United States remains the top destination for graduate students studying outside of their own countries, attracting 19% of all foreign students in 2008. This rate is nine percentage points higher than that of the closest U.S. competitor, the United Kingdom.
Abel, H. (1959). Polytechnische Bildung und Berufserziehung in internationaler Sicht. International Review of Education / Internationale Zeitschrift für Erziehungswissenschaft / Revue Internationale de l’Education, 5(4), 369–382. https://doi.org/10.1007/BF01417254
At one time it was left to teachers and administrators to decide exactly what level of math proficiency should be expected of students. But increasingly, states and the federal government itself have established proficiency levels that students are asked to reach. A national proficiency standard was set by the board that governs the National Assessment of Educational Progress (NAEP), which is administered by the U.S. Department of Education and generally known as the nation’s report card.
A crosswalk between NAEP and PISA is made possible by the fact that representative (but separate) samples of the high-school graduating Class of 2011 took the NAEP and PISA math and reading examinations: NAEP tests were taken in 2007, when the Class of 2011 was in 8th grade, and PISA tested 15-year-olds in 2009, most of whom were members of the Class of 2011. Given that NAEP identified 32 percent of U.S. 8th-grade students as proficient in math, the PISA equivalent is estimated by calculating the minimum score reached by the top-performing 32 percent of U.S. students participating in the 2009 PISA test. (See methodological sidebar for further details.)
++++++++++ dissertations ++++++++++++++
CAO perspectives: The role of general education objectives in career and technical programs in the United States and Europe
by Schanker, Jennifer Ballard, Ed.D., National-Louis University, 2011, 162; 3459884
Badges are a mechanism for awarding ‘micro-credits’ online. They are awarded by an organization to an individual user and can be either internal to a website or online community, or based on open standards and shared repositories.
In open online learning settings, badges are used to provide incentives for individuals to use our resources and to participate in discussion threads.
The IBM skills gateway is an example of how open badges can be leveraged to document professional development. The EDUCAUSE microcredentialing program offers 108 digital badges in five categories (community service, expertise development, presentation and facilitation, leadership development, and awards).
The Open Badges Initiative and “Digital Badges for Lifelong Learning” became the theme of the fourth Digital Media and Learning Competition, in which over 30 innovative badge systems and 10 research studies received over $5 million in funding between 2012 and 2013.
Standardization is the key to creating transferability and recognition across contexts.
Badges awarded for participation are seen as less meaningful than skill-based badges. For skill-based badges, evidence of mastery must be associated with the badge, along with the evaluation criteria. Having a clear purpose, ensuring transferability, and specifying learning objectives were noted by the interviewees as the top priorities when implementing badge offerings in higher education contexts.
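The Open Badges standard is what makes that transferability and evidence linkage concrete. A minimal sketch of a hosted Open Badges 2.0 assertion, with placeholder URLs and identities (the “evidence” field points to the work demonstrating mastery; the evaluation criteria live in the referenced BadgeClass):

```python
import json

# Sketch of a hosted Open Badges 2.0 assertion; all URLs/emails are placeholders.
assertion = {
    "@context": "https://w3id.org/openbadges/v2",
    "type": "Assertion",
    "id": "https://example.edu/assertions/123",
    "recipient": {
        "type": "email",
        "hashed": False,
        "identity": "student@example.edu",
    },
    "badge": "https://example.edu/badges/data-literacy",  # BadgeClass URL (holds criteria)
    "verification": {"type": "hosted"},
    "issuedOn": "2019-09-01T00:00:00Z",
    "evidence": "https://example.edu/portfolios/student/project-1",
}
print(json.dumps(assertion, indent=2))
```

Because the assertion is plain JSON-LD hosted at a stable URL, any platform that speaks the standard can verify and display the badge, which is the transferability the interviewees prioritized.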
Sheryl Grant is a senior researcher on user experience at OpenWorks Group, a company that focuses on supporting educational web applications and mobile tools, including credentialing services. Prior to her current position, Dr. Grant was Director of Alternative Credentialing and Badge Research at HASTAC. She was part of the team that organized the ‘Badges for Lifelong Learning Competition’.
She had the following advice to offer for the design and implementation of digital badges. She stressed that badge systems need to be designed in a participatory manner, together with the target audience who is supposed to receive them; this allows for fair, realistic, and transparent criteria. Another crucial aspect is assessment: who will verify that the badge credentials are issued correctly? While badges can offer additional motivation, they can also diminish motivation and create a ‘race to the bottom’ if they are obtained too easily. Specifically, Dr. Grant advised using badges to reward exceptional activities and to acknowledge students who want to go above and beyond. She also gave guidelines on when to avoid issuing badges, e.g., for activities that are already graded and activities that are required.
All current UNC badging pilots used the platform Credly for issuing badges. An alternative is Badgr, the follow-up to the Mozilla Open Badges Backpack. The European platform Badgecraft is another repository with a fairly broad user base. The Badge Wiki project offers a comprehensive list, with implementation details for each platform: Badge Platforms (Badge Wiki) (23 platforms).
Designing Effective Digital Badges (https://www.amazon.com/Designing-Effective-Digital-Badges-Applications/dp/1138306134) is a hands-on guide to the principles, implementation, and assessment of digital badging systems. Informed by the fundamental concepts and research-based characteristics of effective badge design, this book uses real-world examples to convey the advantages and challenges of badging and showcases its application across a variety of contexts.
In fall 2007, Larry Berger, CEO of Wireless Generation (now Amplify), was invited to submit a paper to an “Entrepreneurship in Education” conference.
As education entrepreneurs know, growth in K-12 comes hard. Sometimes very hard. We were living Marc Andreessen’s startup mantra: “You only ever experience two emotions: euphoria and terror.”
The edtech boom of the past two decades promised efficacy and new instructional models. Many teachers instead experience it as “clutter.” But poorly integrated standards, curriculum, assessment, and intervention materials have always been a problem.
When it comes to instruction, the work consists of four segments: core curriculum, supplemental curriculum (intervention, test prep, little books), assessment, and technology (hardware, infrastructure, and connectivity). Each of these workstreams is run by a separate team, using independent funding streams, and they only rarely coordinate. Schools rely—as they always have—on the hero in the classroom, who has to somehow synthesize everything for a roomful of children, every single day.
Twelve Years Later: How the K-12 Industry and Investment Landscape Has Shifted (Part 2)
Twelve years ago, Amplify CEO Larry Berger and I wrote about the “Pareto distribution” of companies in the K-12 sector.
The “oligopoly” was the natural outcome of a highly decentralized system and fragmented demand. To serve 15,000-plus districts and more than 100,000 school buildings, a company needed huge sales and service teams; to afford them, the company needed a bookbag full of products across content areas, grade ranges, and use cases. The structure of demand created the “Big Three”—McGraw-Hill, Houghton Mifflin Harcourt and Pearson.
Meanwhile, the number of small players—further right on the Pareto distribution—has grown dramatically. Online distribution and freemium business models have enabled companies like Flocabulary, Newsela, Nearpod, and others to emerge.
A few alternative models to consider:
The first includes companies like Remind, ClassDojo, and Edmodo, which all adopted a “West Coast” approach: collect active users now, with plans to monetize later.
The second includes the “platform” players—Schoology, itslearning, Canvas, and other LMS-like platforms. They have set out to do something differently, only possible by means of technology—to be the search, storage and distribution platform for instructional content. Google Classroom has instead emerged as the de facto standard platform, fueled by the runaway adoption of Chromebooks.
The third includes “policy responsive” players—companies like Panorama, Ellevation, or Wireless Generation. These companies help school systems meet a new policy requirement—social-emotional learning, English Language Learning, and reading assessment, respectively.
But we’re not “decluttering” our classrooms or our schools. What would it take for the private and public sectors to work shoulder-to-shoulder?
It’s a catch-22: so long as buying is fragmented, it’s hard to justify the integrated product investment; so long as products are fragmented, it’s hard for a district to create an integrated instructional model.
An interactive discussion on the Innovating Pedagogy 2019 report from The Open University
About the Guest
Rebecca is a senior lecturer in the Institute of Educational Technology (IET) at The Open University in the UK and a senior fellow of the Higher Education Academy. Her primary research interests are educational futures and how people learn together online; she supervises doctoral students in both these areas.
Rebecca worked for several years as a researcher and educator on the Schome project, which focuses on educational futures, and was also the research lead on the SocialLearn online learning platform and learning analytics lead on the Open Science Lab (Outstanding ICT Initiative of the Year: THE Awards 2014). She is currently a pedagogic adviser to the FutureLearn MOOC platform and evaluation lead on The Open University’s FutureLearn MOOCs. She is an active member of the Society for Learning Analytics Research and has co-chaired many learning analytics events, including several associated with the Learning Analytics Community Exchange (LACE), a European project funded under Framework 7.
Rebecca’s most recent book, Augmented Education, was published by Palgrave in spring 2014.
Mor, Y., Ferguson, R., & Wasson, B. (2015). Editorial: Learning design, teacher inquiry into student learning and learning analytics: A call for action. British Journal of Educational Technology, 46(2), 221–229. https://doi.org/10.1111/bjet.12273
Hansen, C., Emin, V., Wasson, B., Mor, Y., Rodriguez-Triana, M., Dascalu, M., … Pernin, J. (2013). Towards an Integrated Model of Teacher Inquiry into Student Learning, Learning Design and Learning Analytics. Scaling up Learning for Sustained Impact – Proceedings of EC-TEL 2013, 8095, 605–606. https://doi.org/10.1007/978-3-642-40814-4_73