Todd Rose, the director of the Mind, Brain, and Education program at the Harvard Graduate School of Education, has emerged as a central intellectual figure behind the movement. In particular, his 2016 book, “The End of Average,” is seen as an important justification for and guide to the personalization of learning.
The title signals what Rose argues against: he holds that our culture is obsessed with measuring and finding averages—averages of human ability and averages of the human body. Sometimes the average is held to be the ideal.
The jaggedness principle means that many of the attributes we care about are multi-faceted, not all of a piece. For example, human ability is not one thing, so it doesn’t make sense to call someone simply “smart” or “dumb.” That’s unidimensional. Someone might be very good with numbers, very bad with words, about average in using space, and gifted in using visual imagery.
Since the 1930s, psychologists have debated whether intelligence is best characterized as one thing or many.
But most psychologists stopped playing this game in the 1990s. The resolution came through the work of John Carroll, who developed a third model in which abilities form a hierarchy. We can think of abilities as separate, but nested in higher-order abilities. Hence, there is a general, all-purpose intelligence, and it influences other abilities, so they are correlated. But the abilities nested within general intelligence are independent, so the correlations are modest. Thus, Rose’s jaggedness principle is certainly not new to psychology, and it’s incomplete.
Rose’s second principle (the context principle) holds that personality traits don’t exist, and there’s a similar problem with this claim: Rose describes a concept with limited predictive power as having none at all. The most commonly accepted theory holds that personality can be described by variation on five dimensions.
Rose’s third principle (pathways principle) suggests that there are multiple ways to reach a goal like walking or reading, and that there is not a fixed set of stages through which each of us passes.
Rose thinks students should earn credentials, not diplomas. In other words, a school would not certify that you’re “educated in computer science” but that you have specific knowledge and skills—that you can program games on handheld devices, for example. He thinks grades should be replaced by testaments of competency (my note: badges); the school affirms that you’ve mastered the skills and knowledge, period. Finally, Rose argues that students should have more flexibility in choosing their educational pathways.
Badging programs are rapidly gaining momentum in higher education – join us to learn how to get your badging efforts off the ground.
Key Considerations: Assessment of Competencies
During this session, you will learn how to ask the right questions and evaluate if badges are a good fit within your unique institutional context, including determining ROI on badging efforts. You’ll also learn how to assess the competencies behind digital badges.
Key Technology Considerations
This session will allow for greater understanding of Open Badges standards, the variety of technology software and platforms, and the portability of badges. We will also explore emerging trends in the digital badging space and discuss campus considerations.
Key Financial Considerations
During this hour, we will take a closer look at answering key financial questions surrounding badges:
What does the business model look like behind existing institutional badging initiatives?
Are these money-makers for an institution? Is there revenue potential?
Where does funding for these efforts come from?
Partnering with Industry
Badging can be a catalyst for partnerships between higher education and industry. In this session, you will have the opportunity to learn more about strategies for collaborating with industry in the development of badges and how badges align with employer expectations.
Branding and Marketing Badges
Now that we have a better idea of the “why” and “what” of badges, how do we market their value to external and internal stakeholders? You’ll see examples of how other institutions are designing and marketing their badges.
Alongside your peers and our expert instructors, you will have the opportunity to brainstorm ideas, get feedback, ask questions, and get answers.
Next Steps and the Road Ahead: Where Badging in Higher Ed is Going
More and more institutions are getting into the badging game, and we’ll talk about the far-reaching considerations in the world of badging. We’ll use this time to engage in forward thinking and discuss the future of badging and what its trends might be.
Sejnowski, T. J. (2018). The Deep Learning Revolution. Cambridge, MA: The MIT Press.
How deep learning―from Google Translate to driverless cars to personal cognitive assistants―is changing our lives and transforming every sector of the economy.
The deep learning revolution has brought us driverless cars, the greatly improved Google Translate, fluent conversations with Siri and Alexa, and enormous profits from automated trading on the New York Stock Exchange. Deep learning networks can play poker better than professional poker players and defeat a world champion at Go. In this book, Terry Sejnowski explains how deep learning went from being an arcane academic field to a disruptive technology in the information economy.
Sejnowski played an important role in the founding of deep learning, as one of a small group of researchers in the 1980s who challenged the prevailing logic-and-symbol based version of AI. The new version of AI Sejnowski and others developed, which became deep learning, is fueled instead by data. Deep networks learn from data in the same way that babies experience the world, starting with fresh eyes and gradually acquiring the skills needed to navigate novel environments. Learning algorithms extract information from raw data; information can be used to create knowledge; knowledge underlies understanding; understanding leads to wisdom. Someday a driverless car will know the road better than you do and drive with more skill; a deep learning network will diagnose your illness; a personal cognitive assistant will augment your puny human brain. It took nature many millions of years to evolve human intelligence; AI is on a trajectory measured in decades. Sejnowski prepares us for a deep learning future.
Buzzwords like “deep learning” and “neural networks” are everywhere, but so much of the popular understanding is misguided, says Terrence Sejnowski, a computational neuroscientist at the Salk Institute for Biological Studies.
Sejnowski, a pioneer in the study of learning algorithms, is the author of The Deep Learning Revolution (out next week from MIT Press). He argues that the hype about killer AI or robots making us obsolete ignores exciting possibilities happening in the fields of computer science and neuroscience, and what can happen when artificial intelligence meets human intelligence.
Machine learning is a very large field and goes way back. Originally, people were calling it “pattern recognition,” but the algorithms became much broader and much more sophisticated mathematically. Within machine learning are neural networks inspired by the brain, and then deep learning. Deep learning algorithms have a particular architecture with many layers that flow through the network. So basically, deep learning is one part of machine learning and machine learning is one part of AI.
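The nesting Sejnowski describes (deep learning inside neural networks inside machine learning) is easiest to see in code: a deep network is simply data flowing through a stack of layers, each transforming the previous layer’s output. A minimal pure-Python sketch — the weights and layer sizes here are toy values chosen for illustration, not anything from the book:

```python
def layer(inputs, weights, biases):
    """One fully connected layer with a ReLU non-linearity."""
    return [max(0.0, sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# A toy "deep" network: input -> hidden layer -> output layer.
x = [1.0, 2.0]                                   # input features
w1, b1 = [[0.5, -0.2], [0.1, 0.3]], [0.0, 0.1]   # hidden layer parameters
w2, b2 = [[1.0, 1.0]], [-0.5]                    # output layer parameters

h = layer(x, w1, b1)   # data flows through the hidden layer
y = layer(h, w2, b2)   # and then through the output layer
```

Real deep networks stack many more such layers and learn the weights from data, but the "architecture with many layers that flow through the network" is exactly this composition.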
December 2012 at the NIPS meeting, which is the biggest AI conference. There, [computer scientist] Geoff Hinton and two of his graduate students showed you could take a very large dataset called ImageNet, with 10,000 categories and 10 million images, and reduce the classification error by 20 percent using deep learning. Traditionally on that dataset, error decreases by less than 1 percent in one year. In one year, 20 years of research was bypassed. That really opened the floodgates.
The inspiration for deep learning really comes from neuroscience.
AlphaGo, the program that beat the Go champion, included not just a model of the cortex but also a model of a part of the brain called the basal ganglia, which is important for making a sequence of decisions to meet a goal. There’s an algorithm there called temporal differences, developed back in the ’80s by Richard Sutton, that, when coupled with deep learning, is capable of very sophisticated plays that no human has ever seen before.
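Sutton’s temporal-difference idea can be sketched in a few lines: a value estimate for each state is repeatedly nudged toward the observed reward plus the discounted value of the next state. This is a generic TD(0) sketch — the state names, rewards, learning rate, and discount are illustrative assumptions, not AlphaGo’s actual implementation:

```python
def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    """TD(0): move V[s] toward the reward plus the discounted
    value estimate of the successor state."""
    V[s] += alpha * (r + gamma * V[s_next] - V[s])
    return V

# Value table for a toy three-state episode: A -> B -> C (terminal).
V = {"A": 0.0, "B": 0.0, "C": 0.0}

# Replay the episode many times; only the final step pays reward 1.
for _ in range(100):
    td0_update(V, "B", 1.0, "C")   # B -> C earns the reward
    td0_update(V, "A", 0.0, "B")   # A -> B earns nothing directly
```

After enough replays, V["B"] approaches 1.0 and V["A"] approaches gamma times V["B"]: credit for the final reward propagates backward through the sequence of decisions, which is what makes the method useful for goal-directed play.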
There’s a convergence occurring between AI and human intelligence. As we learn more and more about how the brain works, that will reflect back in AI. But at the same time, researchers are creating a whole theory of learning that can be applied to understanding the brain, allowing us to analyze thousands of neurons and how their activity emerges. So there’s a feedback loop between neuroscience and AI.
Untether instructors from the room’s podium, allowing them control from anywhere in the room;
Streamline the start of class, including biometric login to the room’s technology, behind-the-scenes routing of course content to room displays, control of lights and automatic attendance taking;
Offer whiteboards that can be captured, routed to different displays in the room and saved for future viewing and editing;
Provide small-group collaboration displays and the ability to easily route content to and from these displays; and
Deliver these features through a simple, user-friendly and reliable room/technology interface.
Key players from Crestron, Google, Sony, Steelcase and Spectrum met with Indiana University faculty, technologists and architects to generate new ideas related to current and emerging technologies. Activities included collaborative brainstorming focusing on these questions:
What else can we do to create the classroom of the future?
What current technology exists to solve these problems?
What could be developed that doesn’t yet exist?
Top five findings:
Screenless and biometric technology will play an important role in the evolution of classrooms in higher education. We plan to research how voice activation and other Internet of Things technologies can streamline the process for faculty and students.
The entire classroom will become a space for student activity and brainstorming; walls, windows, desks and all activities are easily captured to the cloud, allowing conversations to continue outside of class or at the next class meeting.
Technology will be leveraged to include advanced automation for a variety of tasks, so the faculty member is freed from routine duties to focus on teaching.
The technology will become invisible to the process and enhance and customize the experience for the learner.
Virtual assistants could play an important role in providing students with a supported experience throughout their entire campus career.
In September 2015, the then library dean (the deans change every 2-3 years) requested that a committee of librarians meet and discuss the remodeling of Miller Center 218. By that time the SCSU CIO was asserting BYOx as a new policy for SCSU. BYOx in essence means the necessity for a stronger (wider) WiFi pipe. Based on that assertion, I, Plamen Miltenoff, insisted on shifting the cost from hardware (computers, laptops) to infrastructure (more WiFi nodes in the room and around it), preparing for the upcoming IoT by learning to remodel our syllabi for mobile devices and using students’ own mobile devices, rather than squandering university money on hardware. At least one faculty member from the committee honestly admitted she had no idea about IoT and, respectively, about the merit of my proposal. Thus, my proposal was completely disregarded by the self-nominated chair of the committee of librarians, who pushed for her idea to replace the desktops with a cart of laptops (a very 2010 idea, which by 2015 was already passé). As per Kelly (2018) (second article above), the failure of her proposal to the dean to choose laptops over mobile devices is obvious, considering that faculty DO see mobile devices completely replacing desktops and laptops, and that faculty DO NOT see document cameras and overhead projectors as tools that are here to stay.
Here are the notes from September 2015 http://blog.stcloudstate.edu/ims/2015/09/25/mc218-remodel/
As a result, my IoT proposal, now reflected in Johnston (2018) (first article above), did not even make it formally to the dean; hence the necessity to make it available through this blog.
The SCSU library’s thinking regarding the physical remodeling of classrooms is behind the times, and that costs the university money if the room has to be remodeled yet again to catch up with contemporary needs.
With a growing body of research proving yoga’s healing benefits, it’s no wonder more doctors—including those with traditional Western training—are prescribing this ancient practice to their patients.
Yoga therapy is now recognized as a clinically viable treatment, with established programs at major health care centers, such as The University of Texas MD Anderson Cancer Center, Memorial Sloan Kettering Cancer Center, Cleveland Clinic, and many others. In 2003, there were just five yoga-therapy training programs in the International Association of Yoga Therapists (IAYT) database. Today, there are more than 130 worldwide, including 24 rigorous multi-year programs newly accredited by IAYT, with 20 more under review. According to a 2015 survey, most IAYT members work in hospital settings, while others work in outpatient clinics or physical therapy, oncology, or rehabilitation departments (and in private practice).
Some therapists focus on physical mechanics, while others bring in Ayurvedic healing principles and factor in diet, psychological health, and spirituality to create a holistic, customized plan.
“Researchers take blood samples before and after yoga practice to see which genes have been turned on and which were deactivated,” says Khalsa. “We’re also able to see which areas of the brain are changing in structure and size due to yoga and meditation.” This kind of research is helping take yoga into the realm of “real science,” he says, by showing how the practice changes psycho-physiological function.
Ungerer, L. M. (2016). Digital Curation as a Core Competency in Current Learning and Literacy: A Higher Education Perspective. The International Review of Research in Open and Distributed Learning, 17(5). https://doi.org/10.19173/irrodl.v17i5.2566
Dunaway (2011) suggests that learning landscapes in a digital age are networked, social, and technological. Since people commonly create and share information by collecting, filtering, and customizing digital content, educators should provide students opportunities to master these skills (Mills, 2013). In enhancing critical thinking, we have to investigate pedagogical models that consider students’ digital realities (Mihailidis & Cohen, 2013). November (as cited in Sharma & Deschaine, 2016), however, warns that although the Web fulfils a pivotal role in societal media, students often are not guided on how to critically deal with the information that they access on the Web. Sharma and Deschaine (2016) further point out the potential for personalizing teaching and incorporating authentic material when educators themselves digitally curate resources by means of Web 2.0 tools.
p. 24. Communities of practice. Lave and Wenger’s (as cited in Weller, 2011) concept of situated learning and Wenger’s (as cited in Weller, 2011) idea of communities of practice highlight the importance of apprenticeship and the social role in learning.
criteria to publish a paper
Originality: Does the paper contain new and significant information adequate to justify publication?
Relationship to Literature: Does the paper demonstrate an adequate understanding of the relevant literature in the field and cite an appropriate range of literature sources? Is any significant work ignored?
Methodology: Is the paper’s argument built on an appropriate base of theory, concepts, or other ideas? Has the research or equivalent intellectual work on which the paper is based been well designed? Are the methods employed appropriate?
Results: Are results presented clearly and analyzed appropriately? Do the conclusions adequately tie together the other elements of the paper?
Implications for research, practice and/or society: Does the paper identify clearly any implications for research, practice and/or society? Does the paper bridge the gap between theory and practice? How can the research be used in practice (economic and commercial impact), in teaching, to influence public policy, in research (contributing to the body of knowledge)? What is the impact upon society (influencing public attitudes, affecting quality of life)? Are these implications consistent with the findings and conclusions of the paper?
Quality of Communication: Does the paper clearly express its case, measured against the technical language of the field and the expected knowledge of the journal’s readership? Has attention been paid to the clarity of expression and readability, such as sentence structure, jargon use, acronyms, etc.
Stanton, K. V., & Liew, C. L. (2011). Open Access Theses in Institutional Repositories: An Exploratory Study of the Perceptions of Doctoral Students. Information Research: An International Electronic Journal, 16(4),
We examine doctoral students’ awareness of and attitudes to open access forms of publication. Levels of awareness of open access and the concept of institutional repositories, publishing behaviour and perceptions of benefits and risks of open access publishing were explored. Method: Qualitative and quantitative data were collected through interviews with eight doctoral students enrolled in a range of disciplines in a New Zealand university and a self-completion Web survey of 251 students. Analysis: Interview data were analysed thematically, then evaluated against a theoretical framework. The interview data were then used to inform the design of the survey tool. Survey responses were analysed as a single set, then by discipline using SurveyMonkey’s online toolkit and Excel. Results: While awareness of open access and repository archiving is still low, the majority of interview and survey respondents were found to be supportive of the concept of open access. The perceived benefits of enhanced exposure and potential for sharing outweigh the perceived risks. The majority of respondents were supportive of an existing mandatory thesis submission policy. Conclusions: Low levels of awareness of the university repository remains an issue, and could be addressed by further investigating the effectiveness of different communication channels for promotion.
The researchers use the qualitative approach: by interviewing participants and analyzing their responses thematically, they build the survey.
They then administer the survey (the quantitative approach).
How do you intend to use a mixed method? Please share
Metaphors: A Problem Statement is like… (metaphor: a novel or poetic linguistic expression where one or more words for a concept are used outside normal conventional meaning to express a similar concept; Aristotle):
the DNA of the research
a snapshot of the research
the foundation of the research
the heart of the research
a “taste” of the research
a blueprint for the study
A digital object identifier (DOI) is a unique alphanumeric string assigned by a registration agency (the International DOI Foundation) to identify content and provide a persistent link to its location on the Internet. The publisher assigns a DOI when your article is published and made available electronically.
Why do we need it?
2010 changes to APA for electronic materials: digital object identifier (DOI). If a DOI is available, you no longer include a URL. Example: Author, A. A. (date). Title of article. Title of Journal, volume(number), page numbers. doi: xx.xxxxxxx
According to Sugimoto et al. (2016), the use of social media platforms by researchers is high, ranging from 75 to 80% in large-scale surveys (Rowlands et al., 2011; Tenopir et al., 2013; Van Eperen & Marincola, 2011).
There is one more reason: as much as you want to dwell on the fact that you are practitioners and research is not the most important part of your job, to a great degree you may also be judged by the scientific output of your office and/or institution.
In that sense, both social media and altmetrics might suddenly become extremely important to understand and apply.
In short, altmetrics (alternative metrics) measure the impact your scientific output has on the community. Your teachers and you present, publish, and create work which might not be formally published, but which may be widely reflected through, e.g., social media, and thus have an impact on the community.
How such impact is measured, if measured at all, can greatly influence the money flow to your institution.
Thelwall, M., & Wilson, P. (2016). Mendeley readership altmetrics for medical articles: An analysis of 45 fields. Journal of the Association for Information Science and Technology, 67(8), 1962–1972. https://doi.org/10.1002/asi.23501
Todd Tetzlaff is using Mendeley and he might be the only one to benefit … 🙂
Here is some food for thought from the article above:
Doctoral students and junior researchers are the largest reader group in Mendeley (Haustein & Larivière, 2014; Jeng et al., 2015; Zahedi, Costas, & Wouters, 2014a).
Studies have also provided evidence of high rates of blogging among certain subpopulations: for example, approximately one-third of German university staff (Pscheida et al., 2013) and one-fifth of UK doctoral students use blogs (Carpenter et al., 2012).
Social data sharing platforms provide an infrastructure to share various types of scholarly objects—including datasets, software code, figures, presentation slides and videos—and for users to interact with these objects (e.g., comment on, favorite, like, and reuse). Platforms such as Figshare and SlideShare disseminate scholars’ various types of research outputs such as datasets, figures, infographics, documents, videos, posters, or presentation slides (Enis, 2013) and display views, likes, and shares by other users (Mas-Bleda et al., 2014).
Frequently mentioned social platforms in scholarly communication research include research-specific tools such as Mendeley, Zotero, CiteULike, BibSonomy, and Connotea (now defunct), as well as general tools such as Delicious and Digg (Hammond, Hannay, Lund, & Scott, 2005; Hull, Pettifer, & Kell, 2008; Priem & Hemminger, 2010; Reher & Haustein, 2010).
“The focus group interviews were analysed based on the principles of interpretative phenomenology”
If you are not podcast fans, I understand. The link above is a pain in the behind to make work if you are not familiar with using podcasts.
Here is an easier way to find it:
1. Open your cell phone and find the podcast icon, which is pre-installed but which you might not have ever used [yet].
2. In the app, use the search option and type “stuff you should know”
3. The podcast will pop up. Scroll and find “How the Scientific Method Works,” or search for it within the podcast.
Once you can play it on the phone, you have to find time to listen to it.
I listen to podcasts when I have to do unpleasant chores, such as: 1. walking to work, 2. washing the dishes, 3. flying long hours (very rarely), 4. driving in the car.
There are a bunch of other situations when you may be strapped for time, and instead of feeling disgruntled and stressed, you can deliver mental [junk] food to your brain.
Earbuds help me: 1. forget the unpleasant task, 2. utilize time, 3. learn cool stuff.
Here are the podcasts I am subscribed to, besides “stuff you should know”:
TED Radio Hour
TED Talks Education
NPR Fresh Air
and a bunch of others, which I erase if I don’t listen to them for a year; and if I peruse the top chart and something piques my interest, I try it.
If I did not manage to convince you to try podcasts, that is totally fine; do not feel obligated.
However, you can listen to this podcast on your computer if you don’t want to download it on your phone.
It is a one-hour show by two geeks who are trying to make funny (and they do) a dry matter such as quantitative vs. qualitative, which you want to internalize:
1. Around minute 12, they talk about inductive versus deductive reasoning to introduce qualitative versus quantitative approaches. It is good to listen to their musings, since your dissertation goes through an inductive and deductive process, and understanding it can help you better control your dissertation writing.
2. Scientific method, hypothesis, etc. (around min 17).
While this is an Ed.D., not a Ph.D., and we do not delve into the philosophy of science, the more you know about this process, the better control you have over your dissertation.
3. Methods and how you prove your argument (Chapter 3) are discussed around min 35.
4. Dependent and independent variables, and how you do your research in general (min ~45).
In short, listen and please do share your thoughts below. You do not have to be kind to this offering. Actually, be as critical as possible, so you can help me decide whether I should offer it to the next cohort. Thank you in advance for your feedback.
Looking for a beginner’s crash course in game making software and process? Games can be an excellent teaching resource, and game development is easier than ever. Whether you’re looking to develop your own teaching resources or run a game-making program for users, this course will give you the information you need to choose the most appropriate software development tool, structure your project, and accomplish your goals. Plain language, appropriate for absolute beginners, and practical illustrative examples will be used. Participants will receive practical basic exercises they can complete in open source software, as well as guides to advanced educational resources and available tutorials.
This is a blended format web course:
The course will be delivered as 4 separate live webinar lectures, one per week on Wednesdays: November 21, November 28, December 5, and December 12, at noon Central time. You do not have to attend the live lectures in order to participate. The webinars will be recorded and distributed through the web course platform for asynchronous participation. The web course space will also contain the exercises and discussions for the course.
Participants will be able to name five different software tools available to assist them or their users in creating games and interactive web content, as well as identify the required knowledge and skills to effectively use each program.
Participants will be able to effectively structure the development process of a game from brainstorming to launch.
Participants will be able to identify and articulate areas in which games can increase educational effectiveness and provide practical, desirable skills.
Who Should Attend
Library staff looking to develop educational games or run game making programs for users (including tween or teen users).
Ruby Warren believes in the power of play, and that learning is a lot more effective when it’s interactive. She is the User Experience Librarian at the University of Manitoba Libraries, where she recently completed a research leave focused on educational game prototype development, and has been playing games from around the time she developed object permanence.
LITA Member: $135
ALA Member: $195
Moodle and Webinar login info will be sent to registrants the week prior to the start date.
Falsehoods are spread due to biases in the brain, society, and computer algorithms (Ciampaglia & Menczer, 2018). A compounding problem is that “information overload and limited attention contribute to a degradation of the market’s discriminative power” (Qiu, Oliveira, Shirazi, Flammini, & Menczer, 2017). Falsehoods spread quickly in the US through social media because this has become Americans’ preferred way to read the news (59%) in the 21st century (Mitchell, Gottfried, Barthel, & Sheer, 2016). While a mature critical reader may recognize a hoax disguised as news, there are those who share it intentionally. A 2016 US poll revealed that 23% of American adults had shared misinformation unwittingly or on purpose; this poll reported high to moderate confidence in one’s ability to identify fake news, with only 15% not very confident (Barthel, Mitchell, & Holcomb, 2016).
Hoaxy® takes it one step further and shows you who is spreading or debunking a hoax or disinformation on Twitter.
AI programmes themselves generate additional computer programming code to fine-tune their algorithms—without the need for an army of computer programmers. In AI speak, this is now often referred to as “machine learning”.
An AI programme “catastrophically forgets” the learnings from its first set of data and would have to be retrained from scratch with new data. The website futurism.com says a completely new set of algorithms would have to be written for a programme that has mastered face recognition, if it is now also expected to recognize emotions. Data on emotions would have to be manually relabelled and then fed into this completely different algorithm for the altered programme to have any use. The original facial recognition programme would have “catastrophically forgotten” the things it learnt about facial recognition as it takes on new code for recognizing emotions. According to the website, this is because computer programmes cannot understand the underlying logic that they have been coded with.
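The effect described above is easy to reproduce even in a one-parameter model: train it on one task, then train the same parameter on a conflicting task, and the first task’s behavior is overwritten. A hypothetical minimal sketch (toy data and learning rate are my assumptions, not from the article):

```python
def train(w, data, lr=0.1, epochs=200):
    """Gradient descent on squared error for the model y = w * x."""
    for _ in range(epochs):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x
    return w

task_a = [(1.0, 2.0), (2.0, 4.0)]    # task A: learn y = 2x
task_b = [(1.0, -2.0), (2.0, -4.0)]  # task B: learn y = -2x

w = train(0.0, task_a)
error_a_before = abs(w * 1.0 - 2.0)  # near zero: task A mastered

w = train(w, task_b)                 # sequential training on task B
error_a_after = abs(w * 1.0 - 2.0)   # large: task A "forgotten"
```

Because the same weight must serve both tasks, fitting task B drags it away from the task-A solution; real networks have many weights, but sequential training pulls them all toward the newest objective in the same way.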
Irina Higgins, a senior researcher at Google DeepMind, has recently announced that she and her team have begun to crack the code on “catastrophic forgetting”.
As far as I am concerned, this limbic thinking is “catastrophic thinking” which is the only true antipode to AI’s “catastrophic forgetting”. It will be eons before AI thinks with a limbic brain, let alone has consciousness.
Stephen Hawking warns artificial intelligence could end mankind
By Rory Cellan-Jones, Technology correspondent, 2 December 2014