Searching for "brain and learning"

prime the brain for learning

Activities That Prime the Brain for Learning

Brain breaks and focused attention practices help students feel relaxed, alert, and ready to learn.

BRAIN BREAKS

FOCUSED ATTENTION PRACTICES

++++++++++++++
more on mindfulness in this IMS blog
https://blog.stcloudstate.edu/ims?s=mindfulness

ELI webinar AI and teaching

ELI Webinar | How AI and Machine Learning Shape the Future of Teaching

https://events.educause.edu/eli/webinars/2019/how-ai-and-machine-learning-shape-the-future-of-teaching

When: Wednesday, 1/23/2019, 12:00 PM – 1:00 PM
Where: Centennial Hall – 100, Lecture Room
Who: Anyone interested in new methods for teaching

Outcomes

  • Explore what is meant by AI and how it relates to machine learning and data science
  • Identify relevant uses of AI and machine learning to advance education
  • Explore opportunities for using AI and machine learning to transform teaching
  • Understand how technology can shape open educational materials

Kyle Bowen, Director, Teaching and Learning with Technology https://members.educause.edu/kyle-bowen

Jennifer Sparrow, Senior Director of Teaching and Learning with Technology, https://members.educause.edu/jennifer-sparrow

Malcolm Brown, Director, EDUCAUSE Learning Initiative

more in this IMS blog on Jennifer Sparrow and digital fluency: https://blog.stcloudstate.edu/ims/2018/11/01/preparing-learners-for-21st-century-digital-citizenship/

++++++++++++++++++++++++++++

Feb 5, 2018 webinar notes

creating a jazz band of one: ThoughSourus

Eureka: a machine learning tool and brainstorming engine. Give it an initial idea and it returns similar ideas. Like Google: refine the idea so the machine can understand it better. Create a collection of ideas to translate into course design or other projects.

Netflix:

influencers and micro-influencers, before and during the execution

a place to start exploring and generating content.

https://answerthepublic.com/

A machine can construct a book with the help of a person: a bionic book, machine and person working hand in hand. Provide keywords and phrases from lecture notes and presentation materials; from there come recommendations and suggestions based on one’s own experience. Then identify included and excluded content, and the instructor can construct the book.

Design may be the least interesting part of the book for the faculty.

A multiple-choice quiz may be the least interesting part, and faculty might want to do much deeper assessment.

Use these machine learning techniques to build assessment more effectively; InQuizitive is the machine learning quizzing tool.

 

student engagement and similar prompts

Presence in the classroom: a pre-service teachers’ class; how to immerse them and let them practice classroom management skills.

https://books.wwnorton.com/books/inquizitive/overview/

First class: a marriage between VR and the use of AI. In a headset environment, an algorithm reacts to how teachers are interacting with the virtual kids: a series of variables and opportunities to interact with the presented behavior, building classroom management skills through simulations and environments otherwise impossible to create. There are apps for these types of interactions.

facilitation, reflection and research

AI for a more human experience: allow more time for the faculty to be more human, with more free time to contemplate.

Jason: Won’t the use of AI still reduce the amount of faculty needed?

Christina Dumeng: @Jason–I think it will most likely increase the amount of students per instructor.

Andrew Cole (UW-Whitewater): I wonder if instead of reducing faculty, these types of platforms (e.g., analytic capabilities) might require instructors to also become experts in the various technology platforms.

Dirk Morrison: Also wonder what the implications of AI for informal, self-directed learning?

Kate Borowske: The context that you’re presenting this in, as “your own jazz band,” is brilliant. These tools presented as a “partner” in the “band” seems as though it might be less threatening to faculty. Sort of gamifies parts of course design…?

Dirk Morrison: Move from teacher-centric to student-centric? Recommender systems, AI-based tutoring?

Andrew Cole (UW-Whitewater): The course with the bot TA must have been 100-level right? It would be interesting to see if those results replicate in 300, 400 level courses

Recording available here

https://events.educause.edu/eli/webinars/2019/how-ai-and-machine-learning-shape-the-future-of-teaching

personalized learning in the digital age

If This Is the End of Average, What Comes Next?

By Daniel T. Willingham     Jun 11, 2018

Todd Rose, the director of the Mind, Brain, and Education program at the Harvard Graduate School of Education, has emerged as a central intellectual figure behind the movement. In particular, his 2016 book, “The End of Average,” is seen as an important justification for and guide to the personalization of learning.

This is what Rose argues against. He holds that our culture is obsessed with measuring and finding averages: averages of human ability and averages of the human body. Sometimes the average is held to be the ideal.

The jaggedness principle means that many of the attributes we care about are multi-faceted, not all of a piece. For example, human ability is not one thing, so it doesn’t make sense to talk about someone as “smart” or “dumb.” That’s unidimensional. Someone might be very good with numbers, very bad with words, about average in using space, and gifted in using visual imagery.

Since the 1930s, psychologists have debated whether intelligence is best characterized as one thing or many.

But most psychologists stopped playing this game in the 1990s. The resolution came through the work of John Carroll, who developed a third model in which abilities form a hierarchy. We can think of abilities as separate, but nested in higher-order abilities. Hence, there is a general, all-purpose intelligence, and it influences other abilities, so they are correlated. But the abilities nested within general intelligence are independent, so the correlations are modest. Thus, Rose’s jaggedness principle is certainly not new to psychology, and it’s incomplete.
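
A quick, purely illustrative simulation of Carroll’s hierarchical picture (my own sketch; the loadings and ability labels are invented, not taken from the article): if each ability score is a mix of a shared general factor and an independent specific factor, the abilities come out correlated, but only modestly, which is exactly the pattern described above.

# Minimal sketch of a Carroll-style hierarchy (illustrative numbers only):
# each ability = loading * general factor g + independent specific factor.
import numpy as np

rng = np.random.default_rng(0)
n_people = 10_000

g = rng.normal(size=n_people)                  # general, all-purpose intelligence
specifics = rng.normal(size=(4, n_people))     # independent ability-specific factors

loadings = np.array([0.6, 0.5, 0.7, 0.4])      # hypothetical influence of g on each ability
abilities = loadings[:, None] * g + specifics  # e.g., numbers, words, space, imagery

# Off-diagonal correlations are positive (because of g) but modest (because of
# the independent specific factors) -- a jagged but correlated profile.
print(np.round(np.corrcoef(abilities), 2))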

The second of Rose’s principles (the context principle) holds that personality traits don’t exist, and there’s a similar problem with this claim: Rose describes a concept with limited predictive power as having none at all. The most commonly accepted theory holds that personality can be described by variation on five dimensions (the “Big Five”).

Rose’s third principle (pathways principle) suggests that there are multiple ways to reach a goal like walking or reading, and that there is not a fixed set of stages through which each of us passes.

Rose thinks students should earn credentials, not diplomas. In other words, a school would not certify that you’re “educated in computer science” but that you have specific knowledge and skills—that you can program games on handheld devices, for example. He thinks grades should be replaced by testaments of competency (my note: badges); the school affirms that you’ve mastered the skills and knowledge, period. Finally, Rose argues that students should have more flexibility in choosing their educational pathways.

++++++++++++++++
more on personalized learning in this IMS blog
https://blog.stcloudstate.edu/ims?s=personalized+learning

deep learning revolution

Sejnowski, T. J. (2018). The Deep Learning Revolution. Cambridge, MA: The MIT Press.

How deep learning―from Google Translate to driverless cars to personal cognitive assistants―is changing our lives and transforming every sector of the economy.

The deep learning revolution has brought us driverless cars, the greatly improved Google Translate, fluent conversations with Siri and Alexa, and enormous profits from automated trading on the New York Stock Exchange. Deep learning networks can play poker better than professional poker players and defeat a world champion at Go. In this book, Terry Sejnowski explains how deep learning went from being an arcane academic field to a disruptive technology in the information economy.

Sejnowski played an important role in the founding of deep learning, as one of a small group of researchers in the 1980s who challenged the prevailing logic-and-symbol based version of AI. The new version of AI Sejnowski and others developed, which became deep learning, is fueled instead by data. Deep networks learn from data in the same way that babies experience the world, starting with fresh eyes and gradually acquiring the skills needed to navigate novel environments. Learning algorithms extract information from raw data; information can be used to create knowledge; knowledge underlies understanding; understanding leads to wisdom. Someday a driverless car will know the road better than you do and drive with more skill; a deep learning network will diagnose your illness; a personal cognitive assistant will augment your puny human brain. It took nature many millions of years to evolve human intelligence; AI is on a trajectory measured in decades. Sejnowski prepares us for a deep learning future.

A pioneering scientist explains ‘deep learning’

Artificial intelligence meets human intelligence

neural networks

Buzzwords like “deep learning” and “neural networks” are everywhere, but so much of the popular understanding is misguided, says Terrence Sejnowski, a computational neuroscientist at the Salk Institute for Biological Studies.

Sejnowski, a pioneer in the study of learning algorithms, is the author of The Deep Learning Revolution (out next week from MIT Press). He argues that the hype about killer AI or robots making us obsolete ignores exciting possibilities happening in the fields of computer science and neuroscience, and what can happen when artificial intelligence meets human intelligence.

Machine learning is a very large field and goes way back. Originally, people were calling it “pattern recognition,” but the algorithms became much broader and much more sophisticated mathematically. Within machine learning are neural networks inspired by the brain, and then deep learning. Deep learning algorithms have a particular architecture with many layers that flow through the network. So basically, deep learning is one part of machine learning and machine learning is one part of AI.
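
As a rough, hand-rolled illustration of the “many layers” idea (my own sketch, not from the interview; the layer sizes and data are arbitrary), a deep network is simply an input that flows through a stack of layers, each applying weights and a nonlinearity:

# Minimal sketch of a deep feedforward network: the input flows through
# several hidden layers before reaching the output (all sizes are arbitrary).
import numpy as np

rng = np.random.default_rng(42)

def relu(x):
    return np.maximum(0.0, x)

layer_sizes = [4, 16, 16, 16, 3]               # input -> three hidden layers -> output
weights = [rng.normal(scale=0.1, size=(m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    """Pass one input through every layer in turn ("deep" = many such layers)."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = relu(h @ W + b)                    # hidden layers apply a nonlinearity
    return h @ weights[-1] + biases[-1]        # linear output layer (e.g., class scores)

print(forward(rng.normal(size=4)))             # three raw output scores for one example

In a real deep learning system the weights would be learned from data (for example by backpropagation) rather than left random; the point here is only the layered architecture.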

In December 2012, at the NIPS meeting, which is the biggest AI conference, [computer scientist] Geoff Hinton and two of his graduate students showed you could take a very large dataset called ImageNet, with 10,000 categories and 10 million images, and reduce the classification error by 20 percent using deep learning. Traditionally on that dataset, error decreases by less than 1 percent in one year. In one year, 20 years of research was bypassed. That really opened the floodgates.

The inspiration for deep learning really comes from neuroscience.

AlphaGo, the program that beat the Go champion, included not just a model of the cortex but also a model of a part of the brain called the basal ganglia, which is important for making a sequence of decisions to meet a goal. There’s an algorithm there called temporal differences, developed back in the ’80s by Richard Sutton, that, when coupled with deep learning, is capable of very sophisticated plays that no human has ever seen before.
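
Here is a minimal sketch of the temporal-difference idea itself (not AlphaGo’s implementation; the toy chain environment and all parameters below are invented): each state’s value estimate is nudged toward the observed reward plus the discounted value of the next state.

# Minimal TD(0) sketch on a hypothetical 5-state chain with a reward at the end.
import numpy as np

n_states = 5                    # states 0..4; the episode ends after leaving state 4
V = np.zeros(n_states)          # value estimates, learned online
alpha, gamma = 0.1, 0.9         # learning rate and discount factor

for _ in range(2000):           # episodes of a simple deterministic "walk right"
    s = 0
    while s < n_states:
        s_next = s + 1
        r = 1.0 if s == n_states - 1 else 0.0           # reward only on the final step
        v_next = V[s_next] if s_next < n_states else 0.0
        # TD(0) update: move V(s) toward the bootstrapped target r + gamma * V(s')
        V[s] += alpha * (r + gamma * v_next - V[s])
        s = s_next

print(np.round(V, 3))           # values rise toward the rewarding end of the chain

AlphaGo couples this kind of value learning with deep networks and search; the sketch shows only the core update rule.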

there’s a convergence occurring between AI and human intelligence. As we learn more and more about how the brain works, that’s going to reflect back in AI. But at the same time, they’re actually creating a whole theory of learning that can be applied to understanding the brain, allowing us to analyze the thousands of neurons and how their activities are coming out. So there’s this feedback loop between neuroscience and AI.

+++++++++++
deep learning revolution
https://blog.stcloudstate.edu/ims?s=deep+learning

Games and Online Interactive Content

Wednesday, 11/21/2018 – Wednesday, 12/12/2018

Looking for a beginner’s crash course in game making software and process? Games can be an excellent teaching resource, and game development is easier than ever. Whether you’re looking to develop your own teaching resources or run a game-making program for users, this course will give you the information you need to choose the most appropriate software development tool, structure your project, and accomplish your goals. Plain language, appropriate for absolute beginners, and practical illustrative examples will be used. Participants will receive practical basic exercises they can complete in open source software, as well as guides to advanced educational resources and available tutorials.

This is a blended format web course:

The course will be delivered as 4 separate live webinar lectures, one per week on Wednesday November 21 and then repeating Wednesdays, November 28, December 5 and December 12 at Noon Central time. You do not have to attend the live lectures in order to participate. The webinars will be recorded and distributed through the web course platform for asynchronous participation. The web course space will also contain the exercises and discussions for the course.

Learning Outcomes

  • Participants will be able to name five different software tools available to assist them or their users in creating games and interactive web content, as well as identify the required knowledge and skills to effectively use each program.
  • Participants will be able to effectively structure the development process of a game from brainstorming to launch.
  • Participants will be able to identify and articulate areas in which games can increase educational effectiveness and provide practical, desirable skills.

Who Should Attend

Library staff looking to develop educational games or run game making programs for users (including tween or teen users).

Instructors

Ruby Warren

Ruby Warren believes in the power of play, and that learning is a lot more effective when it’s interactive. She is the User Experience Librarian at the University of Manitoba Libraries, where she recently completed a research leave focused on educational game prototype development, and has been playing games from around the time she developed object permanence.

Cost

  • LITA Member: $135
  • ALA Member: $195
  • Non-member: $260

Moodle and Webinar login info will be sent to registrants the week prior to the start date.

How to Register

Register here; courses are listed by date, and you need to log in.

+++++++++++
more on games and libraries in this IMS blog
https://blog.stcloudstate.edu/ims?s=games+library

sound and the brain

What Types of Sound Experiences Enable Children to Learn Best?

https://www.kqed.org/mindshift/46824/what-types-of-sound-experiences-enable-children-to-learn-best
At Northwestern’s Auditory Neuroscience Lab, Kraus and colleagues measure how the brain responds when various sounds enter the ear. They’ve found that the brain reacts to sound in microseconds, and that brain waves closely resemble the sound waves.
Making sense of sound is one of the most “computationally complex” functions of the brain, Kraus said, which explains why so many language and other disorders, including autism, reveal themselves in the way the brain processes sound. The way the brain responds to the “ingredients” of sound—pitch, timing and timbre—is a window into brain health and learning ability.

Practical suggestions for creating space for “activities that promote sound-to-meaning development,” whether at home or in school:
  • Reduce noise. Chronic background noise is associated with several auditory and learning problems: it contributes to “neural noise,” wherein brain neurons fire spontaneously in the absence of sound; it reduces the brain’s sensitivity to sound; and it slows auditory growth.
  • Read aloud. Even before kids are able to read themselves, hearing stories told by others develops vocabulary and builds working memory; to understand how a story unfolds, listeners need to remember what was said before.
  • Encourage children to play a musical instrument. “There is an explicit link between making music and strengthening language skills, so that keeping music education at the center of curricula can pay big dividends for children’s cognitive, emotional, and educational health.” Two years of music instruction in elementary and even secondary school can trigger biological changes in how the brain processes sound, which in turn affects language development.
  • Listen to audiobooks and podcasts. Well-told stories can draw kids in and build attention skills and working memory. The number and quality of these recordings has exploded in recent years, making it that much easier to find a good fit for individuals and classes.
  • Support learning a second language. Growing up in a bilingual environment causes a child’s brain to manage two languages at once.
  • Avoid white noise machines. In an effort to soothe children to sleep, some parents set up sound machines in bedrooms. These devices, which emit “meaningless sound,” as Kraus put it, can interfere with how the brain develops sound-processing circuitry.
  • Use the spread of technology to your advantage. Rather than bemoan the constant bleeping and chirping of everyday life, much of it the result of technological advances, welcome the new sound opportunities these developments provide. Technologies that shrink the globalized world enable second-language learning.

++++++++++++
More on the brain in this IMS blog
https://blog.stcloudstate.edu/ims?s=brain

Limbic thought and artificial intelligence

Limbic thought and artificial intelligence

September 5, 2018  Siddharth (Sid) Pai

https://www.linkedin.com/pulse/limbic-thought-artificial-intelligence-siddharth-sid-pai/

An AI programme “catastrophically forgets” the learnings from its first set of data and would have to be retrained from scratch with new data. The website futurism.com says a completely new set of algorithms would have to be written for a programme that has mastered face recognition, if it is now also expected to recognize emotions. Data on emotions would have to be manually relabelled and then fed into this completely different algorithm for the altered programme to have any use. The original facial recognition programme would have “catastrophically forgotten” the things it learnt about facial recognition as it takes on new code for recognizing emotions. According to the website, this is because computer programmes cannot understand the underlying logic that they have been coded with.
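
To make “catastrophic forgetting” concrete, here is a minimal sketch (my own illustration, not from the article; the model, tasks, and data are all invented): a tiny logistic-regression model is trained on task A, then on a conflicting task B, and its accuracy on task A collapses because the same weights are simply overwritten.

# Minimal sketch of catastrophic forgetting with a tiny logistic-regression model.
import numpy as np

rng = np.random.default_rng(0)

def make_task(sign, n=500):
    """Binary task: the label is 1 when sign * x[0] > 0 (task B flips task A's rule)."""
    X = rng.normal(size=(n, 2))
    y = (sign * X[:, 0] > 0).astype(float)
    return X, y

def train(w, b, X, y, lr=0.1, epochs=200):
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)         # gradient of the logistic loss
        b -= lr * np.mean(p - y)
    return w, b

def accuracy(w, b, X, y):
    return np.mean(((X @ w + b) > 0).astype(float) == y)

w, b = np.zeros(2), 0.0
XA, yA = make_task(+1)          # task A
XB, yB = make_task(-1)          # task B, a conflicting rule

w, b = train(w, b, XA, yA)
print("accuracy on A after training on A:", accuracy(w, b, XA, yA))   # close to 1.0

w, b = train(w, b, XB, yB)      # sequential training, no rehearsal of task A
print("accuracy on A after training on B:", accuracy(w, b, XA, yA))   # collapses

Research such as the DeepMind work mentioned below looks for ways to protect the weights that matter for earlier tasks instead of overwriting them.
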
Irina Higgins, a senior researcher at Google DeepMind, has recently announced that she and her team have begun to crack the code on “catastrophic forgetting”.
As far as I am concerned, this limbic thinking is “catastrophic thinking” which is the only true antipode to AI’s “catastrophic forgetting”. It will be eons before AI thinks with a limbic brain, let alone has consciousness.
++++++++++++++++++

Stephen Hawking warns artificial intelligence could end mankind

https://www.bbc.com/news/technology-30290540
++++++++++++++++++++
thank you Sarnath Ramnat (sarnath@stcloudstate.edu) for the finding

An AI Wake-Up Call From Ancient Greece

  https://www.project-syndicate.org/commentary/artificial-intelligence-pandoras-box-by-adrienne-mayor-2018-10

++++++++++++++++++++
more on AI in this IMS blog
https://blog.stcloudstate.edu/ims?s=artifical+intelligence

new brain cells

(thank you Mike Pickle: https://www.facebook.com/groups/190982954245635/permalink/2110516852292226/)

Mysterious new brain cell found in people

“Rosehip neurons” were found in the uppermost layer of the cortex, which is home to many different types of neurons that inhibit the activity of other neurons.

The set of genes expressed in these inhibitory rosehip neurons doesn’t closely match any previously identified cell in the mouse, suggesting they have no analog in the rodent often used as a model for humans.

The locations of their points of contact on other neurons suggest they’re in a powerful position to put the brakes on other incoming, excitatory signals—by which complex circuits of neurons activate one another throughout the brain.

+++++++++++
more on learning and the brain in this IMS blog
https://blog.stcloudstate.edu/ims?s=brain+learning

how to amplify student learning

How Can We Amplify Student Learning? The ANSWER from Cognitive Psychology


We know now from rigorous testing in cognitive psychology that learning styles are really learning preferences that do not correlate with achievement (An et al., 2017).
https://www.facultyfocus.com/articles/teaching-and-learning/how-can-we-amplify-student-learning-the-answer-from-cognitive-psychology/

An, Donggun, and Martha Carr. “Learning styles theory fails to explain learning and achievement: Recommendations for alternative approaches.” Personality and Individual Differences 116 (2017): 410-416.

To assist time-strapped instructional faculty and staff, we offer a consolidated summary of key cognitive science principles, in the form of an easy-to-remember acronym: ANSWER.

Attention: Learning requires memory, and memory requires focused attention. Multitasking is a myth, and even the more scientifically-accurate term “task-switching” yields errors compared to focused attention. The brain is quite adept at filtering out dozens of simultaneous stimuli, as it does every second of wakefulness. Attention is a required ingredient for learning. This has ramifications for syllabus policies on the use of electronic devices for note-taking, which have been shown to be irresistible and therefore lead to distraction and lower scores (Ravizza, Uitvlugt, and Fenn). Even when students are not distracted, laptops are used primarily for dictation, which does little for long-term memory; writing by hand does more to stimulate attention and build neural networks than typing (Mueller and Oppenheimer).

Novelty: By building variety into lesson plans, activities, and opportunities for practice, instructors amplify potential learning for their students. Further, the use of metaphors in teaching enhances transfer, hemispheric integration, and retention, so using picture prompts and images can further solidify student learning (Sousa).

Spacing: Sometimes called “distributed practice,” the spacing effect refers to the jump in performance when students study a subject and then practice with gaps of time, ideally over one or more nights (sleep helps with memory consolidation), as compared to studying all at once, as if cramming the night before a test. Cramming, or massed practice, is successful for temporary test performance, since information is loaded into working memory. But the practices that work well for short-term memory do not work well for long-term memory. The spacing effect is particularly effective when combined with interleaving, the intentional practice of mixing in older learning tasks/skills with the new ones (Roediger et al.). An ideal example of this would be regular quizzes in the semester that are cumulative (think “tiny final exams”).
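
As a purely hypothetical illustration of cumulative, interleaved quizzing (“tiny final exams”), the short sketch below mixes questions on the current week’s topics with review questions drawn from every earlier week; the topic names and proportions are invented.

# Minimal sketch of a cumulative, interleaved quiz builder (hypothetical topics).
import random

random.seed(0)
topics_by_week = {
    1: ["cells", "tissues"],
    2: ["organs", "systems"],
    3: ["homeostasis", "feedback loops"],
}

def build_quiz(current_week, new_items=4, review_items=3):
    """Mix this week's material with spaced review of all earlier weeks."""
    new_pool = topics_by_week[current_week]
    old_pool = [t for week, ts in topics_by_week.items() if week < current_week for t in ts]
    quiz = random.choices(new_pool, k=new_items)          # new material
    if old_pool:
        quiz += random.choices(old_pool, k=review_items)  # interleaved older material
    random.shuffle(quiz)
    return quiz

print(build_quiz(current_week=3))   # a cumulative quiz: new topics plus older ones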

Why: Memory is associative; when new memories are formed, neurons wire together (and later fire together), so the context can lead to the information, and vice versa. A teaching strategy comprised of questions to guide lesson plans (perhaps even beginning with mystery) can pique student interest and learning potential. If you use PowerPoint, Haiku Deck, or Prezi, do your slides consist primarily of answers or questions?

Emotions: Short-term memories are stored in the hippocampus, a portion of the brain associated with emotions; the same area where we consolidate short-term into long-term memories overnight.
As instructors, we create the conditions in which students will motivate themselves (Ryan & Deci, 2000) by infusing our interactions with the positive emotions of curiosity, discovery, and fun. Simple gamification (quizzes with immediate feedback, for instance) can help.

Repetition: The creation of a new memory really means the formation of synapses across neurons and new neural pathways. These pathways and bridges degrade over time unless the synapse fires again. Consider the days before smartphones, when the way to remember a phone number was to repeat it several times mentally. Repetition, in all its forms, enables more effective recall later. This is why quizzing, practice testing, flashcards, and instructor-driven questioning and challenges are so effective.

+++++++++++
more on learning styles in this IMS blog
https://blog.stcloudstate.edu/ims?s=learning+styles

more on multitasking in this blog
https://blog.stcloudstate.edu/ims?s=multitasking

for and against the use of technology in the classroom
https://blog.stcloudstate.edu/ims/2017/04/03/use-of-laptops-in-the-classroom/

on spaced learning in this blog
https://blog.stcloudstate.edu/ims/2017/03/28/digital-learning/
