“It’s a matter of finding balance,” he said. “Upgrade the technology skills of older ‘digital immigrants,’ and help young kids improve social skills.”
On one hand, we’re trained not to think deeply about subjects when we text quick snippets and tweet short thoughts.
On the other hand, technology trains the brain to be nimble and to process new ideas quickly. We become more open to new ideas, and communicate more freely and frequently.
His work sits at the intersection of teacher education, learning technologies, and game-based learning. He thinks educators shouldn’t ignore video games if they want students to be media-literate, because games are the “storytelling medium of the 21st century.”
Gaming can also help build other SEL skills, such as empathy.
Video games are good for teaching kids problem-solving and ethical decision-making.
Some experts have expressed concern about how video games affect children. According to the Washington Post, the World Health Organization has recognized “gaming disorder”—characterized as a lasting addiction to video games—as a condition. Yet, not all experts agree that “game addiction” should be pathologized.
Todd Rose, the director of the Mind, Brain, and Education program at the Harvard Graduate School of Education, has emerged as a central intellectual figure behind the movement. In particular, his 2016 book, “The End of Average,” is seen as an important justification for and guide to the personalization of learning.
This is what Rose argues against. He holds that our culture is obsessed with measuring and finding averages—averages of human ability and averages of the human body. Sometimes the average is held to be the ideal.
The jaggedness principle means that many of the attributes we care about are multifaceted, not all of one piece. For example, human ability is not one thing, so it doesn’t make sense to talk about someone as “smart” or “dumb.” That’s unidimensional. Someone might be very good with numbers, very bad with words, about average in using space, and gifted in using visual imagery.
Since the 1930s, psychologists have debated whether intelligence is best characterized as one thing or many.
But most psychologists stopped playing this game in the 1990s. The resolution came through the work of John Carroll, who developed a third model in which abilities form a hierarchy. We can think of abilities as separate, but nested in higher-order abilities. Hence, there is a general, all-purpose intelligence, and it influences other abilities, so they are correlated. But the abilities nested within general intelligence are independent, so the correlations are modest. Thus, Rose’s jaggedness principle is certainly not new to psychology, and it’s incomplete.
The second of Rose’s principles (the context principle) holds that personality traits don’t exist, and there’s a similar problem with this claim: Rose describes a concept with limited predictive power as having none at all. The most commonly accepted theory holds that personality can be described by variation along five dimensions (the “Big Five”).
Rose’s third principle (the pathways principle) suggests that there are multiple ways to reach a goal like walking or reading, and that there is not a fixed set of stages through which each of us passes.
Rose thinks students should earn credentials, not diplomas. In other words, a school would not certify that you’re “educated in computer science” but that you have specific knowledge and skills—that you can program games on handheld devices, for example. He thinks grades should be replaced by testaments of competency (my note: badges); the school affirms that you’ve mastered the skills and knowledge, period. Finally, Rose argues that students should have more flexibility in choosing their educational pathways.
Sejnowski, T. J. (2018). The Deep Learning Revolution. Cambridge, MA: The MIT Press.
How deep learning―from Google Translate to driverless cars to personal cognitive assistants―is changing our lives and transforming every sector of the economy.
The deep learning revolution has brought us driverless cars, the greatly improved Google Translate, fluent conversations with Siri and Alexa, and enormous profits from automated trading on the New York Stock Exchange. Deep learning networks can play poker better than professional poker players and defeat a world champion at Go. In this book, Terry Sejnowski explains how deep learning went from being an arcane academic field to a disruptive technology in the information economy.
Sejnowski played an important role in the founding of deep learning, as one of a small group of researchers in the 1980s who challenged the prevailing logic-and-symbol based version of AI. The new version of AI Sejnowski and others developed, which became deep learning, is fueled instead by data. Deep networks learn from data in the same way that babies experience the world, starting with fresh eyes and gradually acquiring the skills needed to navigate novel environments. Learning algorithms extract information from raw data; information can be used to create knowledge; knowledge underlies understanding; understanding leads to wisdom. Someday a driverless car will know the road better than you do and drive with more skill; a deep learning network will diagnose your illness; a personal cognitive assistant will augment your puny human brain. It took nature many millions of years to evolve human intelligence; AI is on a trajectory measured in decades. Sejnowski prepares us for a deep learning future.
Buzzwords like “deep learning” and “neural networks” are everywhere, but so much of the popular understanding is misguided, says Terrence Sejnowski, a computational neuroscientist at the Salk Institute for Biological Studies.
Sejnowski, a pioneer in the study of learning algorithms, is the author of The Deep Learning Revolution(out next week from MIT Press). He argues that the hype about killer AI or robots making us obsolete ignores exciting possibilities happening in the fields of computer science and neuroscience, and what can happen when artificial intelligence meets human intelligence.
Machine learning is a very large field and goes way back. Originally, people were calling it “pattern recognition,” but the algorithms became much broader and much more sophisticated mathematically. Within machine learning are neural networks inspired by the brain, and then deep learning. Deep learning algorithms have a particular architecture with many layers through which information flows. So basically, deep learning is one part of machine learning, and machine learning is one part of AI.
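The “many layers” idea can be sketched in a few lines. This is a minimal illustration, not a real trained network: the weights below are random and untrained, and every name here is made up for the example. The point is only the architecture Sejnowski describes—each layer applies a weighted sum plus a nonlinearity, and the output of one layer becomes the input of the next.

```python
import math
import random

random.seed(0)

def make_layer(n_in, n_out):
    """One layer: a random weight matrix (n_out x n_in) and zero biases.
    Untrained, for illustration only."""
    weights = [[random.uniform(-0.5, 0.5) for _ in range(n_in)]
               for _ in range(n_out)]
    biases = [0.0] * n_out
    return weights, biases

def forward(x, layers):
    """Pass the input through each layer in turn: weighted sum, then a
    nonlinearity (tanh), whose output feeds the next layer."""
    for weights, biases in layers:
        x = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(weights, biases)]
    return x

# A "deep" network is just several such layers stacked: 4 inputs -> 8 -> 8 -> 2 outputs.
net = [make_layer(4, 8), make_layer(8, 8), make_layer(8, 2)]
output = forward([1.0, 1.0, 1.0, 1.0], net)
```

Training (adjusting the weights from data) is what deep learning algorithms add on top of this forward pass; the sketch shows only the layered structure.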
A turning point came in December 2012 at the NIPS meeting, which is the biggest AI conference. There, [computer scientist] Geoff Hinton and two of his graduate students showed you could take a very large dataset called ImageNet, with 10,000 categories and 10 million images, and reduce the classification error by 20 percent using deep learning. Traditionally on that dataset, error decreases by less than 1 percent in one year. In one year, 20 years of research was bypassed. That really opened the floodgates.
The inspiration for deep learning really comes from neuroscience.
AlphaGo, the program that beat the Go champion, included not just a model of the cortex but also a model of a part of the brain called the basal ganglia, which is important for making a sequence of decisions to meet a goal. There’s an algorithm called temporal differences, developed back in the ’80s by Richard Sutton, that, when coupled with deep learning, is capable of very sophisticated plays that no human has ever seen before.
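The core of Sutton’s temporal-difference idea fits in a few lines. Below is a sketch of tabular TD(0) value estimation on a toy three-state chain; the environment and all names are illustrative inventions, not anything from AlphaGo (which couples TD methods with deep networks rather than a table).

```python
def td0_values(episodes, n_states, alpha=0.1, gamma=1.0):
    """Estimate the value of each state from sequences of
    (state, reward, next_state) transitions using TD(0)."""
    V = [0.0] * n_states
    for episode in episodes:
        for state, reward, next_state in episode:
            # Bootstrap: the target is the reward plus the (discounted)
            # current estimate of the successor state's value.
            target = reward if next_state is None else reward + gamma * V[next_state]
            # Nudge V[state] toward the target by the learning rate alpha.
            V[state] += alpha * (target - V[state])
    return V

# Toy chain 0 -> 1 -> 2 -> terminal, with reward 1.0 only at the end.
episode = [(0, 0.0, 1), (1, 0.0, 2), (2, 1.0, None)]
values = td0_values([episode] * 200, n_states=3)
```

The key property is that each state’s estimate is updated from the *next* state’s estimate, so credit for the final reward propagates backward through the sequence of decisions—exactly the kind of multi-step credit assignment the basal ganglia is thought to support.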
There’s a convergence occurring between AI and human intelligence. As we learn more and more about how the brain works, that will reflect back into AI. At the same time, researchers are creating a whole theory of learning that can be applied to understanding the brain, allowing us to analyze the activity of thousands of neurons. So there’s a feedback loop between neuroscience and AI.
At Northwestern’s Auditory Neuroscience Lab, Kraus and colleagues measure how the brain responds when various sounds enter the ear. They’ve found that the brain reacts to sound in microseconds, and that brain waves closely resemble the sound waves.
Making sense of sound is one of the most “computationally complex” functions of the brain, Kraus said, which explains why so many language and other disorders, including autism, reveal themselves in the way the brain processes sound. The way the brain responds to the “ingredients” of sound—pitch, timing, and timbre—is a window into brain health and learning ability.
Kraus offers practical suggestions for creating space for “activities that promote sound-to-meaning development,” whether at home or in school:
Reduce noise. Chronic background noise is associated with several auditory and learning problems: it contributes to “neural noise,” wherein brain neurons fire spontaneously in the absence of sound; it reduces the brain’s sensitivity to sound; and it slows auditory growth.
Read aloud. Even before kids are able to read themselves, hearing stories told by others develops vocabulary and builds working memory; to understand how a story unfolds, listeners need to remember what was said before.
Encourage children to play a musical instrument. “There is an explicit link between making music and strengthening language skills, so that keeping music education at the center of curricula can pay big dividends for children’s cognitive, emotional, and educational health.” Two years of music instruction in elementary and even secondary school can trigger biological changes in how the brain processes sound, which in turn affects language development.
Listen to audiobooks and podcasts. Well-told stories can draw kids in and build attention skills and working memory. The number and quality of these recordings has exploded in recent years, making it that much easier to find a good fit for individuals and classes.
Support learning a second language. Growing up in a bilingual environment causes a child’s brain to manage two languages at once.
Avoid white noise machines. In an effort to soothe children to sleep, some parents set up sound machines in bedrooms. These devices, which emit “meaningless sound,” as Kraus put it, can interfere with how the brain develops sound-processing circuitry.
Use the spread of technology to your advantage. Rather than bemoan the constant bleeping and chirping of everyday life, much of it the result of technological advances, welcome the new sound opportunities these developments provide. Technologies that shrink the globalized world enable second-language learning.
The locations of their points of contact on other neurons suggest they’re in a powerful position to put the brakes on incoming excitatory signals—the signals by which complex circuits of neurons activate one another throughout the brain.