Artificial intelligence (AI) and machine learning are no longer fantastical prospects seen only in science fiction. Products like Amazon Echo and Siri have brought AI into many homes.
Yet Kelly Calhoun Williams, an education analyst for the technology research firm Gartner Inc., cautions that there is a clear gap between the promise of AI and the reality of AI.
Artificial intelligence is a broad term used to describe any technology that emulates human intelligence, such as by understanding complex information, drawing its own conclusions and engaging in natural dialog with people.
Machine learning is a subset of AI in which the software can learn or adapt like a human can. Essentially, it analyzes huge amounts of data and looks for patterns in order to classify information or make predictions. The addition of a feedback loop allows the software to “learn” as it goes by modifying its approach based on whether the conclusions it draws are right or wrong.
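To make that feedback loop concrete, here is a minimal, hypothetical sketch in Python (not any particular product's code): a perceptron-style classifier guesses, checks whether it was right or wrong, and nudges its internal weights accordingly.

```python
# Minimal sketch of a learn-by-feedback loop (perceptron-style updates).
def train(examples, epochs=10, lr=0.1):
    """examples: list of (features, label) pairs; label is 0 or 1."""
    n_features = len(examples[0][0])
    weights, bias = [0.0] * n_features, 0.0
    for _ in range(epochs):
        for features, label in examples:
            score = sum(w * x for w, x in zip(weights, features)) + bias
            prediction = 1 if score > 0 else 0
            error = label - prediction            # the feedback signal
            if error != 0:                        # wrong answer: adjust approach
                weights = [w + lr * error * x for w, x in zip(weights, features)]
                bias += lr * error
    return weights, bias

# Toy usage: the model discovers an AND-like pattern from the data alone.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
print(train(data))
```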
AI can process far more information than a human can, and it can perform tasks much faster and with more accuracy. Some curriculum software developers have begun harnessing these capabilities to create programs that can adapt to each student’s unique circumstances.
GoGuardian, a Los Angeles company, uses machine learning technology to improve the accuracy of its cloud-based Internet filtering and monitoring software for Chromebooks. (My note: this smells of Big Brother.) Instead of blocking students' access to questionable material based on a website's address or domain name, GoGuardian's software uses AI to analyze the actual content of a page in real time to determine whether it's appropriate for students. (My note: privacy.)
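As an illustration only (GoGuardian has not published its method), content-based filtering of this kind can be sketched as a tiny bag-of-words Naive Bayes classifier that scores a page's actual text rather than its URL; all names and training snippets below are hypothetical.

```python
import math
from collections import Counter, defaultdict

class PageClassifier:
    """Toy Naive Bayes text classifier: labels a page by its words."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)  # label -> word frequencies
        self.label_counts = Counter()            # label -> number of examples

    def train(self, text, label):
        self.label_counts[label] += 1
        self.word_counts[label].update(text.lower().split())

    def classify(self, text):
        words = text.lower().split()
        total_docs = sum(self.label_counts.values())
        best_label, best_score = None, float("-inf")
        for label in self.label_counts:
            # log prior + add-one-smoothed log likelihood of each word
            score = math.log(self.label_counts[label] / total_docs)
            total = sum(self.word_counts[label].values())
            vocab = len(self.word_counts[label]) + 1
            for w in words:
                score += math.log((self.word_counts[label][w] + 1) / (total + vocab))
            if score > best_score:
                best_label, best_score = label, score
        return best_label

# Toy usage with hypothetical training snippets:
clf = PageClassifier()
clf.train("algebra homework practice problems", "appropriate")
clf.train("graphic violent shock content", "inappropriate")
print(clf.classify("algebra practice page"))  # -> "appropriate"
```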
Collecting and analyzing student data on this scale raises serious privacy concerns. It requires an increased focus not only on data quality and accuracy, but also on the responsible stewardship of this information. “School leaders need to get ready for AI from a policy standpoint,” Calhoun Williams said. For instance: What steps will administrators take to secure student data and ensure the privacy of this information?
AI programmes adjust their own internal parameters to fine-tune their algorithms, without the need for an army of computer programmers rewriting code by hand. In AI speak, this is now often referred to as “machine learning”.
An AI programme “catastrophically forgets” what it learned from its first set of data and would have to be retrained from scratch with new data. The website futurism.com says a completely new set of algorithms would have to be written for a programme that has mastered face recognition, if it is now also expected to recognize emotions. Data on emotions would have to be manually relabelled and then fed into this completely different algorithm for the altered programme to be of any use. The original facial recognition programme would have “catastrophically forgotten” what it learnt about facial recognition as it takes on new code for recognizing emotions. According to the website, this is because computer programmes cannot understand the underlying logic they have been coded with.
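The effect is easy to reproduce. Below is a toy Python sketch (mine, not futurism.com's example): a single linear model trained sequentially on two conflicting tasks overwrites the weights that solved the first task.

```python
def sgd(weights, data, lr=0.5, steps=200):
    """Plain stochastic gradient descent on squared error."""
    for _ in range(steps):
        for x, y in data:
            pred = sum(w * xi for w, xi in zip(weights, x))
            err = pred - y
            weights = [w - lr * err * xi for w, xi in zip(weights, x)]
    return weights

def mse(weights, data):
    return sum((sum(w * xi for w, xi in zip(weights, x)) - y) ** 2
               for x, y in data) / len(data)

task_a = [([1.0, 0.0], 1.0), ([0.0, 1.0], 0.0)]  # task A's mapping
task_b = [([1.0, 0.0], 0.0), ([0.0, 1.0], 1.0)]  # conflicting task B

w = sgd([0.0, 0.0], task_a)
print("error on A after learning A:", mse(w, task_a))  # ~0: A mastered
w = sgd(w, task_b)
print("error on B after learning B:", mse(w, task_b))  # ~0: B mastered
print("error on A after learning B:", mse(w, task_a))  # large: A forgotten
```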
Irina Higgins, a senior researcher at Google DeepMind, has recently announced that she and her team have begun to crack the code on “catastrophic forgetting”.
As far as I am concerned, this limbic thinking is “catastrophic thinking”, which is the only true antipode to AI's “catastrophic forgetting”. It will be eons before AI thinks with a limbic brain, let alone has consciousness.
Between the “dumb” fixed algorithms and true AI lies the problematic halfway house we’ve already entered with scarcely a thought and almost no debate, much less agreement as to aims, ethics, safety, best practice. If the algorithms around us are not yet intelligent, meaning able to independently say “that calculation/course of action doesn’t look right: I’ll do it again”, they are nonetheless starting to learn from their environments. And once an algorithm is learning, we no longer know to any degree of certainty what its rules and parameters are. At which point we can’t be certain of how it will interact with other algorithms, the physical world, or us. Where the “dumb” fixed algorithms – complex, opaque and inured to real time monitoring as they can be – are in principle predictable and interrogable, these ones are not. After a time in the wild, we no longer know what they are: they have the potential to become erratic. We might be tempted to call these “frankenalgos” – though Mary Shelley couldn’t have made this up.
Twenty years ago, George Dyson anticipated much of what is happening today in his classic book Darwin Among the Machines. The problem, he tells me, is that we're building systems that are beyond our intellectual means to control. We believe that if a system is deterministic (acting according to fixed rules, this being the definition of an algorithm) it is predictable, and that what is predictable can be controlled. Both assumptions turn out to be wrong.

“It's proceeding on its own, in little bits and pieces,” he says. “What I was obsessed with 20 years ago that has completely taken over the world today are multicellular, metazoan digital organisms, the same way we see in biology, where you have all these pieces of code running on people's iPhones, and collectively it acts like one multicellular organism.

“There's this old law called Ashby's law that says a control system has to be as complex as the system it's controlling, and we're running into that at full speed now, with this huge push to build self-driving cars where the software has to have a complete model of everything, and almost by definition we're not going to understand it. Because any model that we understand is gonna do the thing like run into a fire truck ’cause we forgot to put in the fire truck.”
Walsh believes this makes it more, not less, important that the public learn about programming, because the more alienated we become from it, the more it seems like magic beyond our ability to affect. When shown the definition of “algorithm” given earlier in this piece, he found it incomplete, commenting: “I would suggest the problem is that algorithm now means any large, complex decision making software system and the larger environment in which it is embedded, which makes them even more unpredictable.” A chilling thought indeed. Accordingly, he believes ethics to be the new frontier in tech, foreseeing “a golden age for philosophy” – a view with which Eugene Spafford of Purdue University, a cybersecurity expert, concurs. Where there are choices to be made, that’s where ethics comes in.
Our existing system of tort law, which requires proof of intention or negligence, will need to be rethought. A dog is not held legally responsible for biting you; its owner might be, but only if the dog's action is thought foreseeable.
As we wait for a technological answer to the problem of soaring algorithmic entanglement, there are precautions we can take. Paul Wilmott, a British expert in quantitative analysis and a vocal critic of high-frequency trading on the stock market, wryly suggests “learning to shoot, make jam and knit”.
The venerable Association for Computing Machinery has updated its code of ethics along the lines of medicine’s Hippocratic oath, to instruct computing professionals to do no harm and consider the wider impacts of their work.
Significant Challenges Impeding Technology Adoption in K–12 Education
Improving Digital Literacy.
Schools are charged with developing students’ digital citizenship, ensuring mastery of responsible and appropriate technology use, including online etiquette and digital rights and responsibilities in blended and online learning settings. Due to the multitude of elements comprising digital literacy, it is a challenge for schools to implement a comprehensive and cohesive approach to embedding it in curricula.
Rethinking the Roles of Teachers.
Pre-service teacher training programs are also challenged to equip educators with digital and social–emotional competencies, such as the ability to analyze and use student data, amid other professional requirements to ensure classroom readiness.
p. 28 Improving Digital Literacy
Digital literacy spans subjects and grades, requiring a school-wide effort to embed it in curricula. This can ensure that students are empowered to adapt in a quickly changing world.
Education Overview: Digital Literacy Has to Encompass More Than Social Use
The American Library Association (ALA) defines digital literacy as “the ability to use information and communication technologies to find, evaluate, create, and communicate or share information, requiring both cognitive and technical skills.” While the ALA's definition does align with some of the skills in “Participate,” it does not specifically mention the skills related to “Open Practice.”
The library community’s digital and information literacy standards do not specifically include the coding, revision and remixing of digital content as skills required for creating digital information. Most digital content created for the web is “dynamic,” rather than fixed, and coding and remixing skills are needed to create new content and refresh or repurpose existing content. Leaving out these critical skills ignores the fact that library professionals need to be able to build and contribute online content to the ever-changing Internet.
p. 30 Rethinking the Roles of Teachers
Teachers implementing new games and software learn alongside students, which involves a degree of risk on the teacher's part as they try new methods and learn what works.
p. 32 Teaching Computational Thinking
p. 36 Sustaining Innovation through Leadership Changes
shift the role of teachers from depositors of knowledge to mentors working alongside students;
p. 38 Important Developments in Educational Technology for K–12 Education
Consumer technologies are tools created for recreational and professional purposes and were not designed, at least initially, for educational use — though they may serve well as learning aids and be quite adaptable for use in schools.
Drones > Real-Time Communication Tools > Robotics > Wearable Technology
Digital strategies are not so much technologies as they are ways of using devices and software to enrich teaching and learning, whether inside or outside the classroom.
Games and Gamification > Location Intelligence > Makerspaces > Preservation and Conservation Technologies
Enabling technologies are those technologies that have the potential to transform what we expect of our devices and tools. The link to learning in this category is less easy to make, but this group of technologies is where substantive technological innovation begins to be visible. Enabling technologies expand the reach of our tools, making them more capable and useful.
Affective Computing > Analytics Technologies > Artificial Intelligence > Dynamic Spectrum and TV White Spaces > Electrovibration > Flexible Displays > Mesh Networks > Mobile Broadband > Natural User Interfaces > Near Field Communication > Next Generation Batteries > Open Hardware > Software-Defined Networking > Speech-to-Speech Translation > Virtual Assistants > Wireless Power
Internet technologies include techniques and essential infrastructure that help to make the technologies underlying how we interact with the network more transparent, less obtrusive, and easier to use.
Bibliometrics and Citation Technologies > Blockchain > Digital Scholarship Technologies > Internet of Things > Syndication Tools
Learning technologies include both tools and resources developed expressly for the education sector and pathways of development that may include tools adapted from other purposes, matched with strategies to make them useful for learning.
Adaptive Learning Technologies > Microlearning Technologies > Mobile Learning > Online Learning > Virtual and Remote Laboratories
Social media technologies could have been subsumed under the consumer technology category, but they have become so ever-present and so widely used in every part of society that they have been elevated to their own category.
Crowdsourcing > Online Identity > Social Networks > Virtual Worlds
Visualization technologies run the gamut from simple infographics to complex forms of visual data analysis.
3D Printing > GIS/Mapping > Information Visualization > Mixed Reality > Virtual Reality
p. 46 Virtual Reality
p. 48 AI
p. 50 IoT
More on NMC Horizon Reports in this IMS blog.
The Internet of Things (IoT), augmented reality, and advancements in online learning have changed the way universities reach prospective students, engage with their current student body, and provide them the resources they need.
The Internet of Things has opened up a whole new world of possibilities in higher education. The increased connectivity between devices and “everyday things” means better data tracking and analytics, and improved communication between student, professor, and institution, often without ever saying a word. IoT is making it easier for students to learn when, how, and where they want, while providing professors support to create a more flexible and connected learning environment.
Virtual and augmented reality technologies have begun to take Higher Ed into the realm of what used to be considered science fiction.
The only path to developing really powerful AI would be to use this unstructured information. It's also called unsupervised learning: you just give it data and it learns by itself what to do with it, what the structure is, what the insights are.
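For readers who want the flavor of “just give it data,” here is a minimal k-means clustering sketch in Python, a classic unsupervised algorithm that finds groups in unlabeled points entirely on its own (an illustration, not DeepMind's code).

```python
import random

def kmeans(points, k, iterations=20):
    """Group unlabeled points into k clusters; no labels are ever given."""
    centers = random.sample(points, k)
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest center
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        for i, members in enumerate(clusters):
            if members:  # recompute each center as the mean of its members
                centers[i] = tuple(sum(vals) / len(members)
                                   for vals in zip(*members))
    return centers

# Two obvious blobs; the algorithm discovers them without being told.
data = [(0.1, 0.2), (0.2, 0.1), (0.0, 0.0), (5.0, 5.1), (5.2, 4.9), (4.8, 5.0)]
print(kmeans(data, k=2))
```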
One of the people you work with at Google is Geoff Hinton, a pioneer of neural networks. Has his work been crucial to yours?
Sure. He had this big paper in 2006 that rejuvenated this whole area. And he introduced this idea of deep neural networks—Deep Learning. The other big thing that we have here is reinforcement learning, which we think is equally important. A lot of what DeepMind has done so far is combining those two promising areas of research together in a really fundamental way. And that's resulted in the Atari game player, which really is the first demonstration of an agent that goes from pixels to action, as we call it.
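The reinforcement-learning half of that combination can be sketched in a few lines. Below is tabular Q-learning on a hypothetical toy corridor world; DQN, the Atari agent's algorithm, replaces this lookup table with a deep network that reads raw pixels, but the update rule is the same idea.

```python
import random

N_STATES, GOAL = 5, 4          # a tiny corridor: states 0..4, reward at 4
ACTIONS = [-1, +1]             # step left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

for episode in range(300):
    s = 0
    while s != GOAL:
        # epsilon-greedy with random tie-breaking: mostly exploit, sometimes explore
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda b: (Q[(s, b)], random.random()))
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0
        # Q-learning update: move Q toward reward + discounted best future value
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# The learned policy: head right toward the goal from every state.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})
```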