Way back in 1983, I identified A.I. as one of 20 exponential technologies that would increasingly drive economic growth for decades to come.
Artificial intelligence refers to computing systems designed to perform tasks usually reserved for human intelligence, using logic, if-then rules, decision trees and machine learning to recognize patterns in vast amounts of data, provide insights, predict outcomes and make complex decisions. A.I. can be applied to pattern recognition, object classification, language translation, data translation, logistical modeling and predictive modeling, to name a few. It’s important to understand that all A.I. relies on vast amounts of quality data and advanced analytics technology; the quality of the data used will determine the reliability of the A.I. output.
Machine learning is a subset of A.I. that uses advanced statistical techniques to enable computing systems to improve at tasks with experience over time. Voice assistants like Amazon’s Alexa, Apple’s Siri, and their counterparts from Google and Microsoft get better every year thanks to the enormous amount of use we give them and the machine learning that takes place in the background.
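The mechanics behind that improvement can be sketched in a few lines. The toy example below (a hypothetical illustration using scikit-learn and synthetic data, not real assistant logs) trains the same model on progressively larger slices of data and scores it on held-out examples; the point is simply that more experience tends to mean better predictions.

```python
# Minimal sketch, assuming scikit-learn is installed and using synthetic data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for real usage data.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

for n in (50, 500, 3500):  # progressively more "experience"
    model = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"trained on {n:>5} examples -> test accuracy {acc:.2f}")
```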
Deep learning is a subset of machine learning that uses advanced algorithms to enable an A.I. system to train itself to perform tasks by exposing multi-layered neural networks to vast amounts of data, then using what has been learned to recognize new patterns contained in the data. Learning can be supervised, unsupervised and/or reinforcement learning, which is what Google’s DeepMind used to learn how to beat humans at the complex game of Go. Reinforcement learning will drive some of the biggest breakthroughs.
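For a concrete sense of what “multi-layered” means, here is a minimal sketch of a tiny two-layer network written in plain numpy (a toy illustration, not how production deep learning systems are built). It repeatedly adjusts its weights from error feedback until it recognizes the XOR pattern, something no single linear rule can capture.

```python
import numpy as np

# Tiny two-layer network (one hidden layer) learning XOR with plain gradient descent.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # hidden layer weights
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output layer weights
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for step in range(5000):
    h = np.tanh(X @ W1 + b1)                    # forward pass: hidden activations
    p = sigmoid(h @ W2 + b2)                    # forward pass: predicted probabilities
    g_out = (p - y) / len(X)                    # error signal at the output
    g_W2, g_b2 = h.T @ g_out, g_out.sum(0)
    g_hid = g_out @ W2.T * (1 - h ** 2)         # backpropagate the error through tanh
    g_W1, g_b1 = X.T @ g_hid, g_hid.sum(0)
    for param, grad in ((W1, g_W1), (b1, g_b1), (W2, g_W2), (b2, g_b2)):
        param -= 0.5 * grad                     # adjust weights (learning rate 0.5)

print(np.round(p, 2).ravel())                   # should approach [0, 1, 1, 0]
```

Real deep networks stack many more layers and train on vastly more data, but the underlying loop of forward pass, error measurement and weight adjustment is the same.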
Autonomous computing uses advanced A.I. tools such as deep learning to enable systems to be self-governing and capable of acting according to situational data without human command. A.I. autonomy includes perception, high-speed analytics, machine-to-machine communications and movement. For example, autonomous vehicles use all of these in real time to successfully pilot a vehicle without a human driver.
Augmented thinking: Over the next five years and beyond, A.I. will become increasingly embedded at the chip level into objects, processes, products and services, and humans will augment their personal problem-solving and decision-making abilities with the insights A.I. provides to get to a better answer faster.
Technology is neither good nor evil; what matters is how we as humans apply it. Since we can’t stop the increasing power of A.I., I want us to direct its future, putting it to the best possible use for humans.
Sejnowski, T. J. (2018). The Deep Learning Revolution. Cambridge, MA: The MIT Press.
How deep learning―from Google Translate to driverless cars to personal cognitive assistants―is changing our lives and transforming every sector of the economy.
The deep learning revolution has brought us driverless cars, the greatly improved Google Translate, fluent conversations with Siri and Alexa, and enormous profits from automated trading on the New York Stock Exchange. Deep learning networks can play poker better than professional poker players and defeat a world champion at Go. In this book, Terry Sejnowski explains how deep learning went from being an arcane academic field to a disruptive technology in the information economy.
Sejnowski played an important role in the founding of deep learning, as one of a small group of researchers in the 1980s who challenged the prevailing logic-and-symbol based version of AI. The new version of AI Sejnowski and others developed, which became deep learning, is fueled instead by data. Deep networks learn from data in the same way that babies experience the world, starting with fresh eyes and gradually acquiring the skills needed to navigate novel environments. Learning algorithms extract information from raw data; information can be used to create knowledge; knowledge underlies understanding; understanding leads to wisdom. Someday a driverless car will know the road better than you do and drive with more skill; a deep learning network will diagnose your illness; a personal cognitive assistant will augment your puny human brain. It took nature many millions of years to evolve human intelligence; AI is on a trajectory measured in decades. Sejnowski prepares us for a deep learning future.
Buzzwords like “deep learning” and “neural networks” are everywhere, but so much of the popular understanding is misguided, says Terrence Sejnowski, a computational neuroscientist at the Salk Institute for Biological Studies.
Sejnowski, a pioneer in the study of learning algorithms, is the author of The Deep Learning Revolution (out next week from MIT Press). He argues that the hype about killer AI or robots making us obsolete ignores exciting possibilities happening in the fields of computer science and neuroscience, and what can happen when artificial intelligence meets human intelligence.
Machine learning is a very large field and goes way back. Originally, people were calling it “pattern recognition,” but the algorithms became much broader and much more sophisticated mathematically. Within machine learning are neural networks inspired by the brain, and then deep learning. Deep learning algorithms have a particular architecture with many layers through which the data flows. So basically, deep learning is one part of machine learning and machine learning is one part of AI.
It was December 2012 at the NIPS meeting, which is the biggest AI conference. There, [computer scientist] Geoff Hinton and two of his graduate students showed you could take a very large dataset called ImageNet, with 10,000 categories and 10 million images, and reduce the classification error by 20 percent using deep learning. Traditionally on that dataset, error decreases by less than 1 percent in one year. In one year, 20 years of research was bypassed. That really opened the floodgates.
The inspiration for deep learning really comes from neuroscience.
AlphaGo, the program that beat the Go champion, included not just a model of the cortex, but also a model of a part of the brain called the basal ganglia, which is important for making a sequence of decisions to meet a goal. There’s an algorithm there called temporal differences, developed back in the ‘80s by Richard Sutton, that, when coupled with deep learning, is capable of very sophisticated plays that no human has ever seen before.
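The temporal-difference idea itself is compact. The sketch below is a hypothetical, textbook-style example (a five-state random walk, nothing taken from AlphaGo or DeepMind): at every step, the value of the current state is nudged toward the reward received plus the estimated value of the next state.

```python
import random

# Minimal TD(0) sketch: learn state values for a 5-state random walk.
N_STATES, EPISODES, ALPHA, GAMMA = 5, 500, 0.1, 1.0
V = [0.0] * (N_STATES + 2)          # state values; indices 0 and 6 are terminal

for _ in range(EPISODES):
    s = 3                            # start in the middle state
    while s not in (0, N_STATES + 1):
        s_next = s + random.choice((-1, 1))
        reward = 1.0 if s_next == N_STATES + 1 else 0.0
        # temporal-difference update: move V(s) toward reward + gamma * V(s')
        V[s] += ALPHA * (reward + GAMMA * V[s_next] - V[s])
        s = s_next

print([round(v, 2) for v in V[1:-1]])   # approaches [1/6, 2/6, 3/6, 4/6, 5/6]
```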
There’s a convergence occurring between AI and human intelligence. As we learn more and more about how the brain works, that’s going to reflect back in AI. But at the same time, researchers are creating a whole theory of learning that can be applied to understanding the brain, allowing us to analyze thousands of neurons and how their activity unfolds. So there’s this feedback loop between neuroscience and AI.
Artificial intelligence (AI) and machine learning are no longer fantastical prospects seen only in science fiction. Products like Amazon Echo and Siri have brought AI into many homes.
Kelly Calhoun Williams, an education analyst for the technology research firm Gartner Inc., cautions there is a clear gap between the promise of AI and the reality of AI.
Artificial intelligence is a broad term used to describe any technology that emulates human intelligence, such as by understanding complex information, drawing its own conclusions and engaging in natural dialog with people.
Machine learning is a subset of AI in which the software can learn or adapt like a human can. Essentially, it analyzes huge amounts of data and looks for patterns in order to classify information or make predictions. The addition of a feedback loop allows the software to “learn” as it goes by modifying its approach based on whether the conclusions it draws are right or wrong.
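That feedback loop can be illustrated with a deliberately stripped-down learner (a toy perceptron-style sketch with made-up data, not the algorithm any particular curriculum product uses): it adjusts its internal weights only when a prediction turns out to be wrong.

```python
# Toy data: two features per example and a 0/1 label.
data = [((1, 1), 1), ((2, 3), 1), ((-1, -2), 0), ((-3, -1), 0)]
w, b = [0.0, 0.0], 0.0

for _ in range(20):                       # repeat until the rule stabilizes
    for (x1, x2), label in data:
        prediction = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        error = label - prediction        # 0 if right; +1 or -1 if wrong
        w[0] += error * x1                # feedback loop: modify the approach
        w[1] += error * x2                # only when the conclusion was wrong
        b += error

print(w, b)
```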
AI can process far more information than a human can, and it can perform tasks much faster and with more accuracy. Some curriculum software developers have begun harnessing these capabilities to create programs that can adapt to each student’s unique circumstances.
For instance, a Seattle-based nonprofit company called Enlearn has developed an adaptive learning platform that uses machine learning technology to create highly individualized learning paths that can accelerate learning for every student. (My note: about learning and technology, see Alfie Kohn in https://blog.stcloudstate.edu/ims/2018/09/11/educational-technology/)
GoGuardian, a Los Angeles company, uses machine learning technology to improve the accuracy of its cloud-based Internet filtering and monitoring software for Chromebooks. (My note: that smells of Big Brother.) Instead of blocking students’ access to questionable material based on a website’s address or domain name, GoGuardian’s software uses AI to analyze the actual content of a page in real time to determine whether it’s appropriate for students. (My note: privacy)
All of this raises serious privacy concerns. It requires an increased focus not only on data quality and accuracy, but also on the responsible stewardship of this information. “School leaders need to get ready for AI from a policy standpoint,” Calhoun Williams said. For instance: What steps will administrators take to secure student data and ensure the privacy of this information?
The ACM/IEEE Joint Conference on Digital Libraries in 2018 (JCDL 2018: https://2018.jcdl.org/) will be held in conjunction with UNT Open Access
Symposium 2018 (https://openaccess.unt.edu/symposium/2018) on June 3 – 6, 2018
in Fort Worth, Texas, the rustic and artistic threshold into the American
West. JCDL welcomes interesting submissions ranging across theories, systems,
services, and applications. We invite those managing, operating, developing,
curating, evaluating, or utilizing digital libraries broadly defined, covering
academic or public institutions, including archives, museums, and social
networks. We seek involvement of those in iSchools, as well as working in
computer or information or social sciences and technologies. Multiple tracks
and sessions will ensure tailoring to researchers, practitioners, and diverse
communities including data science/analytics, data curation/stewardship, information retrieval, human-computer interaction, hypertext (and Web/network science), multimedia, publishing, preservation, digital humanities, machine learning/AI, heritage/culture, health/medicine, policy, law, and privacy/intellectual property.
General Instructions on submissions of full papers, short papers, posters and
demonstrations, doctoral consortium, tutorials, workshops, and panels can be
found at https://2018.jcdl.org/general_instructions. Below are the submission
deadlines:
• Jan. 15, 2018 – Tutorial and workshop proposal submissions
• Jan. 15, 2018 – Full paper and short paper submissions
• Jan. 29, 2018 – Panel, poster and demonstration submissions
• Feb. 1, 2018 – Notification of acceptance for tutorials and workshops
• Mar. 8, 2018 – Notification of acceptance for full papers, short papers,
panels, posters, and demonstrations
• Mar. 25, 2018 – Doctoral Consortium abstract submissions
• Apr. 5, 2018 – Notification of acceptance for Doctoral Consortium
• Apr. 15, 2018 – Final camera-ready deadline for full papers, short papers,
panels, posters, and demonstrations
A free, open source, powerful tool for working with messy data
OpenRefine (formerly Google Refine) is a powerful tool for working with messy data: cleaning it; transforming it from one format into another; and extending it with web services and external data.
Please note that since October 2nd, 2012, Google has not actively supported this project, which has now been rebranded as OpenRefine. Project development, documentation and promotion are now fully supported by volunteers.
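For readers who prefer scripting to OpenRefine’s point-and-click interface, the sketch below shows, purely as an illustration in pandas with made-up data, the same flavor of cleanup the tool automates: trimming stray whitespace, clustering inconsistent spellings, flagging bad values, and exporting to another format.

```python
import pandas as pd

# Hypothetical messy data, standing in for whatever a real export might contain.
raw = pd.DataFrame({"institution": ["  St. Cloud State ", "st cloud state", "SCSU", "MIT "],
                    "year": ["2018", "2018 ", "n/a", "2017"]})

clean = raw.copy()
clean["institution"] = (clean["institution"].str.strip().str.lower()
                        .str.replace(".", "", regex=False)
                        .replace({"scsu": "st cloud state"}))   # cluster variant spellings
clean["year"] = pd.to_numeric(clean["year"].str.strip(), errors="coerce")  # bad values become NaN

clean.to_csv("clean.csv", index=False)   # transform from one format into another
print(clean)
```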