Many educational institutions maintain their own data centers. “We need to minimize the amount of work we do to keep systems up and running, and spend more energy innovating on things that matter to people.”
What’s the difference between machine learning (ML) and artificial intelligence (AI)?
Jeff Olson: That’s actually the setup for a joke going around the data science community. The punchline? If it’s written in Python or R, it’s machine learning. If it’s written in PowerPoint, it’s AI.
Machine learning is in practical use in a lot of places, whereas AI conjures up all these fantastic thoughts in people.
What is serverless architecture, and why are you excited about it?
Instead of having a machine running all the time, you just run the code necessary to do what you want—there is no persisting server or container. There is only this fleeting moment when the code is being executed. It’s called Function as a Service, and AWS pioneered it with a service called AWS Lambda. It allows an organization to scale up without planning ahead.
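A minimal sketch of what such a function looks like, in the style of an AWS Lambda handler in Python (the event shape and names here are illustrative, not taken from the interview):

```python
import json

def lambda_handler(event, context):
    """Entry point the FaaS platform invokes; there is no persistent server.

    The platform runs this function only for the fleeting moment of
    execution and scales instances automatically with traffic.
    """
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

The organization deploys only this function; capacity planning, patching, and scaling are the provider's problem.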
How do you think machine learning and Function as a Service will impact higher education in general?
The radical nature of this innovation will make a lot of systems that were built five or 10 years ago obsolete. Once an organization comes to grips with Function as a Service (FaaS) as a concept, it’s a pretty simple step for that institution to stop doing its own plumbing. FaaS will help accelerate innovation in education because of the API economy.
If the campus IT department will no longer be taking care of the plumbing, what will its role be?
I think IT will be curating the inter-operation of services, some developed locally but most purchased from the API economy.
As a result, you write far less code and have fewer security risks, so you can innovate faster. A succinct machine-learning algorithm with fewer than 500 lines of code can now replace an application that might have required millions of lines of code. Second, it scales. If you happen to have a gigantic spike in traffic, it deals with it effortlessly. If you have very little traffic, you incur a negligible cost.
Researchers at the Fraunhofer Institute for Microelectronic Circuits and Systems IMS have developed AIfES, an artificial intelligence (AI) concept for microcontrollers and sensors that contains a completely configurable artificial neural network. AIfES is a platform-independent machine learning library which can be used to realize self-learning microelectronics requiring no connection to a cloud or to high-performance computers. The sensor-related AI system recognizes handwriting and gestures, enabling, for example, gesture control of input when the library is running on a wearable.
A machine learning library programmed in C that can run on microcontrollers, but also on other platforms such as PCs, Raspberry Pi, and Android.
Sejnowski, T. J. (2018). The Deep Learning Revolution. Cambridge, MA: The MIT Press.
How deep learning―from Google Translate to driverless cars to personal cognitive assistants―is changing our lives and transforming every sector of the economy.
The deep learning revolution has brought us driverless cars, the greatly improved Google Translate, fluent conversations with Siri and Alexa, and enormous profits from automated trading on the New York Stock Exchange. Deep learning networks can play poker better than professional poker players and defeat a world champion at Go. In this book, Terry Sejnowski explains how deep learning went from being an arcane academic field to a disruptive technology in the information economy.
Sejnowski played an important role in the founding of deep learning, as one of a small group of researchers in the 1980s who challenged the prevailing logic-and-symbol based version of AI. The new version of AI Sejnowski and others developed, which became deep learning, is fueled instead by data. Deep networks learn from data in the same way that babies experience the world, starting with fresh eyes and gradually acquiring the skills needed to navigate novel environments. Learning algorithms extract information from raw data; information can be used to create knowledge; knowledge underlies understanding; understanding leads to wisdom. Someday a driverless car will know the road better than you do and drive with more skill; a deep learning network will diagnose your illness; a personal cognitive assistant will augment your puny human brain. It took nature many millions of years to evolve human intelligence; AI is on a trajectory measured in decades. Sejnowski prepares us for a deep learning future.
Buzzwords like “deep learning” and “neural networks” are everywhere, but so much of the popular understanding is misguided, says Terrence Sejnowski, a computational neuroscientist at the Salk Institute for Biological Studies.
Sejnowski, a pioneer in the study of learning algorithms, is the author of The Deep Learning Revolution(out next week from MIT Press). He argues that the hype about killer AI or robots making us obsolete ignores exciting possibilities happening in the fields of computer science and neuroscience, and what can happen when artificial intelligence meets human intelligence.
Machine learning is a very large field and goes way back. Originally, people were calling it “pattern recognition,” but the algorithms became much broader and much more sophisticated mathematically. Within machine learning are neural networks inspired by the brain, and then deep learning. Deep learning algorithms have a particular architecture with many layers that flow through the network. So basically, deep learning is one part of machine learning and machine learning is one part of AI.
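To make the "many layers" point concrete, here is a toy forward pass through a stack of layers in Python (the weights are made up and no training is shown; this only illustrates how data flows through a layered network):

```python
import math

def dense(vec, weights, bias):
    """One layer: weighted sums of the inputs followed by a nonlinearity."""
    return [math.tanh(sum(w * v for w, v in zip(row, vec)) + b)
            for row, b in zip(weights, bias)]

def deep_forward(x, layers):
    """Pass an input through a stack of layers -- the 'deep' in deep learning."""
    for weights, bias in layers:
        x = dense(x, weights, bias)
    return x

# Two tiny layers with made-up weights: 2 inputs -> 2 hidden units -> 1 output.
layers = [
    ([[0.5, -0.2], [0.1, 0.8]], [0.0, 0.1]),
    ([[1.0, -1.0]], [0.0]),
]
out = deep_forward([0.3, 0.7], layers)
```

Training consists of adjusting those weight matrices from data; the architecture itself is just this repeated layer-by-layer flow.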
December 2012 at the NIPS meeting, which is the biggest AI conference. There, [computer scientist] Geoff Hinton and two of his graduate students showed you could take a very large dataset called ImageNet, with 10,000 categories and 10 million images, and reduce the classification error by 20 percent using deep learning. Traditionally on that dataset, error decreases by less than 1 percent in one year. In one year, 20 years of research was bypassed. That really opened the floodgates.
The inspiration for deep learning really comes from neuroscience.
AlphaGo, the program that beat the Go champion included not just a model of the cortex, but also a model of a part of the brain called the basal ganglia, which is important for making a sequence of decisions to meet a goal. There’s an algorithm there called temporal differences, developed back in the ‘80s by Richard Sutton, that, when coupled with deep learning, is capable of very sophisticated plays that no human has ever seen before.
there’s a convergence occurring between AI and human intelligence. As we learn more and more about how the brain works, that’s going to reflect back in AI. But at the same time, they’re actually creating a whole theory of learning that can be applied to understanding the brain and allowing us to analyze the thousands of neurons and how their activities are coming out. So there’s this feedback loop between neuroscience and AI
meetings with Chief Learning Officers, talent management leaders, and vendors of next generation learning tools.
The corporate L&D industry is over $140 billion in size, and it crosses over into the $300 billion marketplace for college degrees, professional development, and secondary education around the world.
Digital Learning does not mean learning on your phone, it means “bringing learning to where employees are.” In other words, this new era is not only a shift in tools, it’s a shift toward employee-centric design. Shifting from “instructional design” to “experience design” and using design thinking are key here.
1) The traditional LMS is no longer the center of corporate learning, and it’s starting to go away.
LMS platforms were designed around the traditional content model, using a 17-year-old standard called SCORM. SCORM is a technology developed in the 1980s, originally intended to help companies like Boeing track training records from their CD-ROM-based training programs.
the paradigm that we built was focused on the idea of a “course catalog,” an artifact that makes sense for formal education, but no longer feels relevant for much of our learning today.
Not saying the $4 billion LMS market is dead, but the center of action has moved (i.e., their cheese has been moved). Today’s LMS is much more of a compliance management system, serving as a platform for record-keeping, and this function can now be replaced by new technologies.
We have come from a world of CD ROMs to online courseware (early 2000s) to an explosion of video and instructional content (YouTube and MOOCs in the last five years), to a new world of always-on, machine-curated content of all shapes and sizes. The LMS, which was largely architected in the early 2000s, simply has not kept up effectively.
2) The emergence of the X-API makes everything we do part of learning.
In the days of SCORM (the technology developed by Boeing in the 1980s to track CD-ROMs), we could only really track what you did in a traditional or e-learning course. Today all these other activities are trackable using the X-API (also called Tin Can or the Experience API). So just like Google and Facebook can track your activities on websites, and your browser can track your clicks on your PC or phone, the X-API lets products like the learning record store keep track of all your digital activities at work.
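An xAPI statement is just actor–verb–object JSON; a minimal sketch (the email, verb URI, and activity here are invented for illustration):

```python
import json

def make_xapi_statement(email, verb_id, verb_display, activity_id, activity_name):
    """Build a minimal xAPI statement: actor, verb, object.

    A Learning Record Store (LRS) collects statements like this from
    any digital activity, not just formal courses.
    """
    return {
        "actor": {"mbox": f"mailto:{email}", "objectType": "Agent"},
        "verb": {"id": verb_id, "display": {"en-US": verb_display}},
        "object": {
            "id": activity_id,
            "definition": {"name": {"en-US": activity_name}},
        },
    }

statement = make_xapi_statement(
    "learner@example.com",
    "http://adlnet.gov/expapi/verbs/experienced",
    "experienced",
    "https://example.com/videos/onboarding-1",
    "Onboarding video 1",
)
print(json.dumps(statement, indent=2))
```

Because any verb and any object URI are allowed, watching a video, reading a document, or finishing a quest can all land in the same record store.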
3) As content grows in volume, it is falling into two categories: micro-learning and macro-learning.
4) Work Has Changed, Driving The Need for Continuous Learning
Why is all the micro learning content so important? Quite simply because the way we work has radically changed. We spend an inordinate amount of time looking for information at work, and we are constantly bombarded by distractions, messages, and emails.
5) Spaced Learning Has Arrived
If we consider the new world of content (micro and macro), how do we build an architecture that teaches people what to use when? Can we make it easier and avoid all this searching?
Neurological research has proved that we don’t learn well through “binge education” like a course. We learn by being exposed to new skills and ideas over time, with spacing and questioning in between. Studies have shown that students who cram for final exams lose much of their memory within a few weeks, yet students who learn slowly with continuous reinforcement can capture skills and knowledge for decades.
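A toy sketch of the expanding-interval idea behind spaced learning (the starting gap and multiplier are arbitrary placeholders, not a validated memory model):

```python
def review_schedule(first_interval_days=1, factor=2.5, reviews=6):
    """Return cumulative review days under an expanding-interval scheme.

    Each successful review multiplies the gap to the next one, so
    reinforcement is spread out over time instead of crammed into
    one 'binge' session.
    """
    day, gap, schedule = 0, first_interval_days, []
    for _ in range(reviews):
        day += gap
        schedule.append(round(day, 1))
        gap *= factor
    return schedule
```

With the defaults, a learner revisits the material on day 1, then after progressively longer gaps, which is the pattern the spacing research favors over cramming.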
6) A New Learning Architecture Has Emerged: With New Vendors To Consider
One of the keys to digital learning is building a new learning architecture. This means using the LMS as a “player” but not the “center,” and looking at a range of new tools and systems to bring content together.
On the upper left is a relatively new breed of vendors, including companies like Degreed, EdCast, Pathgather, Jam, Fuse, and others, that serve as “learning experience” platforms. They aggregate, curate, and add intelligence to content, without specifically storing content or authoring in any way. In a sense they develop a “learning experience,” and they are all modeled after magazine-like interfaces that enable users to browse, read, consume, and rate content.
The second category is the “program experience platforms” or “learning delivery systems.” These companies, which include vendors like NovoEd, EdX, Intrepid, Everwise, and many others (including many LMS vendors), help you build a traditional learning “program” in an open and easy way. They offer pathways, chapters, social features, and features for assessment, scoring, and instructor interaction. While many of these features belong in an LMS, these systems are built in a modern cloud architecture, and they are effective for programs like sales training, executive development, onboarding, and more. In many ways you can consider them “open MOOC platforms” that let you build your own MOOCs.
The third category at the top I call “micro-learning platforms” or “adaptive learning platforms.” These are systems that operate more like intelligent, learning-centric content management systems that help you take lots of content, arrange it into micro-learning pathways and programs, and serve it up to learners at just the right time. Qstream, for example, has focused initially on sales training – and clients tell me it is useful at using spaced learning to help sales people stay up to speed (they are also entering the market for management development). Axonify is a fast-growing vendor that serves many markets, including safety training and compliance training, where people are reminded of important practices on a regular basis, and learning is assessed and tracked. Vendors in this category, again, offer LMS-like functionality, but in a way that tends to be far more useful and modern than traditional LMS systems. And I expect many others to enter this space.
Perhaps the most exciting part of tools today is the growth of AI and machine-learning systems, as well as the huge potential for virtual reality.
7) Traditional Coaching, Training, and Culture of Learning Has Not Gone Away
8) A New Business Model for Learning
The days of spending millions of dollars on learning platforms are starting to come to an end. We do have to make strategic decisions about what vendors to select, but given the rapid and immature state of the market, I would warn against spending too much money on any one vendor at a time. The market has yet to shake out, and many of these vendors could go out of business, be acquired, or simply become irrelevant in 3-5 years.
9) The Impact of Microsoft, Google, Facebook, and Slack Is Coming
The newest versions of Microsoft Teams, Google Hangouts and Google Drive, Workplace by Facebook, Slack, and other enterprise IT products now give employees the opportunity to share content, view videos, and find context-relevant documents in the flow of their daily work.
We can imagine that Microsoft’s acquisition of LinkedIn will result in some integration of Lynda.com content in the flow of work. (Imagine if you are trying to build a spreadsheet and a relevant Lynda course opens up). This is an example of “delivering learning to where people are.”
10) A new set of skills and capabilities in L&D
It’s no longer enough to consider yourself a “trainer” or “instructional designer” by career. While instructional design continues to play a role, we now need L&D to focus on “experience design,” “design thinking,” the development of “employee journey maps,” and much more experimental, data-driven, solutions in the flow of work.
Almost all the companies are now teaching themselves design thinking, they are using MVP (minimal viable product) approaches to new solutions, and they are focusing on understanding and addressing the “employee experience,” rather than just injecting new training programs into the company.
In the research project led by Ph.D. candidate Gabriel Culbertson, 48 students were recruited to play two versions of the game. In one group, students were connected via a chat interface with another player who could, if they wanted, offer advice on how to play. The second group played a version of the game in which they were required to collaborate on quests.
The research group found the students in the second so-called “high-interdependence” group spent more time communicating and, as a consequence, learned more words.
The research then expanded to a larger group of 186 Reddit users who were learning Japanese. After reviewing gameplay logs, interviews and Reddit posts, they found that those who spent the most time engaged in the game learned more new words and phrases.
The Cornell research team presented its research results at the Association for Computing Machinery Conference on Human-Computer Interaction in May in San Jose, CA.
Crompton, Muilenburg and Berge’s definition for m-learning is “learning across multiple contexts, through social and content interactions, using personal electronic devices.”
The “context” in this definition encompasses m-learning that is formal, self-directed, and spontaneous, as well as learning that is context aware and context neutral.
therefore, m-learning can occur inside or outside the classroom, participating in a formal lesson on a mobile device; it can be self-directed, as a person determines his or her own approach to satisfy a learning goal; or spontaneous, as a person can use the devices to look up something that has just prompted an interest (Crompton, 2013, p. 83). (Gaming article Tallinn)

Constructivist learning in the 1980s – Following Piaget’s (1929), Bruner’s (1996), and Jonassen’s (1999) educational philosophies, constructivists proffer that knowledge acquisition develops through interactions with the environment (p. 85). “The computer was no longer a conduit for the presentation of information: it was a tool for the active manipulation of that information” (Naismith, Lonsdale, Vavoula, & Sharples, 2004, p. 12).

Constructionist learning in the 1980s – Constructionism differed from constructivism in that Papert (1980) posited an additional component: students learned best when they were actively involved in constructing social objects. The tutee position: teaching the computer to perform tasks.

Problem-based learning in the 1990s – In PBL, students often worked in small groups of five or six to pool knowledge and resources to solve problems. Launched the sociocultural revolution, focusing on learning in out-of-school contexts and the acquisition of knowledge through social interaction.
Socio-Constructivist Learning in the 1990s. Socio-constructivists believe that social and individual processes are interdependent in the co-construction of knowledge (Sullivan-Palinscar, 1998; Vygotsky, 1978).
pp. 96-97. Keegan (2002) believed that e-learning was distance learning which has been converted to e-learning through the use of technologies such as the WWW. Which electronic media and tools constituted e-learning: e.g., did it matter whether the learning took place through a networked technology, or was it simply learning with an electronic device?
99-100. Traxler (2011) described five ways in which m-learning offers new learning opportunities: 1. Contingent learning, allowing learners to respond and react to the environment and changing experiences; 2. Situated learning, in which learning takes place in the surroundings applicable to the learning; 3. Authentic learning;
Diel, W. (2013). M-Learning as a subfield of open and distance education. In: Berge and Muilenburg (Eds.). Handbook of Mobile Learning.
15) Historical context in relation to the field of distance education (embedded librarian)
16 definition of independent study (workshop on mlearning and distance education
17. Theory of transactional distance (Moore)
Cochrane, T. (2013). A Summary and Critique of M-Learning Research and Practice. In: Berge and Muilenburg (Eds.). Handbook of Mobile Learning.
( Galin class, workshop)
According to Cook and Sharples (2010), the development of m-learning research has been characterized by three general phases: a focus upon devices, a focus on learning outside the classroom, and a focus on the mobility of the learner.
Early m-learning studies focused upon content delivery for small-screen devices and the PDA capabilities of mobile devices, rather than leveraging the potential of mobile devices for collaborative learning as recommended by Hoppe, Joiner, Milrad, and Sharples (p. 26).

Large-scale m-learning projects: several larger m-learning projects have tended to focus on specific groups of learners rather than developing pedagogical strategies for the integration of m-learning within tertiary education in general.
M-learning research funding
In comparison, m-learning research projects in countries with smaller populations, such as Australia and New Zealand, are typically funded on a shoestring budget.
M-learning research methodologies
M-learning research has been predominantly characterized by short-term case studies focused upon the implementation of rapidly changing technologies with early adopters, but with little evaluation, reflection, or emphasis on mainstream tertiary education integration.
p. 29: Identifying the gaps in m-learning research
Lack of explicit underlying pedagogical theory; lack of transferable design frameworks.
Pachler, N., Bachmair, B., and Cook, J. (2013). A Sociocultural Ecological Frame for Mobile Learning. In: Berge and Muilenburg (Eds.). Handbook of Mobile Learning.
(Tom video studio)
p. 35: A line of argumentation that defines mobile devices such as mobile phones as cultural resources. Mobile cultural resources emerge within what we call a “mobile complex,” which consists of specific structures, agency, and cultural practices.
p. 36: Pedagogy looks for learning in the context of identity formation of learners within a wider societal context. However, at the beginning of the twenty-first century, an economy-oriented service function of learning, driven by targets and international comparisons, has started to occupy education systems and the schools within them. Dunning (2000) describes the lengthy transformation process from natural assets (land, unskilled labor) to tangible assets (machinery) to intangible created assets such as knowledge and information of all kinds. Araya and Peters (2010) describe the development of the last 20 years in terms of phases: from the post-industrial economy to the information economy to the digital economy to the knowledge economy to the creative economy. Cultural ecology can refer to the debate about natural resources; we argue for a critical debate about the new cultural resources, namely mobile devices and their services. For us, the focus must not be on the exploitation of mobile devices and services for learning, but instead on the assimilation of learning with mobiles in informal contexts of everyday life into formal education.
An ecology comes into being if there exists a reciprocity between perceiver and environment. Translated to m-learning processes, this means that there is a reciprocity between the mobile devices in the activity contexts of everyday life and formal learning.
Rather than focusing on the acquisition of knowledge in relation to externally defined notions of relevance, increasingly, in a market-oriented system, the individual faces the challenge of shaping his or her knowledge out of his or her own sense of the world. Information is material selected by individuals to be transformed by them into knowledge to solve a problem in the life-world.
Crompton, H. (2013). A Sociocultural Ecological Frame for Mobile Learning. In: Berge and Muilenburg (Eds.). Handbook of Mobile Learning.
p. 47: As philosophies and practice move toward learner-centered pedagogies, technology, in a parallel move, is now able to provide new affordances to the learner, such as learning that is personalized, contextualized, and unrestricted by temporal and spatial constraints.
The necessity for m-learning to have a theory of its own, describing exactly what makes m-learning unique from conventional, tethered electronic learning and traditional learning.
p. 48. Definition and devices. Four central constructs: learning pedagogies, technological devices, context, and social interactions.
“learning across multiple contexts, through social and content interactions, using personal electronic devices.”
It is difficult, and ill-advised, to determine specifically which devices should be included in a definition of m-learning, as technologies are constantly being invented or redesigned. (My note: against the notion that since D2L is a MnSCU-mandated tool, it must be the one and only.) One should consider m-learning as the utilization of electronic devices that are easily transported and used anytime and anywhere.
p. 49: e-learning does not have to be networked learning; therefore, e-learning activities could be used in the classroom setting, as they often are.
Why m-learning needs a different theory beyond e-learning. Conventional e-learning is tethered, in that students are anchored to one place while learning. What sets m-learning apart from conventional e-learning is the very lack of those spatial and temporal constraints; learning has portability, ubiquitous access, and social connectivity.
50 dominant terms for m-learning should include spontaneous, intimate, situated, connected, informal, and personal, whereas conventional e-learning should include the terms computer, multimedia, interactive, hyperlinked, and media-rich environment.
51 Criteria for M-Learning
second consideration is that one must be cognizant of the substantial amount of learning taking place beyond the academic and workplace setting.
52 proposed theories
Activity theory: Vygotsky and Engestroem
Conversation theory: Pask 1975, cybernetic and dialectic framework for how knowledge is constructed. Laurillard (2007) although conversation is common for all forms of learning, m-learning can build in more opportunities for students to have ownership and control over what they are learning through digitally facilitated, location-specific activities.
53 multiple theories;
p. 54: Context is a central construct of mobile learning. Traxler (2011) described the role of context in m-learning as “context in the wider context,” as the notion of context becomes progressively richer. This theme fits with Naismith et al.’s situated theory, which describes m-learning activities promoting authentic context and culture.
Unlike e-learning, the learner is not anchored to a set place. This links to Vygotsky’s sociocultural approach.
Learning happens within various social groups and locations, providing a diverse range of connected learning experiences. Furthermore, connectivity is without temporal restraints, such as the schedules of educators.
m-learning as “learning dispersed in time”
my note student-centered learning
Moura, A., Carvalho, A. (2013). Framework For Mobile Learning Integration Into Educational Contexts. In: Berge and Muilenburg (Eds.). Handbook of Mobile Learning.
Eureka: machine learning tool, brainstorming engine. Give it an initial idea and it returns similar ideas. Like Google: refine the idea so the machine can understand it better. Create a collection of ideas to translate into course design or other uses.
influencers and microinfluencers, pre- and doing the execution
A machine can construct a book with the help of a person: a bionic book, machine and person working hand in hand. Provide keywords and phrases from lecture notes and presentation materials; from there, recommendations and suggestions based on one’s own experience; then identify included and excluded content; then the instructor can construct.
Design may be the least interesting part of the book for the faculty.
multiple choice quiz may be the least interesting part, and faculty might want to do much deeper assessment.
Use these machine learning techniques to build assessment more effectively. InQuizitive is the machine learning tool.
Student engagement and similar prompts.
Presence in the classroom: a pre-service teachers’ class; how to immerse them and practice classroom management skills.
First class: a marriage between VR and AI. An environment with a headset; an algorithm reacts to how teachers are interacting with the virtual kids. A series of variables, with the opportunity to interact with preset behavior. Classroom management skills. Simulations and environments otherwise impossible to create. Apps for these types of interactions.
facilitation, reflection and research
AI for a more human experience: allow the faculty more time to be human, more free time to contemplate.
Jason: Won’t the use of AI still reduce the amount of faculty needed?
Christina Dumeng: @Jason–I think it will most likely increase the amount of students per instructor.
Andrew Cole (UW-Whitewater): I wonder if instead of reducing faculty, these types of platforms (e.g., analytic capabilities) might require instructors to also become experts in the various technology platforms.
Dirk Morrison: Also wonder what the implications of AI for informal, self-directed learning?
Kate Borowske: The context that you’re presenting this in, as “your own jazz band,” is brilliant. These tools presented as a “partner” in the “band” seems as though it might be less threatening to faculty. Sort of gamifies parts of course design…?
Dirk Morrison: Move from teacher-centric to student-centric? Recommender systems, AI-based tutoring?
Andrew Cole (UW-Whitewater): The course with the bot TA must have been 100-level right? It would be interesting to see if those results replicate in 300, 400 level courses
Way back in 1983, I identified A.I. as one of 20 exponential technologies that would increasingly drive economic growth for decades to come.
Artificial intelligence applies to computing systems designed to perform tasks usually reserved for human intelligence using logic, if-then rules, decision trees and machine learning to recognize patterns from vast amounts of data, provide insights, predict outcomes and make complex decisions. A.I. can be applied to pattern recognition, object classification, language translation, data translation, logistical modeling and predictive modeling, to name a few. It’s important to understand that all A.I. relies on vast amounts of quality data and advanced analytics technology. The quality of the data used will determine the reliability of the A.I. output.
Machine learning is a subset of A.I. that utilizes advanced statistical techniques to enable computing systems to improve at tasks with experience over time. Chatbots like Amazon’s Alexa, Apple’s Siri, or any of the others from companies like Google and Microsoft all get better every year thanks to all of the use we give them and the machine learning that takes place in the background.
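The defining property here, improving at a task with experience, can be shown with a toy example: a one-parameter model fit to made-up data by gradient descent (nothing here reflects how Alexa or Siri actually train; it only illustrates the principle):

```python
def fit_slope(xs, ys, lr=0.01, epochs=200):
    """Learn y ~ w*x from examples by gradient descent.

    The error shrinks as the model sees the data repeatedly --
    the 'improves with experience' property that defines ML.
    """
    w = 0.0
    for _ in range(epochs):
        # gradient of the mean squared error with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
    return w

# data generated from y = 3x; the learner should recover w close to 3
w = fit_slope([1, 2, 3, 4], [3, 6, 9, 12])
```

More data and more passes over it drive the learned parameter closer to the truth, which is why chatbots improve as usage accumulates.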
Deep learning is a subset of machine learning that uses advanced algorithms to enable an A.I. system to train itself to perform tasks by exposing multi-layered neural networks to vast amounts of data, then using what has been learned to recognize new patterns contained in the data. Learning can be Human Supervised Learning, Unsupervised Learning, and/or Reinforcement Learning, like Google used with DeepMind to learn how to beat humans at the complex game Go. Reinforcement learning will drive some of the biggest breakthroughs.
Autonomous computing uses advanced A.I. tools such as deep learning to enable systems to be self-governing and capable of acting according to situational data without human command. A.I. autonomy includes perception, high-speed analytics, machine-to-machine communications and movement. For example, autonomous vehicles use all of these in real time to successfully pilot a vehicle without a human driver.
Augmented thinking: Over the next five years and beyond, A.I. will become increasingly embedded at the chip level into objects, processes, products and services, and humans will augment their personal problem-solving and decision-making abilities with the insights A.I. provides to get to a better answer faster.
Technology is not good or evil, it is how we as humans apply it. Since we can’t stop the increasing power of A.I., I want us to direct its future, putting it to the best possible use for humans.
Artificial intelligence could erase many practical advantages of democracy, and erode the ideals of liberty and equality. It will further concentrate power among a small elite if we don’t take steps to stop it.
Ordinary people may not understand artificial intelligence and biotechnology in any detail, but they can sense that the future is passing them by. In 1938 the common man’s condition in the Soviet Union, Germany, or the United States may have been grim, but he was constantly told that he was the most important thing in the world, and that he was the future (provided, of course, that he was an “ordinary man,” rather than, say, a Jew or a woman).
In 2018 the common person feels increasingly irrelevant. Lots of mysterious terms are bandied about excitedly in TED Talks, at government think tanks, and at high-tech conferences—globalization, blockchain, genetic engineering, AI, machine learning—and common people, both men and women, may well suspect that none of these terms is about them.
Fears of machines pushing people out of the job market are, of course, nothing new, and in the past such fears proved to be unfounded. But artificial intelligence is different from the old machines. In the past, machines competed with humans mainly in manual skills. Now they are beginning to compete with us in cognitive skills.
Israel is a leader in the field of surveillance technology, and has created in the occupied West Bank a working prototype for a total-surveillance regime. Already today whenever Palestinians make a phone call, post something on Facebook, or travel from one city to another, they are likely to be monitored by Israeli microphones, cameras, drones, or spy software. Algorithms analyze the gathered data, helping the Israeli security forces pinpoint and neutralize what they consider to be potential threats.
The conflict between democracy and dictatorship is actually a conflict between two different data-processing systems. AI may swing the advantage toward the latter.
As we rely more on Google for answers, our ability to locate information independently diminishes. Already today, “truth” is defined by the top results of a Google search. This process has likewise affected our physical abilities, such as navigating space.
So what should we do?
For starters, we need to place a much higher priority on understanding how the human mind works—particularly how our own wisdom and compassion can be cultivated.