Artificial intelligence (AI) training costs, for example, are dropping 40-70% at an annual rate, a record-breaking deflationary force. AI is likely to transform every sector, industry, and company during the next 5 to 10 years.
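To see why a 40-70% annual decline is so dramatic, it helps to compound it. The sketch below is illustrative only: the decline rates come from the passage above, while the five-year horizon and the assumption of a constant rate are mine.

```python
# Compound effect of an annual cost decline: after n years,
# the remaining fraction of the original cost is (1 - rate)^n.
def remaining_cost_fraction(annual_decline: float, years: int) -> float:
    """Fraction of the original cost left after compounding the decline."""
    return (1 - annual_decline) ** years

for rate in (0.40, 0.70):
    frac = remaining_cost_fraction(rate, 5)
    print(f"{rate:.0%} annual decline over 5 years -> {frac:.2%} of original cost")
```

At the low end of the range (40% per year), costs fall to under 8% of their starting level within five years; at the high end (70% per year), to well under 1%.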
“The goal of ALOE is to develop new artificial intelligence theories and techniques to make online education for adults at least as effective as in-person education in STEM fields,” says Co-PI Ashok Goel, Professor of Computer Science and Human-Centered Computing and the Chief Scientist with the Center for 21st Century Universities at Georgia Tech.
Research and development at ALOE aims to blend online educational resources and courses to make education more widely available, as well as use virtual assistants to make it more affordable and achievable. According to Goel, ALOE will make fundamental advances in personalization at scale, machine teaching, mutual theory of mind and responsible AI.
The ALOE Institute represents a powerful consortium of several universities (Arizona State, Drexel, Georgia Tech, Georgia State, Harvard, UNC-Greensboro); technical colleges in TCSG; major industrial partners (Boeing, IBM and Wiley); and non-profit organizations (GRA and IMS).
According to the author of the report, participants in the AI education market will include IBM, AWS, Microsoft, Google, Nuance, Century Tech, Blackboard, Pearson, Cognii, Volley.com, Blippar, Knewton, Jenzabar, Content Technologies, PLEIQ, Luilishuo, Pixatel System, Cerevrum Inc., CheckiO, and Quantum Adaptive Learning.
Europe is expected to hold a significant market share with supportive government initiatives.
AI is being used to monitor students and their work. The most prominent uses of AI in higher education are attached to applications designed to protect or preserve academic integrity through the use of plagiarism-detection software (60%) and proctoring applications (42%) (see figure 1).
The chatbots are coming! A sizable percentage (36%) of respondents reported that chatbots and digital assistants are in use at least somewhat on their campuses, with another 17% reporting that their institutions are in the planning, piloting, and initial stages of use (see figure 2). The use of chatbots in higher education by admissions, student affairs, career services, and other student success and support units is not entirely new, but the pandemic has likely contributed to an increase in their use, as they help students get efficient, relevant, and correct answers to their questions without long waits. Chatbots may also liberate staff from repeatedly responding to the same questions and reduce errors by deploying updates immediately and universally.
AI is being used for student success tools such as identifying students who are at risk academically (22%) and sending early academic warnings (16%); another 14% reported that their institutions are in the planning, piloting, and initial stages of using AI for these tasks.
Nearly three-quarters of respondents said that ineffective data management and integration (72%) and insufficient technical expertise (71%) present at least a moderate challenge to AI implementation. Financial concerns (67%) and immature data governance (66%) also pose challenges. Insufficient leadership support (56%) is a foundational challenge related to each of the previously listed challenges in this group.
Current use of AI
Chatbots for informational and technical support, HR benefits questions, parking questions, service desk questions, and student tutoring
Research applications, conducting systematic reviews and meta-analyses, and data science research
Library services
Recruitment of prospective students
Providing individual instructional material pathways, assessment feedback, and adaptive learning software
Proctoring and plagiarism detection
Student engagement support and nudging, monitoring well-being, and predicting the likelihood of disengaging from the institution
Last year, Australia’s Chief Scientist Alan Finkel suggested that we in Australia should become “human custodians”. This would mean being leaders in technological development, ethics, and human rights.
A recent report from the Australian Council of Learned Academies (ACOLA) brought together experts from scientific and technical fields as well as the humanities, arts and social sciences to examine key issues arising from artificial intelligence.
A similar vision drives Stanford University’s Institute for Human-Centered Artificial Intelligence. The institute brings together researchers from the humanities, education, law, medicine, business and STEM to study and develop “human-centred” AI technologies.
Meanwhile, across the Atlantic, the Future of Humanity Institute at the University of Oxford similarly investigates “big-picture questions” to ensure “a long and flourishing future for humanity”.
The IT sector is also wrestling with the ethical issues raised by rapid technological advancement. Microsoft’s Brad Smith and Harry Shum wrote in their 2018 book The Future Computed that one of their “most important conclusions” was that the humanities and social sciences have a crucial role to play in confronting the challenges raised by AI.
Without training in ethics, human rights and social justice, the people who develop the technologies that will shape our future could make poor decisions.
Got a new open access article out on the ways AI is embedding in education research. Well-funded precision education experts and learning engineers aim to collect psychodata, brain data and biodata as evidence of the embodied substrates of learning. https://t.co/CbdHReXUiz
This article presents an examination of how education research is being remade as an experimental data-intensive science. AI is combining with learning science in new ‘digital laboratories’ where ownership over data, and power and authority over educational knowledge production, are being redistributed to research assemblages of computational machines and scientific expertise.
Research across the sciences, humanities and social sciences is increasingly conducted through digital knowledge machines that are reconfiguring the ways knowledge is generated, circulated and used (Meyer and Schroeder, 2015).
Knowledge infrastructures, such as those of statistical institutes or research-intensive universities, have undergone significant digital transformation with the arrival of data-intensive technologies, with knowledge production now enacted in myriad settings, from academic laboratories and research institutes to commercial research and development studios, think tanks and consultancies. Datafied knowledge infrastructures have become hubs of command and control over the creation, analysis and exchange of data (Bigo et al., 2019).
The combination of AI and learning science into an AILSci research assemblage consists of particular forms of scientific expertise embodied by knowledge actors – individuals and organizations – identified by categories including science of learning, AIED, precision education and learning engineering.
Precision education overtly uses psychological, neurological and genomic data to tailor or personalize learning around the unique needs of the individual (Williamson, 2019). Precision education approaches include cognitive tracking, behavioural monitoring, brain imaging and DNA analysis.
Expert power is therefore claimed by those who can perform big data analyses, especially those able to translate and narrate the data for various audiences. Likewise, expert power in education is now claimed by those who can enact data-intensive science of learning, precision education and learning engineering research and development, and translate AILSci findings into knowledge for application in policy and practitioner settings.
the thinking of a thinking infrastructure is not merely a conscious human cognitive process, but relationally performed across humans and socio-material strata, wherein interconnected technical devices and other forms ‘organize thinking and thought and direct action’.
As an infrastructure for AILSci analyses, these technologies at least partly structure how experts think: they generate new understandings and knowledge about processes of education and learning that are only thinkable and knowable due to the computational machinery of the research enterprise.
Big data-based molecular genetics studies are part of a bioinformatics-led transformation of biomedical sciences based on analysing exceptional volumes of data (Parry and Greenhough, 2018), which has transformed the biological sciences to focus on structured and computable data rather than embodied evidence itself.
Isin and Ruppert (2019) have recently conceptualized an emergent form of power that they characterize as sensory power. Building on Foucault, they note how sovereign power gradually metamorphosed into disciplinary power and biopolitical forms of statistical regulation over bodies and populations. Sensory power marks a shift to practices of data-intensive sensing, and to the quantified tracking, recording and representing of living pulses, movements and sentiments through devices such as wearable fitness monitors, online natural-language processing and behaviour-tracking apps. Davies (2019: 515–20) designates these as ‘techno-somatic real-time sensing’ technologies that capture the ‘rhythms’ and ‘metronomic vitality’ of human bodies, and bring about ‘new cyborg-type assemblages of bodies, codes, screens and machines’ in a ‘constant cybernetic loop of action, feedback and adaptation’.
Techno-somatic modes of neural sensing, using neurotechnologies for brain imaging and neural analysis, are the next frontier in AILSci. Real-time brainwave sensing is being developed and trialled in multiple expert settings.
Blended Reality is a cross-curricular applied research program at Yale through which teams create interactive experiences using virtual reality, augmented reality, and 3D printing tools. Yale is one of about 20 colleges participating in the HP/Educause Campus of the Future project investigating the use of this technology in higher education.
Interdisciplinary student and professor teams at Yale have developed projects that include using motion capture and artificial intelligence to generate dance choreography, converting museum exhibits into detailed digital replicas, and making an app that uses augmented reality to simulate injuries on the mannequins medical students use for training.
The perspectives and skills of art and humanities students have been critical to the success of these efforts, says Justin Berry, faculty member at the Yale Center for Collaborative Arts and Media and principal investigator for the HP Blended Reality grant.
Artificial intelligence and mixed reality have driven demand in learning games around the world, according to a new report by Metaari. A five-year forecast has predicted that educational gaming will reach $24 billion by 2024, with a compound annual growth rate of 33 percent and a quadrupling of revenues. Metaari is an analyst firm that tracks advanced learning technology.
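Metaari's two headline numbers are internally consistent: compounding a 33 percent annual growth rate over the five-year forecast window yields roughly a fourfold increase. A quick check (the rate and horizon come from the report; the calculation is mine):

```python
# Consistency check: a 33% compound annual growth rate (CAGR)
# sustained for 5 years multiplies revenues by (1 + 0.33)^5.
def growth_multiple(cagr: float, years: int) -> float:
    """Total growth multiple implied by a constant CAGR over `years` years."""
    return (1 + cagr) ** years

multiple = growth_multiple(0.33, 5)
print(f"33% CAGR over 5 years -> {multiple:.2f}x revenues")
```

The result is about 4.16x, matching the report's claim of a "quadrupling of revenues."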
a two-day conference about artificial intelligence in education organized by a company called Squirrel AI.
he believes that having AI-driven tutors or instructors will help them each get the individual approach they need.
the Chinese government has declared a national goal of surpassing the U.S. in AI technology by the year 2030, so there is almost a Sputnik-like push for the tech going on right now in China.