
AI use in education

EDUCAUSE QuickPoll Results: Artificial Intelligence Use in Higher Education

D. Christopher Brooks, Friday, June 11, 2021

https://er.educause.edu/articles/2021/6/educause-quickpoll-results-artificial-intelligence-use-in-higher-education

AI is being used to monitor students and their work. The most prominent uses of AI in higher education are attached to applications designed to protect or preserve academic integrity through the use of plagiarism-detection software (60%) and proctoring applications (42%) (see figure 1).

The chatbots are coming! A sizable percentage (36%) of respondents reported that chatbots and digital assistants are in use at least somewhat on their campuses, with another 17% reporting that their institutions are in the planning, piloting, and initial stages of use (see figure 2). The use of chatbots in higher education by admissions, student affairs, career services, and other student success and support units is not entirely new, but the pandemic has likely contributed to an increase in their use, as they help students get efficient, relevant, and correct answers to their questions without long waits. Chatbots may also liberate staff from repeatedly responding to the same questions and reduce errors by deploying updates immediately and universally.
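To make the mechanism concrete, here is a minimal sketch of the kind of keyword-based FAQ chatbot the article describes: one shared answer table, so an update lands immediately and universally. The questions, keywords, and answers below are invented for illustration, not taken from any campus deployment.

```python
# Minimal keyword-based FAQ chatbot sketch.
# FAQ entries and keywords are invented for illustration.
FAQ = {
    ("transcript", "records"): "Request transcripts through the registrar's online portal.",
    ("parking", "permit"): "Parking permits are sold through the campus parking office.",
    ("fafsa", "financial aid"): "Financial aid questions: contact the financial aid office.",
}

def answer(question: str) -> str:
    """Return the first FAQ answer whose keywords appear in the question."""
    q = question.lower()
    for keywords, reply in FAQ.items():
        if any(k in q for k in keywords):
            return reply
    # Fallback: hand off to a human instead of guessing.
    return "Sorry, I don't know that one yet -- routing you to a staff member."

print(answer("How do I get a parking permit?"))
```

Because every response is read from one table, correcting an answer in `FAQ` fixes it for all users at once, which is the "immediate and universal" update the excerpt mentions.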

AI is being used for student success tools, such as identifying students who are academically at risk (22%) and sending early academic warnings (16%); another 14% reported that their institutions are in the planning, piloting, or initial stages of using AI for these tasks.

Nearly three-quarters of respondents said that ineffective data management and integration (72%) and insufficient technical expertise (71%) present at least a moderate challenge to AI implementation. Financial concerns (67%) and immature data governance (66%) also pose challenges. Insufficient leadership support (56%) is a foundational challenge related to each of the previously listed challenges in this group.

Current use of AI

  • Chatbots for informational and technical support, HR benefits questions, parking questions, service desk questions, and student tutoring
  • Research applications, conducting systematic reviews and meta-analyses, and data science research
  • Library services
  • Recruitment of prospective students
  • Providing individual instructional material pathways, assessment feedback, and adaptive learning software
  • Proctoring and plagiarism detection
  • Student engagement support and nudging, monitoring well-being, and predicting likelihood of disengaging from the institution
  • Detection of network attacks
  • Recommender systems

++++++++++++++++++
more on AI in education in this IMS blog
https://blog.stcloudstate.edu/ims?s=artificial+intelligence+education

digital ethics

O’Brien, J. (2020). Digital Ethics in Higher Education: 2020. Educause Review. https://er.educause.edu/articles/2020/5/digital-ethics-in-higher-education-2020

O'Brien defines digital ethics simply as "doing the right thing at the intersection of technology innovation and accepted social values."
Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, written by Cathy O’Neil in early 2016, continues to be relevant and illuminating. O’Neil’s book revolves around her insight that “algorithms are opinions embedded in code,” in distinct contrast to the belief that algorithms are based on—and produce—indisputable facts.
Also relevant are Safiya Umoja Noble's book Algorithms of Oppression: How Search Engines Reinforce Racism and Shoshana Zuboff's The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power.

+++++++++++++++++

International Dialogue on “The Ethics of Digitalisation” Kicks Off in Berlin | Berkman Klein Center. (2020, August 20). [Harvard University]. Berkman Klein Center. https://cyber.harvard.edu/story/2020-08/international-dialogue-ethics-digitalisation-kicks-berlin

+++++++++++++++++
more on ethics in this IMS blog
https://blog.stcloudstate.edu/ims?s=ethics

intelligent chatbots

https://www.nytimes.com/interactive/2018/11/14/magazine/tech-design-ai-chatbot.html

TWO YEARS AGO, Alison Darcy built a robot to help out the depressed. As a clinical research psychologist at Stanford University, she knew that one powerful way to help people suffering from depression or anxiety is cognitive behavioral therapy, or C.B.T. It’s a form of treatment in which a therapist teaches patients simple techniques that help them break negative patterns of thinking.

In a study with 70 young adults, Darcy found that after two weeks of interacting with the bot, the test subjects had lower incidences of depression and anxiety. They were impressed, and even touched, by the software’s attentiveness.

Many tell Darcy that it’s easier to talk to a bot than a human; they don’t feel judged.

Darcy argues this is a glimpse of our rapidly arriving future, where talking software is increasingly able to help us manage our emotions. There will be A.I.s that detect our feelings, possibly better than we can. "I think you'll see robots for weight loss, and robots for being more effective communicators," she says. It may feel odd at first.

RECENT HISTORY HAS seen a rapid change in at least one human attitude toward machines: We’ve grown accustomed to talking to them. Millions now tell Alexa or Siri or Google Assistant to play music, take memos, put something on their calendar or tell a terrible joke.

One reason botmakers are embracing artificiality is that the Turing Test turns out to be incredibly difficult to pass. Human conversation is full of idioms, metaphors and implied knowledge: Recognizing that the expression “It’s raining cats and dogs” isn’t actually about cats and dogs, for example, surpasses the reach of chatbots.
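A toy sketch shows why literal matching fails on idioms. A bot that tags topics by scanning for keywords (the keyword-to-topic table below is invented for illustration) will read "raining cats and dogs" as being about both weather and pets:

```python
# Naive topic detection by keyword lookup -- the literal reading
# that idioms defeat. The keyword table is invented for illustration.
TOPICS = {"cat": "pets", "dog": "pets", "rain": "weather"}

def detect_topics(utterance: str) -> set:
    """Return every topic whose keyword appears anywhere in the utterance."""
    text = utterance.lower()
    return {topic for keyword, topic in TOPICS.items() if keyword in text}

print(detect_topics("It's raining cats and dogs"))
```

The bot reports `{'pets', 'weather'}`, when a human knows the speaker meant only the weather; resolving that requires the implied knowledge the excerpt describes, not more keywords.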

Conversational bots thus could bring on a new wave of unemployment — or “readjustment,” to use the bloodless term of economics. Service workers, sales agents, telemarketers — it’s not hard to imagine how millions of jobs that require social interaction, whether on the phone or online, could eventually be eliminated by code.

One person who bought a Jibo was Erin Partridge, an art therapist in Alameda, Calif., who works with the elderly. When she took Jibo on visits, her patients loved it.

For some technology critics, including Sherry Turkle, who does research on the psychology of tech at M.I.T., this raises ethical concerns. “People are hard-wired with sort of Darwinian vulnerabilities, Darwinian buttons,” she told me. “And these Darwinian buttons are pushed by this technology.” That is, programmers are manipulating our emotions when they create objects that inquire after our needs.

The precursor to today’s bots, Joseph Weizenbaum’s ELIZA, was created at M.I.T. in 1966. ELIZA was a pretty crude set of prompts, but by simply asking people about their feelings, it drew them into deep conversations.
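A minimal sketch of that "crude set of prompts" looks roughly like the following: a handful of hand-written patterns that reflect the user's own words back as a question. The specific rules and wordings here are illustrative, not Weizenbaum's originals.

```python
import random
import re

# ELIZA-style rules: each pattern captures part of the user's statement,
# and the reply templates echo it back as a question.
# These rules are illustrative, not Weizenbaum's original script.
RULES = [
    (re.compile(r"i feel (.+)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"i am (.+)", re.I),
     ["Why do you say you are {0}?"]),
    (re.compile(r"my (.+)", re.I),
     ["Tell me more about your {0}."]),
]
DEFAULT = ["Please go on.", "How does that make you feel?"]

def respond(statement: str) -> str:
    """Match the first rule that fires and echo the captured phrase back."""
    for pattern, templates in RULES:
        match = pattern.search(statement)
        if match:
            return random.choice(templates).format(match.group(1).rstrip("."))
    return random.choice(DEFAULT)

print(respond("I feel anxious about my exams"))
```

Even this tiny rule set produces the effect the excerpt describes: by asking people about their own feelings, the program draws them into conversation while understanding nothing at all.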