Searching for "bot"

intelligent chatbots

https://www.nytimes.com/interactive/2018/11/14/magazine/tech-design-ai-chatbot.html

TWO YEARS AGO, Alison Darcy built a robot to help out the depressed. As a clinical research psychologist at Stanford University, she knew that one powerful way to help people suffering from depression or anxiety is cognitive behavioral therapy, or C.B.T. It’s a form of treatment in which a therapist teaches patients simple techniques that help them break negative patterns of thinking.

In a study with 70 young adults, Darcy found that after two weeks of interacting with the bot, the test subjects had lower incidences of depression and anxiety. They were impressed, and even touched, by the software’s attentiveness.

Many tell Darcy that it’s easier to talk to a bot than a human; they don’t feel judged.

Darcy argues this is a glimpse of our rapidly arriving future, in which talking software is increasingly able to help us manage our emotions. There will be A.I.s that detect our feelings, possibly better than we can. “I think you’ll see robots for weight loss, and robots for being more effective communicators,” she says. It may feel odd at first…

RECENT HISTORY HAS seen a rapid change in at least one human attitude toward machines: We’ve grown accustomed to talking to them. Millions now tell Alexa or Siri or Google Assistant to play music, take memos, put something on their calendar or tell a terrible joke.

One reason botmakers are embracing artificiality is that the Turing Test turns out to be incredibly difficult to pass. Human conversation is full of idioms, metaphors and implied knowledge: Recognizing that the expression “It’s raining cats and dogs” isn’t actually about cats and dogs, for example, surpasses the reach of chatbots.

Conversational bots thus could bring on a new wave of unemployment — or “readjustment,” to use the bloodless term of economics. Service workers, sales agents, telemarketers — it’s not hard to imagine how millions of jobs that require social interaction, whether on the phone or online, could eventually be eliminated by code.

One person who bought a Jibo was Erin Partridge, an art therapist in Alameda, Calif., who works with the elderly. When she took Jibo on visits, her patients loved it.

For some technology critics, including Sherry Turkle, who does research on the psychology of tech at M.I.T., this raises ethical concerns. “People are hard-wired with sort of Darwinian vulnerabilities, Darwinian buttons,” she told me. “And these Darwinian buttons are pushed by this technology.” That is, programmers are manipulating our emotions when they create objects that inquire after our needs.

The precursor to today’s bots, Joseph Weizenbaum’s ELIZA, was created at M.I.T. in 1966. ELIZA was a pretty crude set of prompts, but by simply asking people about their feelings, it drew them into deep conversations.
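ELIZA's trick can be reproduced in a few lines: match a pattern in the user's words and reflect it back as a question. A minimal sketch in that spirit (these rules are illustrative, not Weizenbaum's original DOCTOR script):

```python
import re

# Each rule pairs a pattern with a template that reflects the
# user's own words back as an open-ended question.
RULES = [
    (re.compile(r"\bI feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (.+)", re.I), "Tell me more about your {0}."),
]

def respond(text: str) -> str:
    for pattern, template in RULES:
        m = pattern.search(text)
        if m:
            return template.format(m.group(1).rstrip(".!?"))
    return "Please go on."  # generic prompt when nothing matches
```

For example, "I feel sad today." becomes "Why do you feel sad today?" — the program understands nothing, yet the reflected question invites the user to keep talking.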

automated Twitter bots

twitter bots

To identify bots, the Center used a tool known as “Botometer,” developed by researchers at the University of Southern California and Indiana University.

Previous studies have documented the nature and sources of tweets regarding immigration news, the ways in which news is shared via social media in a polarized Congress, the degree to which science information on social media is shared and trusted, the role of social media in the broader context of online harassment, how key social issues like race relations play out on these platforms, and the patterns of how different groups arrange themselves on Twitter.

It is important to note that bot accounts do not always clearly identify themselves as such in their profiles, and any bot classification system inevitably carries some risk of error. The Botometer system has been documented and validated in an array of academic publications, and researchers from the Center conducted a number of independent validation measures of its results.
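Botometer's actual model is a supervised classifier trained on more than a thousand account features; as a toy illustration of the kind of profile signals such classifiers draw on (the features and thresholds below are invented for the sketch, not Botometer's):

```python
from dataclasses import dataclass

@dataclass
class Account:
    tweets_per_day: float
    followers: int
    following: int
    has_default_profile_image: bool

def bot_score(acct: Account) -> float:
    """Crude 0-to-1 bot likelihood from three profile features."""
    score = 0.0
    if acct.tweets_per_day > 72:  # sustained round-the-clock posting
        score += 0.4
    if acct.has_default_profile_image:  # profile never customized
        score += 0.3
    if acct.following > 0 and acct.followers / acct.following < 0.1:
        score += 0.3  # follows far more accounts than follow back
    return score
```

Even this caricature shows why error is unavoidable: a news wire posts at bot-like volume, and a brand-new human account has a default avatar and few followers.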

++++++++++++++++++++
more on fake news in this IMS blog
http://blog.stcloudstate.edu/ims?s=fake+news

bots, big data and the future

Computational Propaganda: Bots, Targeting And The Future

February 9, 2018, 11:37 AM ET

https://www.npr.org/sections/13.7/2018/02/09/584514805/computational-propaganda-yeah-that-s-a-thing-now

Combine the superfast calculational capacities of Big Compute with the oceans of specific personal information comprising Big Data — and the fertile ground for computational propaganda emerges. That’s how the small AI programs called bots can be unleashed into cyberspace to target and deliver misinformation exactly to the people who will be most vulnerable to it. These messages can be refined over and over again based on how well they perform (again in terms of clicks, likes and so on). Worst of all, all this can be done semiautonomously, allowing the targeted propaganda (like fake news stories or faked images) to spread like viruses through communities most vulnerable to their misinformation.
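The refine-by-performance loop described above is, mechanically, a multi-armed bandit: deliver message variants, observe clicks, and shift deliveries toward whatever performs best. A minimal epsilon-greedy sketch (the click rates and parameters are made up for illustration; no real platform or dataset is assumed):

```python
import random

def epsilon_greedy(click_rates, steps=10000, epsilon=0.1, seed=0):
    """Mostly exploit the variant with the best observed click rate,
    occasionally exploring a random one to keep estimates fresh."""
    rng = random.Random(seed)
    n = len(click_rates)
    shows = [0] * n   # times each variant was delivered
    clicks = [0] * n  # clicks each variant received
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n)  # explore
        else:
            # exploit: best observed rate; untried variants count as best
            arm = max(range(n),
                      key=lambda i: clicks[i] / shows[i] if shows[i] else float("inf"))
        shows[arm] += 1
        if rng.random() < click_rates[arm]:  # simulated audience response
            clicks[arm] += 1
    return shows
```

With variants whose true click rates are 2%, 5% and 10%, the loop quickly concentrates deliveries on the third variant — no human ever needs to decide which message "works."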

According to Bolsover and Howard, viewing computational propaganda only from a technical perspective would be a grave mistake. As they explain, seeing it just in terms of variables and algorithms “plays into the hands of those who create it, the platforms that serve it, and the firms that profit from it.”

Computational propaganda is a new thing. People just invented it. And they did so by realizing possibilities emerging from the intersection of new technologies (Big Compute, Big Data) and new behaviors those technologies allowed (social media). But the emphasis on behavior can’t be lost.

People are not machines. We do things for a whole lot of reasons, including emotions of loss, anger, fear and longing. To combat computational propaganda’s potentially dangerous effects on democracy in a digital age, we will need to focus on both its how and its why.

++++++++++++++++
more on big data in this IMS blog
http://blog.stcloudstate.edu/ims?s=big+data

more on bots in this IMS blog
http://blog.stcloudstate.edu/ims?s=bot


Ayahuasca mindfulness

Psychedelic ayahuasca significantly improved mindfulness and cognitive flexibility in the 24 hours after use (via r/science).

https://link.springer.com/article/10.1007%2Fs00213-019-05445-3

Results

Mindfulness (FFMQ total scores and four of the five mindfulness facets: observe, describe, act with awareness, and non-reactivity) and decentering (EQ) significantly increased in the 24 h after ayahuasca use. Cognitive flexibility (CFS and WPCST) significantly improved in the 24 h after ayahuasca use. Changes in both mindfulness and cognitive flexibility were not influenced by prior ayahuasca use.

Conclusions

The present study supports ayahuasca’s ability to enhance mindfulness and further reports that changes in cognitive flexibility occur in the ‘afterglow’ period, suggesting both could be psychological mechanisms underlying the psychotherapeutic effects of ayahuasca. That psychological gains occurred regardless of prior ayahuasca use suggests potentially therapeutic effects for both naïve and experienced ayahuasca drinkers.

Turn Bad Data Into Good Data

How to Turn Bad Data Into Good Data

https://events.edsurge.com/webinars/how-to-turn-bad-data-into-good-data

Date: Wednesday, January 22, 2020. Time: 1:00 pm CT

Join a panel of data and education experts to discuss how to make the most of your education data. In this webinar you’ll learn about:

  • How rapid data turnover can hurt you (and your bottom line)
  • How to access “good” data and what it looks like
  • Opportunities open to you when your data is clean 
  • Avoiding the pitfalls of using outdated or irrelevant data and making decisions that are not data informed
  • Navigating the unique challenges of working in education, such as privacy regulations that might hinder communication 


100 tech debacles of the decade

http://hackeducation.com/2019/12/31/what-a-shitshow

1. Anti-School Shooter Software

4. “The Year of the MOOC” (2012)

6. “Everyone Should Learn to Code”

8. LAUSD’s iPad Initiative (2013)

9. Virtual Charter Schools

10. Google for Education

14. inBloom. The Shared Learning Collaborative (2011)

17. Test Prep

20. Predictive Analytics

22. Automated Essay Grading

25. Peter Thiel

26. Google Glass

32. Common Core State Standards

44. YouTube, the New “Educational TV”

48. The Hour of Code

49. Yik Yak

52. Virtual Reality

57. TurnItIn (and the Cheating Detection Racket) (my note: repeating the same for years: http://blog.stcloudstate.edu/ims?s=turnitin)

59. Clayton Christensen’s Predictions
http://blog.stcloudstate.edu/ims?s=clayton

61. Edmodo. http://blog.stcloudstate.edu/ims?s=edmodo

62. Edsurge

64. Alexa at School

65. Apple’s iTextbooks (2011)

67. UC Berkeley Deletes Its Online Lectures. ADA

72. Chatbot Instructors. IBM Watson “AI” technology (2016)

81. Interactive Whiteboards (my note: repeating the same for years: http://blog.stcloudstate.edu/ims?s=smartboard)

82. “The End of Library” Stories (and the Software that Seems to Support That)

86. Badges

89. Clickers

90. “Ban Laptops” Op-Eds (my note: collecting pros and cons for years: http://blog.stcloudstate.edu/ims/2017/04/03/use-of-laptops-in-the-classroom/)

92. “The Flipped Classroom”

93. 3D Printing

100. The Horizon Report

schools going broke

https://www.cnbc.com/2019/12/03/the-other-college-debt-crisis-schools-are-going-broke.html

Rethinking liberal arts

The result was a top-to-bottom makeover of the school’s curriculum and its overall approach. Gone were majors seen as stodgy or less aligned with a career path — including religion, art history and music. In their place are programs in sport management, international studies and crime, law and justice. There is a new emphasis on technology, and all students are required to complete an internship, a study-away trip or a research project in order to graduate.

The college has dubbed its approach “the new liberal arts” and trademarked the term.

 
