Jan
2022
Digital Literacy for St. Cloud State University
Rapid speeds and the massive connection capacity offered by 5G hold the potential to drastically transform how healthcare is delivered. Last June the DOD named Joint Base San Antonio (JBSA) as the experimentation site for 5G in telemedicine and medical training. The pilot is part of the second tranche of military installations in the Pentagon's ambitious effort to explore and prototype 5G-enabled technologies.
+++++++++++++++++
more on AR in this IMS blog
https://blog.stcloudstate.edu/ims?s=Augmented+reality
Build Your Own X: a collection of tutorials to build your own 3D renderer, Blockchain, Bot, Game, Neural Network, Search Engine, Text Editor, and much more (27 things to build!). (via r/programming)
https://github.com/danistefanovic/build-your-own-x
+++++++++++++++
more on chatbots in this IMS blog
https://blog.stcloudstate.edu/ims?s=chatbot
Twitter Bots Are a Major Source of Climate Disinformation. Researchers determined that nearly 9.5% of the users in their sample were likely bots, but those bots accounted for 25% of the total tweets about climate change on most days. (via r/science)
A paper published last week in the journal Climate Policy is part of an expanding body of research about the role of bots in online climate discourse.
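Those two figures describe a volume effect: a small share of accounts can still generate an outsized share of tweets if they post heavily enough. A toy sketch of the arithmetic, with invented sample data rather than the study's:

```python
def bot_tweet_share(tweet_authors, bot_accounts):
    """Fraction of tweets authored by accounts classified as likely bots."""
    bot_tweets = sum(1 for author in tweet_authors if author in bot_accounts)
    return bot_tweets / len(tweet_authors)

# Hypothetical sample: 2 of 21 accounts (~9.5%) are likely bots,
# but they post at much higher volume than the 19 human accounts.
bot_accounts = {"bot0", "bot1"}
tweet_authors = ["bot0"] * 10 + ["bot1"] * 10         # 20 bot tweets
tweet_authors += [f"user{i}" for i in range(19)] * 3  # 57 human tweets
print(round(bot_tweet_share(tweet_authors, bot_accounts), 2))  # 0.26
```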
+++++++++++
more on climate in this IMS blog
https://blog.stcloudstate.edu/ims?s=climate
Vatican enlists bots to protect library from onslaught of hackers. (via r/technology)
The library has partnered with Darktrace, a company founded by Cambridge University mathematicians, which claims to be the first to develop an AI system for cybersecurity.
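Darktrace's models are proprietary, but the general technique the article points to is unsupervised anomaly detection: learn what normal activity looks like, then flag sharp deviations. A minimal illustrative sketch of that idea (not Darktrace's method), scoring request volumes with a robust median/MAD statistic:

```python
import statistics

def flag_anomalies(request_counts, threshold=5.0):
    """Flag clients whose request volume deviates sharply from the baseline.

    A toy stand-in for 'learn normal, flag deviations'; a real system
    models many behavioral features, not a single request count.
    """
    values = list(request_counts.values())
    median = statistics.median(values)
    mad = statistics.median(abs(v - median) for v in values) or 1.0
    return {client: round((count - median) / mad, 1)
            for client, count in request_counts.items()
            if abs(count - median) / mad > threshold}

# Hypothetical usage: one scraper hammering digitized-manuscript pages stands out.
counts = {"reader-a": 40, "reader-b": 55, "reader-c": 48, "scraper-x": 2400}
print(flag_anomalies(counts))  # {'scraper-x': 313.1}
```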
+++++++++++++++
more on bots in this IMS blog
https://blog.stcloudstate.edu/ims?s=bots
https://medium.com/8th-wall/introducing-8th-wall-curved-image-targets-f7793c31201e
Like all of 8th Wall’s WebAR capabilities, projects created using Curved Image Targets work across iOS and Android devices with an estimated reach of nearly 3 billion smartphones, and can be immediately experienced with the tap of a link or by scanning a QR code.
+++++++++++++++
more on augmented reality in this IMS blog
https://blog.stcloudstate.edu/ims?s=augmented+reality
Computer-generated humans and disinformation campaigns could soon take over political debate. (via r/Futurology)
Last year, researchers at Oxford University found that 70 countries had political disinformation campaigns over two years.
Perhaps the most notable of such campaigns was that initiated by a Russian propaganda group to influence the 2016 US election result.
The US Federal Communications Commission opened a public comment period in 2017 on its plan to repeal net neutrality. Harvard Kennedy School lecturer Bruce Schneier notes that while the agency received 22 million comments, many of them were made by fake identities.
Schneier argues that the escalating prevalence of computer-generated personas could “starve” people of democracy.
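Analyses of that comment docket flagged fakes partly by spotting near-identical templated text submitted under different names. A minimal sketch of the idea, illustrative only and far simpler than an investigator's real pipeline:

```python
from difflib import SequenceMatcher
from itertools import combinations

def near_duplicates(comments, threshold=0.9):
    """Return index pairs of comments with near-identical text.

    O(n^2) toy version; at the scale of 22 million comments, investigators
    would rely on hashing or clustering rather than pairwise comparison.
    """
    return [(i, j)
            for (i, a), (j, b) in combinations(enumerate(comments), 2)
            if SequenceMatcher(None, a, b).ratio() > threshold]

comments = [
    "I strongly oppose the repeal of net neutrality protections.",
    "I strongly oppose repealing net neutrality protections.",
    "Please keep the Title II rules in place.",
]
print(near_duplicates(comments))  # [(0, 1)]
```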
++++++++++++
more on deepfake in this IMS blog
https://blog.stcloudstate.edu/ims?s=deepfake
https://www.edsurge.com/news/2018-04-13-can-a-family-of-bots-reshape-college-teaching
++++++++++++
more on AI in this IMS blog
https://blog.stcloudstate.edu/ims?s=Artificial+Intelligence+and+education
https://www.nytimes.com/interactive/2018/11/14/magazine/tech-design-ai-chatbot.html
Two years ago, Alison Darcy built a robot to help out the depressed. As a clinical research psychologist at Stanford University, she knew that one powerful way to help people suffering from depression or anxiety is cognitive behavioral therapy, or C.B.T. It’s a form of treatment in which a therapist teaches patients simple techniques that help them break negative patterns of thinking.
In a study with 70 young adults, Darcy found that after two weeks of interacting with the bot, the test subjects had lower incidences of depression and anxiety. They were impressed, and even touched, by the software’s attentiveness.
Many tell Darcy that it’s easier to talk to a bot than a human; they don’t feel judged.
Darcy argues this is a glimpse of our rapidly arriving future, where talking software is increasingly able to help us manage our emotions. There will be A.I.s that detect our feelings, possibly better than we can. “I think you’ll see robots for weight loss, and robots for being more effective communicators,” she says. It may feel odd at first.
Recent history has seen a rapid change in at least one human attitude toward machines: We’ve grown accustomed to talking to them. Millions now tell Alexa or Siri or Google Assistant to play music, take memos, put something on their calendar or tell a terrible joke.
One reason botmakers are embracing artificiality is that the Turing Test turns out to be incredibly difficult to pass. Human conversation is full of idioms, metaphors and implied knowledge: Recognizing that the expression “It’s raining cats and dogs” isn’t actually about cats and dogs, for example, surpasses the reach of chatbots.
Conversational bots thus could bring on a new wave of unemployment — or “readjustment,” to use the bloodless term of economics. Service workers, sales agents, telemarketers — it’s not hard to imagine how millions of jobs that require social interaction, whether on the phone or online, could eventually be eliminated by code.
One person who bought a Jibo was Erin Partridge, an art therapist in Alameda, Calif., who works with the elderly. When she took Jibo on visits, her patients loved it.
For some technology critics, including Sherry Turkle, who does research on the psychology of tech at M.I.T., this raises ethical concerns. “People are hard-wired with sort of Darwinian vulnerabilities, Darwinian buttons,” she told me. “And these Darwinian buttons are pushed by this technology.” That is, programmers are manipulating our emotions when they create objects that inquire after our needs.
The precursor to today’s bots, Joseph Weizenbaum’s ELIZA, was created at M.I.T. in 1966. ELIZA was a pretty crude set of prompts, but by simply asking people about their feelings, it drew them into deep conversations.
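That “crude set of prompts” amounted to pattern matching: find a keyword in the user’s sentence, then reflect the speaker’s own words back as an open question. A few rules in Python capture the flavor:

```python
import random
import re

# A few ELIZA-style rules: match a keyword pattern, then turn the
# captured words back into a question.
RULES = [
    (re.compile(r"\bi feel (.*)", re.I), ["Why do you feel {0}?",
                                          "How long have you felt {0}?"]),
    (re.compile(r"\bi am (.*)", re.I),   ["Why do you say you are {0}?"]),
    (re.compile(r"\bmy (.*)", re.I),     ["Tell me more about your {0}."]),
]
FALLBACKS = ["Please go on.", "How does that make you feel?"]

def respond(text):
    for pattern, templates in RULES:
        match = pattern.search(text)
        if match:
            return random.choice(templates).format(match.group(1).rstrip(".!?"))
    return random.choice(FALLBACKS)

print(respond("I feel anxious about work."))
# e.g. "Why do you feel anxious about work?"
```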
To identify bots, the Pew Research Center used a tool known as “Botometer,” developed by researchers at the University of Southern California and Indiana University.
Previous studies have documented the nature and sources of tweets regarding immigration news, the ways in which news is shared via social media in a polarized Congress, the degree to which science information on social media is shared and trusted, the role of social media in the broader context of online harassment, how key social issues like race relations play out on these platforms, and the patterns of how different groups arrange themselves on Twitter.
It is important to note that bot accounts do not always clearly identify themselves as such in their profiles, and any bot classification system inevitably carries some risk of error. The Botometer system has been documented and validated in an array of academic publications, and researchers from the Center conducted a number of independent validation measures of its results.
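The Botometer team also publishes a Python client. A minimal sketch based on the project’s README (placeholder credentials; the RapidAPI hosting and response fields may have changed since the study):

```python
import botometer  # pip install botometer

# Placeholder credentials: Botometer is served through RapidAPI and also
# requires Twitter app keys; none of the values below are real.
rapidapi_key = "YOUR_RAPIDAPI_KEY"
twitter_app_auth = {
    "consumer_key": "YOUR_CONSUMER_KEY",
    "consumer_secret": "YOUR_CONSUMER_SECRET",
}

bom = botometer.Botometer(wait_on_ratelimit=True,
                          rapidapi_key=rapidapi_key,
                          **twitter_app_auth)

result = bom.check_account("@example_handle")
# 'cap' is the Complete Automation Probability; higher means more bot-like.
print(result["cap"]["universal"])
```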
++++++++++++++++++++
more on fake news in this IMS blog
https://blog.stcloudstate.edu/ims?s=fake+news