Searching for "bot"

bots and disinformation

Computer-generated humans and disinformation campaigns could soon take over political debate. (shared from r/Futurology)

Bots will dominate political debate, experts warn

Last year, researchers at Oxford University found that 70 countries had political disinformation campaigns over two years.
Perhaps the most notable of such campaigns was that initiated by a Russian propaganda group to influence the 2016 US election result.

The US Federal Communications Commission opened a public comment period in 2017 on its plans to repeal net neutrality. Harvard Kennedy School lecturer Bruce Schneier notes that while the agency received 22 million comments, many of them were submitted under fake identities.
Schneier argues that the escalating prevalence of computer-generated personas could “starve” people of democracy.

++++++++++++
more on deepfake in this IMS blog
http://blog.stcloudstate.edu/ims?s=deepfake

intelligent chatbots

https://www.nytimes.com/interactive/2018/11/14/magazine/tech-design-ai-chatbot.html

TWO YEARS AGO, Alison Darcy built a robot to help out the depressed. As a clinical research psychologist at Stanford University, she knew that one powerful way to help people suffering from depression or anxiety is cognitive behavioral therapy, or C.B.T. It’s a form of treatment in which a therapist teaches patients simple techniques that help them break negative patterns of thinking.

In a study with 70 young adults, Darcy found that after two weeks of interacting with the bot, the test subjects had lower incidences of depression and anxiety. They were impressed, and even touched, by the software’s attentiveness.

Many tell Darcy that it’s easier to talk to a bot than a human; they don’t feel judged.

Darcy argues this is a glimpse of our rapidly arriving future, where talking software is increasingly able to help us manage our emotions. There will be A.I.s that detect our feelings, possibly better than we can. “I think you’ll see robots for weight loss, and robots for being more effective communicators,” she says. It may feel odd at first

RECENT HISTORY HAS seen a rapid change in at least one human attitude toward machines: We’ve grown accustomed to talking to them. Millions now tell Alexa or Siri or Google Assistant to play music, take memos, put something on their calendar or tell a terrible joke.

One reason botmakers are embracing artificiality is that the Turing Test turns out to be incredibly difficult to pass. Human conversation is full of idioms, metaphors and implied knowledge: Recognizing that the expression “It’s raining cats and dogs” isn’t actually about cats and dogs, for example, surpasses the reach of chatbots.

Conversational bots thus could bring on a new wave of unemployment — or “readjustment,” to use the bloodless term of economics. Service workers, sales agents, telemarketers — it’s not hard to imagine how millions of jobs that require social interaction, whether on the phone or online, could eventually be eliminated by code.

One person who bought a Jibo, a social home robot, was Erin Partridge, an art therapist in Alameda, Calif., who works with the elderly. When she took Jibo on visits, her patients loved it.

For some technology critics, including Sherry Turkle, who does research on the psychology of tech at M.I.T., this raises ethical concerns. “People are hard-wired with sort of Darwinian vulnerabilities, Darwinian buttons,” she told me. “And these Darwinian buttons are pushed by this technology.” That is, programmers are manipulating our emotions when they create objects that inquire after our needs.

The precursor to today’s bots, Joseph Weizenbaum’s ELIZA, was created at M.I.T. in 1966. ELIZA was a pretty crude set of prompts, but by simply asking people about their feelings, it drew them into deep conversations.
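
my note: as a rough, purely illustrative sketch (not Weizenbaum’s actual script), the core ELIZA trick can be reproduced in a few lines of Python: match a keyword pattern, reflect the user’s own words back (“my” becomes “your”), and return them as a question.

```python
# Toy ELIZA-style responder: pattern matching plus pronoun reflection.
# Illustrative only; the 1966 program used a much richer rule script.
import re
import random

REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are", "i'm": "you're"}

RULES = [
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i am (.*)",   ["Why do you say you are {0}?"]),
    (r"my (.*)",     ["Tell me more about your {0}."]),
    (r"(.*)",        ["Please go on.", "How does that make you feel?"]),
]

def reflect(fragment):
    # Swap first-person words for second-person ones, e.g. "my exams" -> "your exams".
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(utterance):
    for pattern, responses in RULES:
        match = re.match(pattern, utterance.lower())
        if match:
            groups = [reflect(g) for g in match.groups()]
            return random.choice(responses).format(*groups)

print(respond("I feel anxious about my exams"))
# e.g. "Why do you feel anxious about your exams?"
```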

automated Twitter bots

Twitter bots

To identify bots, the Center used a tool known as “Botometer,” developed by researchers at the University of Southern California and Indiana University.
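
my note: a minimal sketch of what querying Botometer looks like with its Python client (botometer-python). The credentials are placeholders, and the exact constructor arguments and response fields may differ between versions, so treat this as an outline rather than working instructions.

```python
# Sketch only: requires the botometer package plus valid Twitter/RapidAPI credentials.
import botometer

twitter_app_auth = {
    "consumer_key": "YOUR_CONSUMER_KEY",          # placeholder credentials
    "consumer_secret": "YOUR_CONSUMER_SECRET",
    "access_token": "YOUR_ACCESS_TOKEN",
    "access_token_secret": "YOUR_ACCESS_TOKEN_SECRET",
}

bom = botometer.Botometer(
    wait_on_ratelimit=True,
    rapidapi_key="YOUR_RAPIDAPI_KEY",
    **twitter_app_auth,
)

# Scores near 1 suggest automated behavior, near 0 more human-like behavior;
# the response typically includes an overall score plus per-category sub-scores.
result = bom.check_account("@example_account")
print(result)
```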

Previous studies have documented the nature and sources of tweets regarding immigration news, the ways in which news is shared via social media in a polarized Congress, the degree to which science information on social media is shared and trusted, the role of social media in the broader context of online harassment, how key social issues like race relations play out on these platforms, and the patterns of how different groups arrange themselves on Twitter.

It is important to note that bot accounts do not always clearly identify themselves as such in their profiles, and any bot classification system inevitably carries some risk of error. The Botometer system has been documented and validated in an array of academic publications, and researchers from the Center conducted a number of independent validation measures of its results.

++++++++++++++++++++
more on fake news in this IMS blog
http://blog.stcloudstate.edu/ims?s=fake+news

bots, big data and the future

Computational Propaganda: Bots, Targeting And The Future

February 9, 2018, 11:37 AM ET

https://www.npr.org/sections/13.7/2018/02/09/584514805/computational-propaganda-yeah-that-s-a-thing-now

Combine the superfast calculational capacities of Big Compute with the oceans of specific personal information comprising Big Data — and the fertile ground for computational propaganda emerges. That’s how the small AI programs called bots can be unleashed into cyberspace to target and deliver misinformation exactly to the people who will be most vulnerable to it. These messages can be refined over and over again based on how well they perform (again in terms of clicks, likes and so on). Worst of all, all this can be done semiautonomously, allowing the targeted propaganda (like fake news stories or faked images) to spread like viruses through communities most vulnerable to their misinformation.
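
my note: the “refined over and over again” step is essentially an ordinary engagement-optimization loop, which is why it is so easy to automate. A toy sketch of the mechanism (all numbers invented, nothing here is from the article):

```python
# Toy illustration of feedback-driven message refinement: show variants, observe
# engagement, and shift future impressions toward whatever performs best.
import random

messages = ["variant A", "variant B", "variant C"]
shows = {m: 0 for m in messages}
clicks = {m: 0 for m in messages}

def simulated_click(message):
    # Stand-in for real audience feedback (clicks, likes, shares); rates are invented.
    hidden_appeal = {"variant A": 0.02, "variant B": 0.05, "variant C": 0.11}
    return random.random() < hidden_appeal[message]

for _ in range(10_000):
    if random.random() < 0.1:
        m = random.choice(messages)   # explore occasionally
    else:
        m = max(messages, key=lambda x: clicks[x] / shows[x] if shows[x] else 0.0)
    shows[m] += 1
    clicks[m] += simulated_click(m)

for m in messages:
    rate = clicks[m] / shows[m] if shows[m] else 0.0
    print(f"{m}: shown {shows[m]} times, click rate {rate:.3f}")
```

The loop quickly concentrates impressions on the highest-performing variant, with no human judgment anywhere in the cycle.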

According to Bolsover and Howard, viewing computational propaganda only from a technical perspective would be a grave mistake. As they explain, seeing it just in terms of variables and algorithms “plays into the hands of those who create it, the platforms that serve it, and the firms that profit from it.”

Computational propaganda is a new thing. People just invented it. And they did so by realizing possibilities emerging from the intersection of new technologies (Big Compute, Big Data) and new behaviors those technologies allowed (social media). But the emphasis on behavior can’t be lost.

People are not machines. We do things for a whole lot of reasons, including emotions of loss, anger, fear and longing. To combat computational propaganda’s potentially dangerous effects on democracy in a digital age, we will need to focus on both its how and its why.

++++++++++++++++
more on big data in this IMS blog
http://blog.stcloudstate.edu/ims?s=big+data

more on bots in this IMS blog
http://blog.stcloudstate.edu/ims?s=bot

more on fake news in this IMS blog
http://blog.stcloudstate.edu/ims?s=fake+news

Algorithmic Test Proctoring

Our Bodies Encoded: Algorithmic Test Proctoring in Higher Education

Shea Swauger, Ed-Tech

https://hybridpedagogy.org/our-bodies-encoded-algorithmic-test-proctoring-in-higher-education/

While in-person test proctoring has been used to combat test-based cheating, this can be difficult to translate to online courses. Ed-tech companies have sought to address this concern by offering to watch students take online tests, in real time, through their webcams.

Some of the more prominent companies offering these services include Proctorio, Respondus, ProctorU, HonorLock, Kryterion Global Testing Solutions, and Examity.

Algorithmic test proctoring’s settings have discriminatory consequences across multiple identities and serious privacy implications. 

While racist technology calibrated for white skin isn’t new (everything from photography to soap dispensers does this), we see it deployed through face detection and facial recognition used by algorithmic proctoring systems.

While some test proctoring companies develop their own facial recognition software, most purchase software developed by other companies, but these technologies generally function similarly and have shown a consistent inability to identify people with darker skin or even to distinguish between Chinese faces. Facial recognition literally encodes the invisibility of Black people and the racist stereotype that all Asian people look the same.

As Os Keyes has demonstrated, facial recognition has a terrible history with gender. This means that a software asking students to verify their identity is compromising for students who identify as trans, non-binary, or express their gender in ways counter to cis/heteronormativity.

These features and settings create a system of asymmetric surveillance and lack of accountability, things which have always created a risk for abuse and sexual harassment. Technologies like these have a long history of being abused, largely by heterosexual men at the expense of women’s bodies, privacy, and dignity.

Their promotional messaging functions similarly to dog whistle politics which is commonly used in anti-immigration rhetoric. It’s also not a coincidence that these technologies are being used to exclude people not wanted by an institution; biometrics and facial recognition have been connected to anti-immigration policies, supported by both Republican and Democratic administrations, going back to the 1990s.

Borrowing from Henry A. Giroux, Kevin Seeber describes the pedagogy of punishment and some of its consequences in regards to higher education’s approach to plagiarism in his book chapter “The Failed Pedagogy of Punishment: Moving Discussions of Plagiarism beyond Detection and Discipline.”

my note: I have been repeating this for years
Sean Michael Morris and Jesse Stommel’s ongoing critique of Turnitin, a plagiarism detection software, outlines exactly how this logic operates in ed-tech and higher education: 1) don’t trust students, 2) surveil them, 3) ignore the complexity of writing and citation, and 4) monetize the data.

Technological Solutionism

Cheating is not a technological problem, but a social and pedagogical problem.
Our habit of believing that technology will solve pedagogical problems is endemic to narratives produced by the ed-tech community and, as Audrey Watters writes, is tied to the Silicon Valley culture that often funds it. Scholars have been dismantling the narrative of technological solutionism and neutrality for some time now. In her book “Algorithms of Oppression,” Safiya Umoja Noble demonstrates how the algorithms that are responsible for Google Search amplify and “reinforce oppressive social relationships and enact new modes of racial profiling.”

Anna Lauren Hoffmann coined the term “data violence” to describe the impact harmful technological systems have on people, and how these systems retain the appearance of objectivity despite the disproportionate harm they inflict on marginalized communities.

This system of measuring bodies and behaviors, associating certain bodies and behaviors with desirability and others with inferiority, engages in what Lennard J. Davis calls the Eugenic Gaze.

Higher education is deeply complicit in the eugenics movement. Nazism borrowed many of its ideas about racial purity from the American school of eugenics, and universities were instrumental in supporting eugenics research by publishing copious literature on it, establishing endowed professorships, institutes, and scholarly societies that spearheaded eugenic research and propaganda.

+++++++++++++++++
more on privacy in this IMS blog
http://blog.stcloudstate.edu/ims?s=privacy

questions about online education

https://nepc.colorado.edu/blog/does-it-work

One of the most frequently and persistently asked questions about online education is “does it work” or “is it effective.”
The question is meaningless because there cannot be any definitive answer for a number of reasons.

First, online education (and its variants such as online instruction, online teaching, distance education and distance learning) is a big umbrella that covers a wide array of different practices, which vary a great deal in terms of quality. Comparing the effectiveness of online education with that of face-to-face education has been the most common research approach to examining the effectiveness of online education. And the answer has been, for a long time, that there is no significant difference between the two. This answer, however, does not mean that online education is effective or ineffective; it simply means there are plenty of effective and ineffective programs in both online and face-to-face education. In other words, the within variation is larger than the between variation.
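
my note: a toy numeric illustration (invented numbers, not study data) of what “the within variation is larger than the between variation” means: two delivery modes can have nearly identical average outcomes while program quality varies widely inside each mode.

```python
# Invented program-level scores: the two modes average out the same,
# but individual programs within each mode range from weak to strong.
from statistics import mean, pvariance

online       = [58, 65, 72, 80, 91, 94]
face_to_face = [57, 66, 73, 79, 90, 95]

grand = mean(online + face_to_face)

# Between-mode variance: how far each mode's mean sits from the grand mean.
between = mean([(mean(group) - grand) ** 2 for group in (online, face_to_face)])

# Within-mode variance: average spread of programs around their own mode's mean.
within = mean([pvariance(group) for group in (online, face_to_face)])

print(f"between-mode variance: {between:.2f}")   # close to zero
print(f"within-mode variance:  {within:.2f}")    # much larger
```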

Second, another reason that there cannot be a definitive answer to this question is the diversity of stakeholders in online education.
And unfortunately what works for one stakeholder may not work for the others.

Third, even within the same program and with only students as the stakeholder, there cannot be a definitive answer because no program can possibly have the same effects on all students equally.

Fourth, yet another reason that the question cannot have a definitive answer is the multiplicity of outcomes. Education outcomes include more than what has been typically measured by grades or tests.

Fifth, the rapid changes in technology that can be used to deliver online education add to the elusiveness of a definitive answer to the question. While pedagogy, design, and human actors certainly play a significant role in the experiences of online education, so does technology.

++++++++++
more on online education in this IMS blog
http://blog.stcloudstate.edu/ims?s=online+education
