Searching for "bot"

intelligent chatbots

https://www.nytimes.com/interactive/2018/11/14/magazine/tech-design-ai-chatbot.html

TWO YEARS AGO, Alison Darcy built a robot to help out the depressed. As a clinical research psychologist at Stanford University, she knew that one powerful way to help people suffering from depression or anxiety is cognitive behavioral therapy, or C.B.T. It’s a form of treatment in which a therapist teaches patients simple techniques that help them break negative patterns of thinking.

In a study with 70 young adults, Darcy found that after two weeks of interacting with the bot, the test subjects had lower incidences of depression and anxiety. They were impressed, and even touched, by the software’s attentiveness.

Many tell Darcy that it’s easier to talk to a bot than a human; they don’t feel judged.

Darcy argues this is a glimpse of our rapidly arriving future, where talking software is increasingly able to help us manage our emotions. There will be A.I.s that detect our feelings, possibly better than we can. “I think you’ll see robots for weight loss, and robots for being more effective communicators,” she says. It may feel odd at first.

RECENT HISTORY HAS seen a rapid change in at least one human attitude toward machines: We’ve grown accustomed to talking to them. Millions now tell Alexa or Siri or Google Assistant to play music, take memos, put something on their calendar or tell a terrible joke.

One reason botmakers are embracing artificiality is that the Turing Test turns out to be incredibly difficult to pass. Human conversation is full of idioms, metaphors and implied knowledge: Recognizing that the expression “It’s raining cats and dogs” isn’t actually about cats and dogs, for example, surpasses the reach of chatbots.

Conversational bots thus could bring on a new wave of unemployment — or “readjustment,” to use the bloodless term of economics. Service workers, sales agents, telemarketers — it’s not hard to imagine how millions of jobs that require social interaction, whether on the phone or online, could eventually be eliminated by code.

One person who bought a Jibo, a tabletop social robot for the home, was Erin Partridge, an art therapist in Alameda, Calif., who works with the elderly. When she took Jibo on visits, her patients loved it.

For some technology critics, including Sherry Turkle, who does research on the psychology of tech at M.I.T., this raises ethical concerns. “People are hard-wired with sort of Darwinian vulnerabilities, Darwinian buttons,” she told me. “And these Darwinian buttons are pushed by this technology.” That is, programmers are manipulating our emotions when they create objects that inquire after our needs.

The precursor to today’s bots, Joseph Weizenbaum’s ELIZA, was created at M.I.T. in 1966. ELIZA was a pretty crude set of prompts, but by simply asking people about their feelings, it drew them into deep conversations.
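My note: to make the mechanism concrete, here is a toy ELIZA-style exchange in Python: a handful of regular-expression rules that reflect the user’s own words back as a question. It is a minimal sketch of the general technique, not Weizenbaum’s original script.

```python
import random
import re

# A tiny ELIZA-style rule set: match a pattern, reflect the user's own words back.
RULES = [
    (re.compile(r"i feel (.*)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"i am (.*)", re.I),
     ["What makes you say you are {0}?"]),
    (re.compile(r"my (.*)", re.I),
     ["Tell me more about your {0}."]),
]
FALLBACKS = ["Please go on.", "How does that make you feel?"]

def respond(utterance: str) -> str:
    for pattern, templates in RULES:
        match = pattern.search(utterance)
        if match:
            return random.choice(templates).format(match.group(1).rstrip(".!?"))
    return random.choice(FALLBACKS)

print(respond("I feel anxious about work"))  # e.g. "Why do you feel anxious about work?"
```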

automated Twitter bots

Twitter bots

To identify bots, the Center used a tool known as “Botometer,” developed by researchers at the University of Southern California and Indiana University.

Previous studies have documented the nature and sources of tweets regarding immigration news, the ways in which news is shared via social media in a polarized Congress, the degree to which science information on social media is shared and trusted, the role of social media in the broader context of online harassment, how key social issues like race relations play out on these platforms, and the patterns of how different groups arrange themselves on Twitter.

It is important to note that bot accounts do not always clearly identify themselves as such in their profiles, and any bot classification system inevitably carries some risk of error. The Botometer system has been documented and validated in an array of academic publications, and researchers from the Center conducted a number of independent validation measures of its results.
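My note: for readers who want to experiment, the sketch below follows the usage pattern documented for the botometer Python client. The credential placeholders are stand-ins, and the exact parameter names and result fields vary across client and API versions, so treat it as an assumption-laden illustration rather than the Center’s actual methodology.

```python
import botometer  # pip install botometer -- the client published by the Botometer team

# Placeholder credentials: you need your own RapidAPI key and Twitter app keys.
rapidapi_key = "YOUR_RAPIDAPI_KEY"   # older client versions used a Mashape key instead
twitter_app_auth = {
    "consumer_key": "YOUR_CONSUMER_KEY",
    "consumer_secret": "YOUR_CONSUMER_SECRET",
}

bom = botometer.Botometer(wait_on_ratelimit=True,
                          rapidapi_key=rapidapi_key,
                          **twitter_app_auth)

# Score a single account. The returned dict holds bot-likelihood scores, but the
# exact field names differ across Botometer API versions, so just inspect it.
result = bom.check_account("@some_account")
print(result)
```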

++++++++++++++++++++
more on fake news in this IMS blog
http://blog.stcloudstate.edu/ims?s=fake+news

bots, big data and the future

Computational Propaganda: Bots, Targeting And The Future

February 9, 2018, 11:37 AM ET

https://www.npr.org/sections/13.7/2018/02/09/584514805/computational-propaganda-yeah-that-s-a-thing-now

Combine the superfast calculational capacities of Big Compute with the oceans of specific personal information comprising Big Data — and the fertile ground for computational propaganda emerges. That’s how the small AI programs called bots can be unleashed into cyberspace to target and deliver misinformation exactly to the people who will be most vulnerable to it. These messages can be refined over and over again based on how well they perform (again in terms of clicks, likes and so on). Worst of all, all this can be done semiautonomously, allowing the targeted propaganda (like fake news stories or faked images) to spread like viruses through communities most vulnerable to their misinformation.
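My note: the “refined over and over again based on how well they perform” loop is essentially automated trial and error. The purely hypothetical simulation below uses an epsilon-greedy bandit that keeps steering delivery toward whichever made-up message variant draws the most simulated clicks; no real platform, data, or API is involved.

```python
import random

# Hypothetical message variants and their click probabilities (unknown to the bot).
variants = ["headline A", "headline B", "headline C"]
true_click_rate = {"headline A": 0.02, "headline B": 0.05, "headline C": 0.11}

shown = {v: 0 for v in variants}
clicks = {v: 0 for v in variants}
EPSILON = 0.1  # fraction of the time a random variant is tried (exploration)

def pick_variant():
    if random.random() < EPSILON or all(n == 0 for n in shown.values()):
        return random.choice(variants)
    # Exploit: the variant with the best observed click rate so far.
    return max(variants, key=lambda v: clicks[v] / shown[v] if shown[v] else 0.0)

for _ in range(10_000):          # each iteration stands in for one delivered message
    v = pick_variant()
    shown[v] += 1
    if random.random() < true_click_rate[v]:   # simulated audience response
        clicks[v] += 1

for v in variants:
    rate = clicks[v] / shown[v] if shown[v] else 0.0
    print(f"{v}: shown {shown[v]:>5} times, observed click rate {rate:.3f}")
```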

According to Bolsover and Howard, viewing computational propaganda only from a technical perspective would be a grave mistake. As they explain, seeing it just in terms of variables and algorithms “plays into the hands of those who create it, the platforms that serve it, and the firms that profit from it.”

Computational propaganda is a new thing. People just invented it. And they did so by realizing possibilities emerging from the intersection of new technologies (Big Compute, Big Data) and new behaviors those technologies allowed (social media). But the emphasis on behavior can’t be lost.

People are not machines. We do things for a whole lot of reasons including emotions of loss, anger, fear and longing. To combat computational propaganda’s potentially dangerous effects on democracy in a digital age, we will need to focus on both its how and its why.

++++++++++++++++
more on big data in this IMS blog
http://blog.stcloudstate.edu/ims?s=big+data

more on bots in this IMS blog
http://blog.stcloudstate.edu/ims?s=bot

more on fake news in this IMS blog
http://blog.stcloudstate.edu/ims?s=fake+news

Data-driven design

Valuing data over design instinct puts metrics over users

Benek Lisefski August 13, 2019

https://modus.medium.com/data-driven-design-is-killing-our-instincts-d448d141653d

Overreliance on data to drive design decisions can be just as harmful as ignoring it. Data only tells one kind of story. But your project goals are often more complex than that. Goals can’t always be objectively measured.

Data-driven design is about using information gleaned from both quantitative and qualitative sources to inform how you make decisions for a set of users. Some common tools used to collect data include user surveys, A/B testing, site usage and analytics, consumer research, support logs, and discovery calls. 

Designers justified their value through their innate talent for creative ideas and artistic execution. Those whose instincts reliably produced success became rock stars.

In today’s data-driven world, that instinct is less necessary and holds less power. But make no mistake, there’s still a place for it.

Data is good at measuring things that are easy to measure. Some goals are less tangible, but that doesn’t make them less important.

Data has become an authoritarian who has fired the other advisors who may have tempered his ill will. A designer’s instinct would ask, “Do people actually enjoy using this?” or “How do these tactics reflect on our reputation and brand?”

Digital interface design is going through a bland period of sameness.

Data is only as good as the questions you ask

When to use data vs. when to use instinct

Deciding between two or three options? This is where data shines. Nothing is more decisive than an A/B test to compare potential solutions and see which one actually performs better. Make sure you’re measuring long-term value metrics and not just views and clicks. (A worked example of such a test follows these scenarios.)

Sweating product quality and aesthetics? Turn to your instinct. The overall feeling of quality is a collection of hundreds of micro-decisions, maintained consistency, and execution with accuracy. Each one of those decisions isn’t worth validating on its own. Your users aren’t design experts, so their feedback will be too subjective and variable. Trust your design senses when finessing the details.

Unsure about user behavior? Use data rather than asking for opinions. When asked what they’ll do, customers will do what they think you want them to. Instead, trust what they actually do when they think nobody’s looking.

Building brand and reputation? Data can’t easily measure this. But we all know trustworthiness is as important as clicks (and sometimes they’re opposing goals). When building long-term reputation, trust your instinct to guide you to what’s appealing, even if it sometimes contradicts short-term data trends. You have to play the long game here.
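My note: to make the A/B-testing scenario above concrete, here is a minimal sketch of a two-proportion z-test on invented conversion counts for variants A and B. The traffic numbers and the 5% significance threshold are assumptions for illustration only.

```python
from math import sqrt, erf

# Invented results from an A/B test: visitors shown each variant and how many converted.
visitors_a, conversions_a = 5_000, 410
visitors_b, conversions_b = 5_000, 472

p_a = conversions_a / visitors_a
p_b = conversions_b / visitors_b

# Pooled two-proportion z-test.
p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
z = (p_b - p_a) / se

# Two-sided p-value from the standard normal CDF.
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

print(f"A: {p_a:.2%}  B: {p_b:.2%}  z = {z:.2f}  p = {p_value:.4f}")
if p_value < 0.05:
    print("Difference is statistically significant at the 5% level.")
else:
    print("Not enough evidence that the variants differ.")
```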

+++++++++
more on big data in this IMS blog
http://blog.stcloudstate.edu/ims?s=big+data

digital practices framework

Imagine if we didn’t know how to use books – notes on a digital practices framework

 

the 20/60/20 model of change. The idea is that the top 20% of any group will be game for anything; they are your early adopters, always willing to try the next best thing. The bottom 20% of a group will hate everything and spend most of their time either subtly slowing things down or in open rebellion. The middle 60% are the people who have the potential to be won or lost depending on how good your plan is.

The top stream is about all the sunshine and light of working with others on the internet: its advantages and pitfalls, and ways in which to promote prosocial discourse. The middle stream is about pragmatics, the hows of doing things; it starts out with simple guidelines and moves toward the technical realities of licensing, content production and using tech. The bottom stream is about the self: how to keep yourself safe, and how to have a healthy relationship with the internet from a personal perspective.

Level 1 – Awareness

Level 2 – Learning

Level 3 – Interacting and making

Level 4 – Teaching

++++++++++++++
more on digital literacy in this IMS blog
http://blog.stcloudstate.edu/ims?s=digital+literacy

textbooks transformation

https://www.wired.com/story/digital-textbooks-radical-transformation/

Pearson “digital first” strategy.
My note: see our postings
http://blog.stcloudstate.edu/ims/2018/07/09/pearson-selling-us-k12-business/
http://blog.stcloudstate.edu/ims/2019/04/19/change-in-the-k12-sector/
It also enables Pearson to staunch the bleeding caused by an explosion in the second-hand market. A company called Chegg launched the first major online textbook rental service in 2007; Amazon followed suit in 2012. Both advertise savings of up to 90 percent off the sticker price.

But more technology doesn’t always mean better results. Within K-12 learning environments, the digital divide means that students in low-income and rural households have less access to reliable internet and fewer connected devices on which to complete the online portions of their homework. And while Pearson’s initiative applies only to textbooks in higher ed, the shift to digital has implications at the collegiate level as well.

Just as traditional software has a thriving open source community, textbooks have Open Educational Resources, complete textbooks that typically come free of charge digitally, or for a small fee—enough to cover the printing—in hard copy. And while it’s not an entirely new concept, OER has gained momentum in recent years, particularly as support has picked up at an institutional level, rather than on a course by course basis. According to a 2018 Babson College survey, faculty awareness of OER jumped from 34 percent to 46 percent since 2015.

One of OER’s leading proponents is OpenStax, a nonprofit based out of Rice University that offers a few dozen free textbooks, covering everything from AP Biology to Principles of Accounting. In the 2019–2020 academic year, 2.7 million students across 6,600 institutions used an OpenStax product instead of a for-profit equivalent.

The knock against OER is that, well, you get what you pay for. “One faculty member told me only half-jokingly, that OER is like a puppy that’s free. You get the free puppy, but then you have to do all the work,” says Cengage’s Hansen, who argues that traditional publishers provide critical supporting materials, like assessment questions, that OER often lacks, and can push more regular updates.

By virtue of being free, OER materials also heavily skew toward digital, with hardcover as a secondary option. (Or you can download the PDF and print it out yourself.) The same caveats about efficacy apply. But at least OER doesn’t lock you into one digital platform, the way the major publishers do. OpenStax alone counts around 50 ecosystem partners to provide homework and testing support.

Like and Subscribe

Or you could always split the difference.

That’s the territory Cengage wants to stake out. Late last summer, the educational publishing behemoth—it announced a planned merger with McGraw Hill in May; the combined company would surpass all but Pearson in market capitalization—rolled out Cengage Unlimited, a “Netflix for Textbooks” model that rolls all textbook rentals and digital platform access into a single rate: $120 for a semester, $180 for a full year, or $240 for two years. Almost a year in, the US-only program has a million subscribers.

My note: more about Cengage and McGraw Hill in this blog
http://blog.stcloudstate.edu/ims/2017/06/22/textbook-model/

this added Sept 13, 2019:

 

+++++++++++++
more on textbooks in this IMS blog
http://blog.stcloudstate.edu/ims?s=textbooks

NLP and ACL

NLP – natural language processing; ACL – Association for Computational Linguistics (ACL 2019)

Major trends in NLP: a review of 20 years of ACL research

Janna Lipenkova, July 23, 2019

https://www.linkedin.com/pulse/major-trends-nlp-review-20-years-acl-research-janna-lipenkova

The 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019)

 Data: working around the bottlenecks

Large data is inherently noisy. In general, the more “democratic” the production channel, the dirtier the data – which means that more effort has to be spent on cleaning it. For example, data from social media will require a longer cleaning pipeline. Among other things, you will need to deal with extravagances of self-expression like smileys and irregular punctuation, which are normally absent in more formal settings such as scientific papers or legal contracts.
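My note: a rough, illustrative sketch of what the first stages of such a cleaning pipeline might look like for tweets. The specific rules (strip URLs and mentions, drop a few ASCII emoticons, collapse repeated punctuation) are my own assumptions, not the article’s pipeline.

```python
import re

URL_RE      = re.compile(r"https?://\S+")
MENTION_RE  = re.compile(r"@\w+")
EMOTICON_RE = re.compile(r"[:;=][\-o^]?[)(DPp/\\|]")   # :-) ;) :D :P and similar
REPEAT_RE   = re.compile(r"([!?.,])\1+")               # "!!!" -> "!"
SPACE_RE    = re.compile(r"\s+")

def clean_tweet(text: str) -> str:
    text = URL_RE.sub(" ", text)        # drop links
    text = MENTION_RE.sub(" ", text)    # drop @mentions
    text = EMOTICON_RE.sub(" ", text)   # drop simple ASCII smileys
    text = REPEAT_RE.sub(r"\1", text)   # collapse repeated punctuation
    return SPACE_RE.sub(" ", text).strip().lower()

print(clean_tweet("@user OMG this is soooo good!!! :D check https://t.co/xyz"))
# -> "omg this is soooo good! check"
```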

The other major challenge is the labeled data bottleneck

Common workarounds are crowd-sourcing and Training Data as a Service (TDaaS). On the other hand, a range of automatic approaches to creating annotated datasets has also been suggested in the machine learning community.
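My note: one family of such automatic workarounds is weak supervision, where a few noisy heuristic labeling functions vote on provisional labels instead of humans annotating every example. The toy sketch below (made-up rules and sentences, simple majority vote) only illustrates the idea; real data-programming frameworks also model how accurate each rule is.

```python
# Toy weak-supervision sketch: heuristic labeling functions vote on sentiment labels.
ABSTAIN, POS, NEG = None, 1, 0

def lf_positive_words(text):
    return POS if any(w in text.lower() for w in ("great", "love", "excellent")) else ABSTAIN

def lf_negative_words(text):
    return NEG if any(w in text.lower() for w in ("terrible", "hate", "awful")) else ABSTAIN

def lf_negation_with_exclamation(text):
    return NEG if "not " in text.lower() and "!" in text else ABSTAIN

LABELING_FUNCTIONS = [lf_positive_words, lf_negative_words, lf_negation_with_exclamation]

def weak_label(text):
    votes = [v for v in (lf(text) for lf in LABELING_FUNCTIONS) if v is not ABSTAIN]
    if not votes:
        return ABSTAIN                       # no rule fired: leave the example unlabeled
    return max(set(votes), key=votes.count)  # simple majority vote over the rules

corpus = ["I love this phone, excellent battery!",
          "This is not what I ordered!",
          "Arrived on Tuesday."]
for sentence in corpus:
    print(weak_label(sentence), "->", sentence)
```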

Algorithms: a chain of disruptions in Deep Learning

Neural Networks are the workhorse of Deep Learning (cf. Goldberg and Hirst (2017) for an introduction to the basic architectures in the NLP context). Convolutional Neural Networks have seen increasing use in the past years, whereas the popularity of the traditional Recurrent Neural Network (RNN) is dropping. This is due, on the one hand, to the availability of more efficient RNN-based architectures such as LSTM and GRU. On the other hand, a new and pretty disruptive mechanism for sequential processing – attention – has been introduced into the sequence-to-sequence (seq2seq) framework of Sutskever et al. (2014), originally by Bahdanau et al. (2015).
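My note: a minimal NumPy sketch of attention in its scaled dot-product form, the variant later popularized by Transformer models (Vaswani et al., 2017). The original seq2seq attention of Bahdanau et al. (2015) is additive, but the intuition is the same: weight the input positions by their relevance to the current query.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q: (n_queries, d), K: (n_keys, d), V: (n_keys, d_v)."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                    # similarity of each query to each key
    scores -= scores.max(axis=-1, keepdims=True)     # for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the keys
    return weights @ V, weights                      # weighted sum of the values

rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 8))    # two query vectors
K = rng.normal(size=(5, 8))    # five key vectors
V = rng.normal(size=(5, 4))    # five value vectors
context, attn = scaled_dot_product_attention(Q, K, V)
print(context.shape, attn.shape)   # (2, 4) (2, 5)
```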

Consolidating various NLP tasks

The three “global” NLP development curves are syntax, semantics and context awareness.
The third curve – the awareness of a larger context – has already become one of the main drivers behind new Deep Learning algorithms.

A note on multilingual research

Think of different languages as different lenses through which we view the same world – they share many properties, a fact that is fully accommodated by modern learning algorithms with their increasing power for abstraction and generalization.

Spurred by the global AI hype, the NLP field is exploding with new approaches and disruptive improvements. There is a shift towards modeling meaning and context dependence, probably the most universal and challenging fact of human language. The generalisation power of modern algorithms allows for efficient scaling across different tasks, languages and datasets, thus significantly speeding up the ROI cycle of NLP developments and allowing for a flexible and efficient integration of NLP into individual business scenarios.
