Posts Tagged ‘bots’

Twitter bots climate disinformation

Twitter Bots Are a Major Source of Climate Disinformation. Researchers determined that nearly 9.5% of the users in their sample were likely bots. But those bots accounted for 25% of the total tweets about climate change on most days.

https://www.scientificamerican.com/article/twitter-bots-are-a-major-source-of-climate-disinformation/

A paper published last week in the journal Climate Policy is part of an expanding body of research about the role of bots in online climate discourse.

+++++++++++
more on climate in this IMS blog
https://blog.stcloudstate.edu/ims?s=climate

Information Overload Fake News Social Media

Information Overload Helps Fake News Spread, and Social Media Knows It

Understanding how algorithm manipulators exploit our cognitive vulnerabilities empowers us to fight back

https://www.scientificamerican.com/article/information-overload-helps-fake-news-spread-and-social-media-knows-it/

The information landscape is a minefield of cognitive biases. These biases evolved when information was scarce and acting on it quickly was adaptive: people who behaved in accordance with them—for example, by staying away from the overgrown pond bank where someone said there was a viper—were more likely to survive than those who did not.

Compounding the problem is the proliferation of online information. Viewing and producing blogs, videos, tweets and other units of information called memes has become so cheap and easy that the information marketplace is inundated. My note: folksonomy at its worst.

At the University of Warwick in England and at Indiana University Bloomington’s Observatory on Social Media (OSoMe, pronounced “awesome”), our teams are using cognitive experiments, simulations, data mining and artificial intelligence to comprehend the cognitive vulnerabilities of social media users. We are also developing analytical and machine-learning aids to fight social media manipulation.

As Nobel Prize–winning economist and psychologist Herbert A. Simon noted, “What information consumes is rather obvious: it consumes the attention of its recipients.”

attention economy

Figure caption: nodal diagrams representing three social media networks show that more memes correlate with higher information load and lower quality of the information shared.

 Our models revealed that even when we want to see and share high-quality information, our inability to view everything in our news feeds inevitably leads us to share things that are partly or completely untrue.
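The mechanism is easy to see in a toy simulation. The sketch below is only an illustration under simple assumptions (random meme quality, a fixed follower graph, a bounded feed), not the OSoMe team's actual model: as the feed each agent can attend to shrinks, the average quality of what gets reshared drops.

```python
# Toy limited-attention model (illustrative only, not the OSoMe simulation).
import random

def simulate(num_agents=200, feed_size=10, new_meme_rate=0.3, steps=5000, seed=1):
    rng = random.Random(seed)
    # static follower graph: each agent's posts land in 5 random feeds
    followers = {a: rng.sample(range(num_agents), 5) for a in range(num_agents)}
    feeds = {a: [] for a in range(num_agents)}   # each feed item is a meme "quality" in [0, 1]
    shared = []                                  # quality of everything that gets posted or reshared

    for _ in range(steps):
        agent = rng.randrange(num_agents)
        if rng.random() < new_meme_rate or not feeds[agent]:
            meme = rng.random()                  # post a brand-new meme of random quality
        else:
            meme = max(feeds[agent])             # reshare the best meme still visible in the feed
        shared.append(meme)
        for f in followers[agent]:               # the meme lands in followers' feeds;
            feeds[f] = ([meme] + feeds[f])[:feed_size]   # older items fall off (limited attention)

    return sum(shared) / len(shared)

for feed_size in (50, 10, 3):                    # shrinking attention
    print(f"feed size {feed_size:2d}: mean quality of shared memes = {simulate(feed_size=feed_size):.3f}")
```

Here a shrinking `feed_size` stands in for information overload: the same flood of memes competes for less attention, so high-quality items fall out of view before anyone can reshare them.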

As the psychologist Frederic Bartlett showed in his classic memory experiments, people reshape what they recall so that it fits their existing knowledge and expectations. Cognitive biases greatly worsen the problem.

We now know that our minds do this all the time: they adjust our understanding of new information so that it fits in with what we already know. One consequence of this so-called confirmation bias is that people often seek out, recall and understand information that best confirms what they already believe.
This tendency is extremely difficult to correct.

Making matters worse, search engines and social media platforms provide personalized recommendations based on the vast amounts of data they have about users’ past preferences.

pollution by bots

Figure caption: nodal diagrams representing two social media networks show that when more than 1% of real users follow bots, low-quality information prevails.

Social Herding

Social groups create a pressure toward conformity so powerful that it can overcome individual preferences, and by amplifying random early differences, it can cause segregated groups to diverge to extremes.

Social media follows a similar dynamic. We confuse popularity with quality and end up copying the behavior we observe.
Information is transmitted via “complex contagion”: when we are repeatedly exposed to an idea, typically from many sources, we are more likely to adopt and reshare it.
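A complex contagion can be sketched as a threshold model (a generic illustration, not the researchers' own code): a user adopts a meme only after at least `threshold` of the accounts they follow have adopted it, whereas a simple contagion needs just one exposure.

```python
# Simple vs. complex contagion on a small random follower graph (illustrative sketch).
import random

def spread(neighbors, seeds, threshold, rounds=20):
    adopted = set(seeds)
    for _ in range(rounds):
        new = {
            node
            for node, nbrs in neighbors.items()
            if node not in adopted
            and sum(n in adopted for n in nbrs) >= threshold   # enough adopted neighbors?
        }
        if not new:
            break
        adopted |= new
    return adopted

random.seed(0)
nodes = range(100)
# each node hears from 4 randomly chosen other accounts
neighbors = {n: random.sample([m for m in nodes if m != n], 4) for n in nodes}
seeds = list(nodes)[:5]

print("simple contagion  (1 exposure needed):", len(spread(neighbors, seeds, threshold=1)))
print("complex contagion (2 exposures needed):", len(spread(neighbors, seeds, threshold=2)))
```

With the same seed accounts, the single-exposure rule typically reaches most of the network while the two-exposure rule stalls, which is why repeated exposure from many sources matters so much for what actually spreads.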

Twitter users with extreme political views are more likely than moderate users to share information from low-credibility sources.

In addition to showing us items that conform with our views, social media platforms such as Facebook, Twitter, YouTube and Instagram place popular content at the top of our screens and show us how many people have liked and shared something. Few of us realize that these cues do not provide independent assessments of quality.

programmers who design the algorithms for ranking memes on social media assume that the “wisdom of crowds” will quickly identify high-quality items; they use popularity as a proxy for quality. My note: again, ill-conceived folksonomy.

Echo Chambers
The political echo chambers on Twitter are so extreme that individual users’ political leanings can be predicted with high accuracy: you have the same opinions as the majority of your connections. This chambered structure efficiently spreads information within a community while insulating that community from other groups.

Socially shared information not only bolsters our biases but also becomes more resilient to correction.

OSoMe has developed machine-learning algorithms to detect social bots. One of these, Botometer, is a public tool that extracts 1,200 features from a given Twitter account to characterize its profile, friends, social network structure, temporal activity patterns and language. The program compares these characteristics with those of tens of thousands of previously identified bots to give the Twitter account a score for its likely use of automation.
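In outline, this is ordinary supervised classification. The toy below shows the shape of the approach with a handful of invented features and made-up training rows; it is not Botometer's real feature set, model, training data, or API.

```python
# Toy illustration of feature-based bot scoring (not Botometer's actual features or model).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# hypothetical features per account:
# [tweets_per_day, followers/friends ratio, fraction_retweets, account_age_days, mean_seconds_between_tweets]
X_train = np.array([
    [300, 0.02, 0.95,   40,   12],   # labeled bot
    [250, 0.10, 0.90,   15,    8],   # labeled bot
    [  5, 1.50, 0.20, 2000, 4000],   # labeled human
    [ 12, 0.80, 0.35, 1500, 1800],   # labeled human
])
y_train = np.array([1, 1, 0, 0])     # 1 = bot, 0 = human

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

unknown_account = np.array([[180, 0.05, 0.85, 60, 20]])
print("estimated bot score:", model.predict_proba(unknown_account)[0, 1])
```

Botometer does the same kind of comparison at a much larger scale: roughly 1,200 features per account, matched against tens of thousands of accounts already identified as bots or humans.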

Some manipulators play both sides of a divide through separate fake news sites and bots, driving political polarization or monetization by ads.
We recently uncovered a network of inauthentic accounts on Twitter that were all coordinated by the same entity. Some pretended to be pro-Trump supporters of the Make America Great Again campaign, whereas others posed as Trump “resisters”; all asked for political donations.

OSoMe has also developed a mobile app called Fakey that helps users learn how to spot misinformation. The game simulates a social media news feed, showing actual articles from low- and high-credibility sources. Users must decide what to share, what not to share and what to fact-check. Analysis of data from Fakey confirms the prevalence of online social herding: users are more likely to share low-credibility articles when they believe that many other people have shared them.

Another OSoMe tool, Hoaxy, shows how any extant meme spreads through Twitter. In this visualization, nodes represent actual Twitter accounts, and links depict how retweets, quotes, mentions and replies propagate the meme from account to account.
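A diffusion network like the one Hoaxy draws can be assembled from raw tweet records in a few lines. The sketch below uses invented field names and the networkx library, not Hoaxy's actual data schema or code: accounts become nodes, and each retweet, quote, mention or reply becomes a directed edge that carries the meme onward.

```python
# Sketch of a Hoaxy-style diffusion network (illustrative field names, not Hoaxy's schema).
import networkx as nx

tweets = [  # hypothetical records about one meme
    {"user": "alice", "interaction": "original", "source": None},
    {"user": "bob",   "interaction": "retweet",  "source": "alice"},
    {"user": "carol", "interaction": "quote",    "source": "alice"},
    {"user": "dave",  "interaction": "reply",    "source": "bob"},
]

G = nx.DiGraph()
for t in tweets:
    G.add_node(t["user"])
    if t["source"]:
        # edge follows the meme from the account that posted it to the account that propagated it
        G.add_edge(t["source"], t["user"], kind=t["interaction"])

print("accounts:", G.number_of_nodes(), "| propagation edges:", G.number_of_edges())
for u, v, data in G.edges(data=True):
    print(f"{u} -> {v} via {data['kind']}")
```

Once the graph exists, standard network measures such as out-degree or connected components point to the accounts acting as amplification hubs for that meme.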

Free communication is not free. By decreasing the cost of information, we have decreased its value and invited its adulteration. 

Vatican bots hackers

Vatican enlists bots to protect library from onslaught of hackers

https://www.theguardian.com/world/2020/nov/08/vatican-enlists-bots-to-protect-library-from-onslaught-of-hackers

The library has partnered with Darktrace, a company founded by Cambridge University mathematicians, which claims to be the first to develop an AI system for cybersecurity.

+++++++++++++++
more on bots in this IMS blog
https://blog.stcloudstate.edu/ims?s=bots

bots, big data and the future

Computational Propaganda: Bots, Targeting And The Future

February 9, 2018, 11:37 AM ET

https://www.npr.org/sections/13.7/2018/02/09/584514805/computational-propaganda-yeah-that-s-a-thing-now

Combine the superfast calculational capacities of Big Compute with the oceans of specific personal information comprising Big Data — and the fertile ground for computational propaganda emerges. That’s how the small AI programs called bots can be unleashed into cyberspace to target and deliver misinformation exactly to the people who will be most vulnerable to it. These messages can be refined over and over again based on how well they perform (again in terms of clicks, likes and so on). Worst of all, all this can be done semiautonomously, allowing the targeted propaganda (like fake news stories or faked images) to spread like viruses through communities most vulnerable to their misinformation.
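The refinement loop described here ("refined over and over again based on how well they perform") is essentially automated A/B testing. The sketch below is a generic epsilon-greedy bandit over invented message variants, not code from any actual influence operation: variants that earn more simulated clicks get served more often.

```python
# Generic performance-driven refinement loop (epsilon-greedy bandit, illustrative only).
import random

rng = random.Random(42)
variants = ["headline A", "headline B", "headline C"]
true_click_rates = {"headline A": 0.02, "headline B": 0.05, "headline C": 0.11}  # unknown to the bot
shown = {v: 0 for v in variants}
clicks = {v: 0 for v in variants}

for _ in range(10_000):
    if rng.random() < 0.1:            # explore: occasionally try a random variant
        v = rng.choice(variants)
    else:                             # exploit: serve the best-performing variant so far
        v = max(variants, key=lambda x: clicks[x] / shown[x] if shown[x] else 0.0)
    shown[v] += 1
    clicks[v] += rng.random() < true_click_rates[v]   # simulated audience response

for v in variants:
    print(f"{v}: shown {shown[v]:>5}, observed click rate {clicks[v] / shown[v]:.3f}")
```

The loop needs no human in it: the only signal is engagement, which is exactly why the semiautonomous targeting described above scales so easily.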

According to Bolsover and Howard, viewing computational propaganda only from a technical perspective would be a grave mistake. As they explain, seeing it just in terms of variables and algorithms “plays into the hands of those who create it, the platforms that serve it, and the firms that profit from it.”

Computational propaganda is a new thing. People just invented it. And they did so by realizing possibilities emerging from the intersection of new technologies (Big Compute, Big Data) and new behaviors those technologies allowed (social media). But the emphasis on behavior can’t be lost.

People are not machines. We do things for a whole lot of reasons including emotions of loss, anger, fear and longing. To combat computational propaganda’s potentially dangerous effects on democracy in a digital age, we will need to focus on both its how and its why.

++++++++++++++++
more on big data in this IMS blog
https://blog.stcloudstate.edu/ims?s=big+data

more on bots in this IMS blog
https://blog.stcloudstate.edu/ims?s=bot

more on fake news in this IMS blog
https://blog.stcloudstate.edu/ims?s=fake+news

weaponizing the web RT hybrid war

Fake news and botnets: how Russia weaponised the web

https://www.theguardian.com/technology/2017/dec/02/fake-news-botnets-how-russia-weaponised-the-web-cyber-attack-estonia

The digital attack that brought Estonia to a standstill 10 years ago was the first shot in a cyberwar that has been raging between Moscow and the west ever since

It began at exactly 10pm on 26 April, 2007, when a Russian-speaking mob began rioting in the streets of Tallinn, the capital city of Estonia, killing one person and wounding dozens of others. That incident resonates powerfully in some of the recent conflicts in the US. In 2007, the Estonian government had announced that a bronze statue of a heroic second world war Soviet soldier was to be removed from a central city square. For ethnic Estonians, the statue had less to do with the war than with the Soviet occupation that followed it, which lasted until independence in 1991. For the country’s Russian-speaking minority – 25% of Estonia’s 1.3 million people – the removal of the memorial was another sign of ethnic discrimination.

That evening, Jaan Priisalu – a former risk manager for Estonia’s largest bank, Hansabank, who was working closely with the government on its cybersecurity infrastructure – was at home in Tallinn with his girlfriend when his phone rang. On the line was Hillar Aarelaid, the chief of Estonia’s cybercrime police.

“It’s going down,” Aarelaid declared. Alongside the street fighting, reports of digital attacks were beginning to filter in. The websites of the parliament, major universities, and national newspapers were crashing. Priisalu and Aarelaid had suspected something like this could happen one day. A digital attack on Estonia had begun.

“The Russian theory of war allows you to defeat the enemy without ever having to touch him,” says Peter Pomerantsev, author of Nothing is True and Everything is Possible. “Estonia was an early experiment in that theory.”

Since then, Russia has only developed, and codified, these strategies. The techniques pioneered in Estonia are known as the “Gerasimov doctrine,” named after Valery Gerasimov, the chief of the general staff of the Russian military. In 2013, Gerasimov published an article in the Russian journal Military-Industrial Courier, articulating the strategy of what is now called “hybrid” or “nonlinear” warfare. “The lines between war and peace are blurred,” he wrote. New forms of antagonism, as seen in 2010’s Arab spring and the “colour revolutions” of the early 2000s, could transform a “perfectly thriving state, in a matter of months, and even days, into an arena of fierce armed conflict”.

Russia has deployed these strategies around the globe. Its 2008 war with Georgia, another former Soviet republic, relied on a mix of both conventional and cyber-attacks, as did the 2014 invasion of Crimea. Both began with civil unrest sparked via digital and social media – followed by tanks. Finland and Sweden have experienced near-constant Russian information operations. Russian hacks and social media operations have also occurred during recent elections in Holland, Germany, and France. Most recently, Spain’s leading daily, El País, reported on Russian meddling in the Catalonian independence referendum. Russian-supported hackers had allegedly worked with separatist groups, presumably with a mind to further undermining the EU in the wake of the Brexit vote.

The Kremlin has used the same strategies against its own people. Domestically, history books, school lessons, and media are manipulated, while laws are passed blocking foreign companies’ access to the Russian population’s online data – an essential resource in today’s global information-sharing culture. According to British military researcher Keir Giles, author of Nato’s Handbook of Russian Information Warfare, the Russian government, or actors that it supports, has even captured the social media accounts of celebrities in order to spread provocative messages under their names but without their knowledge. The goal, both at home and abroad, is to sever outside lines of communication so that people get their information only through controlled channels.

+++++++++++++++++++++
24-hour Putin people: my week watching Kremlin ‘propaganda channel’ RT

https://www.theguardian.com/media/2017/nov/29/24-hour-putin-people-my-week-watching-kremlin-propaganda-channel-rt-russia-today

 Wednesday 29 November 2017 

According to its detractors, RT is Vladimir Putin’s global disinformation service, countering one version of the truth with another in a bid to undermine the whole notion of empirical truth. And yet influential people from all walks of public life appear on it, or take its money. You can’t criticise RT’s standards, they say, if you don’t watch it. So I watched it. For a week.

Suchet, the son of former ITV newsreader John Suchet and the nephew of actor David Suchet, has been working for RT since 2009. The offspring of well-known people feature often on RT. Sophie Shevardnadze, who presents Sophie & Co, is the granddaughter of former Georgian president and Soviet foreign minister Eduard Shevardnadze. Tyrel Ventura, who presents Watching the Hawks on RT America, is the son of wrestler-turned-politician Jesse Ventura. His co-host is Oliver Stone’s son Sean.

My note: so this is why Oliver Stone went easy on Putin in his “documentary”, so that his son could have a job. #Nepotism #FakeNews

RT’s stated mission is to offer an “alternative perspective on major global events”, but the world according to RT is often downright surreal.

Peter Pomerantsev, author of Nothing Is True and Everything Is Possible, about Putin’s Russia, and now a senior visiting fellow in global affairs at the London School of Economics, was in Moscow working in television when Russia Today first started hiring graduates from Britain and the US. “The people were really bright, they were being paid well,” he says. But they soon found they were being ordered to change their copy, or instructed how to cover certain stories to reflect well on the Kremlin. “Everyone had their own moment when they first twigged that this wasn’t like the BBC,” he says. “That, actually, this is being dictated from above.” The coverage of Russia’s war with Georgia in 2008 was a lightbulb moment for many, he says. They quit.

+++++++++++++++

more on Russian bots, trolls:
https://blog.stcloudstate.edu/ims/2017/11/22/bots-trolls-and-fake-news/

+++++++++++++++
more on state propaganda in this IMS blog
https://blog.stcloudstate.edu/ims/2017/11/21/china-of-xi/