

Posted by SPIEGEL International on Friday, January 15, 2021


Can We Develop Herd Immunity to Internet Propaganda?

Internet propaganda is becoming an industrialized commodity, warns Phil Howard, the director of the Oxford Internet Institute and author of many books on disinformation. In an interview, he calls for greater transparency and regulation of the industry.
https://www.oii.ox.ac.uk/people/philip-howard/
Platforms like Parler, TheDonald, Breitbart and Anon are like petri dishes for testing out ideas, to see what sticks. If extremist influencers see that something gets traction, they ramp it up. In the language of disease, you would say these platforms act as a vector, like a germ that carries a disease into other, more public forums.
At some point, a major influencer takes a new meme from one of these extremist forums and puts it out before a wider audience. It works like a vector-borne disease such as malaria, where the mosquitoes do the transmission. So, maybe a Hollywood actor or an influencer who knows nothing about politics will take this idea and post it on a bigger, better-known platform. From there, these memes escalate as they move from Parler to maybe Reddit and from there to Twitter, Facebook, Instagram and YouTube. We call this “cascades of misinformation.”
Sometimes the cascades of misinformation bounce from country to country, between the U.S., Canada and the UK, for example, so they echo back and forth.
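Howard's vector metaphor can be caricatured as a chain of amplification hops. A minimal sketch: only the platform names come from the interview, while the audience sizes, crossover rate and fan-out below are invented for illustration.

```python
# Toy model of a "cascade of misinformation" hopping between platforms.
# Platform names follow the interview; every number here is invented.
chain = [
    ("Reddit", 500_000),       # (platform, rough audience cap)
    ("Twitter", 5_000_000),
    ("Facebook", 50_000_000),
]

reach = 1_000        # seed audience on a fringe forum (e.g. Parler)
crossover = 0.01     # fraction of exposed users who repost on the next platform
fanout = 1_000       # followers reached per repost (influencer-skewed average)

for platform, audience in chain:
    reposts = int(reach * crossover)         # the "mosquitoes" carrying the meme
    reach = min(reposts * fanout, audience)  # capped by the platform's audience
    print(f"{platform}: ~{reach:,} users reached")
```

Under these toy numbers each hop multiplies reach tenfold, which is the escalation pattern the disease metaphor describes: a handful of carriers moving material to a platform with a far larger audience.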
Within Europe, two reservoirs for disinformation stick out: Poland and Hungary.
Our 2020 report shows that cyber troop activity continues to increase around the world. This year, we found evidence of 81 countries using social media to spread computational propaganda and disinformation about politics. This has increased from last year's report, where we identified 70 countries with cyber troop activity.
We also identified 63 new instances of private firms working with governments or political parties to spread disinformation about elections or other important political issues. We identified 21 such cases in 2017-2018, yet only 15 in the period between 2009 and 2016.
Why would well-funded Russian agencies buy disinformation services from a newcomer like Nigeria?
(1) Russian actors have found a lab in Nigeria that can provide services at competitive prices. (2) Countries like China and Russia also seem to be developing an interest in political influence in many African countries, so it is possible that there is a service industry for disinformation in Nigeria for that part of the world.
Each social media company should provide some kind of accounting statement about how it deals with misuse, with reporting hate speech, with fact checking and jury systems and so on. This system of transparency and accountability works for the stock markets; why shouldn't it work in the social media realm?
We clearly need a digital civics curriculum. The 12-to-16-year-olds are developing their media attitudes now, and they will be voting soon. There is very good media education in Canada and the Netherlands, for example, and that is an excellent long-term strategy.

++++++++++++
more on fake news in this IMS blog
https://blog.stcloudstate.edu/ims?s=fake+news

bots, big data and the future

Computational Propaganda: Bots, Targeting And The Future

February 9, 2018, 11:37 AM ET

https://www.npr.org/sections/13.7/2018/02/09/584514805/computational-propaganda-yeah-that-s-a-thing-now

Combine the superfast calculational capacities of Big Compute with the oceans of specific personal information comprising Big Data — and the fertile ground for computational propaganda emerges. That’s how the small AI programs called bots can be unleashed into cyberspace to target and deliver misinformation exactly to the people who will be most vulnerable to it. These messages can be refined over and over again based on how well they perform (again in terms of clicks, likes and so on). Worst of all, all this can be done semiautonomously, allowing the targeted propaganda (like fake news stories or faked images) to spread like viruses through communities most vulnerable to their misinformation.
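The refine-and-redeploy loop described here behaves like a simple engagement-driven bandit: try message variants, measure clicks, and push harder on whatever performs. A minimal sketch in Python, where the variant names and click probabilities are made up for illustration and none of it comes from the article:

```python
import random

random.seed(42)  # deterministic toy run

# Made-up message variants with hidden "true" click-through probabilities.
variants = {"variant_a": 0.02, "variant_b": 0.05, "variant_c": 0.10}
shown = {v: 0 for v in variants}
clicks = {v: 0 for v in variants}

def pick_variant(epsilon=0.1):
    """Mostly redeploy the best-performing message, occasionally explore."""
    if random.random() < epsilon or not any(shown.values()):
        return random.choice(list(variants))
    # empirical click-through rate, with +1 to avoid division by zero
    return max(variants, key=lambda v: clicks[v] / (shown[v] + 1))

for _ in range(5_000):                   # each iteration = one targeted delivery
    v = pick_variant()
    shown[v] += 1
    if random.random() < variants[v]:    # simulated user engagement
        clicks[v] += 1

best = max(variants, key=lambda v: clicks[v] / (shown[v] + 1))
print("most effective variant:", best)
```

The point of the sketch is the semiautonomy: no human has to decide which message "works" — the loop discovers and amplifies the highest-engagement variant on its own, which is exactly the refinement-by-clicks dynamic the paragraph describes.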

According to Bolsover and Howard, viewing computational propaganda only from a technical perspective would be a grave mistake. As they explain, seeing it just in terms of variables and algorithms “plays into the hands of those who create it, the platforms that serve it, and the firms that profit from it.”

Computational propaganda is a new thing. People just invented it. And they did so by realizing possibilities emerging from the intersection of new technologies (Big Compute, Big Data) and new behaviors those technologies allowed (social media). But the emphasis on behavior can’t be lost.

People are not machines. We do things for a whole lot of reasons, including emotions of loss, anger, fear and longing. To combat computational propaganda's potentially dangerous effects on democracy in a digital age, we will need to focus on both its how and its why.

++++++++++++++++
more on big data in this IMS blog
https://blog.stcloudstate.edu/ims?s=big+data

more on bots in this IMS blog
https://blog.stcloudstate.edu/ims?s=bot

more on fake news in this IMS blog
https://blog.stcloudstate.edu/ims?s=fake+news