THERE’S A MEME on Instagram, circulated by a group called “Born Liberal.” “Born Liberal” was a creation of the Internet Research Agency, the Russian propaganda wing.
Conversations around the IRA’s operations traditionally have focused on Facebook and Twitter, but like any hip millennial, the IRA was actually most obsessive about Instagram.
On Twitter, the IRA deployed 3,841 accounts, including several personas that “regularly played hashtag games.” That approach paid off; 1.4 million people engaged with the tweets, leading to nearly 73 million engagements. Most of this work was focused on news, while on Facebook and Instagram, the Russians prioritized “deeper relationships,” according to the researchers. On Facebook, the IRA notched a total of 3.3 million page followers, who engaged with their politically divisive content 76.5 million times. Russia’s most popular pages targeted the right wing and the black community. The trolls also knew their audiences; they deployed Pepe memes at pages intended for right-leaning millennials, but kept them away from posts directed at older conservative Facebook users. Not every attempt was a hit; while 33 of the 81 IRA Facebook pages had over 1,000 followers, dozens had none at all.
The report also points out new links between the IRA’s pages and Wikileaks, which helped disseminate hacked emails from Clinton campaign manager John Podesta
While many people think of memes as “cat pictures with words,” the Defense Department and DARPA have studied them for years as a powerful tool of cultural influence, capable of reinforcing or even changing values and behavior.
“over the past five years, disinformation has evolved from a nuisance into high-stakes information war.” And yet, rather than fighting back effectively, Americans are battling each other over what to do about it.
A year after the Meme Warfare Center proposal was published, DARPA, the Pentagon agency that develops new military technology, commissioned a four-year study of memetics. The research was led by Dr. Robert Finkelstein, founder of the Robotic Technology Institute, and an academic with a background in physics and cybernetics.
Finkelstein’s study of “Military Memetics” centered on a basic problem in the field, determining “whether memetics can be established as a science with the ability to explain and predict phenomena.” It still had to be proved, in other words, that memes were actual components of reality and not just a nifty concept with great marketing.
The type of data: Wikipedia. The dangers of learning from Wikipedia, and how individuals can organize to mitigate some of those dangers. Wikidata and algorithms.
IBM Watson, an AI system, uses Wikipedia as a corpus its algorithms draw on to make sense of the world.
YouTube debunks conspiracy-theory videos by linking them to Wikipedia articles.
Semantic relatedness: models such as Word2Vec.
How do these algorithms work? They train on a large body of unstructured text and learn which specific words appear together.
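A minimal sketch of the idea behind semantic relatedness: words are represented as vectors, and relatedness is measured as the cosine of the angle between them. The tiny 3-dimensional vectors below are invented for illustration; real Word2Vec embeddings have hundreds of dimensions learned from a large corpus.

```python
import math

# Toy 3-dimensional "embeddings" (invented for illustration; real
# Word2Vec vectors have 100-300 dimensions learned from text).
vectors = {
    "king":  [0.8, 0.6, 0.1],
    "queen": [0.7, 0.7, 0.1],
    "apple": [0.1, 0.2, 0.9],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 = same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Related words point in similar directions, unrelated ones do not.
print(cosine_similarity(vectors["king"], vectors["queen"]))  # high, close to 1
print(cosine_similarity(vectors["king"], vectors["apple"]))  # much lower
```

With trained embeddings, the same arithmetic is what lets a system answer “how related are these two words?” without any hand-written dictionary.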
A lot of AI learns about the world from Wikipedia. The neutral-point-of-view policy asks editors to present viewpoints as proportionally as possible. Wikipedia biases: 1. gender bias (only 20–30% are women).
ConceptNet: embeddings can be debiased along different demographic dimensions.
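A minimal sketch of the core operation behind that kind of debiasing (the toy 2-dimensional vectors and the “gender axis” below are invented for illustration): project the bias direction out of a word vector so the word becomes neutral on that axis.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def scale(v, s):
    return [x * s for x in v]

def subtract(a, b):
    return [x - y for x, y in zip(a, b)]

def debias(vector, bias_direction):
    """Remove the component of `vector` that lies along `bias_direction`
    (assumed to be unit length), leaving the word neutral on that axis."""
    projection = scale(bias_direction, dot(vector, bias_direction))
    return subtract(vector, projection)

# Toy example: a "gender" axis and a profession vector leaning along it.
gender_axis = [1.0, 0.0]           # unit vector, e.g. a he-minus-she direction
nurse = [0.4, 0.9]                 # leans along the gender axis
print(debias(nurse, gender_axis))  # → [0.0, 0.9]: gender component removed
```

Real debiasing pipelines estimate the bias direction from many word pairs and handle several demographic dimensions at once, but the projection step is the same.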
Citation analysis also gives an idea of biases: the localness of sources cited in spatial articles; structural biases.
Geolocation on Twitter by county skews toward predicting people living in urban areas. Facebook wants to push more local news.
Danger (bias) #3: Wikipedia search results vs. the Wikipedia knowledge panel.
Collective action against tech: Reddit; boycotts of Facebook and Instagram.
Data labor: the primary resource these companies have is user-contributed data: posts, images, reviews, etc.
Boycott vs. data strike (making data unavailable to algorithms in the future). The EU’s GDPR covers historical data, much like the California Consumer Privacy Act. One can run a data strike without a full boycott. General vs. homogeneous (a group with a shared identity) boycotts.
Wikipedia’s spam policy obstructs new editors, and that hits communities such as women.
How to access data at different levels; methods and methodological concerns; ethical concerns; legal concerns.
TweetDeck for advanced Twitter searches. Quoting and likes are relevant but not enough; sometimes a screenshot is needed.
Social listening platforms: Crimson Hexagon, Parse.ly, Sysomos. These are not yet academic platforms: they offer tools to set up queries and visualizations, but their algorithms and data samples are hard to inspect. Open-source tools: SMILE (Social Media Intelligence and Learning Environment, from Urbana) collects data from Twitter and Reddit, and within the platform users can query Twitter and create trend and sentiment analyses. Voxgov (a subscription service for analyzing political social media).
Graduate-level and faculty research: accessing large-scale social media data via web scraping and APIs (the Twitter APIs), using JavaScript, Python, etc. Gnip Firehose API ($); Web Scraper Chrome plugin (an easy tool; Python- and R-based scrapers also exist); Twint (a Twitter scraper).
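For those starting with the Twitter APIs, a minimal sketch of assembling a query URL for the v2 recent-search endpoint. The query string and field names here are illustrative examples; an authenticated HTTP client with a bearer token is still required to actually send the request.

```python
from urllib.parse import urlencode

# Twitter API v2 recent-search endpoint (covers roughly the last 7 days).
BASE = "https://api.twitter.com/2/tweets/search/recent"

def build_search_url(query, max_results=100):
    """Assemble a recent-search request URL with properly encoded params."""
    params = {
        "query": query,               # e.g. a hashtag plus operators
        "max_results": max_results,   # 10-100 results per page
        "tweet.fields": "created_at,public_metrics",
    }
    return BASE + "?" + urlencode(params)

url = build_search_url("#disinformation -is:retweet")
print(url)
```

The same pattern (base endpoint plus encoded parameters) applies whether the request is sent from Python, R, or JavaScript; only the HTTP client and authentication differ.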
Facepager (open source) for those who don’t code in Python or R: it structures and downloads the data sets.
TAGS archives tweets into Google Sheets using the Twitter API. Anything older than 7 days is not available, so harvest every week.
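Because standard search only reaches back about a week, a harvester has to compute a fresh date window on every weekly run. A minimal sketch of that bookkeeping (the 7-day figure follows the note above):

```python
from datetime import date, timedelta

def harvest_window(today=None, days_back=7):
    """Return the (since, until) ISO dates a weekly harvest run should cover.

    Twitter's standard search reaches back only ~7 days, so any run
    scheduled less often than weekly will permanently miss tweets.
    """
    today = today or date.today()
    since = today - timedelta(days=days_back)
    return since.isoformat(), today.isoformat()

since, until = harvest_window(date(2019, 3, 15))
print(since, until)  # 2019-03-08 2019-03-15
```

Tools like TAGS do this implicitly on a schedule; the point is that the window must slide forward at least as fast as the platform deletes access to old results.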
Social Feed Manager (George Washington University) – Justin Littman, with Stanford. It must be installed on a server, but it allows much more.
Legal concerns: copyright (the information is public, but that does not extend beyond the copyrighted material). The fair-use argument is strong, but you cannot publish the data; you can analyze it under fair use. Contracts (terms of service/use) supersede copyright. Licensed data is available through the library.
Methods: sampling concerns (Tufekci, 2014, raises these questions for social media research). Social media data is a good sample for studying social media itself, but for other fields? Not according to her. Hashtag studies carry a self-selection bias. Twitter as a “model organism”: its data is over-represented in academic studies.
Methodological concerns: scope of access (lack of historical data); the mechanics and context of the platform (retweets are not necessarily endorsements).
Ethical concerns: for public information, IRBs may not require informed consent; the right to be forgotten; anonymized data is often still traceable.
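A minimal sketch of why “anonymized” data is often still traceable, using invented records: once names are stripped, combinations of quasi-identifiers (ZIP code, birth year, gender) can still be unique, and a unique row can be linked back to a person via any outside list that shares those fields.

```python
from collections import Counter

# Invented "anonymized" records: names removed, quasi-identifiers kept.
records = [
    {"zip": "55301", "birth_year": 1984, "gender": "F"},
    {"zip": "55301", "birth_year": 1984, "gender": "F"},
    {"zip": "55301", "birth_year": 1991, "gender": "M"},
    {"zip": "55402", "birth_year": 1975, "gender": "F"},
]

def reidentifiable(records, keys=("zip", "birth_year", "gender")):
    """Return records whose quasi-identifier combination is unique in the
    data set -- each such record can be matched to exactly one person by
    an attacker holding an outside list (voter rolls, etc.) with the
    same fields."""
    combo = lambda r: tuple(r[k] for k in keys)
    counts = Counter(combo(r) for r in records)
    return [r for r in records if counts[combo(r)] == 1]

print(len(reidentifiable(records)))  # 2 of the 4 rows are unique
```

This is the intuition behind k-anonymity: a release is safer when every quasi-identifier combination appears at least k times, so no row stands alone.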
Table discussion: digital humanities and journalism are interested, but the scope is too narrow. The tools are still difficult to find and operate. The context of the visuals matters. How to spread this across a variety of majors and classes? Controversial events are more likely to be deleted.
Takedowns, lies and corrosion: what is a librarian to do? Trolls, takedowns.
Development-kit circulation. Familiarity with the Oculus Rift resulted in less reservation; a downturn, too.
An experience station. Clean up the free apps.
Question: spherical video, i.e., 360° video.
Safety issues: are there policies? Instructional perspective: curating. WI people: user testing. Touch controllers are more intuitive than the Xbox controller. The retail Oculus Rift.
Apps: Sketchfab, a 3D model viewer (.obj or .stl files). Medium, Tilt Brush.
The College of Liberal Arts at the U has its own VR and 3D-printing setup.
Penn State (Paul, a librarian; kinesiology and anatomy programs), Information Sciences and Technology: an immersive-experiences lab for 360° video.
CALIPHA; part of it is XRLibraries. Libraries equal education. Content provider LifeLiqe offers a STEM library of AR and VR objects. https://www.lifeliqe.com/
Social media is the best friend disinformation ever had, and the cure is far from obvious.
Anya Schiffrin is an adjunct faculty member at the School of International and Public Affairs at Columbia University. She worked in Hanoi from 1997 to 1999 as the bureau chief of Dow Jones Newswires.
Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics By Yochai Benkler, Robert Faris, & Hal Roberts
Oxford University Press
Benkler, a Harvard law professor and well-known theorist of the digital age, and his colleagues have produced an authoritative tome that includes multiple taxonomies and literature reviews, as well as visualizations of the flow of disinformation.
white supremacist and alt-right trolls
a history of the scholarship on propaganda, reminding the reader that much of the discussion began in the 1930s.
Benkler’s optimistic 2007 book, The Wealth of Networks, predicted that the Internet would bring people together and transform the way information is created and spread. Today, Benkler is far less sanguine and has become one of the foremost researchers of disinformation networks.
Fox News, Breitbart, The Daily Caller, InfoWars, and Zero Hedge
As a result, mainstream journalists repeat and amplify the falsehoods even as they debunk them.
There is no clear line, they argue, between Russian propaganda, Breitbart lies, and the Trump victory. They add that Fox News is probably more influential than Facebook.
after George Soros gave a speech in January 2018 calling for regulation of the social media platforms, Facebook hired a Republican opposition research firm to shovel dirt at Soros.
The European Union has not yet tried to regulate disinformation (although they do have codes of practice for the platforms), instead focusing on taxation, competition regulation, and protection of privacy. But Germany has strengthened its regulations regarding online hate speech, including the liability of the social media platforms.
disclosure of the sources of online political advertising. It’s a bit toothless because, just as with offshore bank accounts, it may be possible to register which U.S. entity is paying for online political advertising, but it’s impossible to know whether that entity is getting its funds from overseas. Even the Honest Ads bill was too much for Facebook to take.
Facebook Inc.’s Instagram played a much bigger role in Russia’s manipulation of U.S. voters than the company has previously discussed, and will be a key Russian tool in the 2020 elections, according to a report commissioned by the Senate Intelligence Committee.
The Russian Internet Research Agency, the troll farm that has sought to divide Americans with misinformation and meme content around the 2016 election, received more engagement on Instagram than it did on any other social media platform, including Facebook, according to a joint report by three groups of researchers.
until recently, broadcasting and publishing were difficult and expensive affairs, their infrastructures riddled with bottlenecks and concentrated in a few hands.
When protests broke out in Ferguson, Missouri, in August 2014, a single livestreamer named Mustafa Hussein reportedly garnered an audience comparable in size to CNN’s for a short while. If a Bosnian Croat war criminal drinks poison in a courtroom, all of Twitter knows about it in minutes.
In today’s networked environment, when anyone can broadcast live or post their thoughts to a social network, it would seem that censorship ought to be impossible. This should be the golden age of free speech.
And sure, it is a golden age of free speech—if you can believe your lying eyes. Is that footage you’re watching real? Was it really filmed where and when it says it was? Is it being shared by alt-right trolls or a swarm of Russian bots? My note: see the ability to create fake audio and video footage: http://blog.stcloudstate.edu/ims/2017/07/15/fake-news-and-video/
HERE’S HOW THIS golden age of speech actually works: In the 21st century, the capacity to spread ideas and reach an audience is no longer limited by access to expensive, centralized broadcasting infrastructure. It’s limited instead by one’s ability to garner and distribute attention. And right now, the flow of the world’s attention is structured, to a vast and overwhelming degree, by just a few digital platforms: Facebook, Google (which owns YouTube), and, to a lesser extent, Twitter.
at their core, their business is mundane: They’re ad brokers
They use massive surveillance of our behavior, online and off, to generate increasingly accurate, automated predictions of what advertisements we are most susceptible to and what content will keep us clicking, tapping, and scrolling down a bottomless feed.
in reality, posts are targeted and delivered privately, screen by screen by screen. Today’s phantom public sphere has been fragmented and submerged into billions of individual capillaries. Yes, mass discourse has become far easier for everyone to participate in—but it has simultaneously become a set of private conversations happening behind your back. Behind everyone’s backs.
It’s important to realize that, in using these dark posts, the Trump campaign wasn’t deviantly weaponizing an innocent tool. It was simply using Facebook exactly as it was designed to be used. The campaign did it cheaply, with Facebook staffers assisting right there in the office, as the tech company does for most large advertisers and political campaigns.
The digital attack that brought Estonia to a standstill 10 years ago was the first shot in a cyberwar that has been raging between Moscow and the west ever since
It began at exactly 10pm on 26 April, 2007, when a Russian-speaking mob began rioting in the streets of Tallinn, the capital city of Estonia, killing one person and wounding dozens of others. That incident resonates powerfully in some of the recent conflicts in the US. In 2007, the Estonian government had announced that a bronze statue of a heroic second world war Soviet soldier was to be removed from a central city square. For ethnic Estonians, the statue had less to do with the war than with the Soviet occupation that followed it, which lasted until independence in 1991. For the country’s Russian-speaking minority – 25% of Estonia’s 1.3 million people – the removal of the memorial was another sign of ethnic discrimination.
That evening, Jaan Priisalu – a former risk manager for Estonia’s largest bank, Hansabank, who was working closely with the government on its cybersecurity infrastructure – was at home in Tallinn with his girlfriend when his phone rang. On the line was Hillar Aarelaid, the chief of Estonia’s cybercrime police.
“It’s going down,” Aarelaid declared. Alongside the street fighting, reports of digital attacks were beginning to filter in. The websites of the parliament, major universities, and national newspapers were crashing. Priisalu and Aarelaid had suspected something like this could happen one day. A digital attack on Estonia had begun.
“The Russian theory of war allows you to defeat the enemy without ever having to touch him,” says Peter Pomerantsev, author of Nothing is True and Everything is Possible. “Estonia was an early experiment in that theory.”
Since then, Russia has only developed, and codified, these strategies. The techniques pioneered in Estonia are known as the “Gerasimov doctrine,” named after Valery Gerasimov, the chief of the general staff of the Russian military. In 2013, Gerasimov published an article in the Russian journal Military-Industrial Courier, articulating the strategy of what is now called “hybrid” or “nonlinear” warfare. “The lines between war and peace are blurred,” he wrote. New forms of antagonism, as seen in 2010’s Arab spring and the “colour revolutions” of the early 2000s, could transform a “perfectly thriving state, in a matter of months, and even days, into an arena of fierce armed conflict”.
Russia has deployed these strategies around the globe. Its 2008 war with Georgia, another former Soviet republic, relied on a mix of both conventional and cyber-attacks, as did the 2014 invasion of Crimea. Both began with civil unrest sparked via digital and social media – followed by tanks. Finland and Sweden have experienced near-constant Russian information operations. Russian hacks and social media operations have also occurred during recent elections in Holland, Germany, and France. Most recently, Spain’s leading daily, El País, reported on Russian meddling in the Catalonian independence referendum. Russian-supported hackers had allegedly worked with separatist groups, presumably with a mind to further undermining the EU in the wake of the Brexit vote.
The Kremlin has used the same strategies against its own people. Domestically, history books, school lessons, and media are manipulated, while laws are passed blocking foreign access to the Russian population’s online data from foreign companies – an essential resource in today’s global information-sharing culture. According to British military researcher Keir Giles, author of Nato’s Handbook of Russian Information Warfare, the Russian government, or actors that it supports, has even captured the social media accounts of celebrities in order to spread provocative messages under their names but without their knowledge. The goal, both at home and abroad, is to sever outside lines of communication so that people get their information only through controlled channels.
According to its detractors, RT is Vladimir Putin’s global disinformation service, countering one version of the truth with another in a bid to undermine the whole notion of empirical truth. And yet influential people from all walks of public life appear on it, or take its money. You can’t criticise RT’s standards, they say, if you don’t watch it. So I watched it. For a week.
My note: so this is why Oliver Stone, in his “documentary,” went gentle on Putin: so his son could have a job. #Nepotism #FakeNews
RT’s stated mission is to offer an “alternative perspective on major global events”, but the world according to RT is often downright surreal.
Peter Pomerantsev, author of Nothing Is True and Everything Is Possible, about Putin’s Russia, and now a senior visiting fellow in global affairs at the London School of Economics, was in Moscow working in television when Russia Today first started hiring graduates from Britain and the US. “The people were really bright, they were being paid well,” he says. But they soon found they were being ordered to change their copy, or instructed how to cover certain stories to reflect well on the Kremlin. “Everyone had their own moment when they first twigged that this wasn’t like the BBC,” he says. “That, actually, this is being dictated from above.” The coverage of Russia’s war with Georgia in 2008 was a lightbulb moment for many, he says. They quit.
Having specific days or times that you post content can help drive traffic, as well as giving potential customers something to look forward to. If posts are always made on a certain day or at a certain time, then followers will get into the habit of checking your social media to see if anything new has appeared, creating a more dedicated base.
Having a variety of random hashtags isn’t advisable, even if those tags are trending. “Use a good hashtag to tie all of the pieces of your campaign together.”
Make Your Own
create a short, easy-to-remember tag that sticks in people’s minds long after they log off. Clever use of alliteration or wordplay are great ways to go about this.
Engage Your Followers
Showcasing the work of fans or followers automatically makes them more invested in talking about your product.
Promoting contests is a great way to go about this.
Offer Private Streams
Having private boards or groups where you can converse with your customers is one way to provide that relationship.
Make a Safe Space
Keeping watch over your social media to make sure everyone’s getting along can not only foster trust and appreciation for your company, but also make others more likely to visit your page and be honest with you.