One metaphor for this aspect of the spy game: a layer cake.
There’s a layer of activity that is visible to all — the actions or comments of public figures, or statements made via official channels.
Then there’s a clandestine layer that is usually visible only to another clandestine service: the work of spies being watched by other spies.
Counterintelligence officials watching Chinese intelligence activities in Houston, for example, knew the consulate was a base for efforts to steal intellectual property or recruit potential agents.
And there’s at least a third layer about which the official statements raised questions: the work of spies who are operating without being detected.
The challenges of election security include its incredible breadth — every county in the United States is a potential target — and vast depth, from the prospect of cyberattacks on voter systems, to the theft of information that can then be released to embarrass a target, to the ongoing and messy war on social media over disinformation and political agitation.
Witnesses have told Congress that when Facebook and Twitter made it more difficult to create and use fake accounts to spread disinformation and amplify controversy, Russia and China began to rely more on open channels.
In 2016, Russian influence operatives posed as Americans and engaged with real voters as though they were all experiencing the same election side by side. Russian operatives even used Facebook to organize real-world campaign events across the United States.
But RT’s account on Twitter or China’s foreign ministry representatives aren’t pretending to do anything but serve as voices for Moscow or Beijing.
Russia and far right spreading disinformation ahead of EU elections, investigators say
‘The goal here is bigger than any one election. It is to constantly divide, increase distrust and undermine our faith in institutions and democracy itself’
Matt Apuzzo and Adam Satariano, 12 May 2019
Facebook Inc.’s Instagram played a much bigger role in Russia’s manipulation of U.S. voters than the company has previously discussed, and will be a key Russian tool in the 2020 elections, according to a report commissioned by the Senate Intelligence Committee.
The Russian Internet Research Agency, the troll farm that has sought to divide Americans with misinformation and meme content around the 2016 election, received more engagement on Instagram than it did on any other social media platform, including Facebook, according to a joint report by three groups of researchers.
THERE’S A MEME on Instagram, circulated by a group called “Born Liberal.” “Born Liberal” was a creation of the Internet Research Agency, the Russian propaganda operation.
Conversations around the IRA’s operations traditionally have focused on Facebook and Twitter, but like any hip millennial, the IRA was actually most obsessive about Instagram.
On Twitter, the IRA deployed 3,841 accounts, including several personas that “regularly played hashtag games.” That approach paid off: 1.4 million people engaged with the tweets, leading to nearly 73 million engagements. Most of this work was focused on news, while on Facebook and Instagram, the Russians prioritized “deeper relationships,” according to the researchers. On Facebook, the IRA notched a total of 3.3 million page followers, who engaged with its politically divisive content 76.5 million times. Russia’s most popular pages targeted the right wing and the black community. The trolls also knew their audiences; they deployed Pepe memes at pages intended for right-leaning millennials, but kept them away from posts directed at older conservative Facebook users. Not every attempt was a hit; while 33 of the 81 IRA Facebook pages had over 1,000 followers, dozens had none at all.
The report also points out new links between the IRA’s pages and Wikileaks, which helped disseminate hacked emails from Clinton campaign chairman John Podesta.
While many people think of memes as “cat pictures with words,” the Defense Department and DARPA have studied them for years as a powerful tool of cultural influence, capable of reinforcing or even changing values and behavior.
Over the past five years, disinformation has evolved from a nuisance into high-stakes information war. And yet, rather than fighting back effectively, Americans are battling each other over what to do about it.
A year after the Meme Warfare Center proposal was published, DARPA, the Pentagon agency that develops new military technology, commissioned a four-year study of memetics. The research was led by Dr. Robert Finkelstein, founder of the Robotic Technology Institute, and an academic with a background in physics and cybernetics.
Finkelstein’s study of “Military Memetics” centered on a basic problem in the field, determining “whether memetics can be established as a science with the ability to explain and predict phenomena.” It still had to be proved, in other words, that memes were actual components of reality and not just a nifty concept with great marketing.
At democracy’s heart lies a set of paradoxes: a delicate interplay of identity and anonymity, secrecy and transparency. To be sure you are eligible to vote and that you do so only once, the authorities need to know who you are. But when it comes time for you to mark a ballot, the government must guarantee your privacy and anonymity. After the fact, it also needs to provide some means for a third party to audit the election, while also preventing you from obtaining definitive proof of your choice, which could lead to vote selling or coercion.
Building a system that accomplishes all this at once — and does so securely — is challenging enough in the physical world. It’s even harder online, as the recent revelation that Russian intelligence operatives compromised voting systems in multiple states makes clear.
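These requirements are easy to state but hard to satisfy at once. As a toy illustration in Python (not a sketch of any real voting system), a simple hash commitment shows the tension: publishing commitments supports after-the-fact auditing, yet the data needed to open a commitment doubles as exactly the kind of definitive proof of a vote that invites selling or coercion.

```python
import hashlib
import secrets

def commit(ballot: str):
    """Voter commits to a ballot with a random salt; the digest can be
    published for auditing without revealing the choice."""
    salt = secrets.token_hex(16)
    digest = hashlib.sha256((salt + ballot).encode()).hexdigest()
    return digest, salt

def verify(digest: str, salt: str, ballot: str) -> bool:
    """Anyone holding the salt can check what the commitment hides."""
    return hashlib.sha256((salt + ballot).encode()).hexdigest() == digest

digest, salt = commit("candidate_a")
assert verify(digest, salt, "candidate_a")       # auditing works
assert not verify(digest, salt, "candidate_b")   # and cannot be faked
# But the (salt, ballot) pair is a transferable receipt: anyone who
# obtains it can prove how the vote was cast -- the very property a
# real system must prevent.
```

Real end-to-end verifiable voting schemes add considerably more machinery precisely to break this link between auditability and receipts.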
In the decade since the elusive Satoshi Nakamoto published a now-famous white paper outlining the idea behind bitcoin, a “peer-to-peer electronic cash system” based on a mathematical “consensus mechanism,” more than 1,500 new cryptocurrencies have come into being.
One apt definition comes from Nathan Heller in the New Yorker, who compares the blockchain to a scarf knit from a single ball of yarn. “It’s impossible to remove part of the fabric, or to substitute a swatch, without leaving some trace,” Heller wrote. Typically, blockchains are created by a set of stakeholders working to achieve consensus at every step, so it might be even more apt to picture a knitting collective creating that single scarf together, moving forward only when a majority agrees that a given knot is acceptable.
Unlike bitcoin, a public blockchain powered by thousands of miners around the world, most voting systems, including Votem’s, employ what’s known as a “permissioned ledger,” in which a handful of approved groups (political parties, election observers, government entities) would be allowed to validate the transactions.
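As a minimal Python sketch of those two ideas, the chained hashes of the “scarf” and the majority approval of a permissioned validator set, here is a toy ledger. It is an illustration under stated assumptions, not Votem’s actual design: the validator names, the simple-majority rule, and the block layout are all invented for the example.

```python
import hashlib
import json

def block_hash(block):
    # Deterministic hash over the block's contents, including the
    # previous block's hash -- the "yarn" tying the chain together.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

class PermissionedLedger:
    """Toy permissioned ledger: a fixed set of named validators must
    approve each new record by simple majority before it is appended."""

    def __init__(self, validators):
        self.validators = set(validators)
        self.chain = [{"index": 0, "data": "genesis", "prev": None}]

    def append(self, data, approvals):
        votes = self.validators & set(approvals)  # only known validators count
        if len(votes) * 2 <= len(self.validators):
            raise ValueError("no validator majority")
        block = {"index": len(self.chain), "data": data,
                 "prev": block_hash(self.chain[-1])}
        self.chain.append(block)
        return block

    def verify(self):
        # Altering any earlier block breaks every later link to it.
        return all(self.chain[i]["prev"] == block_hash(self.chain[i - 1])
                   for i in range(1, len(self.chain)))

ledger = PermissionedLedger({"party_a", "party_b", "observers"})
ledger.append({"ballot": "encrypted..."}, approvals={"party_a", "observers"})
ledger.append({"ballot": "encrypted..."}, approvals={"party_b", "observers"})
assert ledger.verify()
ledger.chain[1]["data"] = "tampered"   # substitute a "swatch"...
assert not ledger.verify()             # ...and the trace is visible
```

Note that the majority check governs only who may extend the chain; the hash links are what make past records tamper-evident.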
First, there’s the issue of targeted denial-of-service (DoS) attacks, in which a hacker directs so much traffic at a server that it is overwhelmed and ceases to function.
Although a distributed ledger itself would likely withstand such an attack, the rest of the system — from voters’ personal devices to the many servers a vote would pass through on its way to the blockchain — would remain vulnerable.
Then there’s the so-called penetration attack, like the University of Michigan incursion, in which an adversary gains control of a server and deliberately alters the outcome of an election.
While it’s true that information recorded on a blockchain cannot be changed, a determined hacker might well find another way to disrupt the process. Bitcoin itself has never been hacked, for instance, but numerous bitcoin “wallets” have been, resulting in billions of dollars in losses. In early June 2018, a South Korean cryptocurrency exchange was penetrated, causing the value of bitcoin to tumble and resulting in a loss of $42 billion in market value. So although recording the vote tally on a blockchain introduces a new obstacle to penetration attacks, it still leaves holes elsewhere in the system — like putting a new lock on your front door but leaving your basement windows open.
A blockchain is only as valuable as the data stored on it. And whereas traditional paper ballots preserve an indelible record of the actual intent of each voter, digital votes “don’t produce an original hard-copy record of any kind.”
In the end, democracy always depends on a certain leap of faith, and faith can never be reduced to a mathematical formula. The Economist Intelligence Unit regularly ranks the world’s most democratic countries. In 2017, the United States came in 21st place, after Uruguay and Malta. Meanwhile, it’s now widely believed that John F. Kennedy owed his 1960 win to election tampering in Chicago. The Supreme Court decision granting the presidency to George W. Bush rather than calling a do-over — despite Al Gore’s popular-vote win — still seems iffy. Significant doubts remain about the 2016 presidential race.
While little doubt remains that Russia favored Trump in the 2016 election, the Kremlin’s primary target appears to have been our trust in the system itself. So if the blockchain’s trendy allure can bolster trust in American democracy, maybe that’s a net positive for our national security. If someone manages to hack the system, hopefully they’ll do so quietly. Apologies to George Orwell, but sometimes ignorance really is strength.
Summary: This short paper lays out an attempt to measure how much of the activity from Russian state-operated accounts, released in the dataset made available by Twitter in October 2018, was targeted at the United Kingdom. Finding UK-related Tweets is not an easy task. By applying a combination of geographic inference, keyword analysis and algorithmic classification, we identified UK-related Tweets sent by these accounts and subjected them to further qualitative and quantitative analysis.
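As a hedged sketch of what the keyword stage of such a pipeline might look like (the keyword list and matching rule below are illustrative assumptions, not the paper’s actual method), in Python:

```python
import re

# Illustrative keyword list -- the actual study combined geographic
# inference, keyword analysis and algorithmic classification; this
# sketch shows only a crude keyword stage.
UK_KEYWORDS = {"uk", "britain", "british", "brexit", "london", "westminster"}

def is_uk_related(tweet_text: str) -> bool:
    # Lowercase and tokenize, keeping # and @ so hashtags and
    # mentions survive as single tokens.
    tokens = set(re.findall(r"[a-z#@']+", tweet_text.lower()))
    tokens |= {t.lstrip("#@") for t in tokens}  # match hashtags/mentions too
    return bool(tokens & UK_KEYWORDS)

tweets = [
    "Breaking: chaos in Westminster after the vote",
    "Great game last night in Moscow",
    "#Brexit will change everything",
]
uk_tweets = [t for t in tweets if is_uk_related(t)]  # keeps the 1st and 3rd
```

A real pipeline would pass candidates like these on to a trained classifier rather than trusting keywords alone, since keyword matching misses context and produces false positives.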
There were three phases in Russian influence operations: under-the-radar account building, minor Brexit vote visibility, and larger-scale visibility during the London terror attacks.
Russian influence operations linked to the UK were most visible when discussing Islam. Tweets discussing Islam over the period of terror attacks between March and June 2017 were retweeted 25 times more often than their other messages.
The most widely-followed and visible troll account, @TEN_GOP, shared 109 Tweets related to the UK. Of these, 60 percent were related to Islam.
The topology of tweet activity underlines the vulnerability of social media users to disinformation in the wake of a tragedy or outrage.
Focus on the UK was a minor part of wider influence operations in this data. Of the nine million Tweets released by Twitter, 3.1 million were in English (34 percent). Of these 3.1 million, we estimate 83 thousand were in some way linked to the UK (2.7 percent). Those Tweets were shared 222 thousand times. It is plausible that we are therefore seeing how the UK was caught up in Russian operations against the US.
Influence operations captured in this data show attempts to falsely amplify other news sources and to take part in conversations around Islam, and rarely show attempts to spread ‘fake news’ or to influence at an electoral level.
On 17 October 2018, Twitter released data about 9 million tweets from 3,841 blocked accounts affiliated with the Internet Research Agency (IRA) – a Russian organisation founded in 2013 and based in St Petersburg, accused of using social media platforms to push pro-Kremlin propaganda and influence nation states beyond their borders, as well as being tasked with spreading pro-Kremlin messaging in Russia. It is one of the first major datasets linked to state-operated accounts engaging in influence operations released by a social media platform.
This report outlines the ways in which accounts linked to the Russian Internet Research Agency (IRA) carried out influence operations on social media and the ways their operations intersected with the UK. The UK plays a reasonably small part in the wider context of this data. We see two possible explanations: either influence operations were primarily targeted at the US and British Twitter users were impacted as collateral, or this dataset is limited to US-focused operations in which events in the UK were highlighted in an attempt to influence the US public, rather than a concerted effort against the UK. It is plausible that such efforts also existed but are not reflected in this dataset.

Nevertheless, the data offers a highly useful window into how Russian influence operations are carried out, as well as highlighting the moments when we might be most vulnerable to them.

Between 2011 and 2016, these state-operated accounts were camouflaged. Through manual and automated methods, they were able to quietly build up the trappings of an active and well-followed Twitter account before eventually pivoting into attempts to influence the wider Twitter ecosystem. Their methods included engaging in unrelated and innocuous topics of conversation, often through automated methods, and sharing and engaging with other, more mainstream sources of news.

Although this data shows levels of electoral and party-political influence operations to be relatively low, the day of the Brexit referendum result showed how messaging originating from Russian state-controlled accounts might come to be visible: on 24 June 2016, we believe UK Twitter users discussing the Brexit vote would have encountered messages originating from these accounts.

As early as 2014, however, influence operations began taking part in conversations around Islam, and these accounts came to the fore during the three months of terror attacks that took place between March and June 2017.
In the immediate wake of these attacks, messages related to Islam and circulated by Russian state-operated Twitter accounts were widely shared, and would likely have been visible in the UK. The dataset released by Twitter begins to answer some questions about attempts by a foreign state to interfere in British affairs online. It is notable that overt political or electoral interference is poorly represented in this dataset: rather, we see attempts at stirring societal division, particularly around Islam in the UK, as the messages that resonated the most over the period.

What is perhaps most interesting about this moment is its portrayal of when we as social media users are most vulnerable to the kinds of messages circulated by those looking to influence us. In the immediate aftermath of terror attacks, the data suggests, social media users were more receptive to this kind of messaging than at any other time.
It is clear that hostile states have identified the growth of online news and social media as a weak spot, and that significant effort has gone into attempting to exploit new media to influence its users. Understanding the ways in which these platforms have been used to spread division is an important first step to fighting it. Nevertheless, it is clear that this dataset provides just one window into the ways in which foreign states have attempted to use online platforms as part of wider information warfare and influence campaigns. We hope that other platforms will follow Twitter’s lead and release similar datasets, and encourage their users to proactively tackle those who would abuse their platforms.
The digital attack that brought Estonia to a standstill 10 years ago was the first shot in a cyberwar that has been raging between Moscow and the west ever since
It began at exactly 10pm on 26 April 2007, when a Russian-speaking mob began rioting in the streets of Tallinn, the capital city of Estonia, killing one person and wounding dozens of others. The Estonian government had announced that a bronze statue of a heroic second world war Soviet soldier was to be removed from a central city square. For ethnic Estonians, the statue had less to do with the war than with the Soviet occupation that followed it, which lasted until independence in 1991. For the country’s Russian-speaking minority – 25% of Estonia’s 1.3 million people – the removal of the memorial was another sign of ethnic discrimination. That incident resonates powerfully with some of the recent conflicts in the US.
That evening, Jaan Priisalu – a former risk manager for Estonia’s largest bank, Hansabank, who was working closely with the government on its cybersecurity infrastructure – was at home in Tallinn with his girlfriend when his phone rang. On the line was Hillar Aarelaid, the chief of Estonia’s cybercrime police.
“It’s going down,” Aarelaid declared. Alongside the street fighting, reports of digital attacks were beginning to filter in. The websites of the parliament, major universities, and national newspapers were crashing. Priisalu and Aarelaid had suspected something like this could happen one day. A digital attack on Estonia had begun.
“The Russian theory of war allows you to defeat the enemy without ever having to touch him,” says Peter Pomerantsev, author of Nothing is True and Everything is Possible. “Estonia was an early experiment in that theory.”
Since then, Russia has only developed, and codified, these strategies. The techniques pioneered in Estonia are known as the “Gerasimov doctrine,” named after Valery Gerasimov, the chief of the general staff of the Russian military. In 2013, Gerasimov published an article in the Russian journal Military-Industrial Courier, articulating the strategy of what is now called “hybrid” or “nonlinear” warfare. “The lines between war and peace are blurred,” he wrote. New forms of antagonism, as seen in 2010’s Arab spring and the “colour revolutions” of the early 2000s, could transform a “perfectly thriving state, in a matter of months, and even days, into an arena of fierce armed conflict”.
Russia has deployed these strategies around the globe. Its 2008 war with Georgia, another former Soviet republic, relied on a mix of both conventional and cyber-attacks, as did the 2014 invasion of Crimea. Both began with civil unrest sparked via digital and social media – followed by tanks. Finland and Sweden have experienced near-constant Russian information operations. Russian hacks and social media operations have also occurred during recent elections in Holland, Germany, and France. Most recently, Spain’s leading daily, El País, reported on Russian meddling in the Catalonian independence referendum. Russian-supported hackers had allegedly worked with separatist groups, presumably with a mind to further undermining the EU in the wake of the Brexit vote.
The Kremlin has used the same strategies against its own people. Domestically, history books, school lessons, and media are manipulated, while laws are passed blocking foreign companies’ access to the Russian population’s online data – an essential resource in today’s global information-sharing culture. According to British military researcher Keir Giles, author of Nato’s Handbook of Russian Information Warfare, the Russian government, or actors that it supports, has even captured the social media accounts of celebrities in order to spread provocative messages under their names but without their knowledge. The goal, both at home and abroad, is to sever outside lines of communication so that people get their information only through controlled channels.
According to its detractors, RT is Vladimir Putin’s global disinformation service, countering one version of the truth with another in a bid to undermine the whole notion of empirical truth. And yet influential people from all walks of public life appear on it, or take its money. You can’t criticise RT’s standards, they say, if you don’t watch it. So I watched it. For a week.
My note: so this is why Oliver Stone went gentle on Putin in his “documentary” – so that his son could have a job. #Nepotism #FakeNews
RT’s stated mission is to offer an “alternative perspective on major global events”, but the world according to RT is often downright surreal.
Peter Pomerantsev, author of Nothing Is True and Everything Is Possible, about Putin’s Russia, and now a senior visiting fellow in global affairs at the London School of Economics, was in Moscow working in television when Russia Today first started hiring graduates from Britain and the US. “The people were really bright, they were being paid well,” he says. But they soon found they were being ordered to change their copy, or instructed how to cover certain stories to reflect well on the Kremlin. “Everyone had their own moment when they first twigged that this wasn’t like the BBC,” he says. “That, actually, this is being dictated from above.” The coverage of Russia’s war with Georgia in 2008 was a lightbulb moment for many, he says. They quit.