At democracy’s heart lies a set of paradoxes: a delicate interplay of identity and anonymity, secrecy and transparency. To be sure you are eligible to vote and that you do so only once, the authorities need to know who you are. But when it comes time for you to mark a ballot, the government must guarantee your privacy and anonymity. After the fact, it also needs to provide some means for a third party to audit the election, while also preventing you from obtaining definitive proof of your choice, which could lead to vote selling or coercion.
Building a system that accomplishes all this at once — and does so securely — is challenging enough in the physical world. It’s even harder online, as the recent revelation that Russian intelligence operatives compromised voting systems in multiple states makes clear.
In the decade since the elusive Satoshi Nakamoto published the now-famous white paper outlining the idea behind bitcoin, a “peer-to-peer electronic cash system” based on a mathematical “consensus mechanism,” more than 1,500 new cryptocurrencies have come into being.
One apt definition comes from Nathan Heller in the New Yorker, who compares the blockchain to a scarf knit with a single ball of yarn. “It’s impossible to remove part of the fabric, or to substitute a swatch, without leaving some trace,” Heller wrote. Typically, blockchains are created by a set of stakeholders working to achieve consensus at every step, so it might be even more apt to picture a knitting collective creating that single scarf together, moving forward only when a majority agrees that a given knot is acceptable.
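Heller’s scarf metaphor can be made concrete in a few lines of code. The sketch below is a minimal, illustrative hash chain — not any production voting or cryptocurrency system — showing why altering a single “stitch” always leaves a trace: each block’s hash commits to everything before it.

```python
import hashlib

def block_hash(prev_hash: str, data: str) -> str:
    # Each block's hash commits to the previous hash and its own data.
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

def build_chain(entries):
    chain = []
    prev = "0" * 64  # genesis placeholder
    for data in entries:
        h = block_hash(prev, data)
        chain.append({"data": data, "prev": prev, "hash": h})
        prev = h
    return chain

def verify(chain) -> bool:
    # Re-derive every hash; any altered entry breaks the chain from that point on.
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block_hash(prev, block["data"]) != block["hash"]:
            return False
        prev = block["hash"]
    return True

chain = build_chain(["entry:A", "entry:B", "entry:C"])
assert verify(chain)
chain[1]["data"] = "entry:X"  # tamper with one "stitch"
assert not verify(chain)      # the alteration is immediately detectable
```

In a real blockchain, the “knitting collective” part is the consensus mechanism: new blocks are appended only once a majority of validators agrees they are acceptable.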
Unlike bitcoin, a public blockchain powered by thousands of miners around the world, most voting systems, including Votem’s, employ what’s known as a “permissioned ledger,” in which a handful of approved groups (political parties, election observers, government entities) would be allowed to validate the transactions.
First, there’s the issue of targeted denial-of-service (DoS) attacks, in which a hacker directs so much traffic at a server that it is overwhelmed and ceases to function.
Although a distributed ledger itself would likely withstand such an attack, the rest of the system — from voters’ personal devices to the many servers a vote would pass through on its way to the blockchain — would remain vulnerable.
Then there’s the so-called penetration attack, like the University of Michigan incursion, in which an adversary gains control of a server and deliberately alters the outcome of an election.
While it’s true that information recorded on a blockchain cannot be changed, a determined hacker might well find another way to disrupt the process. Bitcoin itself has never been hacked, for instance, but numerous bitcoin “wallets” have been, resulting in billions of dollars in losses. In early June 2018, a South Korean cryptocurrency exchange was penetrated, causing the value of bitcoin to tumble and wiping some $42 billion off the cryptocurrency market as a whole. So although recording the vote tally on a blockchain introduces a new obstacle to penetration attacks, it still leaves holes elsewhere in the system — like putting a new lock on your front door but leaving your basement windows open.
A blockchain is only as valuable as the data stored on it. And whereas traditional paper ballots preserve an indelible record of the actual intent of each voter, digital votes “don’t produce an original hard-copy record of any kind,”
In the end, democracy always depends on a certain leap of faith, and faith can never be reduced to a mathematical formula. The Economist Intelligence Unit regularly ranks the world’s most democratic countries. In 2017, the United States came in 21st place, after Uruguay and Malta. Meanwhile, it’s now widely believed that John F. Kennedy owed his 1960 win to election tampering in Chicago. The Supreme Court decision granting the presidency to George W. Bush rather than calling a do-over — despite Al Gore’s popular-vote win — still seems iffy. Significant doubts remain about the 2016 presidential race.
While little doubt remains that Russia favored Trump in the 2016 election, the Kremlin’s primary target appears to have been our trust in the system itself. So if the blockchain’s trendy allure can bolster trust in American democracy, maybe that’s a net positive for our national security. If someone manages to hack the system, hopefully they’ll do so quietly. Apologies to George Orwell, but sometimes ignorance really is strength.
Summary

This short paper lays out an attempt to measure how much of the activity from Russian state-operated accounts released in the dataset made available by Twitter in October 2018 was targeted at the United Kingdom. Finding UK-related Tweets is not an easy task. By applying a combination of geographic inference, keyword analysis and algorithmic classification, we identified UK-related Tweets sent by these accounts and subjected them to further qualitative and quantitative analytic techniques.
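The keyword-analysis step described above can be sketched roughly as follows. The keyword list and tweet texts here are illustrative assumptions, not the report’s actual methodology, which combines keyword matching with geographic inference and algorithmic classification.

```python
# Hypothetical keyword list for flagging UK-related tweets.
UK_KEYWORDS = {"brexit", "westminster", "london", "nhs", "ukip"}

def is_uk_related(tweet_text: str) -> bool:
    # Naive case-insensitive substring match against the keyword list.
    text = tweet_text.lower()
    return any(kw in text for kw in UK_KEYWORDS)

tweets = [
    "Big rally in London today",
    "Thoughts on the midterm elections",
    "Brexit vote results coming in",
]
uk_tweets = [t for t in tweets if is_uk_related(t)]
# uk_tweets keeps only the first and third tweets
```

A filter this crude would produce false positives and misses, which is presumably why the report layers several techniques before qualitative review.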
There were three phases in Russian influence operations: under-the-radar account building, minor Brexit-vote visibility, and larger-scale visibility during the London terror attacks.
Russian influence operations linked to the UK were most visible when discussing Islam. Tweets discussing Islam over the period of terror attacks between March and June 2017 were retweeted 25 times more often than their other messages.
The most widely followed and visible troll account, @TEN_GOP, shared 109 Tweets related to the UK. Of these, 60 percent were related to Islam.
The topology of tweet activity underlines the vulnerability of social media users to disinformation in the wake of a tragedy or outrage.
Focus on the UK was a minor part of wider influence operations in this data. Of the nine million Tweets released by Twitter, 3.1 million were in English (34 percent). Of these 3.1 million, we estimate 83 thousand were in some way linked to the UK (2.7 percent). Those Tweets were shared 222 thousand times. It is plausible that we are therefore seeing how the UK was caught up in Russian operations against the US.
Influence operations captured in this data show attempts to falsely amplify other news sources and to take part in conversations around Islam, and rarely show attempts to spread ‘fake news’ or influence at an electoral level.
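The proportions quoted in the findings above are straightforward to verify from the stated counts (9 million Tweets total, 3.1 million in English, an estimated 83 thousand UK-linked):

```python
total = 9_000_000      # Tweets released by Twitter
english = 3_100_000    # Tweets in English
uk_linked = 83_000     # estimated UK-linked Tweets

english_share = english / total * 100      # ~34 percent, as reported
uk_share = uk_linked / english * 100       # ~2.7 percent, as reported

assert round(english_share) == 34
assert round(uk_share, 1) == 2.7
```

In other words, the UK-linked material is a small slice of an already partial (English-language) slice of the dataset, consistent with the report’s conclusion that the UK was peripheral to a US-focused operation.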
On 17 October 2018, Twitter released data about 9 million Tweets from 3,841 blocked accounts affiliated with the Internet Research Agency (IRA) – a Russian organisation founded in 2013 and based in St Petersburg, accused of using social media platforms to push pro-Kremlin propaganda and influence nation states beyond Russia’s borders, as well as being tasked with spreading pro-Kremlin messaging at home. It is one of the first major datasets linked to state-operated accounts engaging in influence operations to be released by a social media platform.
This report outlines the ways in which accounts linked to the Russian Internet Research Agency (IRA) carried out influence operations on social media and the ways their operations intersected with the UK.

The UK plays a reasonably small part in the wider context of this data. We see two possible explanations: either influence operations were primarily targeted at the US and British Twitter users were affected as collateral damage, or this dataset is limited to US-focused operations in which events in the UK were highlighted in an attempt to influence the US public, rather than a concerted effort against the UK. It is plausible that such efforts also existed but are not reflected in this dataset.

Nevertheless, the data offers a highly useful window into how Russian influence operations are carried out, as well as highlighting the moments when we might be most vulnerable to them.

Between 2011 and 2016, these state-operated accounts were camouflaged. Through manual and automated methods, they were able to quietly build up the trappings of an active and well-followed Twitter account before eventually pivoting into attempts to influence the wider Twitter ecosystem. Their methods included engaging in unrelated and innocuous topics of conversation, often through automated means, and sharing and engaging with other, more mainstream sources of news.

Although this data shows levels of electoral and party-political influence operations to be relatively low, the day of the Brexit referendum result showed how messaging originating from Russian state-controlled accounts might come to be visible: on 24 June 2016, we believe UK Twitter users discussing the Brexit vote would have encountered messages originating from these accounts.

As early as 2014, however, influence operations began taking part in conversations around Islam, and these accounts came to the fore during the three months of terror attacks that took place between March and June 2017.
In the immediate wake of these attacks, messages related to Islam and circulated by Russian state-operated Twitter accounts were widely shared, and would likely have been visible in the UK.

The dataset released by Twitter begins to answer some questions about attempts by a foreign state to interfere in British affairs online. It is notable that overt political or electoral interference is poorly represented in this dataset: rather, we see attempts at stirring societal division, particularly around Islam in the UK, as the messages that resonated the most over the period.

What is perhaps most interesting about this moment is its portrayal of when we as social media users are most vulnerable to the kinds of messages circulated by those looking to influence us. In the immediate aftermath of terror attacks, the data suggests, social media users were more receptive to this kind of messaging than at any other time.
It is clear that hostile states have identified the growth of online news and social media as a weak spot, and that significant effort has gone into attempting to exploit new media to influence its users. Understanding the ways in which these platforms have been used to spread division is an important first step to fighting it.

Nevertheless, it is clear that this dataset provides just one window into the ways in which foreign states have attempted to use online platforms as part of wider information warfare and influence campaigns. We hope that other platforms will follow Twitter’s lead and release similar datasets, and encourage their users to proactively tackle those who would abuse their platforms.
Between the creation of a social rating system and street cameras with facial recognition capabilities, technology reports coming out of China have raised serious concerns for privacy advocates. These concerns are only heightened as Chinese investors turn their attention to the United States education technology space, acquiring companies with millions of public school users.
A particularly notable deal this year centers on Edmodo, a cross between a social networking platform and a learning management system for schools that boasts upwards of 90 million users. NetDragon, a Chinese gaming company that is building a significant education division, bought Edmodo for a combination of cash and equity valued at $137.5 million earlier this month.
Edmodo began shifting to an advertising model last year, after years of struggling to generate revenue. This has left critics wondering why the Chinese firm chose to acquire Edmodo at such a price; some have gone so far as to call the move a data grab.
As data becomes a tool that governments such as Russia and China could use to influence voting systems or induce citizens into espionage, more legislators are turning their attention to the acquisition of early-stage technology startups.
NetDragon officials, however, say they have no interest in these types of activities. Their main goal in acquiring United States edtech companies lies in building profitability, says Pep So, NetDragon’s Director of Corporate Development.
In 2015, the firm acquired the education technology company Promethean, which creates interactive displays for schools. NetDragon executives say that the Edmodo acquisition rounds out their education product portfolio, meaning the company will have tools supporting multiple aspects of learning, including preparation, instructional delivery, homework, assignment grading, communication among parents, students and teachers, and a content marketplace.
NetDragon’s monetization plan for Edmodo focuses on building out content that gets sold via its platform. Much as on TeachersPayTeachers, So hopes to see users putting content up on the platform’s marketplace, some of it free and some for a fee (including some virtual reality content), so that the community can buy, sell and review available educational tools.
As far as data privacy is concerned, So notes that NetDragon is still learning what it can and cannot do. He noted that the company will comply with the Children’s Online Privacy Protection Act (COPPA), a federal law created to protect the privacy of children online, but says that the rules and regulations surrounding it are confusing for all the actors involved.
Historically, Chinese companies have faced trust and branding issues when moving into the United States market, and the reverse is also true for U.S. companies seeking to expand overseas. Companies have also struggled to learn the rules, regulations and operational procedures in place in other countries.
Moscow appears to have moved quickly into the market for cryptocurrencies and cryptography. Sberbank chief executive German Gref earlier warned that technologies such as blockchain and bitcoin should not be banned or hindered in their operations.
Russia’s Finance Ministry legalised cryptocurrency trading on January 25 with the Digital Assets Regulation Bill, despite vocal objections from the country’s Central Bank. The bill defined cryptocurrencies and tokens as digital financial assets that are not legal tender in Russia. The Central Bank, however, argued that digital currency trading rules should only be applied to tokens that would attract financial investments.
“They do plan,” said a senior Obama-administration official. “They’re not stupid at all. But the idea that they have this all perfectly planned and that Putin is an amazing chess player—that’s not quite it. He knows where he wants to end up, he plans the first few moves, and then he figures out the rest later. People ask if he plays chess or checkers. It’s neither: He plays blackjack. He has a higher acceptance of risk. Think about it. The election interference—that was pretty risky, what he did. If Hillary Clinton had won, there would’ve been hell to pay.”
Even the manner of the Russian attack was risky. The fact that the Russians didn’t really bother hiding their fingerprints is a testament to the change in Russia’s intent toward the U.S., Robert Hannigan, a former head of the Government Communications Headquarters, the British analogue to the National Security Agency, said at the Aspen Forum. “The brazen recklessness of it … the fact that they don’t seem to care that it’s attributed to them very publicly, is the biggest change.”
The digital attack that brought Estonia to a standstill 10 years ago was the first shot in a cyberwar that has been raging between Moscow and the west ever since
It began at exactly 10pm on 26 April 2007, when a Russian-speaking mob began rioting in the streets of Tallinn, the capital city of Estonia, killing one person and wounding dozens of others. The Estonian government had announced that a bronze statue of a heroic second world war Soviet soldier was to be removed from a central city square. For ethnic Estonians, the statue had less to do with the war than with the Soviet occupation that followed it, which lasted until independence in 1991. For the country’s Russian-speaking minority – 25% of Estonia’s 1.3 million people – the removal of the memorial was another sign of ethnic discrimination. The incident resonates powerfully with some of the recent conflicts in the US.
That evening, Jaan Priisalu – a former risk manager for Estonia’s largest bank, Hansabank, who was working closely with the government on its cybersecurity infrastructure – was at home in Tallinn with his girlfriend when his phone rang. On the line was Hillar Aarelaid, the chief of Estonia’s cybercrime police.
“It’s going down,” Aarelaid declared. Alongside the street fighting, reports of digital attacks were beginning to filter in. The websites of the parliament, major universities, and national newspapers were crashing. Priisalu and Aarelaid had suspected something like this could happen one day. A digital attack on Estonia had begun.
“The Russian theory of war allows you to defeat the enemy without ever having to touch him,” says Peter Pomerantsev, author of Nothing is True and Everything is Possible. “Estonia was an early experiment in that theory.”
Since then, Russia has only developed, and codified, these strategies. The techniques pioneered in Estonia are known as the “Gerasimov doctrine,” named after Valery Gerasimov, the chief of the general staff of the Russian military. In 2013, Gerasimov published an article in the Russian journal Military-Industrial Courier, articulating the strategy of what is now called “hybrid” or “nonlinear” warfare. “The lines between war and peace are blurred,” he wrote. New forms of antagonism, as seen in 2010’s Arab spring and the “colour revolutions” of the early 2000s, could transform a “perfectly thriving state, in a matter of months, and even days, into an arena of fierce armed conflict”.
Russia has deployed these strategies around the globe. Its 2008 war with Georgia, another former Soviet republic, relied on a mix of both conventional and cyber-attacks, as did the 2014 invasion of Crimea. Both began with civil unrest sparked via digital and social media – followed by tanks. Finland and Sweden have experienced near-constant Russian information operations. Russian hacks and social media operations have also occurred during recent elections in Holland, Germany, and France. Most recently, Spain’s leading daily, El País, reported on Russian meddling in the Catalonian independence referendum. Russian-supported hackers had allegedly worked with separatist groups, presumably with a mind to further undermining the EU in the wake of the Brexit vote.
The Kremlin has used the same strategies against its own people. Domestically, history books, school lessons, and media are manipulated, while laws are passed blocking foreign companies’ access to the Russian population’s online data – an essential resource in today’s global information-sharing culture. According to British military researcher Keir Giles, author of Nato’s Handbook of Russian Information Warfare, the Russian government, or actors that it supports, has even captured the social media accounts of celebrities in order to spread provocative messages under their names but without their knowledge. The goal, both at home and abroad, is to sever outside lines of communication so that people get their information only through controlled channels.
According to its detractors, RT is Vladimir Putin’s global disinformation service, countering one version of the truth with another in a bid to undermine the whole notion of empirical truth. And yet influential people from all walks of public life appear on it, or take its money. You can’t criticise RT’s standards, they say, if you don’t watch it. So I watched it. For a week.
My note: so this is why Oliver Stone went gentle on Putin in his “documentary” – so his son could have a job. #Nepotism #FakeNews
RT’s stated mission is to offer an “alternative perspective on major global events”, but the world according to RT is often downright surreal.
Peter Pomerantsev, author of Nothing Is True and Everything Is Possible, about Putin’s Russia, and now a senior visiting fellow in global affairs at the London School of Economics, was in Moscow working in television when Russia Today first started hiring graduates from Britain and the US. “The people were really bright, they were being paid well,” he says. But they soon found they were being ordered to change their copy, or instructed how to cover certain stories to reflect well on the Kremlin. “Everyone had their own moment when they first twigged that this wasn’t like the BBC,” he says. “That, actually, this is being dictated from above.” The coverage of Russia’s war with Georgia in 2008 was a lightbulb moment for many, he says. They quit.
Kaspersky Lab advised those who do not use anti-virus products to restrict execution of certain files (C:\Windows\infpub.dat, C:\Windows\cscc.dat) and to shut down the Windows Management Instrumentation (WMI) service. My note: that’s like letting the wolf into the shed with the sheep.
The source of the attack remained undetermined, but earlier this month Microsoft president Brad Smith pinned the blame for it on North Korea, which allegedly used cyber tools or weapons stolen from the United States’ National Security Agency. The executive, however, did not provide evidence to back his claims.
New ransomware attack hits Russia and spreads around globe