Lazer, D. M. J., Baum, M. A., Benkler, Y., Berinsky, A. J., Greenhill, K. M., Menczer, F., … Zittrain, J. L. (2018). The science of fake news. Science, 359(6380), 1094–1096. https://doi.org/10.1126/science.aao2998
Baum, M. A., & Lazer, D. (2017, May 11). Social media must be held to account on fake news. Winnipeg Free Press (MB), p. A7.
In a paper published in March in the journal Science, David Lazer, Matthew Baum and 14 co-authors consider what we do and don’t know about the science of fake news. They define fake news as “fabricated information that mimics news media content in form but not in organizational process or intent,” and they go on to discuss problems at multiple levels: individual, institutional and societal. What do we know about individuals’ exposure to fake news and its influence upon them? How can Internet platforms help limit the dissemination of fake news? And most fundamentally: How can we succeed in creating and perpetuating a culture that values and promotes truth?
Steven Sloman is a professor of cognitive, linguistic and psychological sciences at Brown University, and one of the paper’s 16 authors. Sloman is also the author of The Knowledge Illusion: Why We Never Think Alone, a book about the merits and failings of our collaborative minds, published in 2017 with co-author Philip Fernbach.
Sloman, S. A. (2017). The knowledge illusion: Why we never think alone. New York: Riverhead Books.
The anti-fake news bill, which must be approved by parliament, calls for penalising those who create, offer, circulate, print or publish fake news – or publications containing fake news – with a 10-year jail term, a fine of up to 500,000 ringgit (£90,000) or both.
The bill defines fake news as “any news, information, data and reports which is, or are, wholly or partly false whether in the form of features, visuals or audio recordings or in any other form capable of suggesting words or ideas”.
It covers all media and extends to foreigners outside Malaysia if Malaysia or its citizens are affected.
Tom Dickinson describes four different types of distributed ‘fake news’.
‘Fake news’ is lazy language. Be specific. Do you mean: A) Propaganda B) Disinformation C) Conspiracy theory D) Clickbait
The RAND Corporation, a US think-tank with strong ties to the military-industrial complex, recently looked at the influence of the Russian Propaganda Model and how best to deal with it.
Three factors have been shown to increase the (limited) effectiveness of retractions and refutations: (1) warnings at the time of initial exposure to misinformation, (2) repetition of the retraction or refutation, and (3) corrections that provide an alternative story to help fill the resulting gap in understanding when false ‘facts’ are removed.
Critical thinking requires us to constantly question assumptions, especially our own. To develop these skills, questioning must be encouraged. This runs counter to most schooling and training practices. When do students or employees get to question underlying assumptions of their institutions? If they cannot do this, how can we expect them to challenge various and pervasive types of ‘fake news’?
“They were taking both sides of the argument this past weekend and pushing them out from their troll farms as much as they could to just raise the noise level in America and make a big issue seem like an even bigger issue as they’re trying to push divisiveness in the country,” as Sen. James Lankford, R-Okla., said in the fall.
Tools such as the German Marshall Fund’s “Hamilton 68” dashboard track Russian-linked accounts in real time to show what links, phrases and hashtags are circulating.
The Trump administration detailed the threat — without any specific mention of the 2016 interference — in the new National Security Strategy it released at the end of December.
The Republicans might have been tarnished by the St Petersburg troll factory, but Democratic fantasies about social media were rubbished in the process.
The ads in question were memes, manufactured and posted to a number of bluntly named, pseudo-American Facebook accounts in 2016 by workers at a troll farm in St Petersburg, Russia. There were thousands of these ads, it seems, plus parallel efforts on Instagram and Twitter. Between them, they reached over 100 million people.
The memes were big news for a while because they showed what Russian interference in the 2016 election actually looked like, in vivid color. Eventually the story faded, though, in part because it was superseded by other stories, but also, I think, because the Russian ad story was deeply distasteful to both sides of our atrophied political debate.
The ads were clumsily written. They were rife with spelling errors and poor grammar. Their grasp of American history was awful. And over them all hovered a paranoid fear that the powerful were scheming to flip the world upside-down in the most outlandish ways: to turn our country over to the undocumented … to punish the hardworking … to crack down on patriots and Christians … to enact Sharia law right here at home.
The social media platforms aren’t neutral arbiters, selflessly serving the needs of society. As is all too obvious now, they are monopolies that manipulate us in a hundred different ways, selecting our news, steering us towards what we need to buy. The corporate entities behind them wield enormous power in Washington, too, filling Democratic campaign coffers and keeping the revolving door turning for trusted servants. Those who don’t comply get disciplined.
++++++++++++++
Russia calls for answers after Chechen leader’s Instagram is blocked
Internet watchdog demands explanation after Ramzan Kadyrov claimed Facebook also suspended him without explanation
Kadyrov has accused the US government of pressuring the social networks to disable his accounts, which he said were blocked on Saturday without explanation. The US imposed travel and financial sanctions on Kadyrov last week over numerous allegations of human rights abuses.
The former rebel fighter, who is now loyal to the Russian president, Vladimir Putin, is a fan of social media, particularly Instagram, which he has used in recent years to make barely veiled death threats against Kremlin critics.
Leonid Levin, the head of the Russian parliament’s information technologies and communications committee, suggested the move by Facebook and Instagram was an attack on freedom of speech.
Dzhambulat Umarov, the Chechen press and information minister, described the blocking of Kadyrov’s accounts as a “vile” cyber-attack by the US.
Neither Instagram nor Facebook had commented at the time of publication.
In 2015, Kadyrov urged Chechen men not to let their wives use the WhatsApp messaging service after an online outcry over the forced marriage of a 17-year-old Chechen to a 47-year-old police chief. “Do not write such things. Men, take your women out of WhatsApp,” he said.
A team out of the University of Washington explains, in a new paper titled “Synthesizing Obama: Learning Lip Sync from Audio,” that they have made several fake videos of Obama.
+++++++++++++
Fake news: you ain’t seen nothing yet
Generating convincing audio and video of fake events, July 1, 2017
It took only a few days to create the clip on a desktop computer using a generative adversarial network (GAN), a type of machine-learning algorithm.
Faith in written information is under attack in some quarters by the spread of what is loosely known as “fake news”. But images and sound recordings retain for many an inherent trustworthiness. GANs are part of a technological wave that threatens this credibility.
Amnesty International is already grappling with some of these issues. Its Citizen Evidence Lab verifies videos and images of alleged human-rights abuses. It uses Google Earth to examine background landscapes and to test whether a video or image was captured when and where it claims. It uses Wolfram Alpha, a search engine, to cross-reference historical weather conditions against those claimed in the video. Amnesty’s work mostly catches old videos that are being labelled as a new atrocity, but it will have to watch out for generated video, too. Cryptography could also help to verify that content has come from a trusted organisation. Media could be signed with a unique key that only the signing organisation—or the originating device—possesses.
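The signing idea above can be sketched in a few lines. This is a minimal illustration of tamper-evident media, not Amnesty’s actual tooling: it uses a symmetric HMAC from Python’s standard library, whereas a real deployment would use an asymmetric signature scheme (e.g. Ed25519) so that anyone holding the organisation’s public key can verify content without being able to forge it. The key value and function names here are hypothetical.

```python
import hmac
import hashlib

# Hypothetical secret held only by the signing organisation or device.
# (Assumption: a production system would use an asymmetric private key.)
SIGNING_KEY = b"org-secret-key"

def sign_media(content: bytes) -> str:
    """Produce a tag that only the key holder could have generated."""
    return hmac.new(SIGNING_KEY, content, hashlib.sha256).hexdigest()

def verify_media(content: bytes, tag: str) -> bool:
    """Recompute the tag; any change to the content changes the digest."""
    expected = sign_media(content)
    # compare_digest avoids leaking information via timing differences.
    return hmac.compare_digest(expected, tag)

video = b"...raw video bytes..."
tag = sign_media(video)
print(verify_media(video, tag))         # True: content is untampered
print(verify_media(video + b"x", tag))  # False: content was altered
```

The verification step needs no secret in a public-key variant, which is what makes the approach workable for newsrooms distributing footage to the public.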
Want to strengthen your own ability to tell real news from fake news? Start by asking these questions of any news item:
Who wrote it?
Identify whether the item you’re reading is a reported news article (written by a journalist with the intent to inform), a persuasive opinion piece (written by an industry expert with a point of view), or something else entirely.
What claims does it make? Real news — like these Pulitzer Prize-winning articles — will include multiple primary sources when discussing a controversial claim. Fake news may include fake sources, false URLs, and/or “alternative facts.”
Where was it published? Real news is published by trustworthy media outlets with a strong fact-checking record, such as the BBC, NPR, ProPublica, Mother Jones, and Wired. (To learn more about any media outlet, look at its About page and examine its published body of work.) If you get your news primarily via social media, try to verify that the information is accurate before you share it. (On Twitter, for example, you might look for the blue “verified” checkmark next to a media outlet’s name to double-check a publication source before sharing a link.)
How does it make you feel? Fake news, like all propaganda, is designed to make you feel strong emotions. So if you read a news item that makes you feel super angry, pause and take a deep breath.