Last year, researchers at Oxford University found evidence of organized political disinformation campaigns in 70 countries over a two-year period.
Perhaps the most notable was the campaign a Russian propaganda group ran to influence the outcome of the 2016 US election.
The US Federal Communications Commission opened a public comment period in 2017 on its plans to repeal net neutrality. Harvard Kennedy School lecturer Bruce Schneier notes that while the agency received 22 million comments, many of them were made by fake identities.
Schneier argues that the escalating prevalence of computer-generated personas could “starve” people of democracy.
The report was produced by Project Information Literacy, a nonprofit research institution that explores how college students find, evaluate, and use information, and was commissioned by the John S. and James L. Knight Foundation and the Harvard Graduate School of Education.
Its findings draw on focus groups and interviews with 103 undergraduates and 37 faculty members from eight U.S. colleges.
To better equip students for the modern information environment, the report recommends that faculty teach algorithm literacy in their classrooms. And given students’ reliance on learning from their peers when it comes to technology, the authors also suggest that students help co-design these learning experiences.
Informed, critically aware media users may see past the suggested content that follows a search on YouTube, Facebook, or Google. Those without these skills, particularly young or inexperienced users, often fail to recognize how the underlying algorithms produce the resulting filter bubbles and echo chambers (Cohen, 2018).
Media literacy education is more important than ever. The urgency stems not only from calls to understand the effects of fake news, or from data breaches that threaten personal information, but also from artificial intelligence systems designed to predict, and serve up, whatever consumers of social media are presumed to want.
Literacy in today’s online and offline environments “means being able to use the dominant symbol systems of the culture for personal, aesthetic, cultural, social, and political goals” (Hobbs & Jensen, 2018, p. 4).
Some news organisations, including the BBC, The New York Times, and BuzzFeed, have made their own “deepfake” videos, ostensibly to spread awareness about the techniques. Those videos, while of varying quality, have all contained clear statements that they are fake.
The Poynter Institute – an enlightened non-profit in St. Petersburg, Fla., that has an ownership role in the Tampa Bay Times and provides research, training and educational resources on journalism – provides many excellent online modules to help citizens improve their news media literacy.
Citizens should also support local and regional publications that hew to ethical journalism standards and cover local government entities.
Media Literacy Now considers digital citizenship part of media literacy — not the other way around.
Nine states — California, Colorado, Connecticut, Illinois, Massachusetts, Minnesota, New Jersey, Rhode Island and Utah — are identified as “emerging leaders” for “beginning the conversation” and consulting with experts and others.
Calls for increased attention to media literacy skills and demand from educators for training in this area increased following an outbreak of “fake news” reports associated with the 2016 presidential election. Studies and assessments showing students are easily misled by digital information have also contributed to a sense of urgency.
Because the topic can fit into multiple content areas, it can also be overlooked amid other pressures on teachers. Media literacy, the group notes, also “encompasses the foundational skills of digital citizenship and internet safety including the norms of appropriate, responsible, ethical, and healthy behavior, and cyberbullying prevention.”
Lawmakers in Missouri and South Carolina have also pre-filed versions of Media Literacy Now’s model bill, the report noted, and legislation is expected in Hawaii and Arizona.
“Deepfake” is the nickname given to computer-created artificial videos or other digital material in which images are combined to create new footage depicting events that never actually happened. The term originates from the online message board Reddit.
One early use of the fake videos was amateur-created pornography, in which the faces of famous Hollywood actresses were digitally placed onto the bodies of other performers to make it appear as though the stars themselves were performing.
How difficult is it to create fake media?
Experts say it can be done with specialized software, much as editing programs such as Photoshop have made it simpler to manipulate still images. And specialized software isn’t even necessary for what have been dubbed “shallow fakes” or “cheap fakes.”
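To see why “cheap fakes” need so little machinery, consider a toy sketch that treats an image as a 2D grid of pixel values and splices one region into another, which is the basic compositing step behind crude photo manipulation. The function name and the pixel-grid representation here are purely illustrative, not taken from any particular tool:

```python
def splice_region(target, source, top, left, height, width):
    """Copy a rectangular block of 'pixels' from source into target.

    Images are modeled as 2D lists of values -- a toy stand-in for
    what photo editors do when compositing one face onto another.
    Returns a new grid; the original target is left untouched.
    """
    out = [row[:] for row in target]  # shallow copy of each row
    for r in range(height):
        for c in range(width):
            out[top + r][left + c] = source[r][c]
    return out

# A 2x2 "face" spliced into a 4x4 "scene" at row 1, column 1.
face = [[1, 1], [1, 1]]
scene = [[0] * 4 for _ in range(4)]
fake = splice_region(scene, face, top=1, left=1, height=2, width=2)
```

Real tools add blending, scaling, and color matching on top of this, but the core operation — overwriting one region of pixels with another — is no more complex than the loop above, which is why no deep-learning expertise is required for a shallow fake.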
Researchers also say they are working on faster systems for establishing when video or audio has been manipulated. But the effort has been called a “cat and mouse” game, in which detection rarely keeps exact pace with fabrication.