Everybody’s talking about deepfakes in 2020; that’s not something we were talking about in 2016, or even in 2018.
more on news literacy in this IMS blog
Filters have never been more prevalent – and the trend is leading some people to have fillers, Botox and other procedures. What’s behind the obsessive pursuit of a flawless look?
The phenomenon of people requesting procedures to resemble their digital image has been referred to – sometimes flippantly, sometimes as a harbinger of end times – as “Snapchat dysmorphia”. The term was coined by the cosmetic doctor Tijion Esho, founder of the Esho clinics in London and Newcastle.
A recent report in the US medical journal JAMA Facial Plastic Surgery suggested that filtered images’ “blurring the line of reality and fantasy” could be triggering body dysmorphic disorder (BDD), a mental health condition where people become fixated on imagined defects in their appearance.
A 2017 study into “selfitis”, as the obsessive taking of selfies has been called, found a range of motivations, from seeking social status to shaking off depressive thoughts and – of course – capturing a memorable moment. Another study suggested that selfies served “a private and internal purpose”, with the majority never shared with anyone or posted anywhere – terabytes, even petabytes of photographs never to be seen by anyone other than their subject.
However, a 2017 study in the journal Cognitive Research: Principles and Implications found that people only recognised manipulated images 60%-65% of the time.
more on social media in this IMS blog
Computer Scientists Demonstrate The Potential For Faking Video
As a team out of the University of Washington explains in a new paper titled “Synthesizing Obama: Learning Lip Sync from Audio,” they’ve made several fake videos of Obama.
Fake news: you ain’t seen nothing yet
Generating convincing audio and video of fake events, July 1, 2017
It took only a few days to create the clip on a desktop computer using a generative adversarial network (GAN), a type of machine-learning algorithm.
Faith in written information is under attack in some quarters by the spread of what is loosely known as “fake news”. But images and sound recordings retain for many an inherent trustworthiness. GANs are part of a technological wave that threatens this credibility.
Amnesty International is already grappling with some of these issues. Its Citizen Evidence Lab verifies videos and images of alleged human-rights abuses. It uses Google Earth to examine background landscapes and to test whether a video or image was captured when and where it claims. It uses Wolfram Alpha, a search engine, to cross-reference historical weather conditions against those claimed in the video. Amnesty’s work mostly catches old videos that are being labelled as a new atrocity, but it will have to watch out for generated video, too. Cryptography could also help to verify that content has come from a trusted organisation. Media could be signed with a unique key that only the signing organisation—or the originating device—possesses.
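The signing idea above can be sketched in a few lines. This is a minimal illustration, not Amnesty’s or any real organisation’s system: the key name and helper functions are hypothetical, and a production scheme would use an asymmetric signature (e.g. Ed25519), so that anyone can verify a clip without being able to forge signatures. HMAC over SHA-256 is used here only to keep the sketch self-contained with the Python standard library.

```python
import hashlib
import hmac

# Hypothetical signing key held only by the publishing organisation
# (or baked into the originating camera/device).
SIGNING_KEY = b"example-secret-key"

def sign_media(media_bytes: bytes) -> str:
    """Produce a hex signature over the raw media bytes."""
    return hmac.new(SIGNING_KEY, media_bytes, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, signature: str) -> bool:
    """Check that the media has not been altered since it was signed."""
    expected = sign_media(media_bytes)
    # compare_digest avoids timing side channels when comparing signatures.
    return hmac.compare_digest(expected, signature)

video = b"...raw video bytes..."
sig = sign_media(video)
print(verify_media(video, sig))         # unaltered clip verifies
print(verify_media(video + b"x", sig))  # a single changed byte fails
```

Even a one-byte edit to the clip changes the digest, so the signature no longer verifies – which is exactly the property that lets a newsroom or a viewer detect a doctored copy of a signed original.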
more on fake news in this IMS blog
Fake news infographics
The Global Critical Media Literacy Educators’ Resource Guide
Students will be introduced to exercises, experiences, and assignments that focus on developing students’ classroom engagement, empowerment, critical awareness of media, civic engagement, and adoption of a social justice agenda. The guide enables students to work with faculty to produce GCMLP webpage content, which can be consumed by the public to help expand citizens’ understanding of key events and processes in the global society. Furthermore, participating students will be granted academic and employment opportunities through the GCMLP, so they can be equitable participants in the 21st-century economy.
more on fake news in this IMS blog
Report: Digital Natives ‘Easily Duped’ by Information Online
By Sri Ravipati 12/07/16
Researchers at the Stanford Graduate School of Education assessed middle school, high school and college students on their civic online reasoning skills, or “the ability to judge the credibility of information that floods young people’s smartphones, tablets and computers.”
The Stanford History Education Group recently released a report that analyzes 7,804 responses collected from students across 12 states and varying economic lines, including well-resourced, under-resourced and inner-city schools.
When it comes to evaluating information that flows on social media channels like Facebook and Twitter, students “are easily duped” and have trouble discerning advertisements from news articles.
Many people assume that today’s students – growing up as “digital natives” – are intuitively perceptive online. The Stanford researchers found the opposite to be true and urge teachers to create curricula focused on developing students’ civic reasoning skills. They plan to produce “a series of high-quality web videos to showcase the depth of the problem” that will “demonstrate the link between digital literacy and citizenship,” according to the report.
The report, “Evaluating Information: The Cornerstone of Civic Online Reasoning,” can be found here.
more on information literacy in this IMS blog:
The Library Information Technology Association (LITA) (http://www.ala.org/lita/) listserv has hosted a great exchange of information on the “fake news” phenomenon. Excellent ideas and suggestions were shared:
Here is a link to the Twitter hashtag application: https://twitter.com/hashtag/fakenews?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Ehashtag
More on activism, civil disobedience in this IMS blog: