Computer Scientists Demonstrate The Potential For Faking Video
As a team at the University of Washington explains in a new paper titled “Synthesizing Obama: Learning Lip Sync from Audio,” they have made several fake videos of Obama.
Fake news: you ain’t seen nothing yet
Generating convincing audio and video of fake events, July 1, 2017
It took only a few days to create the clip on a desktop computer using a generative adversarial network (GAN), a type of machine-learning algorithm.
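The adversarial idea behind a GAN can be sketched in a few lines: a generator tries to produce data a discriminator cannot tell apart from the real thing, and the two are trained against each other. The sketch below is purely illustrative and uses toy 1-D numbers standing in for video frames; the linear generator, logistic discriminator, and all parameters are invented for the example, not the University of Washington method.

```python
import math
import random

random.seed(0)

def sigmoid(t):
    # numerically safe logistic function
    if t >= 0:
        return 1.0 / (1.0 + math.exp(-t))
    et = math.exp(t)
    return et / (1.0 + et)

w, b = 1.0, 0.0   # generator: fake = w * z + b, noise z ~ N(0, 1)
v, c = 0.0, 0.0   # discriminator: D(x) = sigmoid(v * x + c)
lr, batch = 0.05, 64

for step in range(2000):
    real = [random.gauss(4.0, 0.5) for _ in range(batch)]   # "real" data
    z = [random.gauss(0.0, 1.0) for _ in range(batch)]
    fake = [w * zi + b for zi in z]

    # Discriminator step: ascend log D(real) + log(1 - D(fake))
    gv = gc = 0.0
    for xr, xf in zip(real, fake):
        dr, df = sigmoid(v * xr + c), sigmoid(v * xf + c)
        gv += (1 - dr) * xr - df * xf
        gc += (1 - dr) - df
    v += lr * gv / batch
    c += lr * gc / batch

    # Generator step: ascend log D(fake), i.e. fool the discriminator
    gw = gb = 0.0
    for zi, xf in zip(z, fake):
        df = sigmoid(v * xf + c)
        gw += (1 - df) * v * zi
        gb += (1 - df) * v
    w += lr * gw / batch
    b += lr * gb / batch

samples = [w * random.gauss(0.0, 1.0) + b for _ in range(1000)]
mean = sum(samples) / len(samples)
print(round(mean, 2))  # generator output drifts toward the real mean of 4
```

Real video GANs replace these two linear models with deep neural networks, but the training loop is the same tug-of-war.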
Faith in written information is under attack in some quarters by the spread of what is loosely known as “fake news”. But images and sound recordings retain for many an inherent trustworthiness. GANs are part of a technological wave that threatens this credibility.
Amnesty International is already grappling with some of these issues. Its Citizen Evidence Lab verifies videos and images of alleged human-rights abuses. It uses Google Earth to examine background landscapes and to test whether a video or image was captured when and where it claims. It uses Wolfram Alpha, a search engine, to cross-reference historical weather conditions against those claimed in the video. Amnesty’s work mostly catches old videos that are being labelled as a new atrocity, but it will have to watch out for generated video, too. Cryptography could also help to verify that content has come from a trusted organisation. Media could be signed with a unique key that only the signing organisation—or the originating device—possesses.
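The signing idea above can be sketched with Python's standard library. This is a simplified, hypothetical example: it uses an HMAC with a shared secret to keep the sketch self-contained, whereas a real deployment would use public-key signatures (e.g. Ed25519) so that anyone can verify a clip without holding the newsroom's secret key. The key and clip bytes are invented for illustration.

```python
import hashlib
import hmac

# Made-up secret held only by the signing organisation (illustrative).
SIGNING_KEY = b"held-only-by-the-newsroom"

def sign_clip(video_bytes: bytes) -> str:
    """Return a hex tag binding the clip's bytes to the signing key."""
    return hmac.new(SIGNING_KEY, video_bytes, hashlib.sha256).hexdigest()

def verify_clip(video_bytes: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(sign_clip(video_bytes), tag)

clip = b"\x00\x01fake-or-real-video-frames"   # stand-in for a video file
tag = sign_clip(clip)
print(verify_clip(clip, tag))             # True: clip is untampered
print(verify_clip(clip + b"edit", tag))   # False: any edit breaks the tag
```

Because any change to the bytes invalidates the tag, a verified tag shows both who published the clip and that it has not been altered since signing.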
more on fake news in this IMS blog