
deep fakes trusted more than real pictures

People trust AI fake faces more than real ones, research suggests

Researchers behind the study are calling for safeguards to prevent deep fakes.

https://www.freethink.com/technology/ai-fake-faces-trustwortiness

Ethical AI tools

Using AI responsibly is the “immediate challenge” facing the field of AI governance, the World Economic Forum says.

++++++++++++++++
more on deep fakes in this IMS blog
https://blog.stcloudstate.edu/ims?s=deep+fakes

Facebook deep fake

https://www.theguardian.com/technology/2020/jan/07/facebook-bans-deepfake-videos-in-run-up-to-us-election

Some news organisations, including the BBC, New York Times and Buzzfeed have made their own “deepfake” videos, ostensibly to spread awareness about the techniques. Those videos, while of varying quality, have all contained clear statements that they are fake.

++++++++++++
more on deep fake in this IMS blog
https://blog.stcloudstate.edu/ims?s=deep+fake

deepfake Zao

https://www.theguardian.com/technology/2019/sep/02/chinese-face-swap-app-zao-triggers-privacy-fears-viral

Released on Friday, the Zao app went viral as Chinese users seized on the chance to see themselves act out scenes from well-known movies using deepfake technology, which has already prompted concerns elsewhere over potential misuse.

As of Monday afternoon it remained the top free download in China, according to the app market data provider App Annie.

Concerns over deepfakes have grown since the 2016 US election campaign, which saw wide use of online misinformation, according to US investigations.

In June, Facebook’s chief executive, Mark Zuckerberg, said the social network was struggling to find ways to deal with deepfake videos, saying they may constitute “a completely different category” of misinformation than anything faced before.

++++++++++
more on deepfake in this IMS blog
https://blog.stcloudstate.edu/ims?s=deepfake

deepfake

Deepfake danger: what a viral clip of Bill Hader morphing into Tom Cruise tells us

Are deepfakes a threat to democracy? The creator of a series of viral clips says he is raising awareness of their subversive potential

Elle Hunt, August 13, 2019

https://www.theguardian.com/news/shortcuts/2019/aug/13/danger-deepfakes-viral-video-bill-hader-tom-cruise

Deepfakes – doctored videos fabricating apparently real footage of people – have the potential to disrupt democracy.

+++++++++
more on #fakenews and audio/video in this IMS blog
https://blog.stcloudstate.edu/ims/2017/07/15/fake-news-and-video/

https://blog.stcloudstate.edu/ims/2019/07/21/deep-fake-audio/

deep fake audio

https://www.axios.com/the-coming-deepfakes-threat-to-businesses-308432e8-f1d8-465e-b628-07498a7c1e2a.html

++++++++
more on cybersecurity in this IMS blog
https://blog.stcloudstate.edu/ims?s=cybersecurity
https://blog.stcloudstate.edu/ims?s=audio+video+fake+news

social media harms democracy

Pew research: Tech experts believe social media is harming democracy (via r/technology)

https://www.pewresearch.org/internet/2020/02/21/many-tech-experts-say-digital-disruption-will-hurt-democracy/

The years of almost unfettered enthusiasm about the benefits of the internet have been followed by a period of techlash as users worry about the actors who exploit the speed, reach and complexity of the internet for harmful purposes. Over the past four years – a time of the Brexit decision in the United Kingdom, the American presidential election and a variety of other elections – the digital disruption of democracy has been a leading concern.

Some think the information and trust environment will worsen by 2030 thanks to the rise of video deepfakes, cheapfakes and other misinformation tactics.

Power Imbalance: Democracy is at risk because those with power will seek to maintain it by building systems that serve them, not the masses. Too few in the general public possess enough knowledge to resist this assertion of power.

EXPLOITING DIGITAL ILLITERACY

danah boyd, principal researcher at Microsoft Research and founder of Data & Society, wrote, “The problem is that technology mirrors and magnifies the good, bad AND ugly in everyday life. And right now, we do not have the safeguards, security or policies in place to prevent manipulators from doing significant harm with the technologies designed to connect people and help spread information.”

+++++++++++++++
more on social media in this IMS blog
https://blog.stcloudstate.edu/ims?s=social+media

smart anonymization

This startup claims its deepfakes will protect your privacy

But some experts say that D-ID’s “smart video anonymization” technique breaks the law.

https://www.technologyreview.com/s/614983/this-startup-claims-its-deepfakes-will-protect-your-privacy/

The upside for businesses is that this new, “anonymized” video no longer gives away the exact identity of a customer—which, Perry says, means companies using D-ID can “eliminate the need for consent” and analyze the footage for business and marketing purposes. A store might, for example, feed video of a happy-looking white woman to an algorithm that can surface the most effective ad for her in real time.

Three leading European privacy experts who spoke to MIT Technology Review voiced their concerns about D-ID’s technology and its intentions. All say that, in their opinion, D-ID actually violates GDPR.

Surveillance is becoming more and more widespread. A recent Pew study found that most Americans think they’re constantly being tracked but can’t do much about it, and the facial recognition market is expected to grow from around $4.5 billion in 2018 to $9 billion by 2024. Still, the reality of surveillance isn’t keeping activists from fighting back.

++++++++++++
more on deep fake in this IMS blog
https://blog.stcloudstate.edu/ims?s=deep+fake

news literate


https://www.edsurge.com/news/2019-09-24-the-challenge-of-teaching-news-literacy

Everybody’s talking about deepfakes in 2020; that’s not something we were talking about in 2016, maybe even in 2018.

+++++++++++
more on news literate in this IMS blog
https://blog.stcloudstate.edu/ims?s=news+literate

Fake Video, Audio and the Election

https://www.npr.org/2019/09/02/754415386/what-you-need-to-know-about-fake-video-audio-and-the-2020-election

deep fake: definition

What are “deepfakes?”

That’s the nickname given to computer-created artificial videos or other digital material in which images are combined to create new footage that depicts events that never actually happened. The term originates from the online message board Reddit.

One initial use of the fake videos was in amateur-created pornography, in which the faces of famous Hollywood actresses were digitally placed onto those of other performers to make it appear as though the stars themselves were performing.
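
Below is a minimal sketch, in PyTorch, of the autoencoder idea behind classic face-swap deepfakes: a single shared encoder learns a common face representation, and a separate decoder is trained per identity, so encoding a frame of person A and decoding it with person B's decoder produces the "swap." The network sizes, shapes and training details here are illustrative assumptions, not any particular tool's implementation.

```python
# Sketch of the shared-encoder / per-identity-decoder design behind
# classic face-swap deepfakes. Shapes and layer sizes are assumptions
# chosen only to make the example runnable.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),                # shared latent face code
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 64, 16, 16)
        return self.net(h)

encoder = Encoder()
decoder_a = Decoder()   # would be trained only on faces of person A
decoder_b = Decoder()   # would be trained only on faces of person B

# After training, "swapping" means: encode a frame of A, decode with B's decoder.
frame_of_a = torch.rand(1, 3, 64, 64)       # placeholder for a real face crop
swapped = decoder_b(encoder(frame_of_a))    # B's appearance on A's pose/expression
print(swapped.shape)                        # torch.Size([1, 3, 64, 64])
```

Real deepfake tools wrap face detection, alignment and blending around this core, which is part of why amateur results vary so widely in quality.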

How difficult is it to create fake media?

Experts say it can be done with specialized software, much as editing programs such as Photoshop have made it simpler to manipulate still images. And specialized software itself isn’t necessary for what have been dubbed “shallow fakes” or “cheap fakes.”

Researchers also say they are working on new ways to speed up systems aimed at helping establish when video or audio has been manipulated. But it’s been called a “cat and mouse” game in which there may seldom be exact parity between fabrication and detection.
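
As a rough illustration of the detection side of that cat-and-mouse game, here is a hedged sketch of a frame-level binary classifier that scores face crops as real or manipulated. This is an assumption about how such detectors are commonly structured, not a description of any specific research system; production detectors rely on much deeper networks, face alignment, and temporal cues across video frames.

```python
# Minimal sketch of a frame-level real-vs-fake classifier (illustrative only).
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 1),   # single logit: higher = more likely manipulated
)

loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(detector.parameters(), lr=1e-3)

# Placeholder batch: 8 face crops with labels (1 = manipulated, 0 = real).
frames = torch.rand(8, 3, 64, 64)
labels = torch.randint(0, 2, (8, 1)).float()

logits = detector(frames)       # one training step on the placeholder batch
loss = loss_fn(logits, labels)
loss.backward()
optimizer.step()

print(torch.sigmoid(logits).detach().squeeze())  # per-frame "fake" probabilities
```

The “cat and mouse” framing follows from this setup: once a detector learns the artifacts of one generation method, generators can be retrained to avoid those artifacts, and the cycle repeats.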

At least one state has considered legislation that would outlaw distributing election-oriented fake videos.

+++++++++++
more on fake news in this IMS blog
https://blog.stcloudstate.edu/ims?s=fake+news

psychology fake news

The Psychology Of Fake News

March 27, 2018, 10:21 AM ET

https://www.npr.org/sections/13.7/2018/03/27/597263367/the-psychology-of-fake-news

During the past two years, fake news has been a frequent topic of real news, with articles considering the role of social media in spreading fake news, the advent of fake videos and the role these play in the political process.

Lazer, D. M. J., Baum, M. A., Benkler, Y., Berinsky, A. J., Greenhill, K. M., Menczer, F., … Zittrain, J. L. (2018). The science of fake news. Science, 359(6380), 1094–1096. https://doi.org/10.1126/science.aao2998
Baum, M. A., & Lazer, D. M. J. (2017, May 11). Social media must be held to account on fake news. Winnipeg Free Press (MB), p. A7.
In a paper published in March in the journal Science, David Lazer, Matthew Baum and 14 co-authors consider what we do and don’t know about the science of fake news. They define fake news as “fabricated information that mimics news media content in form but not in organizational process or intent,” and they go on to discuss problems at multiple levels: individual, institutional and societal. What do we know about individuals’ exposure to fake news and its influence upon them? How can Internet platforms help limit the dissemination of fake news? And most fundamentally: How can we succeed in creating and perpetuating a culture that values and promotes truth?
Steven Sloman, professor of cognitive, linguistic and psychological sciences at Brown University, is one of the paper’s 16 authors. Sloman is also author of The Knowledge Illusion: Why We Never Think Alone, a book about the merits and failings of our collaborative minds, published in 2017 with co-author Philip Fernbach.
Sloman, S. A. (2017). The knowledge illusion: Why we never think alone. New York: Riverhead Books.