Posts Tagged ‘deep fake’

recognizing deepfakes

Reuters releases guide to recognizing deepfake profile photos from r/technology

https://graphics.reuters.com/CYBER-DEEPFAKE/ACTIVIST/nmovajgnxpa/index.html

GAN: A generative adversarial network is the name given to dueling computer programs that run through a process of trial and error… One program, the generator, sequentially fires out millions of attempts at a face; the second program, the discriminator, tries to sniff out whether the first program’s face is a fake. If the discriminator can’t tell, Li said, a deepfake is produced.
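The trial-and-error duel described above can be illustrated with a minimal numerical sketch (my own toy example, not the face-generation networks Reuters covers: both players are shrunk to one-parameter models on 1-D data, and the “faces” are just numbers drawn near 3):

```python
import numpy as np

# Toy GAN: the generator maps noise z ~ N(0, 1) through an affine function
# w*z + b and tries to mimic "real" data drawn from N(3, 0.5); the
# discriminator is a tiny logistic regression trying to tell them apart.
rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

w, b = 0.1, 0.0   # generator parameters
a, c = 0.1, 0.0   # discriminator parameters
lr = 0.05

for step in range(2000):
    z = rng.normal(0.0, 1.0, 32)
    x_fake = w * z + b                    # the generator's "attempts"
    x_real = rng.normal(3.0, 0.5, 32)     # real samples

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    d_real = sigmoid(a * x_real + c)
    d_fake = sigmoid(a * x_fake + c)
    a -= lr * np.mean(-(1 - d_real) * x_real + d_fake * x_fake)
    c -= lr * np.mean(-(1 - d_real) + d_fake)

    # Generator update (non-saturating loss): push D(fake) toward 1.
    d_fake = sigmoid(a * x_fake + c)
    dx = -(1 - d_fake) * a                # gradient of -log D(x_fake) w.r.t. x_fake
    w -= lr * np.mean(dx * z)
    b -= lr * np.mean(dx)

fakes = w * rng.normal(0.0, 1.0, 5000) + b
print(np.mean(fakes))  # drifts toward the real-data mean of 3
```

When the discriminator can no longer separate the two distributions, the generator’s output passes for real, which is exactly the “if the discriminator can’t tell” condition in the quote above.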

+++++++++++++
more on deepfake in this IMS blog
https://blog.stcloudstate.edu/ims?s=deepfake

social media harms democracy

Pew research: Tech experts believe social media is harming democracy from r/technology

Many Tech Experts Say Digital Disruption Will Hurt Democracy

The years of almost unfettered enthusiasm about the benefits of the internet have been followed by a period of techlash as users worry about the actors who exploit the speed, reach and complexity of the internet for harmful purposes. Over the past four years – a time of the Brexit decision in the United Kingdom, the American presidential election and a variety of other elections – the digital disruption of democracy has been a leading concern.

Some think the information and trust environment will worsen by 2030 thanks to the rise of video deepfakes, cheapfakes and other misinformation tactics.

Power Imbalance: Democracy is at risk because those with power will seek to maintain it by building systems that serve them, not the masses. Too few in the general public possess enough knowledge to resist this assertion of power.

EXPLOITING DIGITAL ILLITERACY

danah boyd, principal researcher at Microsoft Research and founder of Data & Society, wrote, “The problem is that technology mirrors and magnifies the good, bad AND ugly in everyday life. And right now, we do not have the safeguards, security or policies in place to prevent manipulators from doing significant harm with the technologies designed to connect people and help spread information.”

+++++++++++++++
more on social media in this IMS blog
https://blog.stcloudstate.edu/ims?s=social+media

Facebook deep fake

https://www.theguardian.com/technology/2020/jan/07/facebook-bans-deepfake-videos-in-run-up-to-us-election

Some news organisations, including the BBC, New York Times and Buzzfeed, have made their own “deepfake” videos, ostensibly to spread awareness about the techniques. Those videos, while of varying quality, have all contained clear statements that they are fake.

++++++++++++
more on deep fake in this IMS blog
https://blog.stcloudstate.edu/ims?s=deep+fake

deepfake facebook

https://www.npr.org/2020/01/07/794171662/facebook-issues-new-rules-on-deepfake-videos-targeting-manipulation

Facebook’s new ban targets videos that are manipulated to make it appear someone said words they didn’t actually say.

 

+++++++++++
more on deep fake in this IMS blog
https://blog.stcloudstate.edu/ims?s=deep+fake
more on facebook in this IMS blog
https://blog.stcloudstate.edu/ims?s=facebook

smart anonymization

This startup claims its deepfakes will protect your privacy

But some experts say that D-ID’s “smart video anonymization” technique breaks the law.

https://www.technologyreview.com/s/614983/this-startup-claims-its-deepfakes-will-protect-your-privacy/

The upside for businesses is that this new, “anonymized” video no longer gives away the exact identity of a customer—which, Perry says, means companies using D-ID can “eliminate the need for consent” and analyze the footage for business and marketing purposes. A store might, for example, feed video of a happy-looking white woman to an algorithm that can surface the most effective ad for her in real time.

Three leading European privacy experts who spoke to MIT Technology Review voiced their concerns about D-ID’s technology and its intentions. All say that, in their opinion, D-ID actually violates GDPR.

Surveillance is becoming more and more widespread. A recent Pew study found that most Americans think they’re constantly being tracked but can’t do much about it, and the facial recognition market is expected to grow from around $4.5 billion in 2018 to $9 billion by 2024. Still, the reality of surveillance isn’t keeping activists from fighting back.

++++++++++++
more on deep fake in this IMS blog
https://blog.stcloudstate.edu/ims?s=deep+fake

Likewar

LikeWar

https://www.npr.org/books/titles/655833359/likewar-the-weaponization-of-social-media

https://www.foreignaffairs.com/reviews/capsule-review/2018-10-16/likewar-weaponization-social-media

https://information-professionals.org/wp-content/uploads/2018-10-25-LikeWar.pdf

https://madsciblog.tradoc.army.mil/87-likewar-the-weaponization-of-social-media/

Social media had changed not just the message, but the dynamics of conflict. How information was being accessed, manipulated, and spread had taken on new power. Who was involved in the fight, where they were located, and even how they achieved victory had been twisted and transformed. Indeed, if what was online could swing the course of a battle — or eliminate the need for battle entirely — what, exactly, could be considered ‘war’ at all?

Even American gang members are entering the fray as super-empowered individuals, leveraging social media to instigate killings via “Facebook drilling” in Chicago or “wallbanging” in Los Angeles.

illusory truth effect

When False Claims Are Repeated, We Start To Believe They Are True — Here’s How Behaving Like A Fact-Checker Can Help

September 12, 2019

This phenomenon, known as the “illusory truth effect”, is exploited by politicians and advertisers — and if you think you are immune to it, you’re probably wrong. In fact, earlier this year we reported on a study that found people are prone to the effect regardless of their particular cognitive profile.

A study in Cognition has found that using our own knowledge to fact-check a false claim can prevent us from believing it is true when it is later repeated. But we might need a bit of a nudge to get there.

The researchers found that participants who had focussed on how interesting the statements were in the first part of the study showed the illusory truth effect.

++++++++++++++
more on Fake News in this IMS blog
https://blog.stcloudstate.edu/ims?s=fake+news
