Until now, technology that readily identifies everyone based on his or her face has been taboo because of its radical erosion of privacy. Tech companies capable of releasing such a tool have refrained from doing so; in 2011, Google’s chairman at the time said it was the one technology the company had held back because it could be used “in a very bad way.” Some large cities, including San Francisco, have barred police from using facial recognition technology.
But without public scrutiny, more than 600 law enforcement agencies have started using Clearview in the past year, according to the company, which declined to provide a list.
Facial recognition technology has always been controversial. It makes people nervous about Big Brother. It has a tendency to deliver false matches for certain groups, like people of color. And some facial recognition products used by the police — including Clearview’s — haven’t been vetted by independent experts.
Clearview deployed current and former Republican officials to approach police forces, offering free trials and annual licenses for as little as $2,000. Mr. Schwartz tapped his political connections to help make government officials aware of the tool, according to Mr. Ton-That.
“We have no data to suggest this tool is accurate,” said Clare Garvie, a researcher at Georgetown University’s Center on Privacy and Technology, who has studied the government’s use of facial recognition. “The larger the database, the larger the risk of misidentification because of the doppelgänger effect. They’re talking about a massive database of random people they’ve found on the internet.”
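A back-of-the-envelope calculation makes the doppelgänger effect concrete. The sketch below is purely illustrative: both the per-comparison false-match rate and the gallery size are assumptions, not figures reported for Clearview’s system.

```python
# Illustrative arithmetic only: both constants are assumptions,
# not measured figures for any real facial recognition system.
false_match_rate = 1e-7        # assumed chance a random non-match clears the threshold
gallery_size = 3_000_000_000   # assumed size of a web-scraped photo database

# Expected number of look-alike ("doppelganger") hits per search.
expected_false_matches = false_match_rate * gallery_size

# Probability that a single search returns at least one false match.
p_at_least_one = 1 - (1 - false_match_rate) ** gallery_size

print(f"Expected false matches per search: {expected_false_matches:.0f}")
print(f"P(at least one false match): {p_at_least_one:.4f}")
```

Even with an assumed error rate of one in ten million per comparison, a search at that scale surfaces hundreds of plausible look-alikes, which is exactly the risk Garvie describes.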
Part of the problem stems from a lack of oversight. There has been no real public input into adoption of Clearview’s software, and the company’s ability to safeguard data hasn’t been tested in practice. Clearview itself remained highly secretive until late 2019.
The software also appears to explicitly violate policies at Facebook and elsewhere against collecting users’ images en masse.
While there is underlying code that could theoretically be used for augmented reality glasses that identify people on the street, Mr. Ton-That said there were no plans for such a design.
Facial recognition bans are the wrong way to fight against modern surveillance. Focusing on one particular identification method misconstrues the nature of the surveillance society we’re in the process of building. Ubiquitous mass surveillance is increasingly the norm. In countries like China, a surveillance infrastructure is being built by the government for social control. In countries like the United States, it’s being built by corporations in order to influence our buying behavior, and is incidentally used by the government.
People can be identified at a distance by their heartbeat or by their gait, using a laser-based system. Cameras are so good that they can read fingerprints and iris patterns from meters away. And even without any of these technologies, we can always be identified because our smartphones broadcast unique numbers called MAC addresses.
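As a rough illustration of that last point, the sketch below passively logs the hardware addresses that nearby devices broadcast in Wi-Fi probe requests. It assumes the scapy library and a wireless interface (here called wlan0mon, a hypothetical name) already placed in monitor mode; note that modern smartphones randomize these addresses precisely to blunt this kind of tracking.

```python
# Minimal sketch: passively log MAC addresses from Wi-Fi probe requests.
# Assumes scapy is installed and "wlan0mon" (hypothetical name) is an
# interface already in monitor mode; requires root privileges.
from scapy.all import sniff
from scapy.layers.dot11 import Dot11, Dot11ProbeReq

def log_probe(pkt):
    # Probe requests are sent in the clear; addr2 is the sender's MAC address.
    if pkt.haslayer(Dot11ProbeReq):
        print("Device seen:", pkt[Dot11].addr2)

# Runs until interrupted; store=False avoids buffering packets in memory.
sniff(iface="wlan0mon", prn=log_probe, store=False)
```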
The data broker industry is almost entirely unregulated; there’s only one law — passed in Vermont in 2018 — that requires data brokers to register and explain in broad terms what kind of data they collect.
The upside for businesses is that this new, “anonymized” video no longer gives away the exact identity of a customer—which, Perry says, means companies using D-ID can “eliminate the need for consent” and analyze the footage for business and marketing purposes. A store might, for example, feed video of a happy-looking white woman to an algorithm that can surface the most effective ad for her in real time.
Three leading European privacy experts who spoke to MIT Technology Review voiced their concerns about D-ID’s technology and its intentions. All say that, in their opinion, D-ID actually violates GDPR.
A spokesman for the platform on Thursday blamed a “human moderation error” for the removal of a video by 17-year-old Feroza Aziz that had been disguised as a makeup tutorial to avoid being censored.
Owned by the Beijing-based technology company ByteDance, TikTok is one of the few Chinese apps to have gained popularity outside of China. TikTok has said that it does not apply Chinese censorship rules to the international version of its app.
AI computing involves two phases: training and inference. Training requires computers that can process enormous amounts of data. For example, getting an AI system to recognize what’s in photographs requires a computer to sort through billions of labeled photos to create a model. That model is used in the second step to infer, or identify, what’s in a specific photo.
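The two phases can be sketched in a few lines of code. The toy model below, a nearest-centroid classifier over made-up feature vectors standing in for a real neural network, is purely illustrative of the train-then-infer split described above.

```python
import math

# --- Training phase: build a model from labeled examples. ---
# Toy stand-in for "billions of labeled photos": 2-D feature vectors per label.
labeled_data = {
    "cat": [(1.0, 1.1), (0.9, 1.0), (1.2, 0.8)],
    "dog": [(4.0, 3.9), (4.2, 4.1), (3.8, 4.0)],
}

def train(data):
    """Compute one centroid per label; the centroids are the 'model'."""
    model = {}
    for label, points in data.items():
        n = len(points)
        model[label] = (sum(p[0] for p in points) / n,
                        sum(p[1] for p in points) / n)
    return model

# --- Inference phase: apply the trained model to a new, unseen input. ---
def infer(model, point):
    """Return the label whose centroid is closest to the new point."""
    return min(model, key=lambda label: math.dist(model[label], point))

model = train(labeled_data)       # compute- and data-heavy step
print(infer(model, (1.1, 0.9)))   # cheap per-query step -> prints "cat"
```

Intel’s split between training and inference chips, described next, maps onto exactly these two steps: training demands maximum throughput over huge datasets, while inference favors low cost and power per query.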
Intel already sells its Nervana chips for training and inference to data centers packed with servers, computing infrastructure that often powers services at AI-heavy companies such as Google and Facebook. Intel is now shipping its larger, more expensive and power-hungry Nervana NNP-T chips for training and its smaller NNP-I chips for inference, the chipmaker announced.
China paradox in Australia? Take a look. Pro-China protestors in the country enjoying democratic freedoms – including speech and assembly – by harassing pro-#HongKong protestors who want to protect their own similar democratic freedoms in Hong Kong. 🤷🏽‍♂️ #HongKongProtests #antiELAB https://t.co/yDBpA5rG02
The Chinese government uses WeChat to spy on everyone who uses it, in the same way Facebook uses fake news to mobilize far-right groups. Hong Kong protestors avoid it like the plague. https://t.co/KIvK7FyUHP
New York’s Lockport City School District is using public funds from a Smart Schools bond to help pay for a reported $3.8 million security system that uses facial recognition technology to identify individuals who don’t belong on campus.
The Future of Privacy Forum (FPF), a nonprofit think tank based in Washington, D.C., published an animated video that illustrates the possible harm surveillance technology can cause to children and the steps schools should take before making any decisions, such as identifying specific goals for the technology and establishing who will have access to the data and for how long.