Archive of ‘privacy’ category

cookies privacy EU


No cookie consent walls — and no, scrolling isn’t consent, says EU data protection body

That’s the unambiguous message from the European Data Protection Board (EDPB), which has published updated guidelines on the rules around online consent to process people’s data.

The regional cookie wall has been crumbling for some time, as we reported last year — when the Dutch DPA clarified its guidance to ban cookie walls.

As the EDPB puts it, “actions such as scrolling or swiping through a webpage or similar user activity will not under any circumstances satisfy the requirement of a clear and affirmative action.”
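
To make the distinction concrete, here is a minimal browser-side sketch (not an official EDPB or consent-management implementation; the element IDs and the loadAnalytics() helper are assumptions) of what a “clear and affirmative action” looks like in code: tracking cookies are set only after an explicit click on an accept button, and deliberately never on scroll or swipe.

```typescript
// Minimal consent gate: analytics cookies are only set after an explicit,
// affirmative click. Element IDs and loadAnalytics() are illustrative.

function loadAnalytics(): void {
  // Reached only after explicit consent.
  document.cookie = "analytics_consent=true; max-age=31536000; SameSite=Lax";
  // ...inject the analytics/tracking script here...
}

const banner = document.getElementById("cookie-banner");

document.getElementById("accept-cookies")?.addEventListener("click", () => {
  loadAnalytics(); // the clear and affirmative action: an explicit click
  banner?.remove();
});

document.getElementById("reject-cookies")?.addEventListener("click", () => {
  banner?.remove(); // the site must keep working without tracking consent
});

// Deliberately absent: anything like the handler below, because per the
// EDPB guidelines scrolling is not consent.
// window.addEventListener("scroll", loadAnalytics);
```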
++++++++++++++++++++++
more on privacy on this IMS blog
https://blog.stcloudstate.edu/ims?s=privacy

 

AI and privacy

The Secretive Company That Might End Privacy as We Know It: it has taken 3 billion images from the internet to build an AI-driven database that allows US law enforcement agencies to identify any stranger.

Until now, technology that readily identifies everyone based on his or her face has been taboo because of its radical erosion of privacy. Tech companies capable of releasing such a tool have refrained from doing so; in 2011, Google’s chairman at the time said it was the one technology the company had held back because it could be used “in a very bad way.” Some large cities, including San Francisco, have barred police from using facial recognition technology.

But without public scrutiny, more than 600 law enforcement agencies have started using Clearview in the past year, according to the company, which declined to provide a list.

Facial recognition technology has always been controversial. It makes people nervous about Big Brother. It has a tendency to deliver false matches for certain groups, like people of color. And some facial recognition products used by the police — including Clearview’s — haven’t been vetted by independent experts.

Clearview deployed current and former Republican officials to approach police forces, offering free trials and annual licenses for as little as $2,000. Mr. Schwartz tapped his political connections to help make government officials aware of the tool, according to Mr. Ton-That.

“We have no data to suggest this tool is accurate,” said Clare Garvie, a researcher at Georgetown University’s Center on Privacy and Technology, who has studied the government’s use of facial recognition. “The larger the database, the larger the risk of misidentification because of the doppelgänger effect. They’re talking about a massive database of random people they’ve found on the internet.”

Law enforcement is using a facial recognition app with huge privacy issues: Clearview AI’s software can find matches in billions of internet images.

Part of the problem stems from a lack of oversight. There has been no real public input into adoption of Clearview’s software, and the company’s ability to safeguard data hasn’t been tested in practice. Clearview itself remained highly secretive until late 2019.

The software also appears to explicitly violate policies at Facebook and elsewhere against collecting users’ images en masse.

while there’s underlying code that could theoretically be used for augmented reality glasses that could identify people on the street, Ton-That said there were no plans for such a design.

Banning Facial Recognition Isn’t Enough

https://www.nytimes.com/2020/01/20/opinion/facial-recognition-ban-privacy.html

In May of last year, San Francisco banned facial recognition; the neighboring city of Oakland soon followed, as did Somerville and Brookline in Massachusetts (a statewide ban may follow). In December, San Diego suspended a facial recognition program in advance of a new statewide law, which declared it illegal, coming into effect. Forty major music festivals pledged not to use the technology, and activists are calling for a nationwide ban. Many Democratic presidential candidates support at least a partial ban on the technology.

facial recognition bans are the wrong way to fight against modern surveillance. Focusing on one particular identification method misconstrues the nature of the surveillance society we’re in the process of building. Ubiquitous mass surveillance is increasingly the norm. In countries like China, a surveillance infrastructure is being built by the government for social control. In countries like the United States, it’s being built by corporations in order to influence our buying behavior, and is incidentally used by the government.

People can be identified at a distance by their heartbeat or by their gait, using a laser-based system. Cameras are so good that they can read fingerprints and iris patterns from meters away. And even without any of these technologies, we can always be identified because our smartphones broadcast unique numbers called MAC addresses.

China, for example, uses multiple identification technologies to support its surveillance state.

There is a huge — and almost entirely unregulated — data broker industry in the United States that trades on our information.

This is why many companies buy license plate data from states. It’s also why companies like Google are buying health records, and part of the reason Google bought the company Fitbit, along with all of its data.

The data broker industry is almost entirely unregulated; there’s only one law — passed in Vermont in 2018 — that requires data brokers to register and explain in broad terms what kind of data they collect.

The Secretive Company That Might End Privacy as We Know It

https://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html

Until now, technology that readily identifies everyone based on his or her face has been taboo because of its radical erosion of privacy. Tech companies capable of releasing such a tool have refrained from doing so; in 2011, Google’s chairman at the time said it was the one technology the company had held back because it could be used “in a very bad way.” Some large cities, including San Francisco, have barred police from using facial recognition technology.

+++++++++++++
on social credit system in this IMS blog
https://blog.stcloudstate.edu/ims?s=social+credit

smart anonymization

This startup claims its deepfakes will protect your privacy

But some experts say that D-ID’s “smart video anonymization” technique breaks the law.

https://www.technologyreview.com/s/614983/this-startup-claims-its-deepfakes-will-protect-your-privacy/

The upside for businesses is that this new, “anonymized” video no longer gives away the exact identity of a customer—which, Perry says, means companies using D-ID can “eliminate the need for consent” and analyze the footage for business and marketing purposes. A store might, for example, feed video of a happy-looking white woman to an algorithm that can surface the most effective ad for her in real time.

Three leading European privacy experts who spoke to MIT Technology Review voiced their concerns about D-ID’s technology and its intentions. All say that, in their opinion, D-ID actually violates GDPR.

Surveillance is becoming more and more widespread. A recent Pew study found that most Americans think they’re constantly being tracked but can’t do much about it, and the facial recognition market is expected to grow from around $4.5 billion in 2018 to $9 billion by 2024. Still, the reality of surveillance isn’t keeping activists from fighting back.

++++++++++++
more on deep fake in this IMS blog
https://blog.stcloudstate.edu/ims?s=deep+fake

corporate surveillance

Behind the One-Way Mirror: A Deep Dive Into the Technology of Corporate Surveillance
BY BENNETT CYPHERS DECEMBER 2, 2019

https://www.eff.org/wp/behind-the-one-way-mirror

Corporations have built a hall of one-way mirrors: from the inside, you can see only apps, web pages, ads, and yourself reflected by social media. But in the shadows behind the glass, trackers quietly take notes on nearly everything you do. These trackers are not omniscient, but they are widespread and indiscriminate. The data they collect and derive is not perfect, but it is nevertheless extremely sensitive.

A data-snorting company can just make low bids to ensure it never wins while pocketing your data for nothing. This is a flaw in the implied deal where you trade data for benefits.
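
The mechanism behind that flaw is easiest to see in code. Below is a rough, hypothetical sketch (not any real company’s system; the field subset, price, and endpoint are assumptions) of an OpenRTB-style bid endpoint that stores the user data contained in every bid request while returning a token bid low enough that it essentially never wins, and therefore never pays.

```typescript
// Hypothetical illustration of the "always lose, keep the data" flaw in
// real-time bidding. Field names follow the OpenRTB convention.
import * as http from "http";

interface BidRequest {
  id: string;
  device?: { ip?: string; ua?: string; geo?: { lat?: number; lon?: number } };
  user?: { id?: string };
  site?: { page?: string };
}

const harvested: BidRequest[] = []; // stand-in for a profile database

http
  .createServer((req, res) => {
    let body = "";
    req.on("data", (chunk) => (body += chunk));
    req.on("end", () => {
      const bidRequest: BidRequest = JSON.parse(body);
      harvested.push(bidRequest); // the data is kept whether or not the bid wins
      res.writeHead(200, { "Content-Type": "application/json" });
      // A token price of $0.0001 CPM: effectively guaranteed to lose the auction.
      res.end(
        JSON.stringify({
          id: bidRequest.id,
          seatbid: [{ bid: [{ id: "1", impid: "1", price: 0.0001 }] }],
        })
      );
    });
  })
  .listen(8080);
```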

You can limit what you give away by blocking tracking cookies. Unfortunately, you can still be tracked by other techniques. These include web beacons, browser fingerprinting, and behavioural data such as mouse movements, pauses and clicks, or sweeps and taps.
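
For a sense of how little those other techniques need from you, here is a minimal browser-side sketch (not the EFF’s code; the /collect endpoint is an assumption) of canvas fingerprinting plus coarse behavioural tracking of mouse movement, neither of which relies on cookies at all.

```typescript
// Canvas fingerprint: draw fixed text and hash the rendered pixels.
// Differences in GPU, fonts and anti-aliasing make the hash fairly
// stable per device, with no cookie required.
async function canvasFingerprint(): Promise<string> {
  const canvas = document.createElement("canvas");
  const ctx = canvas.getContext("2d")!;
  ctx.font = "16px Arial";
  ctx.fillText("fingerprint-test", 4, 20);
  const bytes = new TextEncoder().encode(canvas.toDataURL());
  const digest = await crypto.subtle.digest("SHA-256", bytes);
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}

// Behavioural signal: sample mouse positions and periodically ship them,
// together with the fingerprint, to a hypothetical collection endpoint.
const moves: Array<[number, number, number]> = [];
document.addEventListener("mousemove", (e) =>
  moves.push([e.clientX, e.clientY, performance.now()])
);

canvasFingerprint().then((fp) => {
  setInterval(() => {
    navigator.sendBeacon("/collect", JSON.stringify({ fp, moves: moves.splice(0) }));
  }, 5000);
});
```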

The EU’s GDPR (General Data Protection Regulation) was a baby step in the right direction. Behind the One-Way Mirror also mentions Vermont’s data privacy law, the Illinois Biometric Information Privacy Act (BIPA) and next year’s California Consumer Privacy Act (CCPA).

Tor, the original anti-surveillance browser, is based on an old, heavily modified version of Firefox.

Most other browsers are now, like Chrome, based on Google’s open source Chromium. Once enough web developers started coding for Chrome instead of for open standards, it became arduous and expensive to sustain alternative browser engines. Chromium-based browsers now include Opera, Vivaldi, Brave, the Epic Privacy Browser and next year’s new Microsoft Edge.

+++++++++++++
more on surveillance in this IMS blog
https://blog.stcloudstate.edu/ims?s=surveillance

Lynda.com LinkedIn Microsoft Privacy

https://www.edsurge.com/news/2019-07-23-as-linkedin-learning-subsumes-lynda-com-library-groups-raise-privacy-concerns

The American Library Association said in a statement Monday that the planned changes to Lynda.com, which are slated to happen by the end of September 2019, “would significantly impair library users’ privacy rights.” That same day, the California State Library recommended that its users discontinue Lynda.com when it fully merges with LinkedIn Learning if it institutes the changes.

The library groups argue that by requiring users to create LinkedIn accounts to watch Lynda videos, the company is going from following best practices about privacy and identity protection to potentially asking libraries to violate a range of ethics codes they have pledged to uphold. The ALA’s Library Bill of Rights, for instance, states that: “All people, regardless of origin, age, background, or views, possess a right to privacy and confidentiality in their library use. Libraries should advocate for, educate about, and protect people’s privacy, safeguarding all library use data, including personally identifiable information.”

The change will not impact most colleges and university libraries or corporate users of Lynda.com services, who will not be required to force users to set up a LinkedIn profile. LinkedIn officials say that’s because colleges and corporations have more robust ways to identify users than public libraries do.

LinkedIn acquired Lynda.com in 2015 for $1.5 billion. The following June, Microsoft bought LinkedIn for $26.2 billion, the company’s largest-ever acquisition.

++++++++++++
more on privacy in this IMS blog
https://blog.stcloudstate.edu/ims?s=privacy

Blurred Lines Between Security Surveillance and Privacy

Edtech’s Blurred Lines Between Security, Surveillance and Privacy

By Tony Wan     Mar 5, 2019

https://www.edsurge.com/news/2019-03-05-edtech-s-blurred-lines-between-security-surveillance-and-privacy

Tony Wan, Bill Fitzgerald, Courtney Goodsell, Doug Levin, Stephanie Cerda

SXSW EDU https://schedule.sxswedu.com/

privacy advocates joined a school administrator and a school safety software product manager to offer their perspectives.

Navigating that fine line between ensuring security and privacy is especially tricky, as it concerns newer surveillance technologies available to schools. Last year, RealNetworks, a Seattle-based company, offered its facial recognition software to schools, and a few have pioneered the tool. https://blog.stcloudstate.edu/ims/2019/02/02/facial-recognition-technology-in-schools/

The increasing availability of these kinds of tools raises concerns and questions for Doug Levin, founder of EdTech Strategies. Facial-recognition police tools have been decried as “staggeringly inaccurate.”

School web filters can also impact low-income families inequitably, he adds, especially those that use school-issued devices at home. #equity.

Social-Emotional Learning: The New Surveillance?

Using data to profile students—even in attempts to reinforce positive behaviors—has Cerda concerned, especially in schools serving diverse demographics. #equity.

As in the insurance industry, much of the impetus (and sales pitches) in the school and online safety market can be driven by fear. But voicing such concerns and red flags can also steer the stakeholders toward dialogue and collaboration.

+++++++++++++
more in this IMS blog on
https://blog.stcloudstate.edu/ims?s=privacy
https://blog.stcloudstate.edu/ims?s=surveillance
https://blog.stcloudstate.edu/ims?s=security

Facebook’s Content Moderators

Propaganda, Hate Speech, Violence: The Working Lives Of Facebook’s Content Moderators

https://www.npr.org/2019/03/02/699663284/the-working-lives-of-facebooks-content-moderators

In a recent article for The Verge titled “The Trauma Floor: The secret lives of Facebook moderators in America,” a dozen current and former employees of one of the company’s contractors, Cognizant, talked to Newton about the mental health costs of spending hour after hour monitoring graphic content.

Perhaps the most surprising find from his investigation, the reporter said, was how the majority of the employees he talked to started to believe some of the conspiracy theories they reviewed.

 

++++++++++++++++
more on Facebook in this iMS blog
https://blog.stcloudstate.edu/ims?s=Facebook+privacy

 

Cybersecurity Risks in schools

FBI Warns Educators and Parents About Edtech’s Cybersecurity Risks

By Tina Nazerian     Sep 14, 2018

https://www.edsurge.com/news/2018-09-14-fbi-warns-educators-and-parents-about-edtech-s-cybersecurity-risks

The FBI has released a public service announcement warning educators and parents that edtech can create cybersecurity risks for students.

In April 2017, security researchers found a flaw in Schoolzilla’s data configuration settings. And in May 2017, a hacker reportedly stole 77 million user accounts from Edmodo.

Amelia Vance, the director of the Education Privacy Project at the Future of Privacy Forum, writes in an email to EdSurge that the FBI likely wanted to make sure that as the new school year starts, parents and schools are aware of potential security risks. And while she thinks it’s “great” that the FBI is bringing more attention to this issue, she wishes the public service announcement had also addressed another crucial challenge.

“Schools across the country lack funding to provide and maintain adequate security,” she writes. “Now that the FBI has focused attention on these concerns, policymakers must step up and fund impactful security programs.”

According to Vance, a better approach might involve encouraging parents to have conversations with their children’s school about how it keeps student data safe.

++++++++++
more on cybersecurity in this IMS blog
https://blog.stcloudstate.edu/ims?s=cybersecurity

AI tracks students’ writing

Schools are using AI to track what students write on their computers

By Simone Stolzoff August 19, 2018
There are about 50 million K-12 students in the US.
Under the Children’s Internet Protection Act (CIPA), any US school that receives federal funding is required to have an internet-safety policy. As school-issued tablets and Chromebook laptops become more commonplace, schools must install technological guardrails to keep their students safe. For some, this simply means blocking inappropriate websites. Others, however, have turned to software companies like Gaggle, Securly, and GoGuardian to surface potentially worrisome communications to school administrators.
In an age of mass school shootings and increased student suicides, safety management platforms (SMPs) can play a vital role in preventing harm before it happens. Each of these companies has case studies where an intercepted message helped save lives.
Over 50% of teachers say their schools are one-to-one (the industry term for assigning every student a device of their own), according to a 2017 survey from Freckle Education.
But even in an age of student suicides and school shootings, when do security precautions start to infringe on students’ freedoms?
When the Gaggle algorithm surfaces a word or phrase that may be of concern—like a mention of drugs or signs of cyberbullying—the “incident” gets sent to human reviewers before being passed on to the school. Using AI, the software is able to process thousands of student tweets, posts, and status updates to look for signs of harm.
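A rough sketch of that flag-then-review flow is below (this is not Gaggle’s actual system; the watch list, types, and escalation callback are illustrative assumptions): matches are queued for a human reviewer, and only reviewed incidents reach the school.

```typescript
// Illustrative flag-and-review pipeline: scan student text for terms of
// concern, queue matches for human review, and only then notify the school.
interface Incident {
  studentId: string;
  text: string;
  matchedTerms: string[];
}

const WATCH_TERMS = ["hurt myself", "bring a gun", "buy drugs"]; // illustrative only
const reviewQueue: Incident[] = [];

function scanMessage(studentId: string, text: string): void {
  const matched = WATCH_TERMS.filter((t) => text.toLowerCase().includes(t));
  if (matched.length > 0) {
    // Nothing goes to the school yet; a person looks at it first.
    reviewQueue.push({ studentId, text, matchedTerms: matched });
  }
}

function reviewNext(escalate: (incident: Incident) => void): void {
  const incident = reviewQueue.shift();
  if (!incident) return;
  // A human reviewer decides whether this is a real concern or a false positive.
  escalate(incident);
}

// Usage sketch:
scanMessage("student-123", "thinking about how to hurt myself");
reviewNext((i) => console.log("notify school administrator:", i.matchedTerms));
```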
SMPs help normalize surveillance from a young age. In the wake of the Cambridge Analytica scandal at Facebook and other recent data breaches from companies like Equifax, we have the opportunity to teach kids the importance of protecting their online data.
In an age of increased school violence, bullying, and depression, schools have an obligation to protect their students. But the protection of kids’ personal information is also a matter of their safety.

+++++++++
more on cybersecurity in this IMS blog
https://blog.stcloudstate.edu/ims?s=cybersecurity

more on surveillance  in this IMS blog
https://blog.stcloudstate.edu/ims?s=surveillance

more on privacy in this IMS blog
https://blog.stcloudstate.edu/ims?s=privacy

thermal imaging

***** thank you Tirthankar ! ******* : https://www.linkedin.com/feed/update/urn:li:activity:6424443573785235456

Recovering Keyboard Inputs through Thermal Imaging

https://www.schneier.com/blog/archives/2018/07/recovering_keyb.html

Researchers at the University of California, Irvine, are able to recover user passwords by way of thermal imaging. The tech is pretty straightforward, but it’s interesting to think about the types of scenarios in which it might be pulled off.

+++++++++++++++
more on cybersecurity in this IMS blog
https://blog.stcloudstate.edu/ims?s=cybersecurity
