Last year, researchers at Oxford University found that 70 countries had run political disinformation campaigns over the preceding two years.
Perhaps the most notable of such campaigns was that initiated by a Russian propaganda group to influence the 2016 US election result.
The US Federal Communications Commission opened a public comment period in 2017 on its plan to repeal net neutrality. Harvard Kennedy School lecturer Bruce Schneier notes that while the agency received 22 million comments, many of them were submitted under fake identities.
Schneier argues that the escalating prevalence of computer-generated personas could “starve” people of democracy.
The report comes from Project Information Literacy, a nonprofit research institution that explores how college students find, evaluate, and use information; it was commissioned by the John S. and James L. Knight Foundation and the Harvard Graduate School of Education.
The research draws on focus groups and interviews with 103 undergraduates and 37 faculty members from eight U.S. colleges.
To better equip students for the modern information environment, the report recommends that faculty teach algorithm literacy in their classrooms. And given students’ reliance on learning from their peers when it comes to technology, the authors also suggest that students help co-design these learning experiences.
While informed and critically aware media users may see past the content suggested to them after a search on YouTube, Facebook, or Google, those without these skills, particularly young or inexperienced users, fail to recognize the role the underlying algorithms play in producing filter bubbles and echo chambers (Cohen, 2018).
Media literacy education is more important than ever. It is not just the overwhelming calls to understand the effects of fake news, or the data breaches threatening personal information; it is also the artificial intelligence systems designed to predict and serve up whatever consumers of social media are perceived to want.
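As a purely illustrative sketch (the item topics, function names, and scoring rule here are invented, not how any real platform works), the toy recommender below ranks items by how closely they match what a user has already clicked on. That feedback loop is the mechanism behind filter bubbles:

```python
# A deliberately simplified, hypothetical recommender: it ranks items by how
# often the user has already clicked on their topic, so the feed narrows
# toward previously expressed interests (a "filter bubble").
from collections import Counter

ITEMS = [
    {"id": 1, "topic": "politics"},
    {"id": 2, "topic": "sports"},
    {"id": 3, "topic": "politics"},
    {"id": 4, "topic": "science"},
]

def recommend(click_history, items, k=2):
    """Return the k items whose topics appear most often in the click history."""
    topic_counts = Counter(item["topic"] for item in click_history)
    ranked = sorted(items, key=lambda item: topic_counts[item["topic"]], reverse=True)
    return ranked[:k]

# A user who has only ever clicked on politics stories keeps getting politics stories.
history = [{"id": 1, "topic": "politics"}, {"id": 3, "topic": "politics"}]
print([item["id"] for item in recommend(history, ITEMS)])  # [1, 3]
```

Real recommendation systems are far more sophisticated, but the loop is the same: yesterday's engagement shapes today's feed.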
It’s time to revisit the Eight Key Concepts of media literacy with an algorithmic focus.
Literacy in today’s online and offline environments “means being able to use the dominant symbol systems of the culture for personal, aesthetic, cultural, social, and political goals” (Hobbs & Jensen, 2018, p 4).
Until now, technology that readily identifies everyone based on his or her face has been taboo because of its radical erosion of privacy. Tech companies capable of releasing such a tool have refrained from doing so; in 2011, Google’s chairman at the time said it was the one technology the company had held back because it could be used “in a very bad way.” Some large cities, including San Francisco, have barred police from using facial recognition technology.
But without public scrutiny, more than 600 law enforcement agencies have started using Clearview in the past year, according to the company, which declined to provide a list.
Facial recognition technology has always been controversial. It makes people nervous about Big Brother. It has a tendency to deliver false matches for certain groups, like people of color. And some facial recognition products used by the police — including Clearview’s — haven’t been vetted by independent experts.
Clearview deployed current and former Republican officials to approach police forces, offering free trials and annual licenses for as little as $2,000. Mr. Schwartz tapped his political connections to help make government officials aware of the tool, according to Mr. Ton-That.
“We have no data to suggest this tool is accurate,” said Clare Garvie, a researcher at Georgetown University’s Center on Privacy and Technology, who has studied the government’s use of facial recognition. “The larger the database, the larger the risk of misidentification because of the doppelgänger effect. They’re talking about a massive database of random people they’ve found on the internet.”
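Garvie's warning about database size can be made concrete with a back-of-the-envelope calculation. Assuming, purely for illustration, a fixed false-match rate for each one-to-one comparison, the probability of at least one false match in a search grows rapidly with the number of faces in the database:

```python
# Illustrative only: the per-comparison false-match rate below is made up.
# The chance of at least one false match in a search against n faces is
# 1 - (1 - p)^n.
def false_match_probability(n_faces, per_comparison_rate=1e-6):
    return 1 - (1 - per_comparison_rate) ** n_faces

for n in (10_000, 1_000_000, 1_000_000_000):  # hypothetical database sizes
    print(f"{n:>13,} faces -> {false_match_probability(n):.1%} chance of a false match")
```

Under that made-up error rate, some false match becomes nearly certain once the database holds a billion faces: this is the doppelgänger effect at scale.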
Part of the problem stems from a lack of oversight. There has been no real public input into adoption of Clearview’s software, and the company’s ability to safeguard data hasn’t been tested in practice. Clearview itself remained highly secretive until late 2019.
The software also appears to explicitly violate policies at Facebook and elsewhere against collecting users’ images en masse.
While there is underlying code that could theoretically be used for augmented reality glasses capable of identifying people on the street, Ton-That said there were no plans for such a design.
In May of last year, San Francisco banned facial recognition; the neighboring city of Oakland soon followed, as did Somerville and Brookline in Massachusetts (a statewide ban may follow). In December, San Diego suspended a facial recognition program ahead of a new statewide law declaring it illegal. Forty major music festivals pledged not to use the technology, and activists are calling for a nationwide ban. Many Democratic presidential candidates support at least a partial ban on the technology.
Facial recognition bans are the wrong way to fight against modern surveillance. Focusing on one particular identification method misconstrues the nature of the surveillance society we’re in the process of building. Ubiquitous mass surveillance is increasingly the norm. In countries like China, a surveillance infrastructure is being built by the government for social control. In countries like the United States, it’s being built by corporations in order to influence our buying behavior, and is incidentally used by the government.
People can be identified at a distance by their heartbeat or by their gait, using a laser-based system. Cameras are so good that they can read fingerprints and iris patterns from meters away. And even without any of these technologies, we can always be identified because our smartphones broadcast unique numbers called MAC addresses.
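To make that last point concrete, here is a minimal sketch of how broadcast MAC addresses can be collected passively. It assumes a Wi-Fi adapter already in monitor mode (the interface name "mon0" is a placeholder) and the third-party scapy library; it is a sketch of the mechanism, not a description of any particular tracking product:

```python
# Sketch only: requires a Wi-Fi interface in monitor mode ("mon0" is a
# placeholder) and the third-party scapy library (pip install scapy).
from scapy.all import sniff
from scapy.layers.dot11 import Dot11, Dot11ProbeReq

seen = set()

def log_probe(pkt):
    # Probe requests are sent by phones looking for known networks; the
    # frame's source address (addr2) is the device's MAC address.
    if pkt.haslayer(Dot11ProbeReq):
        mac = pkt[Dot11].addr2
        if mac and mac not in seen:
            seen.add(mac)
            print("device seen:", mac)

sniff(iface="mon0", prn=log_probe, store=False)
```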
The data broker industry is almost entirely unregulated; there’s only one law — passed in Vermont in 2018 — that requires data brokers to register and explain in broad terms what kind of data they collect.
PISA scores were recently released, and results of the international test revealed that only 14 percent of U.S. students were able to reliably distinguish between fact and opinion.
Even on seemingly serious websites, credibility is not a given. When I was in middle and high school, we were taught that we could trust .org websites. Now, with the practice of astroturfing, responsible consumers of information must dig deeper and go further to verify the legitimacy of information (see https://www.merriam-webster.com/dictionary/astroturfing).
Experiences like these, where students are challenged to consider the validity of information and sort what’s real from what’s fake, would better prepare them not only to be savvier consumers of news, but also to someday digest contradictory information to make complicated decisions about their own health care, finances or civic engagement.
There are freely available resources to help educators teach students how to vet information and think critically about real-world topics.
The service can be used for a variety of functions at schools and colleges, including verifying credentials, tracking donations and payments, and handling other student records.
A K-6 educational app called SpoonRead
Blockchain is a decentralized system in which every record is cryptographically linked to the one before it and transparent to participants, and any alteration leaves a trail that supposedly can’t be hidden.
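As a rough sketch of that idea (a toy example, not any vendor's implementation), the records below are chained together by hashes, so altering an earlier record breaks every link that follows it:

```python
# Minimal, illustrative hash-linked chain of records (not a real blockchain:
# no network, no consensus). Each block stores the hash of the previous block,
# so editing an earlier record invalidates everything after it.
import hashlib
import json

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_record(chain, record):
    previous = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"record": record, "prev_hash": previous})

def verify(chain):
    """Return True if every block still points at the correct previous hash."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain = []
add_record(chain, {"student": "A. Lovelace", "credential": "B.A. Mathematics"})
add_record(chain, {"student": "A. Turing", "credential": "B.A. Mathematics"})
print(verify(chain))                         # True
chain[0]["record"]["credential"] = "Ph.D."   # tamper with an earlier record
print(verify(chain))                         # False: the alteration leaves a trail
```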
Some have questioned whether there is a need for blockchain in student records, considering that other kinds of encryption techniques already exist to protect and verify things like credentials.
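To illustrate the skeptics' point, one such existing technique is an ordinary digital signature: the issuing institution signs a credential, and anyone holding the institution's public key can verify it, with no blockchain involved. A minimal sketch, assuming the third-party cryptography package and invented example data:

```python
# Sketch of credential verification with a plain digital signature
# (requires the third-party "cryptography" package: pip install cryptography).
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The issuing institution holds the private key; the public key is published.
issuer_key = Ed25519PrivateKey.generate()
public_key = issuer_key.public_key()

credential = b'{"student": "A. Lovelace", "degree": "B.A. Mathematics"}'
signature = issuer_key.sign(credential)

# An employer or another school checks the credential against the public key.
try:
    public_key.verify(signature, credential)
    print("credential is authentic")
except InvalidSignature:
    print("credential was altered or forged")
```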
A 2016 Pew Research Center study indicates that the digital divide in the United States is not solely about access to technology; it is also about the ability to use technology to get what we need.1 What does digital readiness mean? Applying cumulative knowledge to real-world situations. Having a tech or STEM-related degree does not ensure digital readiness.
How Can We Encourage Digital Agility in the Liberal Arts?
Digital pedagogy often gives instructors opportunities to create non-disposable assignments: assignments that are not designed to be thrown away but rather serve a purpose beyond merely fulfilling a requirement.3
“We need to marry the best of our academic work with the best of edtech. In other words, what would it look like if education technology were embedded in the everyday practice of academic disciplines?”4
Project-based learning fits well within the curricular flexibility of the liberal arts. In project-based work, students apply what they are learning in the context of an engaging experience.
Building off frameworks that are already in place, like the Association of College and Research Libraries (ACRL) Framework for Information Literacy,
External-facing work offers students real situations in which they have to adjust to new and unfamiliar digital environments and approaches, which is exactly what digital agility looks like in practice.
Reflection provides a way for meaning-making to happen across individual assignments, projects, and classes. Without the chance to assemble assignments into a larger narrative, each experience lives in its own void.
How Can Institutions Build Systems-Level Support?
Liberal arts colleges in particular are interested in the ways they prepare graduates to be agile and critical in a digital world—as seen in the Association of American Colleges & Universities (AAC&U) Valid Assessment of Learning in Undergraduate Education (VALUE) Rubrics.
The Bryn Mawr Digital Competencies Framework5 was followed by more formal conversations and the formation of a working group (including Carleton College,
Some news organisations, including the BBC, New York Times and Buzzfeed have made their own “deepfake” videos, ostensibly to spread awareness about the techniques. Those videos, while of varying quality, have all contained clear statements that they are fake.
1. Share your ideas and practice of badge distribution and/or microcredentialing
2. What is a digital badge/microcredential?
3. How do you create and award D2L digital badges for your class?
4. How do you motivate students to earn digital badges?
5. How does it align with COSE’s strategic plan 2022/Husky Compact?
What we hope to achieve
• Create a community of digital badgers
• Catalyze professional development opportunities for faculty/staff