Algorithmic test proctoring’s settings have discriminatory consequences across multiple identities and serious privacy implications.
While racist technology calibrated for white skin isn’t new (everything from photography to soap dispensers does this), we see it deployed through the face detection and facial recognition used by algorithmic proctoring systems.
While some test proctoring companies develop their own facial recognition software, most purchase software developed by other companies. Either way, these technologies generally function similarly and have shown a consistent inability to identify people with darker skin, or even to distinguish between the faces of Chinese people. Facial recognition literally encodes the invisibility of Black people and the racist stereotype that all Asian people look the same.
As Os Keyes has demonstrated, facial recognition has a terrible history with gender. This means that software asking students to verify their identity is compromising for students who identify as trans or non-binary, or who express their gender in ways counter to cis/heteronormativity.
These features and settings create a system of asymmetric surveillance and a lack of accountability, conditions that have always created a risk of abuse and sexual harassment. Technologies like these have a long history of being abused, largely by heterosexual men at the expense of women’s bodies, privacy, and dignity.
my note: I have been repeating this for years.
Sean Michael Morris and Jesse Stommel’s ongoing critique of Turnitin, a plagiarism detection software, outlines exactly how this logic operates in ed-tech and higher education: 1) don’t trust students, 2) surveil them, 3) ignore the complexity of writing and citation, and 4) monetize the data.
Technological Solutionism
Cheating is not a technological problem, but a social and pedagogical problem.
Our habit of believing that technology will solve pedagogical problems is endemic to narratives produced by the ed-tech community and, as Audrey Watters writes, is tied to the Silicon Valley culture that often funds it. Scholars have been dismantling the narrative of technological solutionism and neutrality for some time now. In her book “Algorithms of Oppression,” Safiya Umoja Noble demonstrates how the algorithms that are responsible for Google Search amplify and “reinforce oppressive social relationships and enact new modes of racial profiling.”
Anna Lauren Hoffmann coined the term “data violence” to describe the impact harmful technological systems have on people, and how these systems retain an appearance of objectivity despite the disproportionate harm they inflict on marginalized communities.
This system of measuring bodies and behaviors, associating certain bodies and behaviors with desirability and others with inferiority, engages in what Lennard J. Davis calls the Eugenic Gaze.
Higher education is deeply complicit in the eugenics movement. Nazism borrowed many of its ideas about racial purity from the American school of eugenics, and universities were instrumental in supporting eugenics research by publishing copious literature on it, establishing endowed professorships, institutes, and scholarly societies that spearheaded eugenic research and propaganda.
The report comes from Project Information Literacy, a nonprofit research institution that explores how college students find, evaluate, and use information; it was commissioned by the John S. and James L. Knight Foundation and the Harvard Graduate School of Education. It draws on focus groups and interviews with 103 undergraduates and 37 faculty members from eight U.S. colleges.
To better equip students for the modern information environment, the report recommends that faculty teach algorithm literacy in their classrooms. And given students’ reliance on learning from their peers when it comes to technology, the authors also suggest that students help co-design these learning experiences.
While informed and critically aware media users may see past the content surfaced in suggestions after conducting a search on YouTube, Facebook, or Google, those without these skills, particularly young or inexperienced users, fail to recognize how the underlying algorithms produce the resulting filter bubbles and echo chambers (Cohen, 2018).
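To make that mechanism concrete, here is a minimal sketch of how an engagement-driven recommender can narrow a feed into a filter bubble. The catalog, topics, and scoring rule are invented for illustration; this is not any real platform’s algorithm.

```python
# Toy illustration of a filter bubble: every click makes similar items
# rank higher the next time, so the feed narrows around past behavior.

from collections import Counter

CATALOG = [
    {"title": "Vaccine study explained", "topics": {"science", "health"}},
    {"title": "Miracle cure exposed",     "topics": {"health", "conspiracy"}},
    {"title": "Election fraud claims",    "topics": {"politics", "conspiracy"}},
    {"title": "Local council budget",     "topics": {"politics", "civic"}},
    {"title": "New telescope images",     "topics": {"science", "space"}},
]

def recommend(click_history, k=3):
    """Rank items by how much they overlap with topics the user clicked before."""
    interest = Counter(t for item in click_history for t in item["topics"])
    return sorted(
        CATALOG,
        key=lambda item: sum(interest[t] for t in item["topics"]),
        reverse=True,
    )[:k]

# Simulate a user who always clicks the top recommendation.
history = [CATALOG[1]]             # first click: a conspiracy-adjacent health story
for _ in range(3):
    feed = recommend(history)
    print([item["title"] for item in feed])
    history.append(feed[0])        # the user clicks the top item; the bubble tightens
```

Run repeatedly, the same few topics keep rising to the top, which is the pattern an algorithm-literate user is being asked to notice.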
Media literacy education is more important than ever. It’s not just the overwhelming calls to understand the effects of fake news, or the need to address data breaches threatening personal information; it’s also the artificial intelligence systems designed to predict and project what consumers of social media are perceived to want.
it’s time to revisit the Eight Key Concepts of media literacy with an algorithmic focus.
Literacy in today’s online and offline environments “means being able to use the dominant symbol systems of the culture for personal, aesthetic, cultural, social, and political goals” (Hobbs & Jensen, 2018, p. 4).
predictive algorithms to better target students’ individual learning needs.
Personalized learning is a lofty aim, however you define it. To truly meet each student where they are, we would have to know their most intimate details, or discover them through their interactions with our digital tools. We would need to track their moods and preferences, their fears and beliefs…perhaps even their memories.
There’s something unsettling about capturing users’ most intimate details. Any prediction model based on historical records risks typecasting the very people it is intended to serve. Even if models can overcome the threat of discrimination, there is still an ethical question to confront: just how much are we entitled to know about students?
We can accept that tutoring algorithms, for all their processing power, are inherently limited in what they can account for. This means steering clear of mythical representations of what such algorithms can achieve. It may even mean giving up on personalization altogether. The alternative is to pack our algorithms to suffocation at the expense of users’ privacy. This approach does not end well.
There is only one way to resolve this trade-off: loop in the educators.
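As a toy illustration of the typecasting risk described above, the sketch below uses invented records and a deliberately crude “model” to show how a predictor built only on historical group outcomes ends up labeling new students by their group rather than by their own work. It is not any real tutoring system.

```python
# Invented data: a predictor trained only on group-level history will
# "typecast" -- it predicts the average outcome of a student's group,
# not anything about the individual student.

from collections import defaultdict

# Hypothetical historical records: (school_type, passed_course)
history = [
    ("under_resourced", False), ("under_resourced", False), ("under_resourced", True),
    ("well_resourced", True),   ("well_resourced", True),   ("well_resourced", False),
]

def train(records):
    """Estimate pass rates per group from historical records."""
    totals, passes = defaultdict(int), defaultdict(int)
    for group, passed in records:
        totals[group] += 1
        passes[group] += passed
    return {g: passes[g] / totals[g] for g in totals}

def predict(model, group):
    """Flag a student 'at risk' purely from their group's historical pass rate."""
    return "at risk" if model[group] < 0.5 else "on track"

model = train(history)
# Two brand-new students with identical work so far get different labels,
# based only on which group they belong to.
print(predict(model, "under_resourced"))  # -> "at risk"
print(predict(model, "well_resourced"))   # -> "on track"
```

Looping in educators means a label like this would be a prompt for a human conversation, not an automatic verdict.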
How do algorithms impact our browsing behavior? Our browsing history? What is the connection between social media algorithms and fake news? Are there topic-detection algorithms, just as there are community-detection ones?
How can I change the content of a [Google] search result? Can I?
Massanari, A. (2017). #Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329-346. doi:10.1177/1461444815608807
Cruz, J. D., Bothorel, C., & Poulet, F. (2014). Community detection and visualization in social networks: Integrating structural and semantic information. ACM Transactions on Intelligent Systems and Technology, 5(1), 1-26. doi:10.1145/2542182.2542193
Bai, X., Yang, P., & Shi, X. (2017). An overlapping community detection algorithm based on density peaks. Neurocomputing, 226, 7-15. doi:10.1016/j.neucom.2016.11.019
Zeng, J., & Zhang, S. (2009). Incorporating topic transition in topic detection and tracking algorithms. Expert Systems With Applications, 36(1), 227-232. doi:10.1016/j.eswa.2007.09.013
Zhou, E., Zhong, N., & Li, Y. (2014). Extracting news blog hot topics based on the W2T Methodology. World Wide Web, 17(3), 377-404. doi:10.1007/s11280-013-0207-7
The W2T (Wisdom Web of Things) methodology considers the information organization and management from the perspective of Web services, which contributes to a deep understanding of online phenomena such as users’ behaviors and comments in e-commerce platforms and online social networks. (https://link.springer.com/chapter/10.1007/978-3-319-44198-6_10)
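For readers unfamiliar with the technique named in the references above, here is a minimal sketch of community detection using the networkx library (assuming it is installed). It illustrates the general idea, not the specific algorithms in Cruz et al. or Bai et al.

```python
# Minimal community detection example on a classic small social network,
# using networkx's greedy modularity method.

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.karate_club_graph()                     # a well-known 34-person social graph
communities = greedy_modularity_communities(G)  # list of frozensets of node ids

for i, members in enumerate(communities):
    print(f"community {i}: {sorted(members)}")
```

The same intuition (grouping users who interact densely with each other) is what the social-network papers above refine; topic detection does the analogous grouping for texts rather than people.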
ethics of algorithms
Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2), 2053951716679679. https://doi.org/10.1177/2053951716679679
“YouTube, TikTok, Telegram, and Snapchat represent some of the largest and most influential platforms in the United States, and they provide almost no functional transparency into their systems. And as a result, they avoid nearly all of the scrutiny and criticism that comes with it.”
Sen. Ted Cruz expressed great confusion about why he got relatively few new Twitter followers in the days before Elon Musk said he was going to buy the platform, but then gained many more after the acquisition was announced.
The actual explanation is that Musk has lots of conservative fans, they flocked back to the platform when they heard he was buying it, and from there Twitter’s recommendation algorithms kicked into gear.
As usual, though, Europe is much further ahead of us. The Digital Services Act, which regulators reached an agreement on in April, includes provisions that would require big platforms to share data with qualified researchers. The law is expected to go into effect by next year. And so even if Congress dithers after today, transparency is coming to platforms one way or another. Here’s hoping it can begin to answer some very important questions.
Targeted advertising based on an individual’s religion, sexual orientation, or ethnicity is banned. Minors cannot be subject to targeted advertising either.
“Dark patterns” — confusing or deceptive user interfaces designed to steer users into making certain choices — will be prohibited. The EU says that, as a rule, canceling subscriptions should be as easy as signing up for them.
Large online platforms like Facebook will have to make the workings of their recommender algorithms (used for sorting content on the News Feed or suggesting TV shows on Netflix) transparent to users. Users should also be offered a recommender system “not based on profiling.” In the case of Instagram, for example, this would mean a chronological feed (as it recently introduced).
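A minimal sketch of the distinction the DSA draws, with invented posts and an invented interest profile: the same feed ranked by a profiling-based recommender versus a “not based on profiling” chronological ordering.

```python
# Invented data: contrast a profiling-based ranking with a chronological one.

from datetime import datetime

posts = [
    {"text": "Cat video",        "posted": datetime(2022, 4, 23, 9, 0),  "topics": {"pets"}},
    {"text": "Election rumor",   "posted": datetime(2022, 4, 22, 18, 0), "topics": {"politics"}},
    {"text": "Friend's holiday", "posted": datetime(2022, 4, 23, 12, 0), "topics": {"travel"}},
]

user_profile = {"politics": 0.9, "pets": 0.2, "travel": 0.1}  # inferred interests

def rank_by_profile(posts, profile):
    """Profiling-based: order posts by the user's inferred interest in their topics."""
    return sorted(posts, key=lambda p: sum(profile.get(t, 0) for t in p["topics"]), reverse=True)

def rank_chronologically(posts):
    """Non-profiling option: newest first, no personal data involved."""
    return sorted(posts, key=lambda p: p["posted"], reverse=True)

print([p["text"] for p in rank_by_profile(posts, user_profile)])   # rumor floats to the top
print([p["text"] for p in rank_chronologically(posts)])            # simple reverse-time order
```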
Hosting services and online platforms will have to explain clearly why they have removed illegal content as well as give users the ability to appeal such takedowns. The DSA itself does not define what content is illegal, though, and leaves this up to individual countries.
The largest online platforms will have to provide key data to researchers to “provide more insight into how online risks evolve.”
Online marketplaces must keep basic information about traders on their platform to track down individuals selling illegal goods or services.
Large platforms will also have to introduce new strategies for dealing with misinformation during crises (a provision inspired by the recent invasion of Ukraine).
These tech companies have lobbied hard to water down the requirements in the DSA, particularly those concerning targeted advertising and handing over data to outside researchers.
“Facebook’s business model has evolved into social engineering via psychological warfare,” she declared. “The platform weaponizes user data to fuel algorithmic manipulation in order to maximize ad sales—not just for products, but for ideas like the disinformation that led to the conspiracy theories associated with the January 6 Capitol attack.”
“One thing is clear: Facebook and the other digital platforms that rely on an extractive business model will not change on their own,” the letter states. “Congress needs to step in.”
“The secretive collection, sale, and algorithmic manipulation of our personal data by platforms like Facebook must end,” he said. “It is a primary driver of the virality of the misinformation, hate speech, and online radicalization that people across the political spectrum are worried about.”
A recent Educause study found that 63 percent of colleges and universities in the U.S. and Canada mention the use of remote proctoring on their websites.
One reason colleges are holding onto proctoring tools, Urdan adds, is that many colleges plan to expand their online course offerings even after campus activities return to normal. And the pandemic also saw rapid growth of another tech trend: students using websites to cheat on exams.