New York’s Lockport City School District is using public funds from a Smart Schools bond to help pay for a reported $3.8 million security system that uses facial recognition technology to identify individuals who don’t belong on campus.
The Future of Privacy Forum (FPF), a nonprofit think tank based in Washington, D.C., published an animated video that illustrates the possible harm that surveillance technology can cause to children and the steps schools should take before making any decisions, such as identifying specific goals for the technology and establishing who will have access to the data and for how long.
My note: the same considerations were relayed to the SCSU SOE dean regarding the purchase of Promethean and its installation in the SOE building without discussion with the faculty who work with technology. This information was also shared with the dean: https://blog.stcloudstate.edu/ims/2018/10/31/students-data-privacy/
The phrase “school-to-prison pipeline” has long been used to describe how schools respond to disciplinary problems with excessively stringent policies that create prison-like environments and funnel children who don’t fall in line into the criminal justice system. Now, schools are investing in surveillance systems that will likely exacerbate existing disparities.
A number of tech companies are capitalizing on the growing market for student surveillance measures as various districts and school leaders commit themselves to preventing acts of violence. Rekor Systems, for instance, recently announced the launch of OnGuard, a program that claims to “advance student safety” by implementing countless surveillance and “threat assessment” mechanisms in and around schools.
While none of these methods have been proven to be effective in deterring violence, similar systems have resulted in diverting resources away from enrichment opportunities, policing school communities to a point where students feel afraid to express themselves, and placing especially dangerous targets on students of color who are already disproportionately mislabeled and punished. (ProPublica)
Narrow AI is what we see all around us in computers today — intelligent systems that have been taught or have learned how to carry out specific tasks without being explicitly programmed how to do so.
General AI
General AI is very different and is the type of adaptable intellect found in humans, a flexible form of intelligence capable of learning how to carry out vastly different tasks, anything from haircutting to building spreadsheets or reasoning about a wide variety of topics based on its accumulated experience.
What can Narrow AI do?
There are a vast number of emerging applications for narrow AI:
Interpreting video feeds from drones carrying out visual inspections of infrastructure such as oil pipelines.
Organizing personal and business calendars.
Responding to simple customer-service queries.
Coordinating with other intelligent systems to carry out tasks like booking a hotel at a suitable time and location.
Flagging inappropriate content online (a minimal sketch of such a classifier follows this list).
Detecting wear and tear in elevators from data gathered by IoT devices.
Generating a 3D model of the world from satellite imagery… the list goes on and on.
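To make one of these concrete: the sketch below, assuming the scikit-learn library and a handful of invented training examples, shows how a narrow content-flagging task is typically learned from labeled data rather than hand-written rules.

```python
# Minimal sketch of one narrow-AI task: flagging inappropriate text.
# Assumes scikit-learn; the training examples and labels are invented
# placeholders, not a real moderation dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "great job on the group project",
    "see you at practice tomorrow",
    "you are worthless and everyone hates you",
    "I will hurt you after school",
]
train_labels = [0, 0, 1, 1]  # 0 = acceptable, 1 = flag for review

# The task is learned from labeled examples, not hand-written rules.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

print(model.predict(["everyone hates you"]))  # likely [1], i.e. flagged
```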
What can General AI do?
A survey of four groups of experts, conducted in 2012 and 2013 by AI researcher Vincent C. Müller and philosopher Nick Bostrom, reported a 50% chance that Artificial General Intelligence (AGI) would be developed between 2040 and 2050, rising to 90% by 2075.
What is machine learning?
What are neural networks?
What are other types of AI?
Another area of AI research is evolutionary computation.
What is fueling the resurgence in AI?
What are the elements of machine learning?
As mentioned, machine learning is a subset of AI and is generally split into two main categories: supervised and unsupervised learning.
Supervised learning
Unsupervised learning
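To make the distinction between the two categories concrete, here is a minimal sketch, assuming the scikit-learn library and its bundled iris dataset; the particular classifier and clustering algorithm are illustrative choices only.

```python
# Minimal sketch: supervised vs. unsupervised learning on the same data.
# Assumes scikit-learn; the specific algorithms are illustrative choices.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)

# Supervised: the model is shown measurements *and* the correct labels.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = KNeighborsClassifier().fit(X_train, y_train)
print("supervised accuracy:", clf.score(X_test, y_test))

# Unsupervised: the model only sees the measurements and must find
# structure (here, three clusters) on its own, with no labels given.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", [int((clusters == k).sum()) for k in range(3)])
```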
Which are the leading firms in AI?
Which AI services are available?
All of the major cloud platforms (Amazon Web Services, Microsoft Azure and Google Cloud Platform) provide access to GPU arrays for training and running machine-learning models, with Google also gearing up to offer users access to its Tensor Processing Units, custom chips whose design is optimized for training and running machine-learning models.
Which countries are leading the way in AI?
It’d be a big mistake to think the US tech giants have the field of AI sewn up. Chinese firms Alibaba, Baidu, and Lenovo invest heavily in AI in fields ranging from e-commerce to autonomous driving. As a country, China is pursuing a three-step plan to turn AI into a core industry, one that will be worth 150 billion yuan ($22bn) by the end of 2020, with the aim of becoming the world’s leading AI power by 2030.
How can I get started with AI?
While you could buy a moderately powerful Nvidia GPU for your PC — somewhere around the Nvidia GeForce RTX 2060 or faster — and start training a machine-learning model, probably the easiest way to experiment with AI-related services is via the cloud.
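As a small illustration of that first local step, here is a minimal sketch, assuming the PyTorch library, that picks the GPU if one is available (falling back to the CPU) and runs a short training loop; the model and the random data are placeholders.

```python
# Minimal sketch of local GPU-or-CPU training, assuming PyTorch.
# The model and the random data are placeholders for illustration.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("training on:", device)

# Toy regression problem: 1000 examples with 10 features each.
X = torch.randn(1000, 10, device=device)
y = torch.randn(1000, 1, device=device)

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1)).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

print("final loss:", loss.item())
```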
Algorithmic proctoring software has been around for several years, but its use exploded as the COVID-19 pandemic forced schools to quickly transition to remote learning. Proctoring companies cite studies estimating that between 50 and 70 percent of college students will attempt some form of cheating, and warn that cheating will be rampant if students are left unmonitored in their own homes.
Like many other tech companies, they also balk at the suggestion that they are responsible for how their software is used. While their algorithms flag behavior that the designers have deemed suspicious, these companies argue that the ultimate determination of whether cheating occurred rests in the hands of the class instructor.
As more evidence emerges about how the programs work, and fail to work, critics say the tools are bound to hurt low-income students, students with disabilities, students with children or other dependents, and other groups who already face barriers in higher education.
“Each academic department has almost complete agency to design their curriculum as far as I know, and each professor has the freedom to design their own exams and use whatever monitoring they see fit,” Rohan Singh, a computer engineering student at Michigan State University, told Motherboard.
After students approached faculty members at the University of California, Santa Barbara, the faculty association sent a letter to the school’s administration raising concerns about whether ProctorU would share student data with third parties. In response, a ProctorU attorney threatened to sue the faculty association for defamation and violating copyright law (because the association had used the company’s name and linked to its website). He also accused the faculty association of “directly impacting efforts to mitigate civil disruption across the United States” by interfering with education during a national emergency, and said he was sending his complaint to the state’s Attorney General.
Here is a link to a community discussion regarding the use of this and similar software:
Investment continues to flow to ed tech, with $803 million injected during the first six months of the year, according to the industry news website EdSurge. But half of that went to just six companies, including the celebrity tutorial provider MasterClass, the online learning platform Udemy and the school and college review site Niche.
From the outside, the ed-tech sector may appear as if “there’s a bonanza and it’s like the dot-com boom again and everybody’s printing money,” said Michael Hansen, CEO of the K-12 and higher education digital learning provider Cengage. “That is not the case.”
Even if they want to buy more ed-tech tools, meanwhile, schools and colleges are short on cash. Expenses for measures to deal with Covid-19 are up, while budgets are expected to be down.
Analysts and industry insiders now expect a wave of acquisitions as already-dominant brands like these seek to corner even more of the market by snatching up smaller players that provide services they don’t.
++++++++++++++++
Tech-based contact tracing could put schools in murky privacy territory
A white paper from the Surveillance Technology Oversight Project (STOP) suggests the use of contact tracing technology by schools could erode student privacy and may not be effective in preventing the spread of coronavirus.
Despite the pandemic, schools still must conform to the Family Educational Rights and Privacy Act (FERPA) and other laws governing student privacy. Districts can disclose information to public health officials, for example, but information can’t be released to the general public without written consent from parents.
The Safely Reopen Schools mobile app is one tool available for automating contact tracing. The idea is that if two mobile phones are close enough to connect via Bluetooth, the phone owners are close enough to transmit the virus. The app includes daily health check-ins and educational notifications, but no personal information is exchanged between the phones, and the app won’t disclose who tested positive.
Colleges are also using apps to help trace and track students’ exposure to coronavirus. In August, 20,000 participants from the University of Alabama at Birmingham were asked to test the GuideSafe mobile app, which will alert them if they’ve been in contact with someone who tested positive for COVID-19. The app determines the proximity of two people through cell phone signal strength. If someone reports they contracted the virus, an alert will be sent to anyone who has been within six feet of them for at least 15 minutes over the previous two weeks.
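As a simplified illustration (not the logic of GuideSafe or Safely Reopen Schools), the exposure rule described above can be expressed as a filter over recorded contact events; the conversion from Bluetooth signal strength to distance is treated here as a given placeholder.

```python
# Simplified sketch of the exposure rule described above: flag a contact
# if two phones were estimated to be within six feet of each other for at
# least 15 minutes at some point in the previous two weeks. This is an
# illustration only, not the implementation of any particular app.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List, Set

@dataclass
class ContactEvent:
    other_device: str        # anonymous identifier broadcast over Bluetooth
    started: datetime
    minutes: float           # how long the devices stayed in range
    est_distance_ft: float   # placeholder: derived from signal strength

def exposed(events: List[ContactEvent], now: datetime) -> Set[str]:
    """Return the anonymous IDs whose contact meets the exposure rule."""
    window_start = now - timedelta(days=14)
    return {
        e.other_device
        for e in events
        if e.started >= window_start
        and e.est_distance_ft <= 6.0
        and e.minutes >= 15.0
    }

# Example: one qualifying contact and one that is too brief.
now = datetime(2020, 9, 1, 12, 0)
log = [
    ContactEvent("device-a", now - timedelta(days=3), 22.0, 4.5),
    ContactEvent("device-b", now - timedelta(days=1), 5.0, 3.0),
]
print(exposed(log, now))  # {'device-a'}
```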
Critics of the technology claim these apps aren’t actually capable of contact tracing and could undermine manual efforts to do so.
I define digital ethics simply as “doing the right thing at the intersection of technology innovation and accepted social values.”
Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, written by Cathy O’Neil in early 2016, continues to be relevant and illuminating. O’Neil’s book revolves around her insight that “algorithms are opinions embedded in code,” in distinct contrast to the belief that algorithms are based on—and produce—indisputable facts.
Safiya Umoja Noble’s book Algorithms of Oppression: How Search Engines Reinforce Racism
The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power
Algorithmic test proctoring’s settings have discriminatory consequences across multiple identities and serious privacy implications.
While racist technology calibrated for white skin isn’t new (everything from photography to soap dispensers does this), we see it deployed through the face detection and facial recognition used by algorithmic proctoring systems.
Some test proctoring companies develop their own facial recognition software, while most purchase software developed by other companies; either way, these technologies generally function similarly and have shown a consistent inability to identify people with darker skin or even to distinguish between the faces of Chinese people. Facial recognition literally encodes the invisibility of Black people and the racist stereotype that all Asian people look the same.
As Os Keyes has demonstrated, facial recognition has a terrible history with gender. This means that a software asking students to verify their identity is compromising for students who identify as trans, non-binary, or express their gender in ways counter to cis/heteronormativity.
These features and settings create a system of asymmetric surveillance and lack of accountability, things which have always created a risk for abuse and sexual harassment. Technologies like these have a long history of being abused, largely by heterosexual men at the expense of women’s bodies, privacy, and dignity.
My note: I have been repeating this for years.
Sean Michael Morris and Jesse Stommel’s ongoing critique of Turnitin, a plagiarism detection software, outlines exactly how this logic operates in ed-tech and higher education: 1) don’t trust students, 2) surveil them, 3) ignore the complexity of writing and citation, and 4) monetize the data.
Technological Solutionism
Cheating is not a technological problem, but a social and pedagogical problem.
Our habit of believing that technology will solve pedagogical problems is endemic to narratives produced by the ed-tech community and, as Audrey Watters writes, is tied to the Silicon Valley culture that often funds it. Scholars have been dismantling the narrative of technological solutionism and neutrality for some time now. In her book “Algorithms of Oppression,” Safiya Umoja Noble demonstrates how the algorithms that are responsible for Google Search amplify and “reinforce oppressive social relationships and enact new modes of racial profiling.”
Anna Lauren Hoffmann coined the term “data violence” to describe the impact harmful technological systems have on people, and how these systems retain the appearance of objectivity despite the disproportionate harm they inflict on marginalized communities.
This system of measuring bodies and behaviors, associating certain bodies and behaviors with desirability and others with inferiority, engages in what Lennard J. Davis calls the Eugenic Gaze.
Higher education was deeply complicit in the eugenics movement. Nazism borrowed many of its ideas about racial purity from the American school of eugenics, and universities were instrumental in supporting eugenics research by publishing copious literature on it and establishing endowed professorships, institutes, and scholarly societies that spearheaded eugenic research and propaganda.
It’s safe to say that Zuckerberg’s politics are issue-specific and generally party-agnostic.
Zuckerberg dropped out of Harvard after two years. For the past decade he has instead enrolled at the University of Davos, where rich people pretend they are smart and smart people pander to the rich. If someone chooses to study world politics under Henry Kissinger, you can assume that he will have some twisted views of how the world works.
Violent deaths in schools have stayed relatively constant over the last 30 years, according to data from the National Center for Education Statistics. But then there’s the emotive reality, which is that every time another event like Sandy Hook or Parkland occurs, many educators and students feel they are in peril when they go to school.
RealNetworks is a Seattle-based software company that was popular in the 1990s for its audio and video streaming services but has since expanded to offer other tools, including SAFR (Secure, Accurate Facial Recognition), its AI-supported facial recognition software.
After installing new security cameras, purchasing a few Apple devices and upgrading the school’s Wi-Fi, St. Therese was looking at a $24,000 technology tab.
The software is programmed to allow authorized users into the building with a smile.
“Facial recognition isn’t a panacea. It is just a tool,” says Collins, who focuses on education privacy issues.
Another part of the problem with tools like SAFR is that they provide a false sense of security.