Artificial intelligence (AI) and machine learning are no longer fantastical prospects seen only in science fiction. Products like Amazon Echo and Siri have brought AI into many homes. Yet Kelly Calhoun Williams, an education analyst for the technology research firm Gartner Inc., cautions that there is a clear gap between the promise of AI and the reality of AI.
Artificial intelligence is a broad term used to describe any technology that emulates human intelligence, such as by understanding complex information, drawing its own conclusions and engaging in natural dialog with people.
Machine learning is a subset of AI in which the software can learn or adapt like a human can. Essentially, it analyzes huge amounts of data and looks for patterns in order to classify information or make predictions. The addition of a feedback loop allows the software to “learn” as it goes by modifying its approach based on whether the conclusions it draws are right or wrong.
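The feedback loop described above can be sketched in a few lines. This is a minimal, illustrative perceptron-style learner (the data and labels are invented): it guesses, checks whether the guess was right or wrong, and nudges its weights accordingly.

```python
# Minimal sketch of "learning from feedback": a perceptron-style
# classifier that adjusts its weights whenever its prediction is wrong.

def train(samples, labels, epochs=20, lr=0.1):
    weights = [0.0] * len(samples[0])
    bias = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            score = sum(w * xi for w, xi in zip(weights, x)) + bias
            prediction = 1 if score > 0 else 0
            error = y - prediction  # the feedback: 0 if right, +/-1 if wrong
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

def predict(weights, bias, x):
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

# Toy data: the classifier discovers the pattern (logical AND) on its own.
data = [[0, 0], [0, 1], [1, 0], [1, 1]]
labels = [0, 0, 0, 1]
w, b = train(data, labels)
print([predict(w, b, x) for x in data])  # -> [0, 0, 0, 1]
```

Real systems work on millions of examples and far richer models, but the loop is the same: predict, compare against the right answer, adjust.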
AI can process far more information than a human can, and it can perform tasks much faster and with more accuracy. Some curriculum software developers have begun harnessing these capabilities to create programs that can adapt to each student’s unique circumstances.
GoGuardian, a Los Angeles company, uses machine learning technology to improve the accuracy of its cloud-based Internet filtering and monitoring software for Chromebooks. (My note: that smells of Big Brother.) Instead of blocking students’ access to questionable material based on a website’s address or domain name, GoGuardian’s software uses AI to analyze the actual content of a page in real time to determine whether it’s appropriate for students. (My note: privacy.)
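The difference between domain blocking and content analysis can be shown with a toy sketch. This is not GoGuardian’s method; the word list and threshold are invented for illustration, and a real system would use a trained model rather than hand-made lists.

```python
# Hedged sketch of content-based filtering (vs. URL/domain blocking):
# score the words on the page itself instead of trusting the domain name.

BLOCK_TERMS = {"casino", "betting", "jackpot"}  # hypothetical category list

def page_is_appropriate(page_text, threshold=2):
    words = page_text.lower().split()
    hits = sum(1 for w in words if w.strip(".,!?:") in BLOCK_TERMS)
    return hits < threshold

print(page_is_appropriate("Learn algebra with free worksheets"))    # True
print(page_is_appropriate("Casino betting tips: hit the jackpot"))  # False
```

The point is that the decision depends on what the page says right now, not on a static blocklist of addresses.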
This raises serious privacy concerns. It requires an increased focus not only on data quality and accuracy, but also on the responsible stewardship of this information. “School leaders need to get ready for AI from a policy standpoint,” Calhoun Williams said. For instance: what steps will administrators take to secure student data and ensure the privacy of this information?
Under the Children’s Internet Protection Act (CIPA), any US school that receives federal funding is required to have an internet-safety policy. As school-issued tablets and Chromebook laptops become more commonplace, schools must install technological guardrails to keep their students safe. For some, this simply means blocking inappropriate websites. Others, however, have turned to software companies like Gaggle, Securly, and GoGuardian to surface potentially worrisome communications to school administrators.
Over 50% of teachers say their schools are one-to-one (the industry term for assigning every student a device of their own), according to a 2017 survey from Freckle Education.
But even in an age of student suicides and school shootings, when do security precautions start to infringe on students’ freedoms?
When the Gaggle algorithm surfaces a word or phrase that may be of concern—like a mention of drugs or signs of cyberbullying—the “incident” gets sent to human reviewers before being passed on to the school. Using AI, the software is able to process thousands of student tweets, posts, and status updates to look for signs of harm.
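The flag-then-review pipeline described above can be sketched as follows. The watch terms and the reviewer logic are invented for illustration; the structural point is that software only surfaces candidates, and a human decides what reaches the school.

```python
# Minimal sketch of a flag-then-review pipeline: software surfaces
# possible "incidents"; a human reviewer confirms before escalation.

WATCH_TERMS = {"drugs", "hurt myself", "hate you"}  # hypothetical list

def surface_incidents(posts):
    """Return posts containing a watched term, queued for human review."""
    return [p for p in posts if any(t in p.lower() for t in WATCH_TERMS)]

def human_review(incidents, approve):
    """Only incidents a reviewer confirms are passed on to the school."""
    return [p for p in incidents if approve(p)]

posts = [
    "great game last night!",
    "i want to hurt myself",
    "the war on drugs documentary was interesting",
]
flagged = surface_incidents(posts)  # two posts surfaced, one a false positive
escalated = human_review(flagged, approve=lambda p: "hurt myself" in p)
print(flagged)
print(escalated)  # only the confirmed incident
```

Note how the documentary post is flagged but filtered out at review: keyword matching alone produces false positives, which is exactly why the human step exists.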
SMPs help normalize surveillance from a young age. In the wake of the Cambridge Analytica scandal at Facebook and other recent data breaches from companies like Equifax, we have the opportunity to teach kids the importance of protecting their online data.
In an age of increased school violence, bullying, and depression, schools have an obligation to protect their students. But the protection of kids’ personal information is also a matter of their safety.
#FakeNews is a very timely and controversial issue. In 2-3 minutes, choose your best source on this issue. 1. Mind the prevalence of resources in the 21st century. 2. Mind the necessity to evaluate a) the veracity of your sources and b) the quality of your sources (the fact that they are “true” does not mean that they are the best). Be prepared to name your source and defend its quality.
How do you determine your sources? How do you decide the reliability of your sources? Are you sure you can distinguish “good” from “bad?”
Compare this entry: https://en.wikipedia.org/wiki/List_of_fake_news_websites
to this entry: https://docs.google.com/document/d/10eA5-mCZLSS4MQY5QGb5ewC3VAL6pLkT53V_81ZyitM/preview to understand the scope of the problem.
What is social media (examples)? Why is it called SM? Why is it so popular? What makes it so popular?
Use SM tools for your research and education:
– Determining your topic. How to?
Digg http://digg.com/, Reddit https://www.reddit.com/ , Quora https://www.quora.com
Facebook, Twitter – hashtags (class assignment 2-3 min to search)
YouTube and Slideshare (class assignment 2-3 min to search)
Flickr, Instagram, Pinterest for visual aids (like YouTube they are media repositories)
The European Union’s General Data Protection Regulation, or GDPR, goes into effect on May 25, 2018.
The objective of the regulation, which passed in 2016, is to simplify and consolidate the rules companies must follow to protect the personal data they hold, and to return control over that information to EU citizens and residents.
Individuals in the EU will have the right to access their data, or to request that companies erase it or migrate it elsewhere. When asked, companies must prove to authorities that they have satisfactory policies and procedures in place to protect their data, or they will face huge fines. How huge? If your company’s not compliant, the fines could be as large as 20 million euros (about $24 million) or four percent of your annual global revenue, whichever is higher.
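The "whichever is higher" rule is worth making concrete. A two-line sketch (the revenue figures are made up):

```python
# The fine rule quoted above: 20 million euros or 4% of annual global
# revenue, whichever is higher.

def max_gdpr_fine(annual_global_revenue_eur):
    return max(20_000_000, 0.04 * annual_global_revenue_eur)

print(max_gdpr_fine(100_000_000))    # smaller firm: the flat 20M floor applies
print(max_gdpr_fine(2_000_000_000))  # large firm: 4% = 80M exceeds the floor
```

For a company with 100 million euros in revenue, 4% would be only 4 million, so the 20 million floor applies; for a 2 billion euro company, the 4% share (80 million) dominates.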
“A U.S. tourist who visits Germany for one day and returns to the U.S. has rights under the law if that person used [a service like] Facebook while on the trip,” Alex Stern, an attorney, wrote on his firm’s blog.
Apps like WhatsApp, Facebook, Snapchat, Instagram, Twitter, LinkedIn, Viber
Felix Krause described in 2017 that when a user grants an app access to their camera and microphone, the app could do the following:
Access both the front and the back camera.
Record you at any time the app is in the foreground.
Take pictures and videos without telling you.
Upload the pictures and videos without telling you.
Upload the pictures/videos it takes immediately.
Run real-time face recognition to detect facial features or expressions.
Livestream the camera onto the internet.
Detect if the user is on their phone alone, or watching together with a second person.
Upload random frames of the video stream to your web service, run proper face recognition software that can find existing photos of you on the internet, and create a 3D model based on your face.
For instance, here’s a Find my Phone application which a documentary maker installed on a phone, then let someone steal it. After the person stole it, the original owner spied on every moment of the thief’s life through the phone’s camera and microphone.
Edward Snowden revealed an NSA program called Optic Nerves. The operation was a bulk surveillance program under which they captured webcam images every five minutes from Yahoo users’ video chats and then stored them for future use. It is estimated that between 3% and 11% of the images captured contained “undesirable nudity”.
Hackers can also gain access to your device with extraordinary ease via apps, PDF files, multimedia messages and even emojis.
An application called Metasploit on the ethical hacking platform Kali uses an exploit in Adobe Reader 9 (a version over 60% of users reportedly still run) to open a listener (a backdoor) on the user’s computer. You alter the PDF with the program, send the user the malicious file, they open it, and hey presto – you have total control over their device remotely.
Once a user opens this PDF file, the hacker can then:
Install whatever software/app they like on the user’s device.
Use a keylogger to grab all of their passwords.
Steal all documents from the device.
Take pictures and stream videos from their camera.
Capture past or live audio from the microphone.
Upload incriminating images/documents to their PC, and notify the police.
And if it’s not enough that your phone is tracking you, surveillance cameras in shops and on streets are tracking you, too.
You might even be on this website, InSeCam, which allows ordinary people online to watch surveillance cameras free of charge. It even allows you to search cameras by location, city, time zone, device manufacturer, and specify whether you want to see a kitchen, bar, restaurant or bedroom.
Until recently, broadcasting and publishing were difficult and expensive affairs, their infrastructures riddled with bottlenecks and concentrated in a few hands.
When protests broke out in Ferguson, Missouri, in August 2014, a single livestreamer named Mustafa Hussein reportedly garnered an audience comparable in size to CNN’s for a short while. If a Bosnian Croat war criminal drinks poison in a courtroom, all of Twitter knows about it in minutes.
In today’s networked environment, when anyone can broadcast live or post their thoughts to a social network, it would seem that censorship ought to be impossible. This should be the golden age of free speech.
And sure, it is a golden age of free speech—if you can believe your lying eyes. Is that footage you’re watching real? Was it really filmed where and when it says it was? Is it being shared by alt-right trolls or a swarm of Russian bots? My note: see the ability to create fake audio and video footage: http://blog.stcloudstate.edu/ims/2017/07/15/fake-news-and-video/
Here’s how this golden age of speech actually works: In the 21st century, the capacity to spread ideas and reach an audience is no longer limited by access to expensive, centralized broadcasting infrastructure. It’s limited instead by one’s ability to garner and distribute attention. And right now, the flow of the world’s attention is structured, to a vast and overwhelming degree, by just a few digital platforms: Facebook, Google (which owns YouTube), and, to a lesser extent, Twitter.
At their core, their business is mundane: they’re ad brokers.
They use massive surveillance of our behavior, online and off, to generate increasingly accurate, automated predictions of what advertisements we are most susceptible to and what content will keep us clicking, tapping, and scrolling down a bottomless feed.
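The prediction-driven feed can be sketched at toy scale. The features and weights below are invented; real platforms use large trained models over far richer behavioral data, but the shape is the same: score each item by predicted engagement, then show the highest scorers first.

```python
# Hedged sketch of engagement-ranked feeds: a logistic score per item,
# sorted so the most "clickable" content surfaces first.
import math

def engagement_score(item, weights):
    z = sum(weights[k] * v for k, v in item["features"].items())
    return 1 / (1 + math.exp(-z))  # squash to a 0-1 "probability"

weights = {"past_clicks_on_topic": 1.5, "friend_shared": 0.8, "recency": 0.3}

feed = [
    {"id": "a", "features": {"past_clicks_on_topic": 0, "friend_shared": 1, "recency": 1}},
    {"id": "b", "features": {"past_clicks_on_topic": 2, "friend_shared": 0, "recency": 0}},
]
ranked = sorted(feed, key=lambda it: engagement_score(it, weights), reverse=True)
print([it["id"] for it in ranked])  # -> ['b', 'a']
```

Item "b" wins purely because of the user’s own click history, which is the mechanism the passage describes: the feed optimizes for what keeps you scrolling, not for what is true or important.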
In reality, posts are targeted and delivered privately, screen by screen by screen. Today’s phantom public sphere has been fragmented and submerged into billions of individual capillaries. Yes, mass discourse has become far easier for everyone to participate in—but it has simultaneously become a set of private conversations happening behind your back. Behind everyone’s backs.
It’s important to realize that, in using these dark posts, the Trump campaign wasn’t deviantly weaponizing an innocent tool. It was simply using Facebook exactly as it was designed to be used. The campaign did it cheaply, with Facebook staffers assisting right there in the office, as the tech company does for most large advertisers and political campaigns.