Social media is the best friend disinformation ever had, and the cure is far from obvious.
Anya Schiffrin is an adjunct faculty member at the School of International and Public Affairs at Columbia University. She worked in Hanoi from 1997 to 1999 as the bureau chief of Dow Jones Newswires.
Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics, by Yochai Benkler, Robert Faris, and Hal Roberts
Oxford University Press
Benkler, a Harvard law professor and well-known theorist of the digital age, and his colleagues have produced an authoritative tome that includes multiple taxonomies and literature reviews as well as visualizations of the flow of disinformation.
Its taxonomy of bad actors ranges from clickbait fabricators to white supremacist and alt-right trolls. The book also offers a history of the scholarship on propaganda, reminding the reader that much of the discussion began in the 1930s.
Benkler’s optimistic 2006 book, The Wealth of Networks, predicted that the Internet would bring people together and transform the way information is created and spread. Today, Benkler is far less sanguine and has become one of the foremost researchers of disinformation networks.
Fox News, Breitbart, The Daily Caller, InfoWars, and Zero Hedge
As a result, mainstream journalists repeat and amplify the falsehoods even as they debunk them.
There is no clear line, they argue, between Russian propaganda, Breitbart lies, and the Trump victory. They add that Fox News is probably more influential than Facebook.
After George Soros gave a speech in January 2018 calling for regulation of the social media platforms, Facebook hired a Republican opposition research firm to shovel dirt at him.
The European Union has not yet tried to regulate disinformation (although it does have codes of practice for the platforms), instead focusing on taxation, competition regulation, and privacy protection. Germany, however, has strengthened its regulations on online hate speech, including the liability of the social media platforms.
The Honest Ads Act would require disclosure of the sources of online political advertising. It’s a bit toothless because, just as with offshore bank accounts, it may be possible to register which U.S. entity is paying for online political advertising, but it’s impossible to know whether that entity is getting its funds from overseas. Yet even this modest bill was too much for Facebook to take.
In 2018 we witnessed a clash of titans as governments and tech companies collided on privacy issues around collecting, culling, and using personal data. From GDPR to Facebook’s scandals, many tech CEOs found themselves defending big data, its uses, and their safeguards for the public.
1. Companies will face increased pressure about the data AI-embedded services use.
2. Public concern will lead to AI regulations. But we must understand this tech too.
In 2018, the National Science Foundation invested $100 million in AI research, with special support in 2019 for developing principles for safe, robust and trustworthy AI; addressing issues of bias, fairness and transparency of algorithmic intelligence; developing deeper understanding of human-AI interaction and user education; and developing insights about the influences of AI on people and society.
This investment was dwarfed by DARPA—an agency of the Department of Defense—and its multi-year investment of more than $2 billion in new and existing programs under the “AI Next” campaign. Key areas of the campaign include pioneering the next generation of AI algorithms and applications, such as “explainability” and common-sense reasoning.
Federally funded initiatives, as well as corporate efforts (such as Google’s What-If Tool), will lead to the rise of explainable and interpretable AI, whereby the AI actually explains the logic behind its decision making to humans. But the next step from there would be for AI regulators and policymakers themselves to learn how these technologies actually work. This currently overlooked step is one that Richard Danzig, former Secretary of the U.S. Navy, advises us to consider as we create “human-in-the-loop” systems, which require people to sign off on important AI decisions.
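To make the idea of explainable AI concrete, here is a minimal, purely illustrative sketch: a toy screening model that returns both a decision and the human-readable rules behind it, so a human in the loop can review the logic before signing off. The feature names and thresholds are invented for illustration and do not reflect any real system or vendor.

```python
# Toy "explainable" decision function: every automated decision carries
# the reasons behind it, so a human reviewer can audit the logic.
# Features and thresholds are hypothetical, for illustration only.

def explainable_decision(applicant):
    """Return (decision, explanation) for human-in-the-loop review."""
    reasons = []
    if applicant["income"] < 30000:
        reasons.append("income below 30,000 threshold")
    if applicant["debt_ratio"] > 0.4:
        reasons.append("debt-to-income ratio above 0.4")
    decision = "approve" if not reasons else "refer to human reviewer"
    explanation = "; ".join(reasons) or "all automated checks passed"
    return decision, explanation

decision, why = explainable_decision({"income": 25000, "debt_ratio": 0.5})
print(decision)  # refer to human reviewer
print(why)       # income below 30,000 threshold; debt-to-income ratio above 0.4
```

The point of the sketch is the return shape: a system that emits only a verdict cannot be audited, while one that emits its reasons gives regulators and reviewers something concrete to learn from and challenge.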
3. More companies will make AI a strategic initiative in corporate social responsibility.
“Formulating a product, you better know about ethics and understand legal frameworks.”
These days a growing number of people are concerned with bringing more talk of ethics into technology. One question is whether that will bring change to data-science curricula.
Following major data breaches and privacy scandals at tech companies like Facebook, universities including Stanford, the University of Texas and Harvard have all added ethics courses into computer science degree programs to address tech’s “ethical dark side,” the New York Times has reported.
As more colleges and universities consider incorporating humanities courses into technical degree programs, some are asking what kind of ethics should be taught.
Scholars want peers to find—and cite—their research, and these days that increasingly happens on social media. The old adage ‘publish or perish’ could soon go digital as ‘clicks or canned.’
Several platforms have emerged over the past decade, offering researchers the chance to share their work and connect with other scholars. But some of those services have a bad rap from academics who say commercial sites lack the integrity of institutional repositories run by traditional universities. (Among the most widely vilified are ResearchGate and Academia.edu, as is evident from griping on social media and elsewhere.)
The author of a 2015 paper comparing the services and tools offered by various academic social networks says researchers must weigh the benefits and drawbacks of each. “They can be great tools to advance your research, especially social research,” she says. “But just like with Facebook or any other social network, we need to be aware of potential issues we might have with copyright or privacy.”
Academia.edu is the largest of the academic social networks.
Created as a reference-management tool to help users organize their research, Mendeley also includes a number of social-networking features.
Scholabrate. The service claims to provide a more Facebook-esque, visual experience for academics seeking to network with others in their field.
Similar to Mendeley, Zotero functions primarily as a research tool, allowing users to collect, save, cite, and share materials from a wide range of sources. The site also maintains a significant community of academics who can connect through groups and forums, or through its search engine.
While we often get distracted by the latest device or platform release, video has quietly been riding the wave of all of these advancements, benefiting from broader access to phones, displays, cameras and, most importantly, bandwidth. In fact, 68 percent of teachers are using video in their classrooms, and 74 percent of middle schoolers are watching videos for learning. From social media streams chock-full of video and GIFs to FaceTime with friends to two-hour Twitch broadcasts, video mediates students’ relationships with each other and the world. Video is a key aspect of our always-online attention economy that’s impacting voting behavior, and fueling hate speech and trolling. Put simply: Video is a contested civic space.
We need to move from a conflation of digital citizenship with internet safety and protectionism to a view of digital citizenship that’s proactive and prioritizes media literacy and savvy. A good digital citizen doesn’t just dodge safety and privacy pitfalls but works to remake the world, aided by digital technology like video, so it’s more thoughtful, inclusive, and just.
1. Help Students Identify the Intent of What They Watch
equip students with some essential questions they can use to unpack the intentions of anything they encounter. One way to facilitate this thinking is by using a tool like EdPuzzle to edit the videos you want students to watch by inserting these questions at particularly relevant points in the video.
2. Be Aware That the Web Is a Unique Beast
Compared to traditional media (like broadcast TV or movies), the web is the Wild West.
Whether your school or district has officially adopted social media or not, conversations are happening in and around your school on everything from Facebook to Snapchat. Schools must reckon with this reality and commit to supporting thoughtful and critical social media use among students, teachers and administrators. If not, schools and classrooms risk everything from digital distraction to privacy violations.
Key Elements to Include in a Social Media Policy
Create parent opt-out forms that specifically address social media use. Avoid blanket opt-outs that generalize all technology or obfuscate how specific social media platforms will be used. (See this example by the World Privacy Forum as a starting point.)
Use these opt-out forms as a way to have more substantive conversations with parents about what you’re doing and why.
Describe what platforms are being used, where, when and how.
Avoid making the consequences of opting out punitive (e.g., barring student participation in sports, theater, yearbook, etc.).
Restrict location sharing: Train teachers and students on how to turn off geolocation features/location services on devices as well as in specific apps.
Minimize information shared in teachers’ social media profiles: Advise teachers to list only grade level and subject in their public profiles and not to include specific school or district information.
Make social media use transparent to students: Have teachers explain their social media plan, and find out how students feel about it.
Most important: As with any technology, attach social media use to clearly articulated goals for student learning. Emphasize in your guidelines that teachers should audit any potential use of social media in terms of student-centered pedagogy: (1) Does it forward student learning in a way impossible through other means? and (2) Is using social media in my best interests or in my students’?
Moving from Policy to Practice.
Social media policies, like policies in general, are meant to mitigate the risk and liability of institutions rather than guide and support sound pedagogy and student learning. They serve a valuable purpose, but not one that impacts classrooms. So how do we make these policies more relevant to classrooms?
One answer is to fold the relevant pieces into the faculty handbook. First, this forces policy to get distilled into what impacts classroom instruction and administration. Second, social media changes monthly, and it’s much easier to update a faculty handbook than a policy document. Third, it allows you to align social media issues with other aspects of teaching (assessment, parent communication, etc.) rather than separating them out into their own section.
Use policy creation as an opportunity to take inventory of your students’ needs, how social media is already being used by your teachers, and how policy can support both responsibly.
Social media isn’t a novel phenomenon requiring separate attention. Ed tech, and the tech world in general, wants to tout every new development as a revolution. Most, however, are iterations. While we get caught up reinventing everything to wrestle with a perceived social media sea change, our students see it simply as part of school life.
Under the Children’s Internet Protection Act (CIPA), any US school that receives federal funding is required to have an internet-safety policy. As school-issued tablets and Chromebook laptops become more commonplace, schools must install technological guardrails to keep their students safe. For some, this simply means blocking inappropriate websites. Others, however, have turned to software companies like Gaggle, Securly, and GoGuardian to surface potentially worrisome communications to school administrators.
In an age of mass school shootings and increased student suicides, safety management platforms (SMPs) can play a vital role in preventing harm before it happens. Each of these companies has case studies where an intercepted message helped save lives.
Over 50% of teachers say their schools are one-to-one (the industry term for assigning every student a device of their own), according to a 2017 survey from Freckle Education.
But even in an age of student suicides and school shootings, when do security precautions start to infringe on students’ freedoms?
Using AI, the software is able to process thousands of student tweets, posts, and status updates to look for signs of harm. When the Gaggle algorithm surfaces a word or phrase that may be of concern (like a mention of drugs or signs of cyberbullying), the “incident” gets sent to human reviewers before being passed on to the school.
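The triage pipeline described above can be sketched in a few lines, assuming a simple keyword matcher; real products like Gaggle use far more sophisticated, proprietary models. The term list and incident structure here are entirely hypothetical. The key design point is that flagged posts go into a queue for human reviewers rather than triggering automatic action against a student.

```python
# Illustrative sketch of an SMP-style triage step: flag posts containing
# terms of concern and queue them for HUMAN review before any school is
# notified. Term list and incident format are invented for illustration.

FLAGGED_TERMS = {"bully", "hurt myself"}  # hypothetical watch list

def triage(posts):
    """Return a review queue of incidents; posts are (author, text) pairs."""
    review_queue = []
    for author, text in posts:
        hits = [term for term in FLAGGED_TERMS if term in text.lower()]
        if hits:  # a human reviewer sees the incident, not an automatic alert
            review_queue.append({"author": author, "matched": hits, "text": text})
    return review_queue

queue = triage([
    ("student_a", "See you at practice!"),
    ("student_b", "They keep trying to BULLY me after class."),
])
print(len(queue))           # 1
print(queue[0]["matched"])  # ['bully']
```

Even this toy version shows where the policy questions live: who maintains the watch list, who staffs the review queue, and how long flagged text is retained.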
SMPs help normalize surveillance from a young age. In the wake of the Cambridge Analytica scandal at Facebook and other recent data breaches from companies like Equifax, we have the opportunity to teach kids the importance of protecting their online data.
In an age of increased school violence, bullying, and depression, schools have an obligation to protect their students. But the protection of kids’ personal information is also a matter of their safety.
Preliminary Plan for Monday, Sept 10, 5:45 PM to 8 PM
Introduction – who are the students in this class. About myself: http://web.stcloudstate.edu/pmiltenoff/faculty Contact info, “embedded” librarian idea – I am available to help during the semester with research and papers
#FakeNews is a very timely and controversial issue. In 2-3 min, choose your best source on this issue. 1. Mind the prevalence of resources in the 21st century. 2. Mind the necessity to evaluate a) the veracity of your sources and b) the quality of your sources (the fact that they are “true” does not mean that they are the best). Be prepared to name your source and defend its quality.
How do you determine your sources? How do you decide the reliability of your sources? Are you sure you can distinguish “good” from “bad”?
Compare this entry https://en.wikipedia.org/wiki/List_of_fake_news_websites
to this entry: https://docs.google.com/document/d/10eA5-mCZLSS4MQY5QGb5ewC3VAL6pLkT53V_81ZyitM/preview to understand the scope
Do you know any fact-checking sites? Can you spot sponsored content? Do you understand syndication? What do you understand by “media literacy,” “news literacy,” and “information literacy”? https://blog.stcloudstate.edu/ims/2017/03/28/fake-news-resources/
Why do we need to explore the “fake news” phenomenon? Do you find it relevant to your professional development?
So, how do we do academic research? Let’s play another Kahoot: https://play.kahoot.it/#/k/5e09bb66-4d87-44a5-af21-c8f3d7ce23de
If you were to structure this Kahoot, what questions would you ask? What are the main steps in achieving successful research for your paper?
Research using social media
What is social media (examples)? Why is it called SM? Why is it so popular, and what makes it so?
Use SM tools for your research and education:
– Determining your topic. How to?
Digg http://digg.com/, Reddit https://www.reddit.com/ , Quora https://www.quora.com
Facebook, Twitter – hashtags (class assignment 2-3 min to search)
LinkedIn Groups
YouTube and Slideshare (class assignment 2-3 min to search)
Flickr, Instagram, Pinterest for visual aids (like YouTube they are media repositories)