An interactive discussion on the Innovating Pedagogy 2019 report from The Open University
About the Guest
Rebecca is a senior lecturer in the Institute of Educational Technology (IET) at The Open University in the UK and a senior fellow of the Higher Education Academy. Her primary research interests are educational futures and how people learn together online, and she supervises doctoral students in both these areas.
Rebecca worked for several years as a researcher and educator on the Schome project, which focuses on educational futures, and was also the research lead on the SocialLearn online learning platform and learning analytics lead on the Open Science Lab (Outstanding ICT Initiative of the Year: THE Awards 2014). She is currently a pedagogic adviser to the FutureLearn MOOC platform and evaluation lead on The Open University’s FutureLearn MOOCs. She is an active member of the Society for Learning Analytics Research and has co-chaired many learning analytics events, including several associated with the Learning Analytics Community Exchange (LACE), a European project funded under Framework 7.
Rebecca’s most recent book, Augmented Education, was published by Palgrave in spring 2014.
Mor, Y., Ferguson, R., & Wasson, B. (2015). Editorial: Learning design, teacher inquiry into student learning and learning analytics: A call for action. British Journal of Educational Technology, 46(2), 221–229. https://doi.org/10.1111/bjet.12273
Hansen, C., Emin, V., Wasson, B., Mor, Y., Rodriguez-Triana, M., Dascalu, M., … Pernin, J. (2013). Towards an Integrated Model of Teacher Inquiry into Student Learning, Learning Design and Learning Analytics. Scaling up Learning for Sustained Impact – Proceedings of EC-TEL 2013, 8095, 605–606. https://doi.org/10.1007/978-3-642-40814-4_73
How to decolonize educational technology: MOOCs come from the big colonial powers, not from small countries. Video games: many have a very colonial perspective.
Strategies for innovative pedagogies: only certain groups or aspects are taken into account; there is rarely a focus on support from management, scheduling, timetabling, or tech support.
Summary This short paper lays out an attempt to measure how much activity from Russian state-operated accounts released in the dataset made available by Twitter in October 2018 was targeted at the United Kingdom. Finding UK-related Tweets is not an easy task. By applying a combination of geographic inference, keyword analysis and classification by algorithm, we identified UK-related Tweets sent by these accounts and subjected them to further qualitative and quantitative analytic techniques.
We find:
There were three phases in Russian influence operations: under-the-radar account building, minor Brexit vote visibility, and larger-scale visibility during the London terror attacks.
Russian influence operations linked to the UK were most visible when discussing Islam. Tweets discussing Islam over the period of terror attacks between March and June 2017 were retweeted 25 times more often than their other messages.
The most widely-followed and visible troll account, @TEN_GOP, shared 109 Tweets related to the UK. Of these, 60 percent were related to Islam.
The topology of tweet activity underlines the vulnerability of social media users to disinformation in the wake of a tragedy or outrage.
Focus on the UK was a minor part of wider influence operations in this data. Of the nine million Tweets released by Twitter, 3.1 million were in English (34 percent). Of these 3.1 million, we estimate 83 thousand were in some way linked to the UK (2.7 percent). Those Tweets were shared 222 thousand times. It is plausible that we are therefore seeing how the UK was caught up in Russian operations against the US.
Influence operations captured in this data show attempts to falsely amplify other news sources and to take part in conversations around Islam, and rarely show attempts to spread ‘fake news’ or influence at an electoral level.
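The summary above describes identifying UK-related Tweets through a combination of geographic inference and keyword analysis. A minimal sketch of such a multi-signal filter is below; the keyword list, place names, and or-combination rule are illustrative assumptions, not the report's actual criteria or classifier.

```python
# Illustrative sketch of a multi-signal filter for UK-related tweets.
# The keywords, place names, and scoring rule are hypothetical; the
# report combined geographic inference, keyword analysis, and an
# algorithmic classifier, whose details are not reproduced here.

UK_KEYWORDS = {"brexit", "westminster", "london", "nhs", "ukip"}
UK_PLACES = {"london", "manchester", "birmingham", "glasgow"}

def is_uk_related(tweet: dict) -> bool:
    """Flag a tweet as UK-related if any signal fires."""
    text = tweet.get("text", "").lower()
    place = tweet.get("user_location", "").lower()
    keyword_hit = any(kw in text for kw in UK_KEYWORDS)
    geo_hit = any(p in place for p in UK_PLACES)
    return keyword_hit or geo_hit

def filter_uk_tweets(tweets):
    """Keep only tweets that match at least one UK signal."""
    return [t for t in tweets if is_uk_related(t)]
```

In practice, keyword and location matching alone over-count (e.g. "London, Ontario"), which is presumably why the report's authors added an algorithmic classification step on top of these simpler filters.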
On 17 October 2018, Twitter released data about 9 million tweets from 3,841 blocked accounts affiliated with the Internet Research Agency (IRA) – a Russian organisation founded in 2013 and based in St Petersburg, accused of using social media platforms to push pro-Kremlin propaganda and influence nation states beyond their borders, as well as being tasked with spreading pro-Kremlin messaging in Russia. It is one of the first major datasets linked to state-operated accounts engaging in influence operations released by a social media platform.
Conclusion
This report outlines the ways in which accounts linked to the Russian Internet Research Agency (IRA) carried out influence operations on social media and the ways their operations intersected with the UK.

The UK plays a reasonably small part in the wider context of this data. We see two possible explanations: either influence operations were primarily targeted at the US and British Twitter users were impacted as collateral damage, or this dataset is limited to US-focused operations where events in the UK were highlighted in an attempt to impact the US public, rather than a concerted effort against the UK. It is plausible that such efforts also existed but are not reflected in this dataset.

Nevertheless, the data offers a highly useful window into how Russian influence operations are carried out, as well as highlighting the moments when we might be most vulnerable to them.

Between 2011 and 2016, these state-operated accounts were camouflaged. Through manual and automated methods, they were able to quietly build up the trappings of an active and well-followed Twitter account before eventually pivoting into attempts to influence the wider Twitter ecosystem. Their methods included engaging in unrelated and innocuous topics of conversation, often through automated methods, and sharing and engaging with other, more mainstream sources of news.

Although this data shows levels of electoral and party-political influence operations to be relatively low, the day of the Brexit referendum results showed how messaging originating from Russian state-controlled accounts might come to be visible: on June 24th 2016, we believe UK Twitter users discussing the Brexit vote would have encountered messages originating from these accounts.

As early as 2014, however, influence operations began taking part in conversations around Islam, and these accounts came to the fore during the three months of terror attacks that took place between March and June 2017.
In the immediate wake of these attacks, messages related to Islam and circulated by Russian state-operated Twitter accounts were widely shared, and would likely have been visible in the UK.

The dataset released by Twitter begins to answer some questions about attempts by a foreign state to interfere in British affairs online. It is notable that overt political or electoral interference is poorly represented in this dataset: rather, we see attempts at stirring societal division, particularly around Islam in the UK, as the messages that resonated the most over the period.

What is perhaps most interesting about this moment is its portrayal of when we as social media users are most vulnerable to the kinds of messages circulated by those looking to influence us. In the immediate aftermath of terror attacks, the data suggests, social media users were more receptive to this kind of messaging than at any other time.
It is clear that hostile states have identified the growth of online news and social media as a weak spot, and that significant effort has gone into attempting to exploit new media to influence its users. Understanding the ways in which these platforms have been used to spread division is an important first step to fighting it.

Nevertheless, it is clear that this dataset provides just one window into the ways in which foreign states have attempted to use online platforms as part of wider information warfare and influence campaigns. We hope that other platforms will follow Twitter’s lead and release similar datasets and encourage their users to proactively tackle those who would abuse their platforms.
Falsehoods are spread due to biases in the brain, society, and computer algorithms (Ciampaglia & Menczer, 2018). A compounding problem is that “information overload and limited attention contribute to a degradation of the market’s discriminative power” (Qiu, Oliveira, Shirazi, Flammini, & Menczer, 2017). Falsehoods spread quickly in the US through social media because this has become Americans’ preferred way to read the news (59%) in the 21st century (Mitchell, Gottfried, Barthel, & Shearer, 2016). While a mature critical reader may recognize a hoax disguised as news, there are those who share it intentionally. A 2016 US poll revealed that 23% of American adults had shared misinformation unwittingly or on purpose; this poll reported high to moderate confidence in one’s ability to identify fake news, with only 15% not very confident (Barthel, Mitchell, & Holcomb, 2016).
Hoaxy® takes it one step further and shows you who is spreading or debunking a hoax or disinformation on Twitter.
A Game, a Video, and a Framework for Teaching Website Evaluation
In this age of fake and misleading news being spread through social media, it is more important than ever to teach students how to view websites with a critical eye. Here are three good resources:
The RADCAB website offers short explanations of each of the aspects of evaluation and why they are significant. The site also provides a rubric (link opens PDF) that you can download and print for your students to use to score the credibility of a website.
Preliminary Plan for Monday, Sept 10, 5:45 PM to 8 PM
Introduction – who are the students in this class? About myself: http://web.stcloudstate.edu/pmiltenoff/faculty Contact info; the “embedded librarian” idea – I am available to help during the semester with research and papers.
#FakeNews is a very timely and controversial issue. In 2-3 minutes, choose your best source on this issue. 1. Mind the prevalence of resources in the 21st century. 2. Mind the necessity to evaluate a) the veracity of your sources and b) the quality of your sources (the fact that they are “true” does not mean that they are the best). Be prepared to name your source and defend its quality.
How do you determine your sources? How do you decide the reliability of your sources? Are you sure you can distinguish “good” from “bad”?
Compare this entry https://en.wikipedia.org/wiki/List_of_fake_news_websites
to this entry: https://docs.google.com/document/d/10eA5-mCZLSS4MQY5QGb5ewC3VAL6pLkT53V_81ZyitM/preview to understand the scope
Do you know any fact-checking sites? Can you spot sponsored content? Do you understand syndication? What do you understand by “media literacy,” “news literacy,” and “information literacy”? https://blog.stcloudstate.edu/ims/2017/03/28/fake-news-resources/
Why do we need to explore the “fake news” phenomenon? Do you find it relevant to your professional development?
So, how do we do academic research? Let’s play another Kahoot: https://play.kahoot.it/#/k/5e09bb66-4d87-44a5-af21-c8f3d7ce23de
If you were to structure this Kahoot, what questions would you ask? What are the main steps in achieving successful research for your paper?
Research using social media
What is social media (examples)? Why is it called SM? Why is it so popular, and what makes it so?
use SM tools for your research and education:
– Determining your topic. How to?
Digg http://digg.com/, Reddit https://www.reddit.com/ , Quora https://www.quora.com
Facebook, Twitter – hashtags (class assignment 2-3 min to search)
LinkedIn Groups
YouTube and Slideshare (class assignment 2-3 min to search)
Flickr, Instagram, Pinterest for visual aids (like YouTube they are media repositories)
High school students now create infographics, BuzzFeed-like quizzes and even virtual reality (VR) experiences to illustrate how they can research, write and express their thoughts.
technology — using sites like CoSpaces Edu and the content learning system Schoology (my note: the equivalent of D2L at SCSU) — to engage and empower her students.
ThingLink, during a session called “Virtually Not an Essay: Technological Alternatives to a Standard Essay Assignment.” (See this blog’s materials on ThingLink and the like here: https://blog.stcloudstate.edu/ims?s=thinglink. The author made a typo by calling the app “ThinKlink” instead of “ThingLink.” Also, to use ThingLink’s Video 360 editor, the free account is not sufficient and the $125/month upgrade is needed; not a good solution for education.)
Jamie: I would love to discuss with you #infographics and #Thinglink for use in your courses and the Departmental course.
Digital literacy (DL): options, ideas, possibilities
https://www-wired-com.cdn.ampproject.org/c/s/www.wired.com/story/187-things-the-blockchain-is-supposed-to-fix/amp
Blockchains, which use advanced cryptography to store information across networks of computers, could eliminate the need for trusted third parties, like banks, in transactions, legal agreements, and other contracts. The most ardent blockchain-heads believe it has the power to reshape the global financial system, and possibly even the internet as we know it.
Now, as the technology expands from a fringe hacker toy to legitimate business applications, opportunists have flooded the field. Some of the seekers are mercenaries pitching shady or fraudulent tokens, others are businesses looking to cash in on a hot trend, and still others are true believers in the revolutionary and disruptive powers of distributed networks.
Mentions of blockchains and digital currencies on corporate earnings calls doubled in 2017 over the year prior, according to Fortune. Last week at Consensus, the country’s largest blockchain conference, 100 sponsors, including top corporate consulting firms and law firms, hawked their wares.
Here is a noncomprehensive list of the ways blockchain promoters say they will change the world. They run the spectrum from industry-specific (a blockchain project designed to increase blockchain adoption) to global ambitions (fixing the global supply chain’s apparent $9 trillion cash flow issue).
Things Blockchain Technology Will Fix
Bots with nefarious intent
Skynet
People not taking their medicine
Device storage that could be used for bitcoin mining
To identify bots, the Center used a tool known as “Botometer,” developed by researchers at the University of Southern California and Indiana University.
It is important to note that bot accounts do not always clearly identify themselves as such in their profiles, and any bot classification system inevitably carries some risk of error. The Botometer system has been documented and validated in an array of academic publications, and researchers from the Center conducted a number of independent validation measures of its results.8
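Classifiers like Botometer return a bot-likelihood score rather than a hard label, so studies must choose a cutoff. The sketch below shows that post-processing step in general terms; the 0.5 threshold, account names, and scores are invented for illustration and are not the Center's actual methodology or Botometer's real output.

```python
# Hypothetical post-processing of bot-likelihood scores such as those
# returned by a tool like Botometer (0 = likely human, 1 = likely bot).
# The 0.5 threshold and the sample data are illustrative assumptions.

BOT_THRESHOLD = 0.5

def classify_accounts(scores: dict) -> dict:
    """Map each account to 'bot' or 'human' by thresholding its score."""
    return {
        account: "bot" if score >= BOT_THRESHOLD else "human"
        for account, score in scores.items()
    }

def bot_share(scores: dict) -> float:
    """Fraction of accounts classified as bots under the threshold."""
    labels = classify_accounts(scores)
    return sum(1 for label in labels.values() if label == "bot") / len(labels)
```

Because every threshold trades false positives against false negatives, this is exactly where the "risk of error" noted above enters: a lower cutoff catches more bots but mislabels more humans, which is why independent validation of the classifier's results matters.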