Falsehoods spread because of biases in the brain, in society, and in computer algorithms (Ciampaglia & Menczer, 2018). A compounding problem is that “information overload and limited attention contribute to a degradation of the market’s discriminative power” (Qiu, Oliveira, Shirazi, Flammini, & Menczer, 2017). Falsehoods spread quickly in the US through social media, which has become Americans’ preferred way to read the news (59%) in the 21st century (Mitchell, Gottfried, Barthel, & Shearer, 2016). While a mature critical reader may recognize a hoax disguised as news, some people share such stories intentionally. A 2016 US poll revealed that 23% of American adults had shared misinformation, whether unwittingly or on purpose; respondents reported high to moderate confidence in their ability to identify fake news, with only 15% saying they were not very confident (Barthel, Mitchell, & Holcomb, 2016).
Hoaxy® takes it one step further and shows you who is spreading or debunking a hoax or disinformation on Twitter.
AI programmes fine-tune themselves: rather than an army of computer programmers rewriting the code by hand, the programme adjusts its own parameters based on the data it is fed. In AI speak, this is now often referred to as “machine learning”.
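To make the idea concrete, here is a minimal machine-learning sketch in Python (a toy illustration assuming scikit-learn and its bundled Iris dataset, not anything from the article): no programmer writes the classification rules; the model infers them from labelled examples.

```python
# A minimal sketch of machine learning (assumes scikit-learn is installed):
# the rules for telling flower species apart are never hand-coded; the
# model fits them to labelled examples.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = DecisionTreeClassifier(random_state=0)
model.fit(X_train, y_train)         # "learning": inferring rules from data
print(model.score(X_test, y_test))  # accuracy on examples it has never seen
```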
An AI programme “catastrophically forgets” what it learnt from its first set of data and would have to be retrained from scratch with new data. The website futurism.com gives the example of a programme that has mastered face recognition: if it is now also expected to recognize emotions, a completely new set of algorithms has to be written. Data on emotions would have to be manually labelled and fed into this different algorithm before the altered programme is of any use. And as it takes on the new code for recognizing emotions, the original programme “catastrophically forgets” what it learnt about recognizing faces. According to the website, this is because computer programmes cannot understand the underlying logic they have been coded with.
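The effect is easy to reproduce in miniature. Below is a hedged toy sketch (assuming scikit-learn, with its bundled digits dataset standing in for the two tasks): a model trained incrementally on task A, then on task B alone, loses most of its task-A accuracy.

```python
# A toy demonstration of catastrophic forgetting (assumes scikit-learn):
# train incrementally on task A (digits 0-4), then on task B (digits 5-9)
# only, and watch the task-A accuracy collapse.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
a, b = y < 5, y >= 5
Xa_tr, Xa_te, ya_tr, ya_te = train_test_split(X[a], y[a], random_state=0)
Xb_tr, _, yb_tr, _ = train_test_split(X[b], y[b], random_state=0)

clf = SGDClassifier(random_state=0)
clf.partial_fit(Xa_tr, ya_tr, classes=np.unique(y))  # declare all classes up front
for _ in range(20):
    clf.partial_fit(Xa_tr, ya_tr)
print("Task A accuracy after training on A:", clf.score(Xa_te, ya_te))

for _ in range(20):                                  # now train on task B only
    clf.partial_fit(Xb_tr, yb_tr)
print("Task A accuracy after training on B:", clf.score(Xa_te, ya_te))
```

Proposed remedies, including the DeepMind work mentioned next, generally try to protect the parameters that mattered for earlier tasks while new ones are being learnt.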
Irina Higgins, a senior researcher at Google DeepMind, has recently announced that she and her team have begun to crack the code on “catastrophic forgetting”.
As far as I am concerned, this limbic thinking is “catastrophic thinking”, which is the only true antipode to AI’s “catastrophic forgetting”. It will be eons before AI thinks with a limbic brain, let alone has consciousness.
++++++++++++++++++
Stephen Hawking warns artificial intelligence could end mankind
By Rory Cellan-Jones, Technology correspondent, 2 December 2014
Between the “dumb” fixed algorithms and true AI lies the problematic halfway house we’ve already entered with scarcely a thought and almost no debate, much less agreement as to aims, ethics, safety, best practice. If the algorithms around us are not yet intelligent, meaning able to independently say “that calculation/course of action doesn’t look right: I’ll do it again”, they are nonetheless starting to learn from their environments. And once an algorithm is learning, we no longer know to any degree of certainty what its rules and parameters are. At which point we can’t be certain of how it will interact with other algorithms, the physical world, or us. Where the “dumb” fixed algorithms – complex, opaque and inured to real-time monitoring as they can be – are in principle predictable and interrogable, these ones are not. After a time in the wild, we no longer know what they are: they have the potential to become erratic. We might be tempted to call these “frankenalgos” – though Mary Shelley couldn’t have made this up.
Twenty years ago, George Dyson anticipated much of what is happening today in his classic book Darwin Among the Machines. The problem, he tells me, is that we’re building systems that are beyond our intellectual means to control. We believe that if a system is deterministic (acting according to fixed rules, this being the definition of an algorithm) it is predictable – and that what is predictable can be controlled. Both assumptions turn out to be wrong.

“It’s proceeding on its own, in little bits and pieces,” he says. “What I was obsessed with 20 years ago that has completely taken over the world today are multicellular, metazoan digital organisms, the same way we see in biology, where you have all these pieces of code running on people’s iPhones, and collectively it acts like one multicellular organism.

“There’s this old law called Ashby’s law that says a control system has to be as complex as the system it’s controlling, and we’re running into that at full speed now, with this huge push to build self-driving cars where the software has to have a complete model of everything, and almost by definition we’re not going to understand it. Because any model that we understand is gonna do the thing like run into a fire truck ’cause we forgot to put in the fire truck.”
Walsh believes this makes it more, not less, important that the public learn about programming, because the more alienated we become from it, the more it seems like magic beyond our ability to affect. When shown the definition of “algorithm” given earlier in this piece, he found it incomplete, commenting: “I would suggest the problem is that algorithm now means any large, complex decision making software system and the larger environment in which it is embedded, which makes them even more unpredictable.” A chilling thought indeed. Accordingly, he believes ethics to be the new frontier in tech, foreseeing “a golden age for philosophy” – a view with which Eugene Spafford of Purdue University, a cybersecurity expert, concurs. Where there are choices to be made, that’s where ethics comes in.
Our existing system of tort law, which requires proof of intention or negligence, will need to be rethought. A dog is not held legally responsible for biting you; its owner might be, but only if the dog’s action is thought foreseeable.
One proposed remedy is model-based programming, in which machines do most of the coding work and are able to test as they go.
As we wait for a technological answer to the problem of soaring algorithmic entanglement, there are precautions we can take. Paul Wilmott, a British expert in quantitative analysis and vocal critic of high frequency trading on the stock market, wryly suggests “learning to shoot, make jam and knit”
The venerable Association for Computing Machinery has updated its code of ethics along the lines of medicine’s Hippocratic oath, to instruct computing professionals to do no harm and consider the wider impacts of their work.
Under the Children’s Internet Protection Act (CIPA), any US school that receives federal funding is required to have an internet-safety policy. As school-issued tablets and Chromebook laptops become more commonplace, schools must install technological guardrails to keep their students safe. For some, this simply means blocking inappropriate websites. Others, however, have turned to software companies like Gaggle, Securly, and GoGuardian to surface potentially worrisome communications to school administrators.
In an age of mass school shootings and increased student suicides, safety management platforms (SMPs) can play a vital role in preventing harm before it happens. Each of these companies has case studies in which an intercepted message helped save lives.
Over 50% of teachers say their schools are one-to-one (the industry term for assigning every student a device of their own), according to a 2017 survey from Freckle Education.
But even in an age of student suicides and school shootings, when do security precautions start to infringe on students’ freedoms?
Using AI, the software is able to process thousands of student tweets, posts, and status updates to look for signs of harm. When the Gaggle algorithm surfaces a word or phrase that may be of concern, like a mention of drugs or signs of cyberbullying, the “incident” is sent to human reviewers before being passed on to the school.
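Gaggle’s actual models are proprietary; the sketch below (watchwords, function names, and sample posts are all hypothetical) only illustrates the general shape of such a pipeline: machine-flag first, human-review second.

```python
# A hypothetical sketch of an SMP-style flagging pipeline: posts that match
# a watchlist become "incidents" queued for human review; nothing reaches
# the school without a person looking at it first. Real systems use trained
# models rather than a fixed word list.
import re

WATCHWORDS = ["drugs", "hurt myself"]  # hypothetical watchlist

def flag_incidents(posts):
    """Return the posts a human reviewer should triage."""
    return [
        post for post in posts
        if any(re.search(r"\b" + re.escape(w) + r"\b", post.lower())
               for w in WATCHWORDS)
    ]

print(flag_incidents([
    "anyone selling drugs after school?",
    "great game last night!",
]))  # only the first post is queued for review
```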
SMPs help normalize surveillance from a young age. In the wake of the Cambridge Analytica scandal at Facebook and other recent data breaches from companies like Equifax, we have the opportunity to teach kids the importance of protecting their online data.
In an age of increased school violence, bullying, and depression, schools have an obligation to protect their students. But the protection of kids’ personal information is also a matter of their safety.
Instagram has done some tweaking of their algorithm which is making competition for visibility on the platform much tougher. According to our Facebook connections, this is a deliberate move to help reduce the spammy and less relevant behavior of certain Instagram accounts.
A great study analyzed the content performance of Instagram posts when the hashtags are placed in the post itself compared to when they are placed in the comments.
Including hashtags in the post resulted in 9.84% more Likes and 29.4% more Reach. Placing the hashtags in the comments resulted in 19.3% more comments for some strange reason.
Placing hashtags in the Instagram post resulted in 18% better content performance metrics. While this might seem like a slim margin, you must consider the extra time and steps it would take to go back and remember to add hashtags into your posts’ comments.
In 2014 Tim Berners-Lee, inventor of the World Wide Web, proposed an online ‘Magna Carta’ to protect the Internet, as a neutral system, from government and corporate manipulation. He was responding after revelations that British and US spy agencies were carrying out mass surveillance programmes; the Cambridge Analytica scandal makes his proposal as relevant as ever.
Luciano Floridi, professor of Philosophy and Ethics of Information at the Oxford Internet Institute, explains that grey power is not ordinary socio-political or military power. It is not the ability to directly influence others, but rather the power to influence those who influence power. To see grey power, you need only look at the hundreds of high-level instances of revolving-door staffing patterns between Google and European governments and the U.S. Department of State.
And then there is ‘surveillance capitalism’. Shoshana Zuboff, Professor Emerita at Harvard Business School, proposes that surveillance capitalism is ‘a new logic of accumulation’. The incredible evolution of computer processing power, complex algorithms and leaps in data storage capabilities combine to make surveillance capitalism possible. It is the process of accumulation by dispossession of the data that people produce.
The respected security technologist Bruce Schneier recently applied the insights of surveillance capitalism to the Cambridge Analytica/Facebook crisis.
For Schneier, ‘regulation is the only answer.’ He cites the EU’s General Data Protection Regulation coming into effect next month, which stipulates that users must consent to what personal data can be saved and how it is used.
Publisher / Organization: Athabasca University Press
Year founded: 2000
Description: The International Review of Research in Open and Distributed Learning disseminates original research, theory, and best practice in open and distributed learning worldwide.
Publisher / Organization: The University of Illinois at Chicago – University Library
Year founded: 1996
Description: First Monday is among the very first open access journals in the EdTech field. The journal’s subject matter encompasses the full range of Internet issues, including educational technologies, social media and web search. Contributors are urged via author guidelines to use simple explanations and less complex sentences and to be mindful that a large proportion of their readers are not part of academia and do not have English as a first language.
Academic Management: Open University of Catalonia (UOC)
Year founded: 2004
Description: This journal aims to: provide a vehicle for scholarly presentation and exchange of information between professionals, researchers and practitioners in the technology-enhanced education field; contribute to the advancement of scientific knowledge regarding the use of technology and computers in higher education; and inform readers about the latest developments in the application of information technologies (ITs) in higher education learning, training, research and management.
Description: Online Learning promotes the development and dissemination of new knowledge at the intersection of pedagogy, emerging technology, policy, and practice in online environments. The journal has been published for over 20 years as the Journal of Asynchronous Learning Networks (JALN) and recently merged with the Journal of Online Learning and Teaching (JOLT).
Publisher / Organization: International Forum of Educational Technology & Society
Year founded: 1998
Description: Educational Technology & Society seeks academic articles on the issues affecting the developers of educational systems and educators who implement and manage these systems. Articles should discuss the perspectives of both communities – the programmers and the instructors. The journal is currently still accepting submissions for ongoing special issues, but will cease publication in the future as the editors feel that the field of EdTech is saturated with high quality publications.
Description: The Australasian Journal of Educational Technology aims to promote research and scholarship on the integration of technology in tertiary education, promote effective practice, and inform policy. The goal is to advance understanding of educational technology in post-school education settings, including higher and further education, lifelong learning, and training.
Description: The Internet and Higher Education is devoted to addressing contemporary issues and future developments related to online learning, teaching, and administration on the Internet in post-secondary settings. Articles should significantly address innovative deployments of Internet technology in instruction and report on research demonstrating the effects of information technology on instruction in various contexts in higher education.
Publisher / Organization: British Educational Research Association (BERA)
Year founded: 1970
Description: The journal publishes theoretical perspectives, methodological developments and empirical research that demonstrate whether and how applications of instructional/educational technology systems, networks, tools and resources lead to improvements in formal and non-formal education at all levels, from early years through to higher, technical and vocational education, professional development and corporate training.
Description: Computers & Education aims to increase knowledge and understanding of the ways in which digital technology can enhance education, through the publication of high-quality research that extends theory and practice.
Description: TechTrends targets professionals in the educational communication and technology field. It provides a vehicle that fosters the exchange of important and current information among professional practitioners. Among the topics addressed are the management of media and programs, the application of educational technology principles and techniques to instructional programs, and corporate and military training.
Description: Advances in technology and the growth of e-learning provide educators and trainers with unique opportunities to enhance learning and teaching in corporate, government, healthcare, and higher education. IJEL serves as a forum to facilitate the international exchange of information on the current research, development, and practice of e-learning in these sectors.
Led by an Editorial Review Board of leaders in the field of e-Learning, the Journal is designed for the following audiences: researchers, developers, and practitioners in corporate, government, healthcare, and higher education. IJEL is a peer-reviewed journal.
Description: JCMST is a highly respected scholarly journal which offers an in-depth forum for the interchange of information in the fields of science, mathematics, and computer science. JCMST is the only periodical devoted specifically to using information technology in the teaching of mathematics and science.
Just as researchers build reputations over time that can be depicted (in part) through quantitative measures such as the h-index and i10-index, journals are also compared based on the number of citations they receive.
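For readers unfamiliar with those measures, the arithmetic is simple: an author’s h-index is the largest h such that h of their papers have at least h citations each, and the i10-index counts the papers with at least 10 citations. A minimal sketch with made-up citation counts:

```python
# h-index: the largest h such that h papers have at least h citations each.
# i10-index: the number of papers with at least 10 citations.
def h_index(citations):
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, cites in enumerate(ranked, start=1) if cites >= rank)

def i10_index(citations):
    return sum(1 for cites in citations if cites >= 10)

papers = [25, 8, 5, 3, 3, 1]  # hypothetical citation counts for one author
print(h_index(papers))        # 3 - an h of 4 would need four papers with >= 4 citations
print(i10_index(papers))      # 1 - only one paper has at least 10 citations
```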
Description: The Journal of Interactive Learning Research (JILR) publishes papers related to the underlying theory, design, implementation, effectiveness, and impact on education and training of the following interactive learning environments: authoring systems, cognitive tools for learning, computer-assisted language learning, computer-based assessment systems, computer-based training, computer-mediated communications, computer-supported collaborative learning, distributed learning environments, electronic performance support systems, interactive learning environments, interactive multimedia systems, interactive simulations and games, intelligent agents on the Internet, intelligent tutoring systems, microworlds, and virtual reality based learning systems.
Description: JEMH is designed to provide a multi-disciplinary forum to present and discuss research, development and applications of multimedia and hypermedia in education. It contributes to the advancement of the theory and practice of learning and teaching in environments that integrate images, sound, text, and data.
Publisher / Organization: Society for Information Technology and Teacher Education (SITE)
Year founded: 1997
Description: JTATE serves as a forum for the exchange of knowledge about the use of information technology in teacher education. Journal content covers preservice and inservice teacher education, graduate programs in areas such as curriculum and instruction, educational administration, staff development, instructional technology, and educational computing.
Publisher / Organization: Association for the Advancement of Computing in Education (AACE)
Year founded: 2015
Description: The Journal of Online Learning Research (JOLR) is a peer-reviewed, international journal devoted to the theoretical, empirical, and pragmatic understanding of technologies and their impact on pedagogy and policy in primary and secondary (K-12) online and blended environments. JOLR focuses on publishing manuscripts that address online learning, catering particularly to educators who research, practice, design, and/or administer in K-12 online settings. However, the journal also serves those educators who have chosen to blend online learning tools and strategies into their face-to-face classrooms.
The most commonly used index to measure the relative importance of journals is the annual Journal Citation Reports (JCR). This report is published by Clarivate Analytics (previously Thomson Reuters).
SCImago Journal Rank (SJR indicator) measures the influence of journals based on the number of citations the articles in the journal receive and the importance or prestige of the journals where such citations come from. The SJR indicator is a free journal metric which uses an algorithm similar to PageRank and provides an open access alternative to the journal impact factor in the Web of Science Journal Citation Report. The portal draws from the information contained in the Scopus database (Elsevier B.V.).
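To give a flavour of the PageRank-style idea behind the SJR indicator, here is a toy sketch (the three-journal citation matrix and damping factor are made up, and real SJR adds many refinements): a journal’s prestige depends not only on how many citations it receives but also on the prestige of the journals citing it.

```python
# Toy PageRank-style prestige computation over a citation graph (all numbers
# hypothetical; SJR's real formula is more elaborate).
import numpy as np

# C[i, j] = share of journal j's outgoing citations that point to journal i.
C = np.array([
    [0.0, 0.5, 0.3],
    [0.7, 0.0, 0.7],
    [0.3, 0.5, 0.0],
])
d = 0.85                    # damping factor, as in classic PageRank
score = np.ones(3) / 3      # start every journal with equal prestige
for _ in range(100):        # power iteration until the scores settle
    score = (1 - d) / 3 + d * C @ score
print(score / score.sum())  # the heavily cited second journal ranks highest
```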
Introduced by Google in 2004, Google Scholar is a freely accessible search engine that indexes the full text or metadata of scholarly publications across an array of publishing formats and disciplines.
Introduced by Elsevier in 2004, Scopus is an abstract and citation database that covers nearly 18,000 titles from more than 5,000 publishers. It offers journal metrics that go beyond just journals to include most serial titles, including supplements, special issues and conference proceedings. Scopus offers useful information such as the total number of citations, the total number of articles published, and the percent of articles cited.
Anne-Wil Harzing:
“Citations are not just a reflection of the impact that a particular piece of academic work has generated. Citations can be used to tell stories about academics, journals and fields of research, but they can also be used to distort stories”.
Harzing, A.-W. (2013). The publish or perish book: Your guide to effective and responsible citation analysis. http://harzing.com/popbook/index.htm
ResearchGate is a social networking site for scientists and researchers to share papers, ask and answer questions, and find collaborators. The community was founded in May 2008. Today it has over 14 million members.
Google Scholar allows users to search for digital or physical copies of articles, whether online or in libraries. It indexes “full-text journal articles, technical reports, preprints, theses, books, and other documents, including selected Web pages that are deemed to be ‘scholarly’”. It comprises an estimated 160 million documents.
Academia.edu is a social-networking platform for academics to share research papers. You can upload your own work and follow the updates of your peers. Founded in 2008, the network currently has 59 million users and some 20 million uploaded documents.
ORCID (Open Researcher and Contributor ID) is a nonproprietary alphanumeric code that uniquely identifies scientific and other academic authors and contributors. It provides a persistent identity for people, similar to the digital object identifiers (DOIs) used for content-related entities on digital networks. The organization offers an open and independent registry intended to be the de facto standard for contributor identification in research and academic publishing.
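An ORCID iD is a 16-character identifier whose final character is a checksum (ISO 7064 MOD 11-2). Here is a minimal validator, using the public example iD from ORCID’s own documentation:

```python
# Validate an ORCID checksum (ISO 7064 MOD 11-2); 0000-0002-1825-0097 is
# the example iD published in ORCID's documentation.
def orcid_check_digit(base15):
    """Compute the check character from the first 15 digits of an iD."""
    total = 0
    for digit in base15:
        total = (total + int(digit)) * 2
    remainder = (12 - total % 11) % 11
    return "X" if remainder == 10 else str(remainder)

digits = "0000-0002-1825-0097".replace("-", "")
print(orcid_check_digit(digits[:-1]) == digits[-1])  # True for a valid iD
```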
The Scopus Author Identifier assigns a unique number to groups of documents written by the same author via an algorithm that matches authorship based on certain criteria. If a document cannot be confidently matched with an author identifier, it is grouped separately. In this case, you may see more than one entry for the same author.
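Scopus’s matching algorithm is proprietary, but the grouping logic described above can be sketched in a few lines (the exact name-plus-affiliation key below is a deliberate oversimplification): documents are merged under one ID only when the match is confident; otherwise they get a fresh ID, which is why one author can appear under several identifiers.

```python
# A deliberately oversimplified sketch of author disambiguation: merge
# documents under one author ID only on a confident (name + affiliation)
# match; otherwise assign a fresh ID. Real systems weigh many more signals.
from itertools import count

next_id = count(1)
known = {}  # (name, affiliation) -> author id

docs = [
    {"name": "J. Smith", "affiliation": "MIT"},
    {"name": "J. Smith", "affiliation": "MIT"},
    {"name": "J. Smith", "affiliation": ""},   # no confident match possible
]

for doc in docs:
    key = (doc["name"], doc["affiliation"])
    if doc["affiliation"] and key in known:
        doc["author_id"] = known[key]          # confident match: reuse the ID
    else:
        doc["author_id"] = next(next_id)       # grouped separately
        if doc["affiliation"]:
            known[key] = doc["author_id"]

print([d["author_id"] for d in docs])  # [1, 1, 2]: one person, two identifiers
```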
+++++++++++++++++
more on metrics in this iMS blog
Combine the superfast calculational capacities of Big Compute with the oceans of specific personal information comprising Big Data — and the fertile ground for computational propaganda emerges. That’s how the small AI programs called bots can be unleashed into cyberspace to target and deliver misinformation exactly to the people who will be most vulnerable to it. These messages can be refined over and over again based on how well they perform (again in terms of clicks, likes and so on). Worst of all, all this can be done semiautonomously, allowing the targeted propaganda (like fake news stories or faked images) to spread like viruses through communities most vulnerable to their misinformation.
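The “refined over and over again” step is, mechanically, an engagement-driven feedback loop. The sketch below is a generic epsilon-greedy bandit, not a reconstruction of any real botnet; every variant name and click rate is hypothetical. It shows how such a loop converges on whichever message draws the most clicks:

```python
# A generic epsilon-greedy loop: show message variants, observe clicks, and
# increasingly favour whatever performs best. All rates are hypothetical.
import random

random.seed(0)
variants = ["A", "B", "C"]
true_rate = {"A": 0.02, "B": 0.05, "C": 0.01}  # unknown to the algorithm
shown = {v: 0 for v in variants}
clicks = {v: 0 for v in variants}

for _ in range(10_000):
    if random.random() < 0.1:   # explore: try a random variant
        v = random.choice(variants)
    else:                       # exploit: reuse the best performer so far
        v = max(variants, key=lambda x: clicks[x] / shown[x] if shown[x] else 1.0)
    shown[v] += 1
    clicks[v] += random.random() < true_rate[v]

print(max(variants, key=lambda v: clicks[v] / shown[v]))  # typically "B"
```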
According to Bolsover and Howard, viewing computational propaganda only from a technical perspective would be a grave mistake. As they explain, seeing it just in terms of variables and algorithms “plays into the hands of those who create it, the platforms that serve it, and the firms that profit from it.”
Computational propaganda is a new thing. People just invented it. And they did so by realizing possibilities emerging from the intersection of new technologies (Big Compute, Big Data) and new behaviors those technologies allowed (social media). But the emphasis on behavior can’t be lost.
People are not machines. We do things for a whole lot of reasons including emotions of loss, anger, fear and longing. To combat computational propaganda’s potentially dangerous effects on democracy in a digital age, we will need to focus on both its how and its why.
I asked Tinder for my data. It sent me 800 pages of my deepest, darkest secrets
The dating app knows me better than I do, but these reams of intimate information are just the tip of the iceberg. What if my data is hacked – or sold?
Every European citizen is allowed to request their personal data under EU data protection law, yet very few actually do, according to Tinder.
With the help of privacy activist Paul-Olivier Dehaye from personaldata.io and human rights lawyer Ravi Naik, I emailed Tinder requesting my personal data and got back way more than I bargained for.
Some 800 pages came back containing information such as my Facebook “likes”, links to where my Instagram photos would have been had I not previously deleted the associated account, my education, the age-rank of men I was interested in, how many Facebook friends I had, when and where every online conversation with every single one of my matches happened … the list goes on.
Reading through the 1,700 Tinder messages I’ve sent since 2013, I took a trip into my hopes, fears, sexual preferences and deepest secrets. Tinder knows me so well. It knows the real, inglorious version of me who copy-pasted the same joke to match 567, 568, and 569; who exchanged compulsively with 16 different people simultaneously one New Year’s Day, and then ghosted 16 of them.
“What you are describing is called secondary implicit disclosed information,” explains Alessandro Acquisti, professor of information technology at Carnegie Mellon University. “Tinder knows much more about you when studying your behaviour on the app. It knows how often you connect and at which times; the percentage of white men, black men, Asian men you have matched; which kinds of people are interested in you; which words you use the most; how much time people spend on your picture before swiping you, and so on. Personal data is the fuel of the economy. Consumers’ data is being traded and transacted for the purpose of advertising.”
In May, an algorithm was used to scrape 40,000 profile images from the platform in order to build an AI to “genderise” faces. A few months earlier, 70,000 profiles from OkCupid (owned by Tinder’s parent company Match Group) were made public by a Danish researcher some commentators have labelled a “white supremacist”, who used the data to try to establish a link between intelligence and religious beliefs. The data is still out there.