People who behaved in accordance with them—for example, by staying away from the overgrown pond bank where someone said there was a viper—were more likely to survive than those who did not.
Compounding the problem is the proliferation of online information. Viewing and producing blogs, videos, tweets and other units of information called memes has become so cheap and easy that the information marketplace is inundated. My note: folksonomy at its worst.
At the University of Warwick in England and at Indiana University Bloomington’s Observatory on Social Media (OSoMe, pronounced “awesome”), our teams are using cognitive experiments, simulations, data mining and artificial intelligence to comprehend the cognitive vulnerabilities of social media users, and developing analytical and machine-learning aids to fight social media manipulation.
As Nobel Prize–winning economist and psychologist Herbert A. Simon noted, “What information consumes is rather obvious: it consumes the attention of its recipients.”
attention economy
Our models revealed that even when we want to see and share high-quality information, our inability to view everything in our news feeds inevitably leads us to share things that are partly or completely untrue.
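My sketch of the kind of model involved (a toy version, not the OSoMe simulation itself; every parameter here is an illustrative assumption): agents prefer to reshare higher-quality memes, but each feed holds only a few items, so low-quality memes that happen to get early exposure keep circulating.

```python
# Toy model of meme sharing under limited attention (illustrative only).
import random

NUM_AGENTS = 200
FEED_LEN = 5          # how many memes an agent can attend to at once
STEPS = 20_000
P_NEW = 0.1           # chance of posting a fresh meme vs. resharing

feeds = [[] for _ in range(NUM_AGENTS)]   # each feed: list of (meme_id, quality)
shares = {}                                # meme_id -> times shared
quality = {}                               # meme_id -> quality in [0, 1]
next_id = 0

def post(meme):
    """Push a meme onto a random user's feed, evicting the oldest item."""
    follower = random.randrange(NUM_AGENTS)
    feeds[follower].insert(0, meme)
    del feeds[follower][FEED_LEN:]
    shares[meme[0]] = shares.get(meme[0], 0) + 1

for _ in range(STEPS):
    agent = random.randrange(NUM_AGENTS)
    if random.random() < P_NEW or not feeds[agent]:
        q = random.random()                # quality drawn at random
        meme = (next_id, q)
        quality[next_id] = q
        next_id += 1
    else:
        # choose from the visible feed, weighted by quality: best intentions,
        # but only FEED_LEN items are ever visible
        weights = [q for _, q in feeds[agent]]
        meme = random.choices(feeds[agent], weights=weights)[0]
    post(meme)

top = sorted(shares, key=shares.get, reverse=True)[:20]
print("mean quality of 20 most-shared memes:",
      round(sum(quality[m] for m in top) / 20, 2))
print("mean quality of all memes:",
      round(sum(quality.values()) / len(quality), 2))
```

The two print statements compare the average quality of the most-shared memes with the average over all memes; the gap between them is the quality-popularity disconnect the excerpt describes.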
Frederic Bartlett
Cognitive biases greatly worsen the problem.
We now know that our minds do this all the time: they adjust our understanding of new information so that it fits in with what we already know. One consequence of this so-called confirmation bias is that people often seek out, recall and understand information that best confirms what they already believe.
This tendency is extremely difficult to correct.
Making matters worse, search engines and social media platforms provide personalized recommendations based on the vast amounts of data they have about users’ past preferences.
pollution by bots
Social Herding
social groups create a pressure toward conformity so powerful that it can overcome individual preferences, and by amplifying random early differences, it can cause segregated groups to diverge to extremes.
Social media follows a similar dynamic. We confuse popularity with quality and end up copying the behavior we observe.
information is transmitted via “complex contagion”: when we are repeatedly exposed to an idea, typically from many sources, we are more likely to adopt and reshare it.
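My toy illustration of the threshold mechanics (the network, seed and THRESHOLD value are assumptions, not from the article): a node adopts the meme only once several distinct neighbors have exposed it, whereas a simple contagion would spread from a single exposure.

```python
# Complex contagion as a threshold model on a small-world network (toy sketch).
import random
import networkx as nx

THRESHOLD = 2                                            # exposures needed to adopt
G = nx.watts_strogatz_graph(n=100, k=6, p=0.1, seed=1)   # small-world network

adopted = set(random.Random(1).sample(list(G.nodes), 5)) # initial spreaders
changed = True
while changed:
    changed = False
    for node in G.nodes:
        if node in adopted:
            continue
        exposures = sum(1 for nbr in G.neighbors(node) if nbr in adopted)
        if exposures >= THRESHOLD:   # repeated exposure from multiple sources
            adopted.add(node)
            changed = True

print(f"{len(adopted)} of {G.number_of_nodes()} nodes adopted the meme")
```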
In addition to showing us items that conform with our views, social media platforms such as Facebook, Twitter, YouTube and Instagram place popular content at the top of our screens and show us how many people have liked and shared something. Few of us realize that these cues do not provide independent assessments of quality.
programmers who design the algorithms for ranking memes on social media assume that the “wisdom of crowds” will quickly identify high-quality items; they use popularity as a proxy for quality. My note: again, ill-conceived folksonomy.
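A deliberately naive ranking sketch (hypothetical, not any platform’s actual code) of popularity standing in as a proxy for quality: engagement counts are the only input, so no independent quality assessment ever enters the loop.

```python
# Feed ranking driven purely by engagement counts (illustrative stand-in).
posts = [
    {"id": "a", "likes": 9800, "shares": 4100},   # viral hoax
    {"id": "b", "likes": 120,  "shares": 15},     # careful reporting
]

def engagement_score(post):
    # popularity stands in for quality; shares weighted more than likes
    return post["shares"] * 2 + post["likes"]

feed = sorted(posts, key=engagement_score, reverse=True)
print([p["id"] for p in feed])   # the hoax ranks first
```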
Echo Chambers
the political echo chambers on Twitter are so extreme that individual users’ political leanings can be predicted with high accuracy: you have the same opinions as the majority of your connections. This chambered structure efficiently spreads information within a community while insulating that community from other groups.
socially shared information not only bolsters our biases but also becomes more resilient to correction.
machine-learning algorithms to detect social bots. One of these, Botometer, is a public tool that extracts 1,200 features from a given Twitter account to characterize its profile, friends, social network structure, temporal activity patterns, language and other features. The program compares these characteristics with those of tens of thousands of previously identified bots to give the Twitter account a score for its likely use of automation.
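The idea in miniature, as a generic supervised-learning sketch rather than Botometer’s actual code: each account becomes a feature vector (the five columns below are illustrative stand-ins for Botometer’s 1,200 features), a classifier is trained on accounts already labeled bot or human, and unseen accounts receive a bot-likelihood score.

```python
# Generic bot-detection sketch: train on labeled accounts, score new ones.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# columns (hypothetical): tweets per day, followers/friends ratio,
# has profile photo, mean seconds between tweets, fraction of tweets with URLs
X_train = np.array([
    [450, 0.01, 0,   12, 0.95],   # labeled bot
    [600, 0.02, 0,    8, 0.99],   # labeled bot
    [  6, 1.10, 1, 5200, 0.20],   # labeled human
    [  3, 0.90, 1, 9400, 0.10],   # labeled human
])
y_train = np.array([1, 1, 0, 0])  # 1 = bot, 0 = human

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

suspect = np.array([[500, 0.015, 0, 10, 0.97]])  # an unseen account
print("bot score:", clf.predict_proba(suspect)[0][1])
```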
Some manipulators play both sides of a divide through separate fake news sites and bots, driving political polarization or monetizing the traffic through ads.
recently uncovered a network of inauthentic accounts on Twitter that were all coordinated by the same entity. Some pretended to be pro-Trump supporters of the Make America Great Again campaign, whereas others posed as Trump “resisters”; all asked for political donations.
a mobile app called Fakey that helps users learn how to spot misinformation. The game simulates a social media news feed, showing actual articles from low- and high-credibility sources. Users must decide what they should or should not share and what to fact-check. Analysis of data from Fakey confirms the prevalence of online social herding: users are more likely to share low-credibility articles when they believe that many other people have shared them.
Hoaxy shows how any extant meme spreads through Twitter. In this visualization, nodes represent actual Twitter accounts, and links depict how retweets, quotes, mentions and replies propagate the meme from account to account.
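The underlying data structure in miniature (illustrative, not Hoaxy’s implementation): accounts as nodes in a directed graph, with a typed edge for every retweet, quote, mention or reply that moves the meme along.

```python
# A Hoaxy-style diffusion network as a directed graph (toy example).
import networkx as nx

G = nx.DiGraph()
events = [                       # (source account, target account, interaction)
    ("@origin",    "@amplifier", "retweet"),
    ("@amplifier", "@reader1",   "retweet"),
    ("@amplifier", "@reader2",   "quote"),
    ("@reader1",   "@skeptic",   "reply"),
]
for src, dst, kind in events:
    G.add_edge(src, dst, interaction=kind)

# accounts with the highest out-degree are the meme's main spreaders
spreaders = sorted(G.out_degree, key=lambda pair: pair[1], reverse=True)
print(spreaders[:3])
```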
Free communication is not free. By decreasing the cost of information, we have decreased its value and invited its adulteration.
Beulr is a bot that attends Zoom class on your behalf. Beulr will join your Zoom meetings through a web browser in the cloud, displaying your information. You can schedule weeks in advance and tell the bot exactly when to arrive and when to leave.
Elon Musk’s brain-computer startup is getting ready to blow your mind
Musk reckons his brain-computer interface could one day help humans merge with AI, record their memories, or download their consciousness. Could he be right?
The idea is to solve these problems with an implantable digital device that can interpret, and possibly alter, the electrical signals made by neurons in the brain.
the latest iteration of the company’s hardware: a small, circular device that attaches to the surface of the brain, gathering data from the cortex and passing it on to external computing systems for analysis.
Several different types of working brain-computer interfaces already exist, gathering data on electrical signals from the user’s brain and translating them into data that can be interpreted by machines.
++++++++++++
If we put computers in our brains, strange things might happen to our minds
Using a brain-computer interface can fundamentally change our grey matter, our view of ourselves and even how fast our brains can change the world.
They then worked with a deepfake artist who used an open-source algorithm to swap in Putin’s and Kim’s faces. A post-production crew cleaned up the leftover artifacts of the algorithm to make the video look more realistic. All in all, the process took only 10 days. Attempting the equivalent with CGI likely would have taken months, the team says. It also could have been prohibitively expensive.
Last year, Australia’s Chief Scientist Alan Finkel suggested that we in Australia should become “human custodians”. This would mean being leaders in technological development, ethics, and human rights.
A recent report from the Australian Council of Learned Academies (ACOLA) brought together experts from scientific and technical fields as well as the humanities, arts and social sciences to examine key issues arising from artificial intelligence.
A similar vision drives Stanford University’s Institute for Human-Centered Artificial Intelligence. The institute brings together researchers from the humanities, education, law, medicine, business and STEM to study and develop “human-centred” AI technologies.
Meanwhile, across the Atlantic, the Future of Humanity Institute at the University of Oxford similarly investigates “big-picture questions” to ensure “a long and flourishing future for humanity”.
The IT sector is also wrestling with the ethical issues raised by rapid technological advancement. Microsoft’s Brad Smith and Harry Shum wrote in their 2018 book The Future Computed that one of their “most important conclusions” was that the humanities and social sciences have a crucial role to play in confronting the challenges raised by AI.
Without training in ethics, human rights and social justice, the people who develop the technologies that will shape our future could make poor decisions.
digital ethics, which I define simply as “doing the right thing at the intersection of technology innovation and accepted social values.”
Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, written by Cathy O’Neil in early 2016, continues to be relevant and illuminating. O’Neil’s book revolves around her insight that “algorithms are opinions embedded in code,” in distinct contrast to the belief that algorithms are based on—and produce—indisputable facts.
Safiya Umoja Noble’s book Algorithms of Oppression: How Search Engines Reinforce Racism
The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power
Got a new open access article out on the ways AI is embedding in education research. Well-funded precision education experts and learning engineers aim to collect psychodata, brain data and biodata as evidence of the embodied substrates of learning. https://t.co/CbdHReXUiz
This article presents an examination of how education research is being remade as an experimental data-intensive science. AI is combining with learning science in new ‘digital laboratories’ where ownership over data, and power and authority over educational knowledge production, are being redistributed to research assemblages of computational machines and scientific expertise.
Research across the sciences, humanities and social sciences is increasingly conducted through digital knowledge machines that are reconfiguring the ways knowledge is generated, circulated and used (Meyer and Schroeder, 2015).
Knowledge infrastructures, such as those of statistical institutes or research-intensive universities, have undergone significant digital transformation with the arrival of data-intensive technologies, with knowledge production now enacted in myriad settings, from academic laboratories and research institutes to commercial research and development studios, think tanks and consultancies. Datafied knowledge infrastructures have become hubs of command and control over the creation, analysis and exchange of data (Bigo et al., 2019).
The combination of AI and learning science into an AILSci research assemblage consists of particular forms of scientific expertise embodied by knowledge actors – individuals and organizations – identified by categories including science of learning, AIED, precision education and learning engineering.
Precision education overtly uses psychological, neurological and genomic data to tailor or personalize learning around the unique needs of the individual (Williamson, 2019). Precision education approaches include cognitive tracking, behavioural monitoring, brain imaging and DNA analysis.
Expert power is therefore claimed by those who can perform big data analyses, especially those able to translate and narrate the data for various audiences. Likewise, expert power in education is now claimed by those who can enact data-intensive science of learning, precision education and learning engineering research and development, and translate AILSci findings into knowledge for application in policy and practitioner settings.
the thinking of a thinking infrastructure is not merely a conscious human cognitive process, but relationally performed across humans and socio-material strata, wherein interconnected technical devices and other forms ‘organize thinking and thought and direct action’.
As an infrastructure for AILSci analyses, these technologies at least partly structure how experts think: they generate new understandings and knowledge about processes of education and learning that are only thinkable and knowable due to the computational machinery of the research enterprise.
Big data-based molecular genetics studies are part of a bioinformatics-led transformation of biomedical sciences based on analysing exceptional volumes of data (Parry and Greenhough, 2018), which has transformed the biological sciences to focus on structured and computable data rather than embodied evidence itself.
Isin and Ruppert (2019) have recently conceptualized an emergent form of power that they characterize as sensory power. Building on Foucault, they note how sovereign power gradually metamorphosed into disciplinary power and biopolitical forms of statistical regulation over bodies and populations. Sensory power marks a shift to practices of data-intensive sensing, and to the quantified tracking, recording and representing of living pulses, movements and sentiments through devices such as wearable fitness monitors, online natural-language processing and behaviour-tracking apps. Davies (2019: 515–20) designates these as ‘techno-somatic real-time sensing’ technologies that capture the ‘rhythms’ and ‘metronomic vitality’ of human bodies, and bring about ‘new cyborg-type assemblages of bodies, codes, screens and machines’ in a ‘constant cybernetic loop of action, feedback and adaptation’.
Techno-somatic modes of neural sensing, using neurotechnologies for brain imaging and neural analysis, are the next frontier in AILSci. Real-time brainwave sensing is being developed and trialled in multiple expert settings.
International Data Corporation says it expects the number of AI jobs globally to grow 16% this year.
In a new report released Wednesday, IBM found that the majority (85%) of AI professionals think the industry has become more diverse over recent years.
Of the 3,200 people surveyed across North America, Europe and India, 86% said they are now confident in AI systems’ ability to make decisions without bias.
A plurality of men (46%) said they became interested in a tech career in high school or earlier, while a majority of women (53%) only considered it a possible path during their undergraduate degree or grad school.