Twenty years have passed since renowned Harvard Professor Larry Lessig coined the phrase “Code is Law”, suggesting that in the digital age, computer code regulates behavior much like legislative code traditionally did. These days, the computer code that powers artificial intelligence (AI) is a salient example of Lessig’s statement.
Good AI requires sound data. One of the principles, some would say the organizing principle, of privacy and data protection frameworks is data minimization. Data protection laws require organizations to limit data collection to the extent strictly necessary and retain data only so long as it is needed for its stated goal.
Preventing discrimination – intentional or not.
When is a distinction between groups permissible or even merited and when is it untoward? How should organizations address historically entrenched inequalities that are embedded in data? New mathematical theories such as “fairness through awareness” enable sophisticated modeling to guarantee statistical parity between groups.
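Statistical parity, the fairness criterion mentioned above, can be illustrated with a toy calculation. The sketch below is not the “fairness through awareness” algorithm itself; it only shows what the parity condition measures, using made-up group labels and loan decisions:

```python
# Toy illustration: statistical parity asks that a model's positive-outcome
# rate be (nearly) equal across groups. The groups and decisions below are
# hypothetical.

def positive_rate(decisions):
    """Fraction of individuals who received the positive outcome (1)."""
    return sum(decisions) / len(decisions)

def statistical_parity_difference(decisions_a, decisions_b):
    """Difference in positive-outcome rates between two groups.
    A value near 0 indicates statistical parity."""
    return positive_rate(decisions_a) - positive_rate(decisions_b)

# Hypothetical loan decisions (1 = approved) for two demographic groups.
group_a = [1, 1, 0, 1, 0, 1, 1, 0]   # 5/8 approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 3/8 approved

gap = statistical_parity_difference(group_a, group_b)
print(f"parity gap: {gap:.3f}")      # 0.625 - 0.375 = 0.250
```

A fairness-aware model would be constrained to keep this gap close to zero while still optimizing its primary objective.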
Assuring explainability – technological due process. In privacy and freedom of information frameworks alike, transparency has traditionally been a bulwark against unfairness and discrimination. As Justice Brandeis once wrote, “Sunlight is the best of disinfectants.”
Deep learning means that iterative computer programs derive conclusions for reasons that may not be evident even after forensic inquiry.
Yet even with code as law and a rising need for law in code, policymakers do not need to become mathematicians, engineers and coders. Instead, institutions must develop and enhance their technical toolbox by hiring experts and consulting with top academics, industry researchers and civil society voices. Responsible AI requires access not only to lawyers, ethicists and philosophers but also to technical leaders and subject matter experts to ensure an appropriate balance between economic and scientific benefits to society on the one hand and individual rights and freedoms on the other.
AI programmes themselves generate additional computer programming code to fine-tune their algorithms—without the need for an army of computer programmers. In AI speak, this is now often referred to as “machine learning”.
An AI programme “catastrophically forgets” the learnings from its first set of data and would have to be retrained from scratch with new data. The website futurism.com says a completely new set of algorithms would have to be written for a programme that has mastered face recognition, if it is now also expected to recognize emotions. Data on emotions would have to be manually relabelled and then fed into this completely different algorithm for the altered programme to have any use. The original facial recognition programme would have “catastrophically forgotten” the things it learnt about facial recognition as it takes on new code for recognizing emotions. According to the website, this is because computer programmes cannot understand the underlying logic that they have been coded with.
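The phenomenon described above can be caricatured with a deliberately tiny model: a one-parameter line fit by gradient descent on task A, then retrained on task B with no rehearsal of A, loses its fit to A entirely. This sketch is only an illustration of the general idea, not the facial-recognition scenario or any real system:

```python
# Toy demonstration of "catastrophic forgetting": a one-parameter model
# y = w * x is trained on task A, then on task B; afterwards its error
# on task A is large again, as if task A had never been learned.

def train(w, data, lr=0.05, steps=500):
    """Gradient descent on mean squared error for y = w * x."""
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def loss(w, data):
    """Mean squared error of y = w * x on the data set."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

task_a = [(x, 2.0 * x) for x in range(1, 6)]    # best fit: w = 2
task_b = [(x, -3.0 * x) for x in range(1, 6)]   # best fit: w = -3

w = train(0.0, task_a)
print(f"after task A: w={w:.2f}, loss on A={loss(w, task_a):.2f}")

w = train(w, task_b)                            # no rehearsal of task A
print(f"after task B: w={w:.2f}, loss on A={loss(w, task_a):.2f}")
```

After retraining, the single weight has moved to fit task B, and the error on task A balloons; real neural networks forget in the same way, only across millions of weights.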
Irina Higgins, a senior researcher at Google DeepMind, has recently announced that she and her team have begun to crack the code on “catastrophic forgetting”.
As far as I am concerned, this limbic thinking is “catastrophic thinking” which is the only true antipode to AI’s “catastrophic forgetting”. It will be eons before AI thinks with a limbic brain, let alone has consciousness.
Stephen Hawking warns artificial intelligence could end mankind
By Rory Cellan-Jones, Technology correspondent, 2 December 2014
The most popular approaches today focus on Big Data, or mimicking humans that already know how to do some task. But sheer mimicry breaks down when one gives a machine new tasks, and, as I explained a few weeks ago, Big Data approaches tend to excel at finding correlations without necessarily being able to induce the rules of the game. If Big Data alone is not a powerful enough tool to induce a strategy in a complex but well-defined game like chess, then that’s a problem, since the real world is vastly more open-ended, and considerably more complicated.
To say that someone is or is not intelligent has never been merely a comment on their mental faculties. It is always also a judgment on what they are permitted to do. Intelligence, in other words, is political.
The problem has taken an interesting 21st-century twist with the rise of Artificial Intelligence (AI).
The term ‘intelligence’ itself has never been popular with English-language philosophers. Nor does it have a direct translation into German or ancient Greek, two of the other great languages in the Western philosophical tradition. But that doesn’t mean philosophers weren’t interested in it. Indeed, they were obsessed with it, or more precisely a part of it: reason or rationality. The term ‘intelligence’ managed to eclipse its more old-fashioned relative in popular and political discourse only with the rise of the relatively new-fangled discipline of psychology, which claimed intelligence for itself.
Plato concluded, in The Republic, that the ideal ruler is ‘the philosopher king’, as only a philosopher can work out the proper order of things. This idea was revolutionary at the time. Athens had already experimented with democracy, the rule of the people – but to count as one of those ‘people’ you just had to be a male citizen, not necessarily intelligent. Elsewhere, the governing classes were made up of inherited elites (aristocracy), or by those who believed they had received divine instruction (theocracy), or simply by the strongest (tyranny).
Plato’s novel idea fell on the eager ears of the intellectuals, including those of his pupil Aristotle. Aristotle was always the more practical, taxonomic kind of thinker. He took the notion of the primacy of reason and used it to establish what he believed was a natural social hierarchy.
So at the dawn of Western philosophy, we have intelligence identified with the European, educated, male human. It becomes an argument for his right to dominate women, the lower classes, uncivilised peoples and non-human animals. While Plato argued for the supremacy of reason and placed it within a rather ungainly utopia, only one generation later, Aristotle presents the rule of the thinking man as obvious and natural.
The late Australian philosopher and conservationist Val Plumwood has argued that the giants of Greek philosophy set up a series of linked dualisms that continue to inform our thought. Opposing categories such as intelligent/stupid, rational/emotional and mind/body are linked, implicitly or explicitly, to others such as male/female, civilised/primitive, and human/animal. These dualisms aren’t value-neutral, but fall within a broader dualism, as Aristotle makes clear: that of dominant/subordinate or master/slave. Together, they make relationships of domination, such as patriarchy or slavery, appear to be part of the natural order of things.
Descartes rendered nature literally mindless, and so devoid of intrinsic value – which thereby legitimated the guilt-free oppression of other species.
For Kant, only reasoning creatures had moral standing. Rational beings were to be called ‘persons’ and were ‘ends in themselves’. Beings that were not rational, on the other hand, had ‘only a relative value as means, and are therefore called things’. We could do with them what we liked.
This line of thinking was extended to become a core part of the logic of colonialism. The argument ran like this: non-white peoples were less intelligent; they were therefore unqualified to rule over themselves and their lands. It was therefore perfectly legitimate – even a duty, ‘the white man’s burden’ – to destroy their cultures and take their territory.
The same logic was applied to women, who were considered too flighty and sentimental to enjoy the privileges afforded to the ‘rational man’.
Galton believed that intellectual ability was hereditary and could be enhanced through selective breeding. He decided to find a way to scientifically identify the most able members of society and encourage them to breed – prolifically, and with each other. The less intellectually capable should be discouraged from reproducing, or indeed prevented, for the sake of the species. Thus eugenics and the intelligence test were born together.
From David Hume to Friedrich Nietzsche, and Sigmund Freud through to postmodernism, there are plenty of philosophical traditions that challenge the notion that we’re as intelligent as we’d like to believe, and that intelligence is the highest virtue.
From 2001: A Space Odyssey to the Terminator films, writers have fantasised about machines rising up against us. Now we can see why. If we’re used to believing that the top spots in society should go to the brainiest, then of course we should expect to be made redundant by bigger-brained robots and sent to the bottom of the heap.
Natural stupidity, rather than artificial intelligence, remains the greatest risk.
Information literacies (media literacy, research literacy, digital literacy, visual literacy, financial literacy, health literacy, cyber wellness, infographics, information behavior, trans-literacy, post-literacy)
Information Literacy and academic libraries
Information Literacy and adult education
Information Literacy and blended learning
Information Literacy and distance learning
Information Literacy and mobile devices
Information Literacy and Gamification
Information Literacy and public libraries
Information Literacy in Primary and Secondary Schools
Information Literacy and the Knowledge Economy
Information Literacy and Lifelong Learning
Information Literacy and the Information Society
Information Literacy and the Multimedia Society
Information Literacy and the Digital Society
Information Literacy in the modern world (e.g. trends, emerging technologies and innovation, growth of digital resources, digital reference tools, reference services)
The future of Information Literacy
Workplace Information Literacy
Librarians as support to the lifelong learning process
Digital literacy, Digital Citizenship
Digital pedagogy and Information Literacy
Information Literacy Needs in the Electronic Resource Environment
Integrating Information Literacy into the curriculum
Putting Information Literacy theory into practice
Information Literacy training and instruction
Instructional design and performance for Information Literacy (e.g. teaching practice, session design, lesson plans)
Information Literacy and online learning (e.g. self-paced IL modules, online courses, Library Guides)
Information Literacy and Virtual Learning Environments
Supporting user needs through Library 2.0 and beyond
Digital empowerment and reference work
Information Literacy across the disciplines
Information Literacy and digital preservation
Innovative IL approaches
Student engagement with Information Literacy
Information Literacy, Copyright and Intellectual Property
Information Literacy and Academic Writing
Media and Information Literacy – theoretical approaches (standards, assessment, collaboration, etc.)
The Digital Competence Framework 2.0
Information Literacy theory (models, standards, indicators, Moscow Declaration etc.)
Information Literacy and Artificial intelligence
Information Literacy and information behavior
Information Literacy and reference services: cyber reference services, virtual reference services, mobile reference services
Information Literacy cultural and contextual approaches
Information Literacy and Threshold concepts
Information Literacy evaluation and assessment
Information Literacy in different cultures and countries including national studies
Information Literacy project management
Measuring Information Literacy instruction and assessment
New aspects of education/strategic planning, policy, and advocacy for Information Literacy in a digital age
Information Literacy and the Digital Divide
Policy and Planning for Information Literacy
Branding, promotion and marketing for Information Literacy
Cross-sectoral and interdisciplinary collaboration and partnerships for Information Literacy
Leadership and Governance for Information Literacy
Strategic planning for IL
Strategies in e-learning to promote self-directed and sustainable learning in the area of Information Literacy skills.
Project Information Literacy is a nonprofit research institution that explores how college students find, evaluate and use information. The study was commissioned by the John S. and James L. Knight Foundation and the Harvard Graduate School of Education, and drew on focus groups and interviews with 103 undergraduates and 37 faculty members from eight U.S. colleges.
To better equip students for the modern information environment, the report recommends that faculty teach algorithm literacy in their classrooms. And given students’ reliance on learning from their peers when it comes to technology, the authors also suggest that students help co-design these learning experiences.
While informed and critically aware media users may see past the content suggested after a search on YouTube, Facebook, or Google, those without these skills, particularly young or inexperienced users, fail to recognize the role of underlying algorithms in the resulting filter bubbles and echo chambers (Cohen, 2018).
Media literacy education is more important than ever. It is not just the overwhelming calls to understand the effects of fake news, or the data breaches threatening personal information; it is also the artificial intelligence systems designed to predict and project what consumers of social media are perceived to want.
Literacy in today’s online and offline environments “means being able to use the dominant symbol systems of the culture for personal, aesthetic, cultural, social, and political goals” (Hobbs & Jensen, 2018, p 4).
Blended Reality is a cross-curricular applied research program at Yale through which students and faculty create interactive experiences using virtual reality, augmented reality and 3D printing tools. Yale is one of about 20 colleges participating in the HP/Educause Campus of the Future project investigating the use of this technology in higher education.
Interdisciplinary student and professor teams at Yale have developed projects that include using motion capture and artificial intelligence to generate dance choreography, converting museum exhibits into detailed digital replicas, and making an app that uses augmented reality to simulate injuries on the mannequins medical students use for training.
The perspectives and skills of art and humanities students have been critical to the success of these efforts, says Justin Berry, faculty member at the Yale Center for Collaborative Arts and Media and principal investigator for the HP Blended Reality grant.
Artificial intelligence and mixed reality have driven demand in learning games around the world, according to a new report by Metaari. A five-year forecast has predicted that educational gaming will reach $24 billion by 2024, with a compound annual growth rate of 33 percent and a quadrupling of revenues. Metaari is an analyst firm that tracks advanced learning technology.
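The forecast’s two figures can be cross-checked against each other: 33 percent compound annual growth sustained over five years does imply roughly a quadrupling, and it also implies a base-year market size (a derived figure, not stated in the report):

```python
# Sanity-checking the Metaari forecast arithmetic: a 33% compound annual
# growth rate (CAGR) over a five-year horizon.

cagr = 0.33
years = 5
growth_multiple = (1 + cagr) ** years
print(f"5-year multiple at 33% CAGR: {growth_multiple:.2f}x")   # about 4.16x

# Implied base-year revenue if the market reaches $24 billion by 2024.
implied_base = 24 / growth_multiple
print(f"implied base-year revenue: ${implied_base:.1f} billion")
```

The multiple of about 4.16x is consistent with the report’s “quadrupling of revenues,” and it would put the base-year market at roughly $5.8 billion.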
What I find most important: Future IT Workforce: Deploying a broad array of modern recruitment, retention, and employment practices to develop a resilient IT talent pipeline for the institution
Digital Integrations: Ensuring system interoperability, scalability, and extensibility, as well as data integrity, security, standards, and governance, across multiple applications and platforms
Engaged Learning: Incorporating technologies that enable students to create content and engage in active learning in course curricula
Student Retention and Completion: Developing the capabilities and systems to incorporate artificial intelligence into student services to provide personalized, timely support
Administrative Simplification: Applying user-centered design, process improvement, and system reengineering to reduce redundant or unnecessary efforts and improve end-user experiences
Improved Enrollment: Using technology, data, and analytics to develop an inclusive and financially sustainable enrollment strategy to serve more and new learners by personalizing recruitment, enrollment, and learning experiences
Workforce of the Future: Using technology to develop curriculum, content, and learning experiences that prepare students for the evolving workforce
Holistic Student Success: Applying technology and data, including artificial intelligence, to understand and address the numerous contributors to student success, from finances to health and wellness to academic performance and degree planning (my note: this is what Christine Waisner, Mark Gill and Plamen Miltenoff are trying to do with their VR research)
Improved Teaching: Strengthening engagement among faculty, technologists, and researchers to achieve the true and expanding potential of technology to improve teaching
Student-Centric Higher Education: Creating a student-services ecosystem to support the entire student life cycle, from prospecting to enrollment, learning, job placement, alumni engagement, and continuing education