Twenty years have passed since renowned Harvard Professor Larry Lessig coined the phrase “Code is Law”, suggesting that in the digital age, computer code regulates behavior much like legislative code traditionally did. These days, the computer code that powers artificial intelligence (AI) is a salient example of Lessig’s statement.
Good AI requires sound data. One of the principles, some would say the organizing principle, of privacy and data protection frameworks is data minimization. Data protection laws require organizations to limit data collection to the extent strictly necessary and retain data only so long as it is needed for its stated goal.
Preventing discrimination – intentional or not.
When is a distinction between groups permissible or even merited and when is it untoward? How should organizations address historically entrenched inequalities that are embedded in data? New mathematical theories such as “fairness through awareness” enable sophisticated modeling to guarantee statistical parity between groups.
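One way to make the idea of statistical parity concrete is to measure it. The sketch below is a minimal, hypothetical illustration (the function name and toy data are invented here, not drawn from any particular fairness library): it computes the gap between positive-outcome rates across groups, where a gap near zero indicates statistical parity.

```python
from collections import defaultdict

def statistical_parity_gap(outcomes, groups):
    """Difference in positive-outcome rates between groups.

    A gap near 0 indicates statistical parity; a large gap suggests
    one group receives the favourable outcome more often than another.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for y, g in zip(outcomes, groups):
        totals[g] += 1
        positives[g] += y
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Toy example: loan approvals (1 = approved) for two groups.
approved = [1, 1, 0, 1, 0, 0, 1, 0]
group    = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(statistical_parity_gap(approved, group))  # 0.75 vs 0.25 → 0.5
```

Real fairness auditing involves many competing metrics beyond this one; the point here is only that "parity between groups" is something code can check.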
Assuring explainability – technological due process. In privacy and freedom of information frameworks alike, transparency has traditionally been a bulwark against unfairness and discrimination. As Justice Brandeis once wrote, “Sunlight is the best of disinfectants.”
Deep learning means that iterative computer programs derive conclusions for reasons that may not be evident even after forensic inquiry.
Yet even with code as law and a rising need for law in code, policymakers do not need to become mathematicians, engineers and coders. Instead, institutions must develop and enhance their technical toolbox by hiring experts and consulting with top academics, industry researchers and civil society voices. Responsible AI requires access not only to lawyers, ethicists and philosophers but also to technical leaders and subject matter experts, to ensure an appropriate balance between economic and scientific benefits to society on the one hand and individual rights and freedoms on the other.
AI programmes themselves adjust their own internal parameters to fine-tune their algorithms—without the need for an army of computer programmers rewriting the code by hand. In AI speak, this is now often referred to as “machine learning”.
An AI programme “catastrophically forgets” what it learnt from its first set of data and would have to be retrained from scratch with new data. As the website futurism.com explains, a programme that has mastered face recognition would need a completely new set of algorithms if it were also expected to recognize emotions. Data on emotions would have to be manually relabelled and fed into this different algorithm for the altered programme to have any use. In taking on new code for recognizing emotions, the original programme would have “catastrophically forgotten” the things it learnt about facial recognition. According to the website, this is because computer programmes cannot understand the underlying logic that they have been coded with.
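The phenomenon can be demonstrated on a toy scale. In this invented sketch (a single-weight perceptron, nothing like a production face-recognition system), a model is trained on task A, then on task B whose labels conflict with A. After learning B, its competence on A is gone:

```python
# Minimal illustration of "catastrophic forgetting": a single linear
# unit trained on task A, then on a conflicting task B, loses its
# ability to solve A entirely.

def train(w, b, data, epochs=50, lr=0.1):
    """Perceptron training: nudge weight and bias after each mistake."""
    for _ in range(epochs):
        for x, y in data:
            pred = 1 if w * x + b > 0 else 0
            w += lr * (y - pred) * x
            b += lr * (y - pred)
    return w, b

def accuracy(w, b, data):
    return sum((1 if w * x + b > 0 else 0) == y for x, y in data) / len(data)

task_a = [(x, 1 if x > 0 else 0) for x in (-2, -1, 1, 2)]  # positives → 1
task_b = [(x, 0 if x > 0 else 1) for x in (-2, -1, 1, 2)]  # the opposite rule

w, b = train(0.0, 0.0, task_a)
print("accuracy on A after training A:", accuracy(w, b, task_a))  # 1.0
w, b = train(w, b, task_b)
print("accuracy on A after training B:", accuracy(w, b, task_a))  # 0.0
```

Because training on B overwrites the very same weights that encoded A, nothing of A survives — which is the intuition behind the retrain-from-scratch problem described above.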
Irina Higgins, a senior researcher at Google DeepMind, has recently announced that she and her team have begun to crack the code on “catastrophic forgetting”.
As far as I am concerned, this limbic thinking is “catastrophic thinking”, which is the only true antipode to AI’s “catastrophic forgetting”. It will be eons before AI thinks with a limbic brain, let alone has consciousness.
Stephen Hawking warns artificial intelligence could end mankind
By Rory Cellan-Jones, Technology correspondent, 2 December 2014
The most popular approaches today focus on Big Data, or mimicking humans that already know how to do some task. But sheer mimicry breaks down when one gives a machine new tasks, and, as I explained a few weeks ago, Big Data approaches tend to excel at finding correlations without necessarily being able to induce the rules of the game. If Big Data alone is not a powerful enough tool to induce a strategy in a complex but well-defined game like chess, then that’s a problem, since the real world is vastly more open-ended, and considerably more complicated.
To say that someone is or is not intelligent has never been merely a comment on their mental faculties. It is always also a judgment on what they are permitted to do. Intelligence, in other words, is political.
The problem has taken an interesting 21st-century twist with the rise of Artificial Intelligence (AI).
The term ‘intelligence’ itself has never been popular with English-language philosophers. Nor does it have a direct translation into German or ancient Greek, two of the other great languages in the Western philosophical tradition. But that doesn’t mean philosophers weren’t interested in it. Indeed, they were obsessed with it, or more precisely a part of it: reason or rationality. The term ‘intelligence’ managed to eclipse its more old-fashioned relative in popular and political discourse only with the rise of the relatively new-fangled discipline of psychology, which claimed intelligence for itself.
Plato concluded, in The Republic, that the ideal ruler is ‘the philosopher king’, as only a philosopher can work out the proper order of things. This idea was revolutionary at the time. Athens had already experimented with democracy, the rule of the people – but to count as one of those ‘people’ you just had to be a male citizen, not necessarily intelligent. Elsewhere, the governing classes were made up of inherited elites (aristocracy), or by those who believed they had received divine instruction (theocracy), or simply by the strongest (tyranny).
Plato’s novel idea fell on the eager ears of the intellectuals, including those of his pupil Aristotle. Aristotle was always the more practical, taxonomic kind of thinker. He took the notion of the primacy of reason and used it to establish what he believed was a natural social hierarchy.
So at the dawn of Western philosophy, we have intelligence identified with the European, educated, male human. It becomes an argument for his right to dominate women, the lower classes, uncivilised peoples and non-human animals. While Plato argued for the supremacy of reason and placed it within a rather ungainly utopia, only one generation later Aristotle presented the rule of the thinking man as obvious and natural.
The late Australian philosopher and conservationist Val Plumwood has argued that the giants of Greek philosophy set up a series of linked dualisms that continue to inform our thought. Opposing categories such as intelligent/stupid, rational/emotional and mind/body are linked, implicitly or explicitly, to others such as male/female, civilised/primitive, and human/animal. These dualisms aren’t value-neutral, but fall within a broader dualism, as Aristotle makes clear: that of dominant/subordinate or master/slave. Together, they make relationships of domination, such as patriarchy or slavery, appear to be part of the natural order of things.
Descartes rendered nature literally mindless, and so devoid of intrinsic value – which thereby legitimated the guilt-free oppression of other species.
For Kant, only reasoning creatures had moral standing. Rational beings were to be called ‘persons’ and were ‘ends in themselves’. Beings that were not rational, on the other hand, had ‘only a relative value as means, and are therefore called things’. We could do with them what we liked.
This line of thinking was extended to become a core part of the logic of colonialism. The argument ran like this: non-white peoples were less intelligent; they were therefore unqualified to rule over themselves and their lands. It was therefore perfectly legitimate – even a duty, ‘the white man’s burden’ – to destroy their cultures and take their territory.
The same logic was applied to women, who were considered too flighty and sentimental to enjoy the privileges afforded to the ‘rational man’.
Galton believed that intellectual ability was hereditary and could be enhanced through selective breeding. He decided to find a way to scientifically identify the most able members of society and encourage them to breed – prolifically, and with each other. The less intellectually capable should be discouraged from reproducing, or indeed prevented, for the sake of the species. Thus eugenics and the intelligence test were born together.
From David Hume to Friedrich Nietzsche, and Sigmund Freud through to postmodernism, there are plenty of philosophical traditions that challenge the notion that we’re as intelligent as we’d like to believe, and that intelligence is the highest virtue.
From 2001: A Space Odyssey to the Terminator films, writers have fantasised about machines rising up against us. Now we can see why. If we’re used to believing that the top spots in society should go to the brainiest, then of course we should expect to be made redundant by bigger-brained robots and sent to the bottom of the heap.
Natural stupidity, rather than artificial intelligence, remains the greatest risk.
Many educational institutions maintain their own data centers. “We need to minimize the amount of work we do to keep systems up and running, and spend more energy innovating on things that matter to people.”
What’s the difference between machine learning (ML) and artificial intelligence (AI)?
Jeff Olson: That’s actually the setup for a joke going around the data science community. The punchline? If it’s written in Python or R, it’s machine learning. If it’s written in PowerPoint, it’s AI.
Machine learning is in practical use in a lot of places, whereas AI conjures up all these fantastic thoughts in people.
What is serverless architecture, and why are you excited about it?
Instead of having a machine running all the time, you just run the code necessary to do what you want—there is no persisting server or container. There is only this fleeting moment when the code is being executed. It’s called Function as a Service, and AWS pioneered it with a service called AWS Lambda. It allows an organization to scale up without planning ahead.
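To make the idea concrete, here is a minimal sketch in the style of AWS Lambda’s Python runtime, where a handler function receives an `event` payload and a runtime `context`. The request fields and the greeting logic are invented for illustration; only the handler convention reflects the actual service.

```python
# A Function-as-a-Service sketch: no server process of our own to
# manage — just a function invoked per request and billed per execution.

import json

def handler(event, context):
    # 'event' carries the request payload; 'context' carries runtime
    # metadata supplied by the platform.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

Between invocations, nothing of ours is running at all — which is exactly why there is no capacity planning to do.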
How do you think machine learning and Function as a Service will impact higher education in general?
The radical nature of this innovation will make a lot of systems that were built five or 10 years ago obsolete. Once an organization comes to grips with Function as a Service (FaaS) as a concept, it’s a pretty simple step for that institution to stop doing its own plumbing. FaaS will help accelerate innovation in education because of the API economy.
If the campus IT department will no longer be taking care of the plumbing, what will its role be?
I think IT will be curating the inter-operation of services, some developed locally but most purchased from the API economy.
As a result, you write far less code and have fewer security risks, so you can innovate faster. A succinct machine-learning algorithm with fewer than 500 lines of code can now replace an application that might have required millions of lines of code. Second, it scales. If you happen to have a gigantic spike in traffic, it deals with it effortlessly. If you have very little traffic, you incur a negligible cost.
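The “succinct algorithm” point is easy to illustrate. The toy below (invented here, not from the interview) is a complete nearest-neighbour classifier in about a dozen lines — the kind of learned routine that can stand in for pages of hand-coded rules:

```python
# A compact machine-learning routine: a complete nearest-neighbour
# classifier, standing in for logic that might otherwise be a large
# hand-written rule base.

def nearest_neighbour(train, query):
    """Return the label of the training point closest to `query`."""
    best_label, best_dist = None, float("inf")
    for features, label in train:
        dist = sum((a - b) ** 2 for a, b in zip(features, query))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Toy data: (hours studied, hours slept) → pass/fail
train = [((8, 7), "pass"), ((2, 4), "fail"), ((7, 8), "pass"), ((1, 6), "fail")]
print(nearest_neighbour(train, (6, 7)))  # → pass
```

Production systems are larger, of course, but the ratio holds: the behaviour lives in the data, not in millions of lines of branching code.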
We can build robot teachers, or even robot teaching assistants. But should we?
The Chinese government has declared a national goal of surpassing the U.S. in AI technology by the year 2030, so there is almost a Sputnik-like push for the tech going on right now in China. At the same time, China is also facing a shortage of qualified teachers in many rural areas, and there’s a huge demand for high-quality language teachers and tutors throughout the country.
President Donald Trump on Monday directed federal agencies to improve the nation’s artificial intelligence abilities — and help people whose jobs are displaced by the automation it enables.
It’s good for the US government to focus on AI, said Daniel Castro, chief executive of the Center for Data Innovation, a technology-focused think tank that supports the initiative.
Silicon Valley has been investing heavily in AI in recent years, but the path hasn’t always been an easy one. In October, for instance, Google withdrew from competition for a $10 billion Pentagon cloud computing contract, saying it might conflict with its principles for ethical use of AI.
Both the jazz and classical art forms require not only music literacy but also that the musician be at the top of their game in technical proficiency, tonal quality and, in the case of the jazz idiom, creativity. A jazz master like John Coltrane would practice six to nine hours a day, often stopping only because his inner lower lip was bleeding from the friction of his mouthpiece against his gums and teeth. His ability to compose and create new styles and directions for jazz was legendary. With few exceptions, such as Wes Montgomery or Chet Baker, if you couldn’t read music, you couldn’t play jazz.
Besides the decline of music literacy and participation, there has also been a measurable decline in the quality of popular music, documented quantitatively by Joan Serra, a postdoctoral scholar at the Artificial Intelligence Research Institute of the Spanish National Research Council in Barcelona. Serra and his colleagues analysed 500,000 pieces of music recorded between 1955 and 2010, running the songs through a complex set of algorithms examining three aspects of each:
1. Timbre: sound color, texture and tone quality
2. Pitch: harmonic content of the piece, including its chords, melody and tonal arrangements
3. Loudness: volume variance, which adds richness and depth
In an interview, Billy Joel was asked what has made him a standout. He responded that his ability to read and compose music made him unique in the music industry, which, as he explained, is troubling for the industry when being musically literate makes you stand out. An astonishing amount of today’s popular music is written by just two people: Lukasz Gottwald of the United States and Max Martin of Sweden, who between them are responsible for dozens of songs in the top 100 charts. You can credit Max and Dr. Luke for most of the hits of these stars:
Katy Perry, Britney Spears, Kelly Clarkson, Taylor Swift, Jessie J., KE$HA, Miley Cyrus, Avril Lavigne, Maroon 5, Taio Cruz, Ellie Goulding, NSYNC, Backstreet Boys, Ariana Grande, Justin Timberlake, Nicki Minaj, Celine Dion, Bon Jovi, Usher, Adam Lambert, Justin Bieber, Domino, Pink, Pitbull, One Direction, Flo Rida, Paris Hilton, The Veronicas, R. Kelly, Zebrahead
Way back in 1983, I identified A.I. as one of 20 exponential technologies that would increasingly drive economic growth for decades to come.
Artificial intelligence applies to computing systems designed to perform tasks usually reserved for human intelligence using logic, if-then rules, decision trees and machine learning to recognize patterns from vast amounts of data, provide insights, predict outcomes and make complex decisions. A.I. can be applied to pattern recognition, object classification, language translation, data translation, logistical modeling and predictive modeling, to name a few. It’s important to understand that all A.I. relies on vast amounts of quality data and advanced analytics technology. The quality of the data used will determine the reliability of the A.I. output.
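The “if-then rules and decision trees” end of that spectrum can be shown in a few lines. This is a hypothetical, hand-written example (the rule names and thresholds are invented for illustration): each branch encodes a rule an expert wrote down explicitly, in contrast to machine learning, where such thresholds are induced from data.

```python
# A toy rule-based classifier in the "if-then rules / decision tree"
# style of A.I.: every threshold is hand-chosen, not learned.

def approve_loan(income, debt, years_employed):
    if income < 30_000:          # rule 1: minimum income
        return False
    if debt / income > 0.5:      # rule 2: debt-to-income ceiling
        return False
    return years_employed >= 2   # rule 3: employment history

print(approve_loan(income=60_000, debt=10_000, years_employed=5))  # True
print(approve_loan(income=60_000, debt=40_000, years_employed=5))  # False
```

Systems like this are transparent but brittle; the machine-learning approaches described next trade that transparency for the ability to fit patterns in data directly.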
Machine learning is a subset of A.I. that utilizes advanced statistical techniques to enable computing systems to improve at tasks with experience over time. Chatbots like Amazon’s Alexa, Apple’s Siri, or any of the others from companies like Google and Microsoft all get better every year thanks to all of the use we give them and the machine learning that takes place in the background.
Deep learning is a subset of machine learning that uses advanced algorithms to enable an A.I. system to train itself to perform tasks by exposing multi-layered neural networks to vast amounts of data, then using what has been learned to recognize new patterns contained in the data. Learning can be Human Supervised Learning, Unsupervised Learning and/or Reinforcement Learning, which Google’s DeepMind used to learn how to beat humans at the complex game Go. Reinforcement learning will drive some of the biggest breakthroughs.
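Reinforcement learning can be sketched at toy scale with tabular Q-learning. The example below (entirely invented, a corridor world in the spirit of, not the scale of, Go-playing systems) has an agent start at cell 0 and earn a reward only at cell 4; by trial and error it learns that moving right is the better action everywhere.

```python
import random

# Tabular Q-learning on a 1-D corridor of 5 cells: reward 1.0 for
# reaching the rightmost cell, 0 otherwise.

random.seed(0)
n_states, actions = 5, (-1, +1)           # move left or right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2     # step size, discount, exploration

for _ in range(500):                      # training episodes
    s = 0
    while s != n_states - 1:
        # epsilon-greedy action selection
        a = random.choice(actions) if random.random() < epsilon \
            else max(actions, key=lambda a: Q[(s, a)])
        s2 = min(max(s + a, 0), n_states - 1)
        r = 1.0 if s2 == n_states - 1 else 0.0
        # Q-learning update toward the reward plus discounted best future value
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions) - Q[(s, a)])
        s = s2

# The learned greedy policy: the preferred action in each non-goal cell.
print([max(actions, key=lambda a: Q[(s, a)]) for s in range(n_states - 1)])
```

The same loop of act, observe reward, update value estimates underlies far larger systems; what changes is that deep networks replace the lookup table.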
Autonomous computing uses advanced A.I. tools such as deep learning to enable systems to be self-governing and capable of acting according to situational data without human command. A.I. autonomy includes perception, high-speed analytics, machine-to-machine communications and movement. For example, autonomous vehicles use all of these in real time to successfully pilot a vehicle without a human driver.
Augmented thinking: Over the next five years and beyond, A.I. will become increasingly embedded at the chip level into objects, processes, products and services, and humans will augment their personal problem-solving and decision-making abilities with the insights A.I. provides to get to a better answer faster.
Technology is not good or evil; what matters is how we as humans apply it. Since we can’t stop the increasing power of A.I., I want us to direct its future, putting it to the best possible use for humans.