Posts Tagged ‘artificial intelligence’

ethics and AI

Ethics and Artificial Intelligence: Time Is Pressing – We Must Act

8/7/2018 Prof. Dr. theol. habil. Arne Manzeschke

Last year, the European Parliament put it quite drastically: a new industrial revolution is upon us.
In 1954, George Devol developed Unimate, the first industrial robot [1]. In the 1970s in particular, many manufacturing industries underwent a robotization of their work (for example, the automotive and printing industries).
The definition of an industrial robot in ISO 8373 (2012) makes this concrete: "A robot is a freely and re-programmable, multifunctional manipulator with at least three independent axes, for moving materials, parts, tools or special devices along programmed, variable paths in order to perform a wide variety of tasks."

Ethical Reflections on Robotics and Artificial Intelligence

If one tries to gain an overview of the various ethical problems associated with the rise of "intelligent" robots that are becoming ever more powerful in every respect (precision, speed, strength, combinatorics and networking), it is helpful to distinguish these problems according to whether they concern

1. the preconditions of ethics,

2. the prevailing self-understanding of human subjects (anthropology), or

3. normative questions in the sense of: "What should we do?"

The following reflections offer a brief outline of which questions we should address in each case, how the various sets of questions are connected, and what we can use to orient our answers.

The task of ethics is to examine such moral opinions with regard to their justification and validity, and thus to arrive at a sharpened ethical judgment that can ideally be accounted for before the community of moral subjects and whose implementation enables "a good life with and for others, in just institutions" [8]. That is a first, vague indication of direction.

In the end, normative questions can only be worked through concretely, in relation to a specific situation. Accordingly, ethics does not deliver blanket judgments here, such as "robots are good/bad" or "artificial intelligence serves the good life/is detrimental to the good life."

more on Artificial Intelligence in this IMS blog

AI AR customers

Can A.I. and AR Turn Your Prospects Into Customers?

These technologies are the next step in business. Here are three ways to grow.

1. Enhance the retail experience with AR.

Tech-savvy retailers and e-tailers are incorporating augmented reality technology to enhance the customer experience. Given that 61% of consumers prefer stores that provide AR experiences, integrating AR technology is an effective way to improve customer experiences with your brand and turn prospects into customers.

2. Identify and follow up on leads through AI.

Businesses can use AI technology as an ultra-reliable sales assistant. An AI-enhanced assistant can collect and analyze data about each lead, remind you when to follow up, and ensure no stone is left unturned. Even better, AI can help you focus on the leads that are more likely to turn into sales and prompt you when to take specific actions.

One example is Zia, the Zoho Intelligent Assistant built into the Zoho CRM application. Zia can predict which leads are more likely to close, so you can prioritize your sales rep time and better forecast sales.
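To make the idea of lead prioritization concrete, here is a minimal sketch of how a lead-scoring model might rank prospects. The feature names, weights, and data are entirely hypothetical (a real assistant such as Zia learns its weights from CRM history); this only illustrates the ranking mechanism.

```python
import math

def lead_score(visits, emails_opened, demo_requested):
    """Return a 0-1 score estimating how likely a lead is to convert.
    Weights here are illustrative, not learned from real data."""
    z = 0.4 * visits + 0.8 * emails_opened + 2.0 * demo_requested - 3.0
    return 1 / (1 + math.exp(-z))  # logistic function squashes z into (0, 1)

# Hypothetical leads with simple engagement signals.
leads = [
    {"name": "A", "visits": 1, "emails_opened": 0, "demo_requested": 0},
    {"name": "B", "visits": 5, "emails_opened": 3, "demo_requested": 1},
]

# Rank leads so sales reps follow up on the most promising first.
ranked = sorted(
    leads,
    key=lambda l: lead_score(l["visits"], l["emails_opened"], l["demo_requested"]),
    reverse=True,
)
print([l["name"] for l in ranked])  # → ['B', 'A']
```

The sales forecast mentioned above is then just the sum of these scores (optionally weighted by deal size) across the pipeline.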

3. Improve marketing campaigns with augmented reality.

AR can enable businesses to deliver marketing strategies in real time. This means customers can experience your products or services as they are meant to be. In the retail sector, savvy brands are using AR as a powerful form of marketing. Timberland, for example, invested in Lemon and Orange’s virtual fitting room technology, to allow customers to ‘try before they buy’ remotely.


more on artificial intelligence in this IMS blog

more on augmented reality in this IMS blog

AI and China education

China’s children are its secret weapon in the global AI arms race

China wants to be the world leader in artificial intelligence by 2030. To get there, it’s reinventing the way children are taught

Despite China’s many technological advances, in this new cyberspace race the West had the lead.

Xi knew he had to act. Within twelve months he revealed his plan to make China a science and technology superpower. By 2030 the country would lead the world in AI, with a sector worth $150 billion. How? By teaching a generation of young Chinese to be the best computer scientists in the world.

Today, the US tech sector has its pick of the finest minds from across the world, importing top talent from other countries – including from China. Over half of Bay Area workers are highly-skilled immigrants. But with the growth of economies worldwide and a Presidential administration hell-bent on restricting visas, it’s unclear that approach can last.

In the UK the situation is even worse. Here, the government predicts there’ll be a shortfall of three million employees for high-skilled jobs by 2022 – even before you factor in the immigration crunch of Brexit. By contrast, China is plotting a homegrown strategy of local and national talent development programs. It may prove a masterstroke.

In 2013 Shanghai’s teenagers gained global renown when they topped the charts in the PISA tests administered every three years by the OECD to see which country’s kids are the smartest in the world. Aged 15, Shanghai students were on average three full years ahead of their counterparts in the UK or US in maths and one-and-a-half years ahead in science.

Teachers, too, were expected to be learners. Unlike in the UK, where, when I began to teach a decade ago, you might be working on full-stops with eleven-year-olds then taking eighteen-year-olds through the finer points of poetry, teachers in Shanghai specialised not only in a subject area but also in an age-group.

Shanghai’s success owed a lot to Confucian tradition, but it fitted precisely the best contemporary understanding of how expertise is developed. In his book Why Don’t Kids Like School? cognitive scientist Dan Willingham explains that complex mental skills like creativity and critical thinking depend on our first having mastered the simple stuff. Memorisation and repetition of the basics serve to lay down the neural architecture that creates automaticity of thought, ultimately freeing up space in our working memory to think big.

Seung-bin Lee, a seventeen-year-old high school graduate, told me of studying fourteen hours a day, seven days a week, for the three years leading up to the Suneung, the fearsome SAT exam taken by all Korean school leavers on a single Thursday each November, for which all flights are grounded so as not to break students’ concentration during the 45 minutes of the English listening paper.
Korea’s childhoods were being lost to a relentless regime of studying, crushed in a top-down system that saw them as cyphers rather than kids.

A decade ago, we consoled ourselves that although kids in China and Korea worked harder and did better on tests than ours, it didn’t matter. They were compliant, unthinking drones, lacking the creativity, critical thinking or entrepreneurialism needed to succeed in the world. No longer. Though there are still issues with Chinese education – urban centres like Shanghai and Hong Kong are positive outliers – the country knows something that we once did: education is the one investment on which a return is guaranteed. China is on course to becoming the first education superpower.

Troublingly, where education in the UK and US has been defined by creativity and independent thinking – Shanghai teachers told me of visits to our schools to learn about these qualities – our direction of travel is now away from those strengths and towards exams and standardisation, with school-readiness tests in the pipeline and UK schools minister Nick Gibb suggesting kids can beat exam stress by sitting more of them. Centres of excellence remain, but increasingly, it feels, we’re putting our children at risk of losing out to the robots, while China is building on its strong foundations to ask how its young people can be high-tech pioneers. They’re thinking big – we’re thinking of test scores.

Soon, “digital information processing” would be included as a core subject on China’s national graduation exam – the Gaokao – and he pictured classrooms in which students would learn in cross-disciplinary fashion, designing mobile phones, for example, in order to develop design, engineering and computing skills. Focusing on teaching kids to code was short-sighted, he explained: “We still regard it as a language between human and computer.” (My note: they are practically implementing Finland’s attempt to rebuild curricula.)

“If your plan is for one year,” went an old Chinese saying, “plant rice. If your plan is for ten years, plant trees. If your plan is for 100 years, educate children.” Two and a half thousand years later chancellor Gwan Zhong might update his proverb, swapping rice for bitcoin and trees for artificial intelligence, but I’m sure he’d stand by his final point.

more on AR in this IMS blog

more on China education in this IMS blog

intelligence measure

Intelligence: a history

Intelligence has always been used as a fig-leaf to justify domination and destruction. No wonder we fear super-smart robots.

Stephen Cave

To say that someone is or is not intelligent has never been merely a comment on their mental faculties. It is always also a judgment on what they are permitted to do. Intelligence, in other words, is political.

The problem has taken an interesting 21st-century twist with the rise of Artificial Intelligence (AI).

The term ‘intelligence’ itself has never been popular with English-language philosophers. Nor does it have a direct translation into German or ancient Greek, two of the other great languages in the Western philosophical tradition. But that doesn’t mean philosophers weren’t interested in it. Indeed, they were obsessed with it, or more precisely a part of it: reason or rationality. The term ‘intelligence’ managed to eclipse its more old-fashioned relative in popular and political discourse only with the rise of the relatively new-fangled discipline of psychology, which claimed intelligence for itself.

Plato concluded, in The Republic, that the ideal ruler is ‘the philosopher king’, as only a philosopher can work out the proper order of things. This idea was revolutionary at the time. Athens had already experimented with democracy, the rule of the people – but to count as one of those ‘people’ you just had to be a male citizen, not necessarily intelligent. Elsewhere, the governing classes were made up of inherited elites (aristocracy), or of those who believed they had received divine instruction (theocracy), or simply of the strongest (tyranny).

Plato’s novel idea fell on the eager ears of the intellectuals, including those of his pupil Aristotle. Aristotle was always the more practical, taxonomic kind of thinker. He took the notion of the primacy of reason and used it to establish what he believed was a natural social hierarchy.

So at the dawn of Western philosophy, we have intelligence identified with the European, educated, male human. It becomes an argument for his right to dominate women, the lower classes, uncivilised peoples and non-human animals. While Plato argued for the supremacy of reason and placed it within a rather ungainly utopia, only one generation later, Aristotle presented the rule of the thinking man as obvious and natural.

The late Australian philosopher and conservationist Val Plumwood has argued that the giants of Greek philosophy set up a series of linked dualisms that continue to inform our thought. Opposing categories such as intelligent/stupid, rational/emotional and mind/body are linked, implicitly or explicitly, to others such as male/female, civilised/primitive, and human/animal. These dualisms aren’t value-neutral, but fall within a broader dualism, as Aristotle makes clear: that of dominant/subordinate or master/slave. Together, they make relationships of domination, such as patriarchy or slavery, appear to be part of the natural order of things.

Descartes rendered nature literally mindless, and so devoid of intrinsic value – which thereby legitimated the guilt-free oppression of other species.

For Kant, only reasoning creatures had moral standing. Rational beings were to be called ‘persons’ and were ‘ends in themselves’. Beings that were not rational, on the other hand, had ‘only a relative value as means, and are therefore called things’. We could do with them what we liked.

This line of thinking was extended to become a core part of the logic of colonialism. The argument ran like this: non-white peoples were less intelligent; they were therefore unqualified to rule over themselves and their lands. It was therefore perfectly legitimate – even a duty, ‘the white man’s burden’ – to destroy their cultures and take their territory.

The same logic was applied to women, who were considered too flighty and sentimental to enjoy the privileges afforded to the ‘rational man’.

Galton believed that intellectual ability was hereditary and could be enhanced through selective breeding. He decided to find a way to scientifically identify the most able members of society and encourage them to breed – prolifically, and with each other. The less intellectually capable should be discouraged from reproducing, or indeed prevented, for the sake of the species. Thus eugenics and the intelligence test were born together.

From David Hume to Friedrich Nietzsche, and Sigmund Freud through to postmodernism, there are plenty of philosophical traditions that challenge the notion that we’re as intelligent as we’d like to believe, and that intelligence is the highest virtue.

From 2001: A Space Odyssey to the Terminator films, writers have fantasised about machines rising up against us. Now we can see why. If we’re used to believing that the top spots in society should go to the brainiest, then of course we should expect to be made redundant by bigger-brained robots and sent to the bottom of the heap.

Natural stupidity, rather than artificial intelligence, remains the greatest risk.

more on intelligence in this IMS blog