“Formulating a product, you better know about ethics and understand legal frameworks.”
A growing number of people are pushing to bring more discussion of ethics into technology. One question is whether that will change data-science curricula.
Following major data breaches and privacy scandals at tech companies like Facebook, universities including Stanford, the University of Texas and Harvard have all added ethics courses into computer science degree programs to address tech’s “ethical dark side,” the New York Times has reported.
As more colleges and universities consider incorporating humanities courses into technical degree programs, some are asking what kind of ethics should be taught.
China has started ranking citizens with a creepy ‘social credit’ system — here’s what you can do wrong, and the embarrassing, demeaning ways they can punish you
AI programmes themselves generate additional computer programming code to fine-tune their algorithms—without the need for an army of computer programmers. In AI speak, this is now often referred to as “machine learning”.
An AI programme “catastrophically forgets” what it learned from its first set of data and would have to be retrained from scratch on new data. The website futurism.com says a completely new set of algorithms would have to be written for a programme that has mastered face recognition if it is now also expected to recognize emotions. Data on emotions would have to be manually relabelled and fed into this entirely different algorithm for the altered programme to be of any use. The original programme would have “catastrophically forgotten” what it learnt about facial recognition as it takes on new code for recognizing emotions. According to the website, this is because computer programmes cannot understand the underlying logic that they have been coded with.
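To make the phenomenon concrete, here is a minimal, hypothetical sketch in Python. It assumes scikit-learn and NumPy are installed (and that the "log_loss" option exists in the installed scikit-learn version); it is not DeepMind's method or futurism.com's example, just an illustration: a small classifier is trained on one task, then trained only on a second task, and its accuracy on the first task collapses because the later updates overwrite what was learned earlier.

# Illustrative sketch of catastrophic forgetting (assumptions noted above).
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split

digits = load_digits()
X, y = digits.data / 16.0, digits.target          # scale pixel values to [0, 1]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

task_a_train = y_train < 5                        # task A: digits 0-4
task_b_train = y_train >= 5                       # task B: digits 5-9
task_a_test = y_test < 5

clf = SGDClassifier(loss="log_loss", random_state=0)
classes = np.unique(y)                            # all classes must be declared up front

# Phase 1: several passes over task A only.
for _ in range(20):
    clf.partial_fit(X_train[task_a_train], y_train[task_a_train], classes=classes)
acc_a_before = clf.score(X_test[task_a_test], y_test[task_a_test])

# Phase 2: several passes over task B only, with no rehearsal of task A.
for _ in range(20):
    clf.partial_fit(X_train[task_b_train], y_train[task_b_train])
acc_a_after = clf.score(X_test[task_a_test], y_test[task_a_test])

print(f"Task A accuracy after training on A: {acc_a_before:.2f}")
print(f"Task A accuracy after training on B: {acc_a_after:.2f}")   # typically much lower

Running the sketch typically shows high task-A accuracy after phase 1 and a sharp drop after phase 2, which is the behaviour the paragraph above describes.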
Irina Higgins, a senior researcher at Google DeepMind, has recently announced that she and her team have begun to crack the code on “catastrophic forgetting”.
As far as I am concerned, this limbic thinking is “catastrophic thinking”, which is the only true antipode to AI’s “catastrophic forgetting”. It will be eons before AI thinks with a limbic brain, let alone has consciousness.
++++++++++++++++++
Stephen Hawking warns artificial intelligence could end mankind
By Rory Cellan-Jones, Technology correspondent, 2 December 2014
Artificial intelligences and robots are becoming an ever more natural part of our lives. What do we expect from these intelligent machines, and how do their presence in our everyday lives and our interactions with them change our self-understanding and the way we treat other people? Must we recognize robots as a kind of human counterpart? And what freedoms do we want to grant the machines? It is high time to clarify the ethical and legal questions.
In 1954, Unimate, the first industrial robot, was developed by George Devol [1]. Especially in the 1970s, many manufacturing industries saw their work robotized (for example, the automotive and printing industries).
It helps to recall the definition of an industrial robot in ISO 8373 (2012): “A robot is a freely programmable and reprogrammable, multifunctional manipulator with at least three independent axes, which moves materials, parts, tools or special devices along programmed, variable paths in order to perform a wide variety of tasks.”
Ethical Considerations on Robotics and Artificial Intelligence
If one tries to gain an overview of the various ethical problems associated with the rise of “intelligent” robots that are becoming ever more powerful in every respect (precision, speed, strength, combinatorics and networking), it is helpful to distinguish these problems according to whether they concern
1. the preliminaries of ethics,
2. the previous self-understanding of human subjects (anthropology), or
3. normative questions in the sense of: “What should we do?”
The considerations that follow give a brief outline of the questions we should address in each case, how the various sets of questions are connected, and what we can orient our answers by.
The task of ethics is to examine such moral opinions with regard to their justification and validity, and thus to arrive at a sharpened ethical judgment, one that can ideally be accounted for before the community of moral subjects and whose implementation enables a “good life with and for others, in just institutions” [8]. That is a first, vague indication of direction.
Normative questions can ultimately only be worked through concretely, with reference to a particular situation. Accordingly, ethics offers no blanket judgments here such as “robots are good/bad” or “artificial intelligence serves the good life/is detrimental to the good life”.
At a Ford Foundation conference dubbed Fairness by Design, officials, academics and advocates discussed how to address the problem of encoding human bias into algorithmic analysis. The White House recently issued a report intended to accelerate research on the issue.
U.S. CTO Megan Smith said the government has been “creating a seat for these techies,” but that training future generations of data scientists to tackle these issues depends on what we do today. “It’s how did we teach our children?” she said. “Why don’t we teach math and science the way we teach P.E. and art and music and make it fun?”
“Ethics is not just an elective, but some portion of the main core curriculum.”
SOCIO-INT15, the 2nd International Conference on Education, Social Sciences and Humanities, to be held in Istanbul (Turkey) on 8–10 June 2015, is an interdisciplinary international conference that invites academics, independent scholars and researchers from around the world to meet, exchange the latest ideas and discuss issues concerning all fields of education, the social sciences and the humanities.
SOCIO-INT15 provides an ideal opportunity to bring together professors, researchers and higher-education students from different disciplines to discuss new issues and discover the most recent developments, trends and research in education, the social sciences and the humanities.
Academics working in education, in subfields such as higher education, early childhood education, adult education, special education, e-learning and language education, are especially welcome. Those not presenting a paper may also attend as audience members if they find the conference interesting and meaningful.
Given the conference’s focus on innovative ideas and developments, papers are also invited from all areas of the social sciences, including communication, accounting, finance, economics, management, business, marketing, education, sociology, psychology, political science and law, as well as from all areas of the humanities, including anthropology, archaeology, architecture, art, ethics, folklore studies, history, language studies, literature, methodological studies, music, philosophy, poetry and theater.
Submitted papers will be subject to peer review and evaluated based on originality and clarity of exposition.
Google’s chief executive has expressed concern that we don’t trust big companies with our data – but may be dismayed at Facebook’s latest venture into manipulation
The field of learning analytics isn’t just about advancing the understanding of learning. It’s also being applied in efforts to try to influence and predict student behavior.
Learning analytics has yet to demonstrate its big beneficial breakthrough, its “penicillin,” in the words of Reich. Nor has there been a major ethical failure that creeps lots of people out.
“There’s a difference,” Pistilli says, “between what we can do and what we should do.”