Posts Tagged ‘ethics’

McMindfulness

McMindfulness: how capitalism hijacked the Buddhist teaching of mindfulness

https://www.cbc.ca/radio/tapestry/mcmindfulness-and-the-case-for-small-talk-1.5369984/mcmindfulness-how-capitalism-hijacked-the-buddhist-teaching-of-mindfulness-1.5369991

On McMindfulness

[I'll] quote the former Buddhist monk Clark Strand here; this was in a review of your work: "None of us dreamed that mindfulness would become so popular or even lucrative, much less that it would be used as a way to keep millions of us sleeping soundly through some of the worst cultural excesses in human history, all while fooling us into thinking we were awake and quiet."

Corporate mindfulness programs are now quite popular. And as we all know, most employees these days are extremely stressed out. A Gallup poll that came out about four or five years ago found that corporations — and this is in the U.S. — are losing approximately $300 billion a year to stress-related absences, and that seven out of ten employees report being disengaged from their work.

The remedy has now become mindfulness: employees are trained, individually, to learn how to cope with and adjust to these toxic corporate conditions, rather than undertaking any diagnosis of the systemic causes of stress, not only in corporations but in our society at large. That sort of dialogue, that sort of inquiry, is not happening.

An integrity bubble is a small oasis within a corporation. Take Google, for example, because it's a great example of this.

You have a small group of engineers who are getting individual-level benefits from corporate mindfulness training. They're learning how to de-stress: Google engineers [are] working 60-70 hours a week, [which is] very stressful. So they're getting individual-level benefits while not questioning the digital distraction technologies [that] Google engineers are actually working on. Those issues are not taken into account in any mindful way.

So you become mindful in order to become more productive, in order to produce technologies of mass distraction, which is quite an irony in many ways. A sad irony, actually.

Mindfulness could be revolutionized in a way that does not denigrate the therapeutic benefits of self-care, but becomes interdependent with the causes and conditions of suffering that go beyond individuals.

+++++++++++
more on mindfulness in this IMS blog
http://blog.stcloudstate.edu/ims?s=mindfulness

Education and Ethics

4 Ways AI Education and Ethics Will Disrupt Society in 2019

By Tara Chklovski     Jan 28, 2019

https://www.edsurge.com/news/2019-01-28-4-ways-ai-education-and-ethics-will-disrupt-society-in-2019

In 2018 we witnessed a clash of titans as government and tech companies collided on privacy issues around collecting, culling and using personal data. From GDPR to Facebook scandals, many tech CEOs were defending big data, its use, and how they’re safeguarding the public.

Meanwhile, the public was amazed at technological advances like Boston Dynamics' Atlas robot doing parkour, while simultaneously being outraged at the thought of our data no longer being ours and of Alexa listening in on all our conversations.

1. Companies will face increased pressure about the data AI-embedded services use.

2. Public concern will lead to AI regulations. But we must understand this tech too.

In 2018, the National Science Foundation invested $100 million in AI research, with special support in 2019 for developing principles for safe, robust and trustworthy AI; addressing issues of bias, fairness and transparency of algorithmic intelligence; developing deeper understanding of human-AI interaction and user education; and developing insights about the influences of AI on people and society.
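
One concrete instance of the "bias, fairness and transparency" work described above is auditing a model's decisions for a "demographic parity" gap, i.e. comparing positive-decision rates across groups. A minimal sketch follows; the synthetic data, the 0.5 decision threshold, and the 0.05 tolerance are invented for illustration, and real audits weigh several competing fairness metrics.

```python
# Toy fairness audit: measure the "demographic parity" gap, the
# difference in positive-decision rates between two groups.
# All data here is synthetic and for illustration only.
import numpy as np

rng = np.random.default_rng(1)
group = rng.integers(0, 2, size=1000)          # protected attribute: 0 or 1
score = rng.uniform(size=1000) + 0.05 * group  # model scores, slightly skewed
approved = score > 0.5                         # the model's yes/no decisions

rate0 = approved[group == 0].mean()
rate1 = approved[group == 1].mean()
print(f"approval rate, group 0: {rate0:.2f}")
print(f"approval rate, group 1: {rate1:.2f}")
print(f"demographic parity gap: {abs(rate0 - rate1):.2f}")
# A regulator or auditor might require this gap to stay below an
# agreed threshold (say 0.05) -- one simple, inspectable rule.
```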

This investment was dwarfed by DARPA—an agency of the Department of Defense—and its multi-year investment of more than $2 billion in new and existing programs under the "AI Next" campaign. A key area of the campaign includes pioneering the next generation of AI algorithms and applications, such as "explainability" and common sense reasoning.

Federally funded initiatives, as well as corporate efforts (such as Google's "What If" tool), will lead to the rise of explainable AI and interpretable AI, whereby the AI actually explains the logic behind its decision making to humans. But the next step from there would be for AI regulators and policymakers themselves to learn how these technologies actually work. This is an overlooked step right now that Richard Danzig, former Secretary of the U.S. Navy, advises us to consider as we create "humans-in-the-loop" systems, which require people to sign off on important AI decisions.
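
To make the "humans-in-the-loop" idea concrete, here is a rough sketch of a sign-off gate: the model proposes an action, and anything high-stakes or low-confidence waits for a human reviewer. The Decision fields, the require_human_signoff rule, and the 0.9 confidence threshold are all illustrative assumptions, not Danzig's proposal or any agency's actual protocol.

```python
# Sketch of a "human-in-the-loop" gate: the model may act autonomously
# on low-stakes, high-confidence cases; anything important is held for
# an explicit human sign-off. Names and the 0.9 threshold are
# illustrative assumptions only.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    action: str        # what the model wants to do
    confidence: float  # the model's own confidence, 0.0 to 1.0
    high_stakes: bool  # does this significantly affect a person?

def require_human_signoff(decision: Decision) -> bool:
    """True if a human must approve before the action executes."""
    return decision.high_stakes or decision.confidence < 0.9

def execute(decision: Decision, reviewer: Callable[[Decision], bool]) -> str:
    if require_human_signoff(decision):
        # This is where an "explainable AI" summary of the model's
        # reasoning would be shown to the reviewer.
        if not reviewer(decision):
            return f"rejected by reviewer: {decision.action}"
    return f"executed: {decision.action}"

if __name__ == "__main__":
    spam = Decision("move email to spam", confidence=0.99, high_stakes=False)
    loan = Decision("deny loan application", confidence=0.97, high_stakes=True)
    approve_all = lambda d: True  # stand-in for a real review interface
    print(execute(spam, approve_all))  # low stakes: runs autonomously
    print(execute(loan, approve_all))  # high stakes: required sign-off
```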

3. More companies will make AI a strategic initiative in corporate social responsibility.

Google invested $25 million in AI for Good, and Microsoft added an AI for Humanitarian Action program to its prior commitment. While these are positive steps, the tech industry continues to have a diversity problem.

4. Funding for AI literacy and public education will skyrocket.

Ryan Calo from the University of Washington explains that it matters how we talk about technologies that we don’t fully understand.

ethics and exact sciences

University Data Science Programs Turn to Ethics and the Humanities

By Sydney Johnson     Jan 11, 2019

https://www.edsurge.com/news/2019-01-11-university-data-science-programs-turn-to-ethics-and-the-humanities

“Formulating a product, you better know about ethics and understand legal frameworks.”

These days a growing number of people are concerned with bringing more talk of ethics into technology. One question is whether that will bring change to data-science curricula.

Following major data breaches and privacy scandals at tech companies like Facebook, universities including Stanford, the University of Texas and Harvard have all added ethics courses into computer science degree programs to address tech’s “ethical dark side,” the New York Times has reported.

As more colleges and universities consider incorporating humanities courses into technical degree programs, some are asking what kind of ethics should be taught.

++++++++++++++
more on ethics in this IMS blog
http://blog.stcloudstate.edu/ims?s=ethics

China electric cars

Spy cars in China

The Chinese government is now set to control everything you do with your car.

Posted by PlayGround + on Friday, December 7, 2018

+++++++++++
more on ethics and AI in this IMS blog
http://blog.stcloudstate.edu/ims?s=ethics

Social Credit System

https://en.wikipedia.org/wiki/Social_Credit_System

China ‘social credit’: Beijing sets up huge system

26 October 2015 https://www.bbc.com/news/world-asia-china-34592186

China’s “Social Credit System” Will Rate How Valuable You Are as a Human

What people can and can’t do will depend on how high their “citizen score” is.

Dom Galeon, December 2, 2017, https://futurism.com/china-social-credit-system-rate-human-value/

China has started ranking citizens with a creepy ‘social credit’ system — here’s what you can do wrong, and the embarrassing, demeaning ways they can punish you

Alexandra Ma, Oct. 29, 2018, 12:06 PM, https://www.businessinsider.com/china-social-credit-system-punishments-and-rewards-explained-2018-4/

How does China’s social credit system work?

China is taking digital control of its people to chilling lengths

https://www.theguardian.com/commentisfree/2018/may/27/china-taking-digital-control-of-its-people-to-unprecedented-and-chilling-lengths
+++++++++++++++++++++

Social credit system from AP DealFlow

China’s Social Credit System: The Quantification of Citizenship from Morgan Reede

Digital Surveillance in China: From the Great Firewall to the Social Credit System from Aarhus University

AI and ethics

Live Facebook discussion at SCSU VizLab on ethics and technology:

Join our discussion on #technology and #ethics. Share your opinions, suggestions, ideas.

Posted by InforMedia Services on Thursday, November 1, 2018

Heard on Marketplace this morning (Oct. 22, 2018): ethics of artificial intelligence with John Havens of the Institute of Electrical and Electronics Engineers, which has developed a new ethics certification process for AI: https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ec_bios.pdf

Ethics and AI

***** The student club, the Philosophical Society, has now been recognized by SCSU as a student organization *****

https://ed.ted.com/lessons/the-ethical-dilemma-of-self-driving-cars-patrick-lin

Could it be the case that a random decision is still better than a predetermined one designed to minimize harm?
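
One way to sharpen that question is a toy expected-harm comparison between a uniformly random choice and a policy that always picks the option with the lowest estimated harm. The scenario and harm scores below are invented; the sketch mainly shows that the "minimize harm" policy is only as good as its harm estimates.

```python
# Toy expected-harm comparison for the self-driving-car dilemma:
# a uniformly random choice vs. a fixed policy that always picks the
# option with the lowest estimated harm. Harm scores are invented.
import random

options = {
    "swerve_left": 3.0,      # estimated harm (made-up units)
    "swerve_right": 1.0,
    "brake_straight": 2.0,
}

def random_policy() -> str:
    return random.choice(list(options))

def minimize_harm_policy() -> str:
    return min(options, key=options.get)

expected_random = sum(options.values()) / len(options)  # uniform average
chosen = minimize_harm_policy()
print(f"a single random draw: {random_policy()}")
print(f"random policy, expected harm: {expected_random:.2f}")
print(f"minimizing policy picks '{chosen}', harm {options[chosen]:.2f}")
# If the harm estimates are badly wrong, the "optimal" predetermined
# choice can do worse than chance -- one reading of the question above.
```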

Similar ethical considerations are also raised:

in this sitcom

https://www.theatlantic.com/sponsored/hpe-2018/the-ethics-of-ai/1865/ (full movie)

and in this TED talk:

http://blog.stcloudstate.edu/ims/2017/09/19/social-media-algorithms/

http://blog.stcloudstate.edu/ims/2018/10/02/social-media-monopoly/

+++++++++++++++++++
IoT (Internet of Things), Industry 4.0, Big Data, BlockChain, Privacy, Security, Surveillance

http://blog.stcloudstate.edu/ims?s=internet+of+things

Peer-reviewed literature:

Keyword search: ethic* + Internet of Things = 31 results

Baldini, G., Botterman, M., Neisse, R., & Tallacchini, M. (2018). Ethical Design in the Internet of Things. Science & Engineering Ethics, 24(3), 905–925. https://doi-org.libproxy.stcloudstate.edu/10.1007/s11948-016-9754-5

Berman, F., & Cerf, V. G. (2017). Social and Ethical Behavior in the Internet of Things. Communications of the ACM, 60(2), 6–7. https://doi-org.libproxy.stcloudstate.edu/10.1145/3036698

Murdock, G. (2018). Media Materialities: For a Moral Economy of Machines. Journal of Communication, 68(2), 359–368. https://doi-org.libproxy.stcloudstate.edu/10.1093/joc/jqx023

Carrier, J. G. (2018). Moral economy: What's in a name. Anthropological Theory, 18(1), 18–35. https://doi-org.libproxy.stcloudstate.edu/10.1177/1463499617735259

Kernaghan, K. (2014). Digital dilemmas: Values, ethics and information technology. Canadian Public Administration, 57(2), 295–317. https://doi-org.libproxy.stcloudstate.edu/10.1111/capa.12069

Koucheryavy, Y., Kirichek, R., Glushakov, R., & Pirmagomedov, R. (2017). Quo vadis, humanity? Ethics on the last mile toward cybernetic organism. Russian Journal of Communication, 9(3), 287–293. https://doi-org.libproxy.stcloudstate.edu/10.1080/19409419.2017.1376561

Keyword search: ethic* + autonomous vehicles = 46 results

Cerf, V. G. (2017). A Brittle and Fragile Future. Communications of the ACM, 60(7), 7. https://doi-org.libproxy.stcloudstate.edu/10.1145/3102112

Fleetwood, J. (2017). Public Health, Ethics, and Autonomous Vehicles. American Journal of Public Health, 107(4), 532–537. https://doi-org.libproxy.stcloudstate.edu/10.2105/AJPH.2016.303628

Harris, J. (2018). Who Owns My Autonomous Vehicle? Ethics and Responsibility in Artificial and Human Intelligence. Cambridge Quarterly of Healthcare Ethics, 27(4), 599–609. https://doi-org.libproxy.stcloudstate.edu/10.1017/S0963180118000038

Keeling, G. (2018). Legal Necessity, Pareto Efficiency & Justified Killing in Autonomous Vehicle Collisions. Ethical Theory & Moral Practice, 21(2), 413–427. https://doi-org.libproxy.stcloudstate.edu/10.1007/s10677-018-9887-5

Hevelke, A., & Nida-Rümelin, J. (2015). Responsibility for Crashes of Autonomous Vehicles: An Ethical Analysis. Science & Engineering Ethics, 21(3), 619–630. https://doi-org.libproxy.stcloudstate.edu/10.1007/s11948-014-9565-5

Getha-Taylor, H. (2017). The Problem with Automated Ethics. Public Integrity, 19(4), 299–300. https://doi-org.libproxy.stcloudstate.edu/10.1080/10999922.2016.1250575

Keyword search: ethic* + artificial intelligence = 349 results

Etzioni, A., & Etzioni, O. (2017). Incorporating Ethics into Artificial Intelligence. Journal of Ethics, 21(4), 403–418. https://doi-org.libproxy.stcloudstate.edu/10.1007/s10892-017-9252-2

Köse, U. (2018). Are We Safe Enough in the Future of Artificial Intelligence? A Discussion on Machine Ethics and Artificial Intelligence Safety. BRAIN: Broad Research in Artificial Intelligence & Neuroscience, 9(2), 184–197. Retrieved from http://login.libproxy.stcloudstate.edu/login?qurl=http%3a%2f%2fsearch.ebscohost.com%2flogin.aspx%3fdirect%3dtrue%26db%3daph%26AN%3d129943455%26site%3dehost-live%26scope%3dsite

++++++++++++++++
http://www.cts.umn.edu/events/conference/2018

2018 CTS Transportation Research Conference

Keynote presentations will explore the future of driving and the evolution and potential of automated vehicle technologies.

+++++++++++++++++++
http://blog.stcloudstate.edu/ims/2016/02/26/philosophy-and-technology/

+++++++++++++++++++
more on AI in this IMS blog
http://blog.stcloudstate.edu/ims/2018/09/07/limbic-thought-artificial-intelligence/

AI and autonomous cars as ALA discussion topic
http://blog.stcloudstate.edu/ims/2018/01/11/ai-autonomous-cars-libraries/

and privacy concerns
http://blog.stcloudstate.edu/ims/2018/09/14/ai-for-education/

the call of the German scientists on ethics and AI
http://blog.stcloudstate.edu/ims/2018/09/01/ethics-and-ai/

AI in the race for world dominance
http://blog.stcloudstate.edu/ims/2018/04/21/ai-china-education/

Limbic thought and artificial intelligence

September 5, 2018, Siddharth (Sid) Pai

https://www.linkedin.com/pulse/limbic-thought-artificial-intelligence-siddharth-sid-pai/

An AI programme “catastrophically forgets” the learnings from its first set of data and would have to be retrained from scratch with new data. The website futurism.com says a completely new set of algorithms would have to be written for a programme that has mastered face recognition, if it is now also expected to recognize emotions. Data on emotions would have to be manually relabelled and then fed into this completely different algorithm for the altered programme to have any use. The original facial recognition programme would have “catastrophically forgotten” the things it learnt about facial recognition as it takes on new code for recognizing emotions. According to the website, this is because computer programmes cannot understand the underlying logic that they have been coded with.
Irina Higgins, a senior researcher at Google DeepMind, has recently announced that she and her team have begun to crack the code on “catastrophic forgetting”.
As far as I am concerned, this limbic thinking is “catastrophic thinking” which is the only true antipode to AI’s “catastrophic forgetting”. It will be eons before AI thinks with a limbic brain, let alone has consciousness.
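
To see catastrophic forgetting in miniature, here is a hedged numpy toy (my own illustration, not the article's example or DeepMind's method): a tiny classifier masters task A, is then retrained on a conflicting task B, and its accuracy on task A collapses because the same weights are overwritten.

```python
# Minimal numpy demonstration of "catastrophic forgetting": a tiny
# logistic classifier is trained on task A, then on task B whose labels
# conflict with A. Retraining overwrites the shared weights, and
# accuracy on task A collapses. A deliberately simplified toy.
import numpy as np

rng = np.random.default_rng(0)

def make_task(n, flip):
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] > 0).astype(float)    # task A: sign of feature 0
    return X, (1 - y if flip else y)   # task B: the same rule, flipped

def train(w, X, y, epochs=200, lr=0.5):
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))       # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)   # logistic-loss gradient step
    return w

def accuracy(w, X, y):
    return (((X @ w) > 0) == (y > 0.5)).mean()

XA, yA = make_task(500, flip=False)    # task A
XB, yB = make_task(500, flip=True)     # task B conflicts with A

w = train(np.zeros(2), XA, yA)
print("after task A, accuracy on A:", accuracy(w, XA, yA))  # ~1.0

w = train(w, XB, yB)                   # keep training on task B...
print("after task B, accuracy on A:", accuracy(w, XA, yA))  # ~0.0
```
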
++++++++++++++++++

Stephen Hawking warns artificial intelligence could end mankind

https://www.bbc.com/news/technology-30290540
++++++++++++++++++++
Thank you, Sarnath Ramnat (sarnath@stcloudstate.edu), for the find.

An AI Wake-Up Call From Ancient Greece

https://www.project-syndicate.org/commentary/artificial-intelligence-pandoras-box-by-adrienne-mayor-2018-10

++++++++++++++++++++
more on AI in this IMS blog
http://blog.stcloudstate.edu/ims?s=artifical+intelligence

ethics and AI

Ethik und Künstliche Intelligenz: Die Zeit drängt – wir müssen handeln [Ethics and artificial intelligence: time is pressing; we must act]

8/7/2018, Prof. Dr. theol. habil. Arne Manzeschke

https://www.pcwelt.de/a/ethik-und-ki-die-zeit-draengt-wir-muessen-handeln,3451885

The European Parliament put it quite drastically last year: a new industrial revolution is upon us.
In 1954, Unimate, the first industrial robot, was developed by George Devol [1]. In the 1970s in particular, many manufacturing industries saw their work robotized (the automotive and printing industries, for example).
Recall the definition of an industrial robot in ISO 8373 (2012): "A robot is a freely reprogrammable, multifunctional manipulator with at least three independent axes, designed to move materials, parts, tools, or special devices along programmed, variable paths in order to perform a wide variety of tasks."

Ethical reflections on robotics and artificial intelligence

If one tries to get an overview of the various ethical problems associated with the emergence of "intelligent" robots that are becoming ever more powerful in every respect (precision, speed, strength, combinatorics, and networking), it is helpful to distinguish these problems according to whether they concern:

1. the preconditions of ethics,

2. the prevailing self-understanding of human subjects (anthropology), or

3. normative questions in the sense of "What should we do?"

The following reflections offer a brief outline of the questions we should address in each case, how the various sets of questions are connected, and what we can use to orient our answers.

The task of ethics is to examine such moral opinions with regard to their justification and validity, and thus to arrive at a sharpened ethical judgment, one that can ideally be accounted for before the community of moral subjects and whose implementation enables "a flourishing life with and for others, in just institutions" [8]. That is a first, vague indication of direction.

In the end, normative questions can only be worked through concretely, in light of a specific situation. Accordingly, ethics does not deliver blanket judgments here such as "robots are good/bad" or "artificial intelligence serves the good life/is detrimental to the good life."

+++++++++++
more on Artificial Intelligence in this IMS blog
http://blog.stcloudstate.edu/ims?s=artifical+intelligence
