Posts Tagged ‘ethics’

ethics computers brain

The Ethical Challenges of Connecting Our Brains to Computers

https://www.scientificamerican.com/article/the-ethical-challenges-of-connecting-our-brains-to-computers/

Although brain-computer interfaces (BCIs) are at the heart of neurotech, the field is more broadly defined as technology able to collect, interpret, infer, or modify information generated by any part of the nervous system.

Neurotech takes different forms: some are invasive, some are not. Invasive brain-computer interfaces involve placing microelectrodes or other neurotech materials directly onto the brain, or even embedding them into the neural tissue, with the aim of directly sensing or modulating neural activity.

Noninvasive neurotech is also used for pain management. Together with Boston Scientific, IBM researchers are applying machine learning, the internet of things, and neurotech to improve chronic pain therapy.

As a new, emerging technology, neurotech challenges corporations, researchers, and individuals to reaffirm our commitment to responsible innovation. It is essential to enforce guardrails at the company, national, and international levels so that these technologies lead to beneficial long-term outcomes. We need to ensure that researchers and manufacturers of neurotech, as well as policymakers and consumers, approach it responsibly and ethically.

++++++++++
more on ethics in this IMS blog
https://blog.stcloudstate.edu/ims?s=ethics

office in VR

https://www.lifewire.com/your-next-office-could-be-in-virtual-reality-5079457

  • The pandemic is driving interest in using virtual reality for business.
  • Facebook’s Oculus Quest 2 VR headset will support an application called Infinite Office that allows people to work in a virtual office.
  • Advances are needed before VR can replace real-life interactions, experts say.

+++++++++++
more on VR in this IMS blog
https://blog.stcloudstate.edu/ims?s=vr+virtual+reality

more on XR in this IMS blog
https://blog.stcloudstate.edu/ims?s=extended+reality

more on ASVR in this IMS blog
https://blog.stcloudstate.edu/ims?s=asvr

ethics and arts against digital apocalypse

To stop a tech apocalypse we need ethics and the arts

https://theconversation.com/to-stop-a-tech-apocalypse-we-need-ethics-and-the-arts-128235

Last year, Australia’s Chief Scientist Alan Finkel suggested that we in Australia should become “human custodians”. This would mean being leaders in technological development, ethics, and human rights.

A recent report from the Australian Council of Learned Academies (ACOLA) brought together experts from scientific and technical fields as well as the humanities, arts and social sciences to examine key issues arising from artificial intelligence.

A similar vision drives Stanford University’s Institute for Human-Centered Artificial Intelligence. The institute brings together researchers from the humanities, education, law, medicine, business and STEM to study and develop “human-centred” AI technologies.

Meanwhile, across the Atlantic, the Future of Humanity Institute at the University of Oxford similarly investigates “big-picture questions” to ensure “a long and flourishing future for humanity”.

The IT sector is also wrestling with the ethical issues raised by rapid technological advancement. Microsoft’s Brad Smith and Harry Shum wrote in their 2018 book The Future Computed that one of their “most important conclusions” was that the humanities and social sciences have a crucial role to play in confronting the challenges raised by AI.

Without training in ethics, human rights and social justice, the people who develop the technologies that will shape our future could make poor decisions.

++++++++++++
more on ethics in this IMS blog
https://blog.stcloudstate.edu/ims?s=ethics

digital ethics

O’Brien, J. (2020). Digital Ethics in Higher Education: 2020. Educause Review. https://er.educause.edu/articles/2020/5/digital-ethics-in-higher-education-2020

O’Brien defines digital ethics simply as “doing the right thing at the intersection of technology innovation and accepted social values.”
Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, written by Cathy O’Neil in early 2016, continues to be relevant and illuminating. O’Neil’s book revolves around her insight that “algorithms are opinions embedded in code,” in distinct contrast to the belief that algorithms are based on—and produce—indisputable facts.
Other relevant books include Safiya Umoja Noble’s Algorithms of Oppression: How Search Engines Reinforce Racism and Shoshana Zuboff’s The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power.
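
To make O’Neil’s point that “algorithms are opinions embedded in code” concrete, here is a minimal, purely hypothetical sketch: the loan-scoring function, its weights, and its cutoff are all invented for illustration, and each of them is a human judgment call rather than an indisputable fact.

```python
# Hypothetical loan-approval scoring: every weight and threshold below is a
# choice someone made, i.e. an opinion embedded in code, not a neutral fact.

def loan_score(income: float, zip_code_risk: float, years_employed: float) -> float:
    # Why is income worth 0.5 and neighborhood "risk" worth -0.3? Those numbers
    # encode a judgment about what matters, and using zip code at all can
    # quietly import historical bias into the result.
    return 0.5 * (income / 100_000) - 0.3 * zip_code_risk + 0.2 * (years_employed / 10)

def approve(income: float, zip_code_risk: float, years_employed: float) -> bool:
    # The cutoff is another opinion: moving it changes who gets credit
    # without any new information about the applicants.
    return loan_score(income, zip_code_risk, years_employed) >= 0.3

print(approve(income=55_000, zip_code_risk=0.8, years_employed=3))  # False
print(approve(income=55_000, zip_code_risk=0.1, years_employed=3))  # True
```

The two applicants differ only in where they live, yet the coded “opinion” flips the outcome.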

+++++++++++++++++

International Dialogue on “The Ethics of Digitalisation” Kicks Off in Berlin. (2020, August 20). Berkman Klein Center, Harvard University. https://cyber.harvard.edu/story/2020-08/international-dialogue-ethics-digitalisation-kicks-berlin

+++++++++++++++++
more on ethics in this IMS blog
https://blog.stcloudstate.edu/ims?s=ethics

McMindfulness

McMindfulness: how capitalism hijacked the Buddhist teaching of mindfulness

https://www.cbc.ca/radio/tapestry/mcmindfulness-and-the-case-for-small-talk-1.5369984/mcmindfulness-how-capitalism-hijacked-the-buddhist-teaching-of-mindfulness-1.5369991

On McMindfulness


In a review of the author’s work, the former Buddhist monk Clark Strand wrote: “None of us dreamed that mindfulness would become so popular or even lucrative, much less that it would be used as a way to keep millions of us sleeping soundly through some of the worst cultural excesses in human history, all while fooling us into thinking we were awake and quiet.”

Corporate mindfulness programs are now quite popular, and most employees these days are extremely stressed out. A Gallup poll from about four or five years ago found that U.S. corporations are losing approximately 300 billion dollars a year to stress-related absences, and that seven out of ten employees report being disengaged from their work.

The remedy has now become mindfulness: employees are trained individually to cope with and adjust to these toxic corporate conditions, rather than undertaking any diagnosis of the systemic causes of stress, not only in corporations but in society at large. That sort of dialogue, that sort of inquiry, is not happening.

An “integrity bubble” is a small oasis within a corporation. Take Google, which is a great example of it.

You have a small group of engineers getting individual-level benefits from corporate mindfulness training: they are learning how to de-stress. Google engineers work 60 to 70 hours a week, which is very stressful. So they get individual-level benefits while not questioning the digital distraction technologies that those same engineers are actually working on. Those issues are not taken into account in any kind of mindful way.

So you become mindful to become more productive, to produce technologies of mass distraction, which is quite an irony in many ways. A sad irony, actually.

Mindfulness could be revolutionized in a way that does not denigrate the therapeutic benefits of self-care, but becomes interdependent with the causes and conditions of suffering that go beyond individuals.

+++++++++++
more on mindfulness in this IMS blog
https://blog.stcloudstate.edu/ims?s=mindfulness

Education and Ethics

4 Ways AI Education and Ethics Will Disrupt Society in 2019

By Tara Chklovski, Jan 28, 2019

https://www.edsurge.com/news/2019-01-28-4-ways-ai-education-and-ethics-will-disrupt-society-in-2019

In 2018 we witnessed a clash of titans as government and tech companies collided on privacy issues around collecting, culling and using personal data. From GDPR to Facebook scandals, many tech CEOs were defending big data, its use, and how they’re safeguarding the public.

Meanwhile, the public was amazed at technological advances like Boston Dynamics’ Atlas robot doing parkour, while simultaneously being outraged at the thought of our data no longer being ours and of Alexa listening in on all our conversations.

1. Companies will face increased pressure about the data AI-embedded services use.

2. Public concern will lead to AI regulations. But we must understand this tech too.

In 2018, the National Science Foundation invested $100 million in AI research, with special support in 2019 for developing principles for safe, robust and trustworthy AI; addressing issues of bias, fairness and transparency of algorithmic intelligence; developing deeper understanding of human-AI interaction and user education; and developing insights about the influences of AI on people and society.

This investment was dwarfed by DARPA, an agency of the Department of Defense, and its multi-year investment of more than $2 billion in new and existing programs under the “AI Next” campaign. A key area of the campaign includes pioneering the next generation of AI algorithms and applications, such as “explainability” and common sense reasoning.

Federally funded initiatives, as well as corporate efforts (such as Google’s What-If Tool), will lead to the rise of explainable AI and interpretable AI, whereby the AI actually explains the logic behind its decision-making to humans. The next step from there would be for AI regulators and policymakers themselves to learn how these technologies actually work. This is an overlooked step right now, one that Richard Danzig, former Secretary of the U.S. Navy, advises us to consider as we create “humans-in-the-loop” systems, which require people to sign off on important AI decisions.
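
As a rough, hedged sketch of what “interpretable AI” and a “human-in-the-loop” sign-off can look like in code (the data, feature names, and thresholds below are invented for illustration and are not taken from any tool mentioned in the article), a small decision tree’s full logic can be printed for a human reviewer, and low-confidence predictions can be escalated to a person:

```python
# Illustrative only: a tiny interpretable model plus a human-in-the-loop gate.
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy training data: [hours_of_use, error_rate] -> needs_review (0 or 1)
X = [[1, 0.01], [2, 0.02], [8, 0.20], [9, 0.25], [3, 0.05], [7, 0.18]]
y = [0, 0, 1, 1, 0, 1]

model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Interpretable AI: the model's entire decision logic can be shown to a human.
print(export_text(model, feature_names=["hours_of_use", "error_rate"]))

def decide(sample, confidence_floor=0.8):
    """Human-in-the-loop: act automatically only when the model is confident."""
    proba = model.predict_proba([sample])[0]
    label = int(proba.argmax())
    if proba[label] < confidence_floor:
        return "escalate to a human reviewer for sign-off"
    return f"automatic decision: {label}"

print(decide([5, 0.10]))
```

The point of the sketch is the pattern, not the model: the explanation is legible to the person who has to sign off on the decision.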

3. More companies will make AI a strategic initiative in corporate social responsibility.

Google invested $25 million in AI for Good, and Microsoft added AI for Humanitarian Action to its prior commitment. While these are positive steps, the tech industry continues to have a diversity problem.

4. Funding for AI literacy and public education will skyrocket.

Ryan Calo from the University of Washington explains that it matters how we talk about technologies that we don’t fully understand.

 

 

 
