
Education and Ethics

4 Ways AI Education and Ethics Will Disrupt Society in 2019

By Tara Chklovski     Jan 28, 2019

https://www.edsurge.com/news/2019-01-28-4-ways-ai-education-and-ethics-will-disrupt-society-in-2019

In 2018 we witnessed a clash of titans as government and tech companies collided on privacy issues around collecting, culling and using personal data. From GDPR to Facebook scandals, many tech CEOs were defending big data, its use, and how they’re safeguarding the public.

Meanwhile, the public was amazed at technological advances like Boston Dynamics' Atlas robot doing parkour, while simultaneously being outraged at the thought of our data no longer being ours and Alexa listening in on all our conversations.

1. Companies will face increased pressure about the data AI-embedded services use.

2. Public concern will lead to AI regulations. But we must understand this tech too.

In 2018, the National Science Foundation invested $100 million in AI research, with special support in 2019 for developing principles for safe, robust and trustworthy AI; addressing issues of bias, fairness and transparency of algorithmic intelligence; developing deeper understanding of human-AI interaction and user education; and developing insights about the influences of AI on people and society.

This investment was dwarfed by DARPA—an agency of the Department of Defense—and its multi-year investment of more than $2 billion in new and existing programs under the “AI Next” campaign. A key area of the campaign includes pioneering the next generation of AI algorithms and applications, such as “explainability” and common sense reasoning.

Federally funded initiatives, as well as corporate efforts (such as Google’s What-If Tool), will lead to the rise of explainable and interpretable AI, whereby the AI actually explains the logic behind its decision making to humans. The next step from there would be for AI regulators and policymakers themselves to learn how these technologies actually work. This is an overlooked step right now that Richard Danzig, former Secretary of the U.S. Navy, advises us to consider as we create “human-in-the-loop” systems, which require people to sign off on important AI decisions.
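The idea behind interpretable AI can be made concrete with a toy example. This is a minimal sketch, not Google's What-If Tool or any real system; the loan-screening rules, field names and thresholds are invented for illustration. The point is simply that the model returns the rule behind each verdict, giving a human in the loop something concrete to review and sign off on.

```python
# Minimal sketch of "interpretable AI": a rule list that reports the exact
# rule behind every decision, so a human in the loop can audit it.
# All fields and thresholds below are hypothetical.

RULES = [
    # (name, predicate, decision) -- checked in order, first match wins
    ("income below 20k",     lambda a: a["income"] < 20_000,        "deny"),
    ("debt ratio above 0.6", lambda a: a["debt"] / a["income"] > 0.6, "deny"),
    ("default: approve",     lambda a: True,                        "approve"),
]

def decide(applicant):
    """Return (decision, explanation) instead of a bare verdict."""
    for name, predicate, decision in RULES:
        if predicate(applicant):
            return decision, f"rule fired: {name}"

decision, why = decide({"income": 18_000, "debt": 5_000})
print(decision, "-", why)  # the verdict arrives with its reason attached
```

A black-box model would return only `"deny"`; the audit trail here is what "explains the logic behind its decision making to humans" amounts to in miniature.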

3. More companies will make AI a strategic initiative in corporate social responsibility.

Google invested $25 million in AI for Good and Microsoft added an AI for Humanitarian Action program to its prior commitment. While these are positive steps, the tech industry continues to have a diversity problem.

4. Funding for AI literacy and public education will skyrocket.

Ryan Calo from the University of Washington explains that it matters how we talk about technologies that we don’t fully understand.

 

 

 

ethics and exact sciences

University Data Science Programs Turn to Ethics and the Humanities

By Sydney Johnson     Jan 11, 2019

https://www.edsurge.com/news/2019-01-11-university-data-science-programs-turn-to-ethics-and-the-humanities

“Formulating a product, you better know about ethics and understand legal frameworks.”

These days a growing number of people are concerned with bringing more talk of ethics into technology. One question is whether that will bring change to data-science curricula.

Following major data breaches and privacy scandals at tech companies like Facebook, universities including Stanford, the University of Texas and Harvard have all added ethics courses into computer science degree programs to address tech’s “ethical dark side,” the New York Times has reported.

As more colleges and universities consider incorporating humanities courses into technical degree programs, some are asking what kind of ethics should be taught.

 

++++++++++++++
more on ethics in this IMS blog
http://blog.stcloudstate.edu/ims?s=ethics

AI and ethics

Live Facebook discussion at SCSU VizLab on ethics and technology:

Join our discussion on #technology and #ethics. Share your opinions, suggestions, ideas.

Posted by InforMedia Services on Thursday, November 1, 2018

Heard on Marketplace this morning (Oct. 22, 2018): ethics of artificial intelligence with John Havens of the Institute of Electrical and Electronics Engineers, which has developed a new ethics certification process for AI: https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ec_bios.pdf

Ethics and AI

***** The student club, the Philosophical Society, has now been recognized by SCSU as a student organization ***

https://ed.ted.com/lessons/the-ethical-dilemma-of-self-driving-cars-patrick-lin

Could it be the case that a random decision is still better than a predetermined one designed to minimize harm?

Similar ethical considerations are also raised:

in this sitcom

https://www.theatlantic.com/sponsored/hpe-2018/the-ethics-of-ai/1865/ (full movie)

This TED talk:

http://blog.stcloudstate.edu/ims/2017/09/19/social-media-algorithms/

http://blog.stcloudstate.edu/ims/2018/10/02/social-media-monopoly/

 

 


+++++++++++++++++++
IoT (Internet of Things), Industry 4.0, Big Data, Blockchain, Privacy, Security, Surveillance

http://blog.stcloudstate.edu/ims?s=internet+of+things

Peer-reviewed literature:

Keyword search: ethic* + Internet of Things = 31

Baldini, G., Botterman, M., Neisse, R., & Tallacchini, M. (2018). Ethical Design in the Internet of Things. Science & Engineering Ethics, 24(3), 905–925. https://doi-org.libproxy.stcloudstate.edu/10.1007/s11948-016-9754-5

Berman, F., & Cerf, V. G. (2017). Social and Ethical Behavior in the Internet of Things. Communications of the ACM, 60(2), 6–7. https://doi-org.libproxy.stcloudstate.edu/10.1145/3036698

Murdock, G. (2018). Media Materialties: For A Moral Economy of Machines. Journal of Communication, 68(2), 359–368. https://doi-org.libproxy.stcloudstate.edu/10.1093/joc/jqx023

Carrier, J. G. (2018). Moral economy: What’s in a name. Anthropological Theory, 18(1), 18–35. https://doi-org.libproxy.stcloudstate.edu/10.1177/1463499617735259

Kernaghan, K. (2014). Digital dilemmas: Values, ethics and information technology. Canadian Public Administration, 57(2), 295–317. https://doi-org.libproxy.stcloudstate.edu/10.1111/capa.12069

Koucheryavy, Y., Kirichek, R., Glushakov, R., & Pirmagomedov, R. (2017). Quo vadis, humanity? Ethics on the last mile toward cybernetic organism. Russian Journal of Communication, 9(3), 287–293. https://doi-org.libproxy.stcloudstate.edu/10.1080/19409419.2017.1376561

Keyword search: ethic* + autonomous vehicles = 46

Cerf, V. G. (2017). A Brittle and Fragile Future. Communications of the ACM, 60(7), 7. https://doi-org.libproxy.stcloudstate.edu/10.1145/3102112

Fleetwood, J. (2017). Public Health, Ethics, and Autonomous Vehicles. American Journal of Public Health, 107(4), 532–537. https://doi-org.libproxy.stcloudstate.edu/10.2105/AJPH.2016.303628

Harris, J. (2018). Who Owns My Autonomous Vehicle? Ethics and Responsibility in Artificial and Human Intelligence. Cambridge Quarterly of Healthcare Ethics, 27(4), 599–609. https://doi-org.libproxy.stcloudstate.edu/10.1017/S0963180118000038

Keeling, G. (2018). Legal Necessity, Pareto Efficiency & Justified Killing in Autonomous Vehicle Collisions. Ethical Theory & Moral Practice, 21(2), 413–427. https://doi-org.libproxy.stcloudstate.edu/10.1007/s10677-018-9887-5

Hevelke, A., & Nida-Rümelin, J. (2015). Responsibility for Crashes of Autonomous Vehicles: An Ethical Analysis. Science & Engineering Ethics, 21(3), 619–630. https://doi-org.libproxy.stcloudstate.edu/10.1007/s11948-014-9565-5

Getha-Taylor, H. (2017). The Problem with Automated Ethics. Public Integrity, 19(4), 299–300. https://doi-org.libproxy.stcloudstate.edu/10.1080/10999922.2016.1250575

Keyword search: ethic* + artificial intelligence = 349

Etzioni, A., & Etzioni, O. (2017). Incorporating Ethics into Artificial Intelligence. Journal of Ethics, 21(4), 403–418. https://doi-org.libproxy.stcloudstate.edu/10.1007/s10892-017-9252-2

Köse, U. (2018). Are We Safe Enough in the Future of Artificial Intelligence? A Discussion on Machine Ethics and Artificial Intelligence Safety. BRAIN: Broad Research in Artificial Intelligence & Neuroscience, 9(2), 184–197. Retrieved from http://login.libproxy.stcloudstate.edu/login?qurl=http%3a%2f%2fsearch.ebscohost.com%2flogin.aspx%3fdirect%3dtrue%26db%3daph%26AN%3d129943455%26site%3dehost-live%26scope%3dsite

++++++++++++++++
http://www.cts.umn.edu/events/conference/2018

2018 CTS Transportation Research Conference

Keynote presentations will explore the future of driving and the evolution and potential of automated vehicle technologies.

+++++++++++++++++++
http://blog.stcloudstate.edu/ims/2016/02/26/philosophy-and-technology/

+++++++++++++++++++
more on AI in this IMS blog
http://blog.stcloudstate.edu/ims/2018/09/07/limbic-thought-artificial-intelligence/

AI and autonomous cars as ALA discussion topic
http://blog.stcloudstate.edu/ims/2018/01/11/ai-autonomous-cars-libraries/

and privacy concerns
http://blog.stcloudstate.edu/ims/2018/09/14/ai-for-education/

the call of the German scientists on ethics and AI
http://blog.stcloudstate.edu/ims/2018/09/01/ethics-and-ai/

AI in the race for world dominance
http://blog.stcloudstate.edu/ims/2018/04/21/ai-china-education/

coding ethics unpredictability

Franken-algorithms: the deadly consequences of unpredictable code

Thu 30 Aug 2018

https://www.theguardian.com/technology/2018/aug/29/coding-algorithms-frankenalgos-program-danger

Between the “dumb” fixed algorithms and true AI lies the problematic halfway house we’ve already entered with scarcely a thought and almost no debate, much less agreement as to aims, ethics, safety, best practice. If the algorithms around us are not yet intelligent, meaning able to independently say “that calculation/course of action doesn’t look right: I’ll do it again”, they are nonetheless starting to learn from their environments. And once an algorithm is learning, we no longer know to any degree of certainty what its rules and parameters are. At which point we can’t be certain of how it will interact with other algorithms, the physical world, or us. Where the “dumb” fixed algorithms – complex, opaque and inured to real time monitoring as they can be – are in principle predictable and interrogable, these ones are not. After a time in the wild, we no longer know what they are: they have the potential to become erratic. We might be tempted to call these “frankenalgos” – though Mary Shelley couldn’t have made this up.
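The drift described above can be made concrete in a few lines. This is a toy sketch, not any production system: a hypothetical "spam filter" whose single parameter, a score threshold, is nudged by every example it observes. The rule we wrote (threshold 0.5) is no longer the rule a deployed copy enforces; that depends on the stream it happened to see.

```python
# Toy illustration: an algorithm that learns from its environment no longer
# follows the fixed rule its author wrote. The threshold drifts toward each
# observed score, so two copies of identical code, fed different streams,
# end up enforcing different rules.

def run_filter(stream, threshold=0.5, rate=0.1):
    """Nudge the threshold toward each observed score; return the final rule."""
    for score in stream:
        threshold += rate * (score - threshold)
    return threshold

drifted_up = run_filter([0.9, 0.8, 0.9])    # high-score stream pushes it up
drifted_down = run_filter([0.1, 0.2, 0.1])  # low-score stream pushes it down
print(round(drifted_up, 3), round(drifted_down, 3))
```

Both copies start from the same code; after a short time "in the wild" their effective thresholds differ, which is the sense in which we no longer know with certainty what rules they are applying.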

Twenty years ago, George Dyson anticipated much of what is happening today in his classic book Darwin Among the Machines. The problem, he tells me, is that we’re building systems that are beyond our intellectual means to control. We believe that if a system is deterministic (acting according to fixed rules, this being the definition of an algorithm) it is predictable, and that what is predictable can be controlled. Both assumptions turn out to be wrong.

“It’s proceeding on its own, in little bits and pieces,” he says. “What I was obsessed with 20 years ago that has completely taken over the world today are multicellular, metazoan digital organisms, the same way we see in biology, where you have all these pieces of code running on people’s iPhones, and collectively it acts like one multicellular organism.

“There’s this old law called Ashby’s law that says a control system has to be as complex as the system it’s controlling, and we’re running into that at full speed now, with this huge push to build self-driving cars where the software has to have a complete model of everything, and almost by definition we’re not going to understand it. Because any model that we understand is gonna do the thing like run into a fire truck ’cause we forgot to put in the fire truck.”

Toby Walsh believes this makes it more, not less, important that the public learn about programming, because the more alienated we become from it, the more it seems like magic beyond our ability to affect. When shown the definition of “algorithm” given earlier in this piece, he found it incomplete, commenting: “I would suggest the problem is that algorithm now means any large, complex decision making software system and the larger environment in which it is embedded, which makes them even more unpredictable.” A chilling thought indeed. Accordingly, he believes ethics to be the new frontier in tech, foreseeing “a golden age for philosophy” – a view with which Eugene Spafford of Purdue University, a cybersecurity expert, concurs. Where there are choices to be made, that’s where ethics comes in.

Our existing system of tort law, which requires proof of intention or negligence, will need to be rethought. A dog is not held legally responsible for biting you; its owner might be, but only if the dog’s action was thought foreseeable.

One proposed answer is model-based programming, in which machines do most of the coding work and are able to test as they go.

As we wait for a technological answer to the problem of soaring algorithmic entanglement, there are precautions we can take. Paul Wilmott, a British expert in quantitative analysis and vocal critic of high frequency trading on the stock market, wryly suggests “learning to shoot, make jam and knit.”

The venerable Association for Computing Machinery has updated its code of ethics along the lines of medicine’s Hippocratic oath, to instruct computing professionals to do no harm and consider the wider impacts of their work.

+++++++++++
more on coding in this IMS blog
http://blog.stcloudstate.edu/ims?s=coding

ethics and AI

Ethik und Künstliche Intelligenz: Die Zeit drängt – wir müssen handeln (“Ethics and Artificial Intelligence: Time Is Pressing; We Must Act”)

8/7/2018 Prof. Dr. theol. habil. Arne Manzeschke

https://www.pcwelt.de/a/ethik-und-ki-die-zeit-draengt-wir-muessen-handeln,3451885

Last year the European Parliament put it quite drastically: a new industrial revolution is at hand.

In 1954, Unimate, the first industrial robot, was developed by George Devol [1]. Especially in the 1970s, many manufacturing industries (for example, the automotive and printing industries) saw their work robotized.

ISO 8373 (2012) defines an industrial robot as follows: “A robot is a freely and re-programmable, multifunctional manipulator with at least three independent axes, designed to move materials, parts, tools or special devices along programmed, variable paths in order to perform a wide variety of tasks.”

Ethical Reflections on Robotics and Artificial Intelligence

If one tries to get an overview of the various ethical problems associated with the rise of “intelligent” robots that are becoming ever more powerful in every respect (precision, speed, strength, combinatorics and networking), it is helpful to distinguish these problems according to whether they concern

1. the run-up to ethics (its preconditions),

2. the prior self-understanding of human subjects (anthropology), or

3. normative questions in the sense of “What should we do?”

The following reflections give a brief outline of which questions we should address in each case, how the various sets of questions are connected, and what we can orient our answers by.

The task of ethics is to question such moral opinions as to their justification and validity and thus to arrive at a sharpened ethical judgment, one that can ideally be justified before the community of moral subjects and whose implementation enables “a successful life with and for others, in just institutions” [8]. That is a first, vague indication of direction.

Normative questions can ultimately only be worked through concretely in a particular situation. Accordingly, ethics does not deliver blanket judgments here such as “robots are good/bad” or “artificial intelligence serves the good life/is detrimental to the good life.”

+++++++++++
more on Artificial Intelligence in this IMS blog
http://blog.stcloudstate.edu/ims?s=artifical+intelligence

Social Media Etiquette Ethics

Social Media Etiquette & Ethics: A Guide for Personal, Professional & Brand Use.

By Keith A. Quesenberry, Marketing Professor & Researcher

https://www.linkedin.com/pulse/social-media-etiquette-ethics-guide-personal-brand-use-quesenberry

definition:

Etiquette is the proper way to behave, and ethics studies ideas about good and bad behavior. Both combine into professionalism: the skill, good judgment, and polite behavior expected from a person trained to do a job such as social media marketing. Because social media blurs the lines between our personal and professional lives, it is useful to look at actions in social media from three perspectives: Personal (as an individual), Professional (as an employee or prospective employee) and Brand (as an organization). To simplify the discussion, the guide poses questions for each category in the Social Media Etiquette and Ethics Guide below.

Before you post or comment in a personal capacity consider:

  1. Is it all about me? No one likes someone who only talks about themselves. The same applies in social media. Balance boasting with complimenting.
  2. Am I stalking someone? It is good to be driven and persistent but be careful not to cross the line into creepy. Don’t be too aggressive in outreach.
  3. Am I spamming them? Not everything or even the majority of what you post should ask for something. Don’t make everything self-serving.
  4. Am I venting or ranting? Venting and ranting may feel good, but research says it doesn’t help and no matter how justified you feel, it never presents you in a positive light. Do not post negative comments or gossip.
  5. Did I ask before I tagged? You had a great time and want to share those memories, but your friends, family or employer may have different standards than yours. Check before you tag people in posts.
  6. Did I read before commenting or sharing? Don’t make yourself look foolish by not fully reviewing something you are commenting on or sharing with others. Don’t jump to conclusions.
  7. Am I grateful and respectful? Don’t take people for granted. Respond and thank those who engage with you.
  8. Is this the right medium for the message? Not everything should be said in social media. Consider the feelings of the other person. Some messages should be given in person, by phone or email.
  9. Am I logged into the right account? There are too many corporate examples of embarrassing posts meant for personal jokes that went out on official brand accounts. Always double check which account you are on. Don’t post personal information on brand accounts.

Before you post or comment as a professional consider:

  1. Does it meet the Social Media Policy? Most organizations have official social media policies that you probably received when hired. Don’t assume you know what the policy says. Many employees have been fired for not following company social media regulations. Make sure you know and follow employer or client requirements.
  2. Does it hurt my company’s reputation? No matter how many disclaimers you put on your accounts such as “views are my own” certain content and behavior will negatively impact your employer. If your bio states where you work, your personal account represents your employer.
  3. Does it help my company’s marketing? Employee advocacy is an important strategy. Have a positive impact on your company’s image and when you can advocate for your brand in social.
  4. Would my boss/client be happy to see it? You may not have “friended” your boss or client but a co-worker may have and your post is only a share or screen grab away. Even private accounts are never fully private.
  5. Am I being open about who I work for? It is good to post positive content about your employer and it is nice to receive gifts, but if you are trying to pass it off as unbiased opinion that is wrong. Be transparent about your financial connections.
  6. Am I being fair and accurate? Everyone is entitled to their personal opinion, but if your opinion tends to always be unfounded and seems to have an agenda, it will reflect negatively upon you. Criticism is welcome when it is constructive and opinion is backed by evidence.
  7. Am I being respectful and not malicious? People can get very insensitive, judgmental and angry in social media posts. That does not convey a professional image. Don’t post what you wouldn’t say in person. Even an outburst in person fades in memory, but a malicious post is there forever.
  8. Does it respect intellectual property? Not everything on the Internet is free. Check for or get permission to post company or client brand assets and content.
  9. Is this confidential information? As an employee or contractor you are granted access to privileged and confidential information. Don’t assume it is fine to share. Do not disclose non-public company or client information.

Before posting or commenting as a brand on a social account consider:

  1. Does it speak to my target market? Social media is unique from traditional marketing and requires a different perspective to be effective. Be sure to focus on your target’s wants and needs not yours.
  2. Does it add value? Social media only works if people view and share it. Make your content educational, insightful or entertaining to grab interest and draw engagement.
  3. Does it fit the social channel? Don’t post content ideal for Twitter on Instagram or Reddit. Each channel has its own culture and community. Make sure each post fits the channel’s environment, mission and policies or standards.
  4. Is it authentic and transparent? Trying to trick people into clicking a link or making a purchase will get you nowhere. Don’t hide or exclude any relevant information.
  5. Is it real and unique? Bots can automate tasks and be a great time saver, but use them for the right actions. Don’t use auto responses and create anything that could be perceived as spam.
  6. Is it positive and respectful? It may be fine to talk trash about competitors or complain about customers in the office, but not in social media. Don’t badmouth the competition or customers.
  7. Does it meet codes of conduct? As professionals we are part of trade associations that set standards of conduct. Be sure you are meeting these ethical standards such as the Word of Mouth Marketing Association’s Code of Ethics.
  8. Does it meet all laws and regulations? Governments have been catching up with social media and have issued regulations and laws you must follow. See guides on requirements like the FTC social media endorsement guidelines.
  9. Does it meet the Social Media Policy? Most likely your brand or a client’s brand has a social media policy. Ensure you follow your own company standards.

 

++++++++++++++
more on social media netiquette in this IMS blog
http://blog.stcloudstate.edu/ims?s=social+media+netiquette

Research and Ethics: If Facebook can tweak our emotions and make us vote, what else can it Do?

If Facebook can tweak our emotions and make us vote, what else can it do?

http://www.businessinsider.com/facebook-calls-experiment-innovative-2014-7#ixzz36PtsxVfL

Google’s chief executive has expressed concern that we don’t trust big companies with our data – but may be dismayed at Facebook’s latest venture into manipulation

Please consider the information on Power, Privacy, and the Internet and details on ethics and big data in this IMS blog entry: http://blog.stcloudstate.edu/ims/2014/07/01/privacy-and-surveillance-obama-advisor-john-podesta-every-country-has-a-history-of-going-over-the-line/

important information:
Please consider the SCSU Research Ethics and the IRB (Institutional Review Board) document:
http://www.stcloudstate.edu/graduatestudies/current/culmProject/documents/ResearchEthicsandQualitative–IRBPresentationforGradStudentsv2.2011.pdf
For more information, please contact the SCSU Institutional Review Board : http://www.stcloudstate.edu/irb/default.asp

The Facebook Conundrum: Where Ethics and Science Collide

http://blogs.kqed.org/mindshift/2014/07/the-facebook-conundrum-where-ethics-and-science-collide

The field of learning analytics isn’t just about advancing the understanding of learning. It’s also being applied in efforts to try to influence and predict student behavior.

Learning analytics has yet to demonstrate its big beneficial breakthrough, its “penicillin,” in the words of Reich. Nor has there been an ethical failure big enough to creep lots of people out.

“There’s a difference,” Pistilli says, “between what we can do and what we should do.”

Competent, Literate, Fluent

Competent, Literate, Fluent: The What and Why of Digital Initiatives


https://er.educause.edu/blogs/2019/4/competent-literate-fluent-the-what-and-why-of-digital-initiatives

How should associated terms, including literacy, competency, and fluency, be distinguished?

In the 2019 EDUCAUSE Learning Initiative (ELI) Key Issues in Teaching and Learning, digital and information literacy maintains a top-five position for the third consecutive year.

Is it significant that Bryn Mawr College has a framework for digital competencies, Virginia Tech is launching a program in digital literacy, and the University of Mary Washington has a curricular initiative for advanced digital fluency? And what does it mean that Penn State has shifted its focus from digital literacy to digital fluency?

Jennifer Sparrow and Clint Lalonde have argued that digital fluency is a distinct capacity above and beyond digital literacy.

digital frameworks … on three levels:

  • First, digital initiatives aim to enhance students’ success after graduation.
  • A second major objective is to develop “digital citizenship.”
  • At a third level, digital initiatives can promote deep reflection upon the distinctive nature and ethics of knowing and knowledge in the digital age.

Facial Recognition issues

Chinese Facial Recognition Will Take over the World in 2019

Michael K. Spencer     Jan 14, 2019
https://medium.com/futuresin/chinese-facial-recognition-will-take-over-the-world-in-2019-520754a7f966
The best facial recognition startups are in China, by a long shot. Because their software is less biased, it is being adopted globally. This is evidenced in 2019 by the New York Police Department, for example, according to the South China Morning Post.
The mass surveillance state of data harvesting in real-time is coming. Facebook already rates and profiles us.

The Tech Wars come down to an AI-War

Whether the NYC police angle is true or not (it’s being hotly disputed), Facebook and Google are thinking along lines that follow the whims of the Chinese Government.

SenseTime and Megvii won’t just be worth $5 billion; they will be worth many times that in the future. This is because facial-recognition data harvesting of everything is the future of consumerism and capitalism, and in some places (think Asia) the central tenet of social order.

China has already “won” the trade war because it’s winning the race to innovation. America doesn’t regulate Amazon, Microsoft, Google or Facebook properly, which stunts innovation and ethics in technology; the West is now forced to copy China just to keep up.

+++++++++++++
more about facial recognition in schools
http://blog.stcloudstate.edu/ims/2019/02/02/facial-recognition-technology-in-schools/
