
Inclusive Design of Artificial Intelligence

EASI Free Webinar: Inclusive Design of Artificial Intelligence, Thursday, October 25
Artificial Intelligence (AI) and accessibility: will it enhance or
impede accessibility for users with disabilities?
Artificial intelligence used to be all about the distant future, but it has now become mainstream and is already affecting us in ways we may not recognize. It is involved in search engines. It is involved in collecting and analyzing big data. It is involved in all the arguments about the way social media is being used to affect, or try to affect, our thinking and our politics. How else might it play a role in the future of accessibility?
The webinar presenter, Jutta Treviranus of the University of Toronto, will explore these questions in the webinar on Thursday, October 25, at 11 Pacific, noon Mountain, 1 Central, or 2 Eastern. You can register now, but registration closes Wednesday, October 24, at midnight Eastern.
You can register now on the web at http://easi.cc and look for the link for webinars.
Those who register should get directions for joining sent late Wednesday or early on Thursday.

+++++++++++
more on AI in this IMS blog
http://blog.stcloudstate.edu/ims?s=artificial+intelligence

Limbic thought and artificial intelligence

Limbic thought and artificial intelligence

September 5, 2018  Siddharth (Sid) Pai

https://www.linkedin.com/pulse/limbic-thought-artificial-intelligence-siddharth-sid-pai/

An AI programme “catastrophically forgets” the learnings from its first set of data and would have to be retrained from scratch with new data. The website futurism.com says a completely new set of algorithms would have to be written for a programme that has mastered face recognition, if it is now also expected to recognize emotions. Data on emotions would have to be manually relabelled and then fed into this completely different algorithm for the altered programme to have any use. The original facial recognition programme would have “catastrophically forgotten” the things it learnt about facial recognition as it takes on new code for recognizing emotions. According to the website, this is because computer programmes cannot understand the underlying logic that they have been coded with.
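To make the idea concrete, here is a minimal sketch of catastrophic forgetting (my own illustration, not the futurism.com example or the DeepMind work): a tiny logistic model is trained on one toy task, then retrained on a conflicting task, and its accuracy on the first task collapses. The data and both tasks are assumptions chosen for brevity.

```python
# A minimal sketch of catastrophic forgetting (illustrative assumption,
# not the systems discussed above): train on task A, then on task B,
# and watch performance on task A collapse.
import numpy as np

rng = np.random.default_rng(0)

def train(w, X, y, epochs=200, lr=0.5):
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))       # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)   # gradient step on log-loss
    return w

def accuracy(w, X, y):
    return np.mean(((1 / (1 + np.exp(-X @ w))) > 0.5) == y)

X = rng.normal(size=(500, 2))
y_a = (X[:, 0] > 0).astype(float)   # task A: sign of feature 0
y_b = (X[:, 0] < 0).astype(float)   # task B: the opposite rule

w = train(np.zeros(2), X, y_a)
print("task A accuracy after training on A:", accuracy(w, X, y_a))  # ~1.0
w = train(w, X, y_b)                # sequential retraining on task B
print("task A accuracy after training on B:", accuracy(w, X, y_a))  # ~0.0
```

Real systems forget for subtler reasons, but the mechanism is the same: the weights that encoded the first task are overwritten by training on the second.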
Irina Higgins, a senior researcher at Google DeepMind, has recently announced that she and her team have begun to crack the code on “catastrophic forgetting”.
As far as I am concerned, this limbic thinking is “catastrophic thinking” which is the only true antipode to AI’s “catastrophic forgetting”. It will be eons before AI thinks with a limbic brain, let alone has consciousness.
++++++++++++++++++

Stephen Hawking warns artificial intelligence could end mankind

https://www.bbc.com/news/technology-30290540
++++++++++++++++++++
Thank you, Sarnath Ramnat (sarnath@stcloudstate.edu), for finding this.

An AI Wake-Up Call From Ancient Greece

  https://www.project-syndicate.org/commentary/artificial-intelligence-pandoras-box-by-adrienne-mayor-2018-10

++++++++++++++++++++
more on AI in this IMS blog
http://blog.stcloudstate.edu/ims?s=artificial+intelligence

Super Mario gets artificial intelligence

Researchers create ‘self-aware’ Super Mario with artificial intelligence

http://mashable.com/2015/01/19/super-mario-artificial-intelligence/

A team of German researchers has used artificial intelligence to create a “self-aware” version of Super Mario who can respond to verbal commands and automatically play his own game.

Artificial Intelligence helps Mario play his own game

Students at the University of Tübingen have used Mario as part of their efforts to find out how the human brain works.

The cognitive modelling unit claim their project has generated “a fully functional program” and “an alive and somewhat intelligent artificial agent”.

http://www.bbc.co.uk/newsbeat/30879456

Can Super Mario Save Artificial Intelligence?

The most popular approaches today focus on Big Data, or mimicking humans that already know how to do some task. But sheer mimicry breaks down when one gives a machine new tasks, and, as I explained a few weeks ago, Big Data approaches tend to excel at finding correlations without necessarily being able to induce the rules of the game. If Big Data alone is not a powerful enough tool to induce a strategy in a complex but well-defined game like chess, then that’s a problem, since the real world is vastly more open-ended, and considerably more complicated.

http://www.newyorker.com/tech/elements/can-super-mario-save-artificial-intelligence

intelligence measure

Intelligence: a history

Intelligence has always been used as a fig-leaf to justify domination and destruction. No wonder we fear super-smart robots.

Stephen Cave

https://aeon.co/essays/on-the-dark-history-of-intelligence-as-domination

To say that someone is or is not intelligent has never been merely a comment on their mental faculties. It is always also a judgment on what they are permitted to do. Intelligence, in other words, is political.

The problem has taken an interesting 21st-century twist with the rise of Artificial Intelligence (AI).

The term ‘intelligence’ itself has never been popular with English-language philosophers. Nor does it have a direct translation into German or ancient Greek, two of the other great languages in the Western philosophical tradition. But that doesn’t mean philosophers weren’t interested in it. Indeed, they were obsessed with it, or more precisely a part of it: reason or rationality. The term ‘intelligence’ managed to eclipse its more old-fashioned relative in popular and political discourse only with the rise of the relatively new-fangled discipline of psychology, which claimed intelligence for itself.

Plato concluded, in The Republic, that the ideal ruler is ‘the philosopher king’, as only a philosopher can work out the proper order of things. This idea was revolutionary at the time. Athens had already experimented with democracy, the rule of the people – but to count as one of those ‘people’ you just had to be a male citizen, not necessarily intelligent. Elsewhere, the governing classes were made up of inherited elites (aristocracy), or by those who believed they had received divine instruction (theocracy), or simply by the strongest (tyranny).

Plato’s novel idea fell on the eager ears of the intellectuals, including those of his pupil Aristotle. Aristotle was always the more practical, taxonomic kind of thinker. He took the notion of the primacy of reason and used it to establish what he believed was a natural social hierarchy.

So at the dawn of Western philosophy, we have intelligence identified with the European, educated, male human. It becomes an argument for his right to dominate women, the lower classes, uncivilised peoples and non-human animals. While Plato argued for the supremacy of reason and placed it within a rather ungainly utopia, only one generation later Aristotle presented the rule of the thinking man as obvious and natural.

The late Australian philosopher and conservationist Val Plumwood has argued that the giants of Greek philosophy set up a series of linked dualisms that continue to inform our thought. Opposing categories such as intelligent/stupid, rational/emotional and mind/body are linked, implicitly or explicitly, to others such as male/female, civilised/primitive, and human/animal. These dualisms aren’t value-neutral, but fall within a broader dualism, as Aristotle makes clear: that of dominant/subordinate or master/slave. Together, they make relationships of domination, such as patriarchy or slavery, appear to be part of the natural order of things.

Descartes rendered nature literally mindless, and so devoid of intrinsic value – which thereby legitimated the guilt-free oppression of other species.

For Kant, only reasoning creatures had moral standing. Rational beings were to be called ‘persons’ and were ‘ends in themselves’. Beings that were not rational, on the other hand, had ‘only a relative value as means, and are therefore called things’. We could do with them what we liked.

This line of thinking was extended to become a core part of the logic of colonialism. The argument ran like this: non-white peoples were less intelligent; they were therefore unqualified to rule over themselves and their lands. It was therefore perfectly legitimate – even a duty, ‘the white man’s burden’ – to destroy their cultures and take their territory.

The same logic was applied to women, who were considered too flighty and sentimental to enjoy the privileges afforded to the ‘rational man’.

Galton believed that intellectual ability was hereditary and could be enhanced through selective breeding. He decided to find a way to scientifically identify the most able members of society and encourage them to breed – prolifically, and with each other. The less intellectually capable should be discouraged from reproducing, or indeed prevented, for the sake of the species. Thus eugenics and the intelligence test were born together.

From David Hume to Friedrich Nietzsche, and Sigmund Freud through to postmodernism, there are plenty of philosophical traditions that challenge the notion that we’re as intelligent as we’d like to believe, and that intelligence is the highest virtue.

From 2001: A Space Odyssey to the Terminator films, writers have fantasised about machines rising up against us. Now we can see why. If we’re used to believing that the top spots in society should go to the brainiest, then of course we should expect to be made redundant by bigger-brained robots and sent to the bottom of the heap.

Natural stupidity, rather than artificial intelligence, remains the greatest risk.

++++++++++++++++++++++
more on intelligence in this IMS blog
http://blog.stcloudstate.edu/ims?s=intelligence

deep learning revolution

Sejnowski, T. J. (2018). The Deep Learning Revolution. Cambridge, MA: The MIT Press.

How deep learning―from Google Translate to driverless cars to personal cognitive assistants―is changing our lives and transforming every sector of the economy.

The deep learning revolution has brought us driverless cars, the greatly improved Google Translate, fluent conversations with Siri and Alexa, and enormous profits from automated trading on the New York Stock Exchange. Deep learning networks can play poker better than professional poker players and defeat a world champion at Go. In this book, Terry Sejnowski explains how deep learning went from being an arcane academic field to a disruptive technology in the information economy.

Sejnowski played an important role in the founding of deep learning, as one of a small group of researchers in the 1980s who challenged the prevailing logic-and-symbol based version of AI. The new version of AI Sejnowski and others developed, which became deep learning, is fueled instead by data. Deep networks learn from data in the same way that babies experience the world, starting with fresh eyes and gradually acquiring the skills needed to navigate novel environments. Learning algorithms extract information from raw data; information can be used to create knowledge; knowledge underlies understanding; understanding leads to wisdom. Someday a driverless car will know the road better than you do and drive with more skill; a deep learning network will diagnose your illness; a personal cognitive assistant will augment your puny human brain. It took nature many millions of years to evolve human intelligence; AI is on a trajectory measured in decades. Sejnowski prepares us for a deep learning future.

A pioneering scientist explains ‘deep learning’

Artificial intelligence meets human intelligence

neural networks

Buzzwords like “deep learning” and “neural networks” are everywhere, but so much of the popular understanding is misguided, says Terrence Sejnowski, a computational neuroscientist at the Salk Institute for Biological Studies.

Sejnowski, a pioneer in the study of learning algorithms, is the author of The Deep Learning Revolution (out next week from MIT Press). He argues that the hype about killer AI or robots making us obsolete ignores exciting possibilities happening in the fields of computer science and neuroscience, and what can happen when artificial intelligence meets human intelligence.

Machine learning is a very large field and goes way back. Originally, people were calling it “pattern recognition,” but the algorithms became much broader and much more sophisticated mathematically. Within machine learning are neural networks inspired by the brain, and then deep learning. Deep learning algorithms have a particular architecture with many layers that flow through the network. So basically, deep learning is one part of machine learning and machine learning is one part of AI.
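To picture the "many layers" Sejnowski mentions, here is a minimal sketch (an illustrative assumption on my part, not code from the book or the interview) of data flowing through a stack of layers, each one transforming the output of the one before it:

```python
# A toy deep network: the input flows through several layers, with a
# nonlinearity between them. Sizes and weights are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(1)
layer_sizes = [8, 16, 16, 16, 4]   # input, three hidden layers, output

# one weight matrix per layer transition
weights = [rng.normal(scale=0.5, size=(m, n))
           for m, n in zip(layer_sizes, layer_sizes[1:])]

def forward(x):
    for W in weights[:-1]:
        x = np.maximum(0, x @ W)   # ReLU between hidden layers
    return x @ weights[-1]         # linear output layer

print(forward(rng.normal(size=8)))  # one pass through the whole stack
```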

In December 2012, at the NIPS meeting, which is the biggest AI conference, [computer scientist] Geoff Hinton and two of his graduate students showed you could take a very large dataset called ImageNet, with 10,000 categories and 10 million images, and reduce the classification error by 20 percent using deep learning. Traditionally on that dataset, error decreases by less than 1 percent in one year. In one year, 20 years of research was bypassed. That really opened the floodgates.

The inspiration for deep learning really comes from neuroscience.

AlphaGo, the program that beat the Go champion, included not just a model of the cortex, but also a model of a part of the brain called the basal ganglia, which is important for making a sequence of decisions to meet a goal. There’s an algorithm there called temporal differences, developed back in the ‘80s by Richard Sutton, that, when coupled with deep learning, is capable of very sophisticated plays that no human has ever seen before.
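For readers curious what "temporal differences" looks like in practice, here is a hedged sketch of Sutton's TD(0) update on a toy chain of states (the environment and parameters are my own assumptions; AlphaGo couples this idea with deep networks and is far more elaborate):

```python
# A toy TD(0) example: states 0..4 in a chain, reward 1 for reaching the
# end. The agent learns state values purely from experienced transitions.
import random

n_states = 5                # reaching state 4 pays reward 1
V = [0.0] * n_states        # value estimates, learned from experience
alpha, gamma = 0.1, 0.9     # learning rate and discount factor

for _ in range(2000):
    s = 0
    while s < n_states - 1:
        s_next = s + random.choice([0, 1])       # drift toward the goal
        r = 1.0 if s_next == n_states - 1 else 0.0
        # TD(0): move V(s) toward the bootstrapped target r + gamma*V(s')
        V[s] += alpha * (r + gamma * V[s_next] - V[s])
        s = s_next

print([round(v, 2) for v in V])  # values grow as states near the reward
```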

There’s a convergence occurring between AI and human intelligence. As we learn more and more about how the brain works, that’s going to reflect back in AI. But at the same time, they’re actually creating a whole theory of learning that can be applied to understanding the brain and allowing us to analyze the thousands of neurons and how their activities are coming out. So there’s this feedback loop between neuroscience and AI.

+++++++++++
deep learning revolution
http://blog.stcloudstate.edu/ims?s=deep+learning

digital transformation online professional education

Sharpen the digital transformation strategy for your business.

Enroll today in Digital Transformation: From AI and IoT to Cloud, Blockchain, and Cybersecurity

https://professionalonline1.mit.edu/digital-transformation/index.php

PROGRAM FEES: $2,300. STARTS ON: November 28, 2018. 2 months, online, 6-8 hours per week.

A Digital Revolution Is Underway.

In a rapidly expanding digital marketplace, legacy companies without a clear digital transformation strategy are being left behind. How can we stay on top of rapid—and sometimes radical—change? How can we position our organizations to take advantage of new technologies? How can we track and combat the security threats facing all of us as we are swept forward into the future?

Who is this Program for?

  • Professionals in traditional companies poised to implement strategic change, as well as entrepreneurs seeking to harness the opportunities afforded by new technologies, will learn the fundamentals of digital transformation and secure the necessary tools to navigate their enterprise to a digital platform.
  • Participants come from a wide range of industries and include C-suite executives, business consultants, corporate attorneys, risk officers, marketing, R&D, and innovation enablers.

Your Learning Journey

This online program takes you through the fundamentals of digital technologies transforming our world today. Led by MIT faculty at the forefront of data science, participants will learn the history and application of transformative technologies such as blockchain, artificial intelligence, cloud computing, IoT, and cybersecurity as well as the implications of employing—or ignoring—digitalization.

Brochure_MIT_PE_DigitalTransformation_17_Oct_18_V20-1w4qpjv

ELI 2018 Key Issues Teaching Learning

Key Issues in Teaching and Learning

https://www.educause.edu/eli/initiatives/key-issues-in-teaching-and-learning

A roster of results since 2011 is here.

ELI 2018 key issues

1. Academic Transformation

2. Accessibility and UDL

3. Faculty Development

4. Privacy and Security

5. Digital and Information Literacies

https://cdn.nmc.org/media/2017-nmc-strategic-brief-digital-literacy-in-higher-education-II.pdf
Three Models of Digital Literacy: Universal, Creative, Literacy Across Disciplines

United States digital literacy frameworks tend to focus on educational policy details and personal empowerment, the latter encouraging learners to become more effective students, better creators, smarter information consumers, and more influential members of their community.

National policies are vitally important in European digital literacy work, unsurprising for a continent well populated with nation-states and struggling to redefine itself, while still trying to grow economies in the wake of the 2008 financial crisis and subsequent financial pressures.

African digital literacy is more business-oriented.

Middle Eastern nations offer yet another variation, with a strong focus on media literacy. As with other regions, this can be a response to countries with strong state influence or control over local media. It can also represent a drive to produce more locally-sourced content, as opposed to consuming material from abroad, which may elicit criticism of neocolonialism or religious challenges.

p. 14 Digital Literacy for Humanities: What does it mean to be digitally literate in history, literature, or philosophy? Creativity in these disciplines often involves textuality, given the large role writing plays in them, as, for example, in the Folger Shakespeare Library’s instructor’s guide. In the digital realm, this can include web-based writing through social media, along with the creation of multimedia projects through posters, presentations, and video. Information literacy remains a key part of digital literacy in the humanities. The digital humanities movement has not seen much connection with digital literacy, unfortunately, but their alignment seems likely, given the turn toward using digital technologies to explore humanities questions. That development could then foster a spread of other technologies and approaches to the rest of the humanities, including mapping, data visualization, text mining, web-based digital archives, and “distant reading” (working with very large bodies of texts). The digital humanities’ emphasis on making projects may also increase …

Digital Literacy for Business: Digital literacy in this world is focused on manipulation of data, from spreadsheets to more advanced modeling software, leading up to degrees in management information systems. Management classes unsurprisingly focus on how to organize people working on and with digital tools.

Digital Literacy for Computer Science: Naturally, coding appears as a central competency within this discipline. Other aspects of the digital world feature prominently, including hardware and network architecture. Some courses housed within the computer science discipline offer a deeper examination of the impact of computing on society and politics, along with how to use digital tools. Media production plays a minor role here, beyond publications (posters, videos), as many institutions assign multimedia to other departments. Looking forward to a future when automation has become both more widespread and powerful, developing artificial intelligence projects will potentially play a role in computer science literacy.

6. Integrated Planning and Advising Systems for Student Success (iPASS)

7. Instructional Design

8. Online and Blended Learning

In traditional instruction, students’ first contact with new ideas happens in class, usually through direct instruction from the professor; after exposure to the basics, students are turned out of the classroom to tackle the most difficult tasks in learning — those that involve application, analysis, synthesis, and creativity — in their individual spaces. Flipped learning reverses this, by moving first contact with new concepts to the individual space and using the newly-expanded time in class for students to pursue difficult, higher-level tasks together, with the instructor as a guide.

Let’s take a look at some of the myths about flipped learning and try to find the facts.

Myth: Flipped learning is predicated on recording videos for students to watch before class.

Fact: Flipped learning does not require video. Although many real-life implementations of flipped learning use video, there’s nothing that says video must be used. In fact, one of the earliest instances of flipped learning — Eric Mazur’s peer instruction concept, used in Harvard physics classes — uses no video but rather an online text outfitted with social annotation software. And one of the most successful public instances of flipped learning, an edX course on numerical methods designed by Lorena Barba of George Washington University, uses precisely one video. Video is simply not necessary for flipped learning, and many alternatives to video can lead to effective flipped learning environments [http://rtalbert.org/flipped-learning-without-video/].

Myth: Flipped learning replaces face-to-face teaching.

Fact: Flipped learning optimizes face-to-face teaching. Flipped learning may (but does not always) replace lectures in class, but this is not to say that it replaces teaching. Teaching and “telling” are not the same thing.

Myth: Flipped learning has no evidence to back up its effectiveness.

Fact: Flipped learning research is growing at an exponential pace and has been since at least 2014. That research — 131 peer-reviewed articles in the first half of 2017 alone — includes results from primary, secondary, and postsecondary education in nearly every discipline, most showing significant improvements in student learning, motivation, and critical thinking skills.

Myth: Flipped learning is a fad.

Fact: Flipped learning has been with us in the form defined here for nearly 20 years.

Myth: People have been doing flipped learning for centuries.

Fact: Flipped learning is not just a rebranding of old techniques. The basic concept of students doing individually active work to encounter new ideas that are then built upon in class is almost as old as the university itself. So flipped learning is, in a real sense, a modern means of returning higher education to its roots. Even so, flipped learning is different from these time-honored techniques.

Myth: Students and professors prefer lecture over flipped learning.

Fact: Students and professors embrace flipped learning once they understand the benefits. It’s true that professors often enjoy their lectures, and students often enjoy being lectured to. But the question is not who “enjoys” what, but rather what helps students learn the best. They know what the research says about the effectiveness of active learning.

Assertion: Flipped learning provides a platform for implementing active learning in a way that works powerfully for students.

9. Evaluating Technology-based Instructional Innovations

Transitioning to an ROI lens requires three fundamental shifts:
What is the total cost of my innovation, including both new spending and the use of existing resources?

What’s the unit I should measure that connects cost with a change in performance?

How might the expected change in student performance also support a more sustainable financial model?

The Exposure Approach: We don’t provide a way for participants to determine if they learned anything new or whether they now have the confidence or competence to apply what they learned.

The Exemplar Approach: From ‘show and tell’ for adults to show, tell, do, and learn.

The Tutorial Approach: Getting a group that can meet at the same time and place can be challenging. That is why many faculty report a preference for self-paced professional development. Build in simple self-assessment checks. We can add prompts that invite people to engage in some sort of follow-up activity with a colleague. We can also add an elective option for faculty in a tutorial to actually create or do something with what they learned and then submit it for direct or narrative feedback.

The Course Approach: In a non-credit format, these have the benefits of a more structured and lengthy learning experience, even if they are just three- to five-week short courses that meet online or in person once every week or two. They can involve badges, portfolios, peer assessment, self-assessment, or one-on-one feedback from a facilitator.

The Academy Approach: Like the course approach, this is one that tends to be a deeper and more extended experience. People might gather in a cohort over a year or longer. Assessment through coaching and mentoring, the use of portfolios, peer feedback, and much more can be easily incorporated to add a rich assessment element to such longer-term professional development programs.

The Mentoring Approach: The mentors often don’t set specific learning goals with the mentee. Instead, it is often a set of structured meetings, with the mentor also being someone to whom mentees can turn with questions and tips along the way.

The Coaching Approach: A mentor tends to have a broader type of relationship with a person. A coaching relationship tends to be more focused upon specific goals, tasks, or outcomes.

The Peer Approach: This can be done on a 1:1 basis or in small groups, where those who are teaching the same courses are able to compare notes on curricula and teaching models. They might give each other feedback on how to teach certain concepts, how to write syllabi, how to handle certain teaching and learning challenges, and much more. Faculty might sit in on each other’s courses, observe, and give feedback afterward.

The Self-Directed Approach: A self-assessment strategy such as setting goals and creating simple checklists and rubrics to monitor our progress. Or, we invite feedback from colleagues, often in a narrative and/or informal format. We might also create a portfolio of our work, or engage in some sort of learning journal that documents our thoughts, experiments, experiences, and learning along the way.

The Buffet Approach:

10. Open Education

[Figure 1. A Model for Networked Education, building on interpretations of Balancing Privacy and Openness. Credit: Image by Catherine Cronin. CC BY-SA]

11. Learning Analytics

12. Adaptive Teaching and Learning

13. Working with Emerging Technology

In 2014, administrators at Central Piedmont Community College (CPCC) in Charlotte, North Carolina, began talks with members of the North Carolina State Board of Community Colleges and North Carolina Community College System (NCCCS) leadership about starting a CBE program.

Building on an existing project at CPCC for identifying the elements of a digital learning environment (DLE), which was itself influenced by the EDUCAUSE publication The Next Generation Digital Learning Environment: A Report on Research, the committee reached consensus on a DLE concept and a shared lexicon: the “Digital Learning Environment Operational Definitions.”

Figure 1. NC-CBE Digital Learning Environment

AI and ethics

Live Facebook discussion at SCSU VizLab on ethics and technology:

Join our discussion on #technology and #ethics. Share your opinions, suggestions, and ideas.

Posted by InforMedia Services on Thursday, November 1, 2018

Heard on Marketplace this morning (Oct. 22, 2018): ethics of artificial intelligence with John Havens of the Institute of Electrical and Electronics Engineers, which has developed a new ethics certification process for AI: https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ec_bios.pdf

Ethics and AI

***** The student club, the Philosophical Society, has now been recognized by SCSU as a student organization ***

https://ed.ted.com/lessons/the-ethical-dilemma-of-self-driving-cars-patrick-lin

Could it be the case that a random decision is still better than a predetermined one designed to minimize harm?

Similar ethical considerations are also raised:

in this sitcom

https://www.theatlantic.com/sponsored/hpe-2018/the-ethics-of-ai/1865/ (full movie)

This TED talk:

http://blog.stcloudstate.edu/ims/2017/09/19/social-media-algorithms/

http://blog.stcloudstate.edu/ims/2018/10/02/social-media-monopoly/
+++++++++++++++++++
IoT (Internet of Things), Industry 4.0, Big Data, BlockChain, Privacy, Security, Surveillance

http://blog.stcloudstate.edu/ims?s=internet+of+things

Peer-reviewed literature:

Keyword search: ethic* + Internet of Things = 31

Baldini, G., Botterman, M., Neisse, R., & Tallacchini, M. (2018). Ethical Design in the Internet of Things. Science & Engineering Ethics, 24(3), 905–925. https://doi-org.libproxy.stcloudstate.edu/10.1007/s11948-016-9754-5

Berman, F., & Cerf, V. G. (2017). Social and Ethical Behavior in the Internet of Things. Communications of the ACM, 60(2), 6–7. https://doi-org.libproxy.stcloudstate.edu/10.1145/3036698

Murdock, G. (2018). Media Materialities: For a Moral Economy of Machines. Journal of Communication, 68(2), 359–368. https://doi-org.libproxy.stcloudstate.edu/10.1093/joc/jqx023

Carrier, J. G. (2018). Moral economy: What’s in a name. Anthropological Theory, 18(1), 18–35. https://doi-org.libproxy.stcloudstate.edu/10.1177/1463499617735259

Kernaghan, K. (2014). Digital dilemmas: Values, ethics and information technology. Canadian Public Administration, 57(2), 295–317. https://doi-org.libproxy.stcloudstate.edu/10.1111/capa.12069

Koucheryavy, Y., Kirichek, R., Glushakov, R., & Pirmagomedov, R. (2017). Quo vadis, humanity? Ethics on the last mile toward cybernetic organism. Russian Journal of Communication, 9(3), 287–293. https://doi-org.libproxy.stcloudstate.edu/10.1080/19409419.2017.1376561

Keyword search: ethic* + autonomous vehicles = 46

Cerf, V. G. (2017). A Brittle and Fragile Future. Communications of the ACM, 60(7), 7. https://doi-org.libproxy.stcloudstate.edu/10.1145/3102112

Fleetwood, J. (2017). Public Health, Ethics, and Autonomous Vehicles. American Journal of Public Health, 107(4), 532–537. https://doi-org.libproxy.stcloudstate.edu/10.2105/AJPH.2016.303628

Harris, J. (2018). Who Owns My Autonomous Vehicle? Ethics and Responsibility in Artificial and Human Intelligence. Cambridge Quarterly of Healthcare Ethics, 27(4), 599–609. https://doi-org.libproxy.stcloudstate.edu/10.1017/S0963180118000038

Keeling, G. (2018). Legal Necessity, Pareto Efficiency & Justified Killing in Autonomous Vehicle Collisions. Ethical Theory & Moral Practice, 21(2), 413–427. https://doi-org.libproxy.stcloudstate.edu/10.1007/s10677-018-9887-5

Hevelke, A., & Nida-Rümelin, J. (2015). Responsibility for Crashes of Autonomous Vehicles: An Ethical Analysis. Science & Engineering Ethics, 21(3), 619–630. https://doi-org.libproxy.stcloudstate.edu/10.1007/s11948-014-9565-5

Getha-Taylor, H. (2017). The Problem with Automated Ethics. Public Integrity, 19(4), 299–300. https://doi-org.libproxy.stcloudstate.edu/10.1080/10999922.2016.1250575

Keyword search: ethic* + artificial intelligence = 349

Etzioni, A., & Etzioni, O. (2017). Incorporating Ethics into Artificial Intelligence. Journal of Ethics, 21(4), 403–418. https://doi-org.libproxy.stcloudstate.edu/10.1007/s10892-017-9252-2

Köse, U. (2018). Are We Safe Enough in the Future of Artificial Intelligence? A Discussion on Machine Ethics and Artificial Intelligence Safety. BRAIN: Broad Research in Artificial Intelligence & Neuroscience, 9(2), 184–197. Retrieved from http://login.libproxy.stcloudstate.edu/login?qurl=http%3a%2f%2fsearch.ebscohost.com%2flogin.aspx%3fdirect%3dtrue%26db%3daph%26AN%3d129943455%26site%3dehost-live%26scope%3dsite

++++++++++++++++
http://www.cts.umn.edu/events/conference/2018

2018 CTS Transportation Research Conference

Keynote presentations will explore the future of driving and the evolution and potential of automated vehicle technologies.

+++++++++++++++++++
http://blog.stcloudstate.edu/ims/2016/02/26/philosophy-and-technology/

+++++++++++++++++++
more on AI in this IMS blog
http://blog.stcloudstate.edu/ims/2018/09/07/limbic-thought-artificial-intelligence/

AI and autonomous cars as ALA discussion topic
http://blog.stcloudstate.edu/ims/2018/01/11/ai-autonomous-cars-libraries/

and privacy concerns
http://blog.stcloudstate.edu/ims/2018/09/14/ai-for-education/

the call of the German scientists on ethics and AI
http://blog.stcloudstate.edu/ims/2018/09/01/ethics-and-ai/

AI in the race for world dominance
http://blog.stcloudstate.edu/ims/2018/04/21/ai-china-education/

AI for Education

The Promise (and Pitfalls) of AI for Education

Artificial intelligence could have a profound impact on learning, but it also raises key questions.

By Dennis Pierce, Alice Hathaway 08/29/18

https://thejournal.com/articles/2018/08/29/the-promise-of-ai-for-education.aspx

Artificial intelligence (AI) and machine learning are no longer fantastical prospects seen only in science fiction. Products like Amazon Echo and Siri have brought AI into many homes.

Kelly Calhoun Williams, an education analyst for the technology research firm Gartner Inc., cautions there is a clear gap between the promise of AI and the reality of AI.

Artificial intelligence is a broad term used to describe any technology that emulates human intelligence, such as by understanding complex information, drawing its own conclusions and engaging in natural dialog with people.

Machine learning is a subset of AI in which the software can learn or adapt like a human can. Essentially, it analyzes huge amounts of data and looks for patterns in order to classify information or make predictions. The addition of a feedback loop allows the software to “learn” as it goes by modifying its approach based on whether the conclusions it draws are right or wrong.
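A classic way to picture that feedback loop is a perceptron (my choice of example, not one named in the article): the program makes a prediction, checks whether its conclusion was right or wrong, and adjusts its weights only when it errs.

```python
# A minimal sketch of learning from right/wrong feedback: a perceptron
# updates its weights only on mistakes. Data and labels are toy assumptions.
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)  # the hidden pattern to find

w = np.zeros(2)
for epoch in range(10):
    for xi, yi in zip(X, y):
        if np.sign(xi @ w) != yi:   # conclusion was wrong ...
            w += yi * xi            # ... so modify the approach
print("learned weights:", w)        # roughly aligned with [1, 1]
```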

AI can process far more information than a human can, and it can perform tasks much faster and with more accuracy. Some curriculum software developers have begun harnessing these capabilities to create programs that can adapt to each student’s unique circumstances.

For instance, a Seattle-based nonprofit company called Enlearn has developed an adaptive learning platform that uses machine learning technology to create highly individualized learning paths that can accelerate learning for every student. (My note: about learning and technology, see Alfie Kohn in http://blog.stcloudstate.edu/ims/2018/09/11/educational-technology/)

GoGuardian, a Los Angeles company, uses machine learning technology to improve the accuracy of its cloud-based Internet filtering and monitoring software for Chromebooks. (My note: that smells of Big Brother.) Instead of blocking students’ access to questionable material based on a website’s address or domain name, GoGuardian’s software uses AI to analyze the actual content of a page in real time to determine whether it’s appropriate for students. (My note: privacy.)
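For illustration only, content-based filtering of the kind described can be sketched with a tiny word-count classifier (a generic assumption on my part; GoGuardian's actual system is proprietary and surely far more sophisticated): the page is judged by the words on it, not by its URL.

```python
# A generic sketch of content-based filtering: classify a page by the
# words it contains rather than by its address. Toy data, toy model.
from collections import Counter

# toy training data: (page text, is_appropriate)
pages = [
    ("photosynthesis lab worksheet cell biology", True),
    ("algebra practice problems fractions", True),
    ("graphic violence gore shock video", False),
    ("explicit adult content warning", False),
]

# count how often each word appears in each class
counts = {True: Counter(), False: Counter()}
for text, label in pages:
    counts[label].update(text.split())

def score(text, label):
    # naive-Bayes-style word score with add-one smoothing
    total = sum(counts[label].values())
    vocab = len(set(counts[True]) | set(counts[False]))
    s = 1.0
    for w in text.split():
        s *= (counts[label][w] + 1) / (total + vocab)
    return s

def is_appropriate(text):
    return score(text, True) >= score(text, False)

print(is_appropriate("biology worksheet on cell structure"))  # True
print(is_appropriate("shock video with graphic violence"))    # False
```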

AI also raises serious privacy concerns. It requires an increased focus not only on data quality and accuracy, but also on the responsible stewardship of this information. “School leaders need to get ready for AI from a policy standpoint,” Calhoun Williams said. For instance: What steps will administrators take to secure student data and ensure the privacy of this information?

++++++++++++
more on AI in education in this IMS blog
http://blog.stcloudstate.edu/ims?s=artificial+intelligence
