
Policy for Artificial Intelligence

Law is Code: Making Policy for Artificial Intelligence

Jules Polonetsky and Omer Tene January 16, 2019

https://www.ourworld.co/law-is-code-making-policy-for-artificial-intelligence/

Twenty years have passed since renowned Harvard Professor Larry Lessig coined the phrase “Code is Law”, suggesting that in the digital age, computer code regulates behavior much like legislative code traditionally did.  These days, the computer code that powers artificial intelligence (AI) is a salient example of Lessig’s statement.

  • Good AI requires sound data.  One of the principles,  some would say the organizing principle, of privacy and data protection frameworks is data minimization.  Data protection laws require organizations to limit data collection to the extent strictly necessary and retain data only so long as it is needed for its stated goal. 
  • Preventing discrimination – intentional or not.
    When is a distinction between groups permissible or even merited, and when is it untoward?  How should organizations address historically entrenched inequalities that are embedded in data?  New mathematical theories such as “fairness through awareness” enable sophisticated modeling to guarantee statistical parity between groups (see the sketch after this list).
  • Assuring explainability – technological due process.  In privacy and freedom of information frameworks alike, transparency has traditionally been a bulwark against unfairness and discrimination.  As Justice Brandeis once wrote, “Sunlight is the best of disinfectants.”  Yet deep learning means that iterative computer programs derive conclusions for reasons that may not be evident even after forensic inquiry.
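The “statistical parity” mentioned in the discrimination bullet above can be made concrete in a few lines of code. The sketch below is only an illustration, not anything from the article: the toy decisions, the group labels and the ±0.10 audit threshold are all assumptions. It measures the gap in favorable-decision rates between two groups, the quantity that fairness-aware modeling tries to drive toward zero.

```python
# Illustrative sketch (not from the article): measuring statistical parity,
# i.e. the gap in favorable-decision rates between two groups.
def statistical_parity_difference(decisions, groups, group_a, group_b):
    """decisions: 0/1 model outcomes (1 = favorable); groups: aligned group labels."""
    def rate(g):
        in_g = [d for d, grp in zip(decisions, groups) if grp == g]
        return sum(in_g) / max(1, len(in_g))
    return rate(group_a) - rate(group_b)

if __name__ == "__main__":
    # Toy loan-approval outcomes for two hypothetical demographic groups.
    decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
    groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
    gap = statistical_parity_difference(decisions, groups, "A", "B")
    print(f"statistical parity difference (A - B): {gap:+.2f}")
    # Perfect statistical parity means a gap of 0; a common illustrative
    # audit rule is to flag anything beyond +/-0.10 for review.
```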

Yet even with code as law and a rising need for law in code, policymakers do not need to become mathematicians, engineers and coders.  Instead, institutions must develop and enhance their technical toolbox by hiring experts and consulting with top academics, industry researchers and civil society voices.  Responsible AI requires access not only to lawyers, ethicists and philosophers but also to technical leaders and subject matter experts, to ensure an appropriate balance between economic and scientific benefits to society on the one hand and individual rights and freedoms on the other.

+++++++++++++
more on AI in this IMS blog
http://blog.stcloudstate.edu/ims?s=artificial+intelligence

Inclusive Design of Artificial Intelligence

EASI Free Webinar: Inclusive Design of Artificial Intelligence

Thursday, October 25
Artificial Intelligence (AI) and accessibility: will it enhance or impede accessibility for users with disabilities?
Artificial intelligence used to be all about the distant future, but it has now become mainstream, and it is already impacting us in ways we may not recognize. It is involved in search engines. It is involved in collecting and analyzing big data. It is involved in all the arguments about the way social media is being used to affect, or try to affect, our thinking and our politics. How else might it play a role in the future of accessibility?
The webinar presenter, Jutta Treviranus at University of Toronto, will explore these questions in the webinar on Thursday, October 25, at 11 Pacific, noon Mountain, 1 Central or 2 Eastern. You can register now, but registration closes Wednesday, Oct. 24 at midnight Eastern.
You can register now on the web at https://na01.safelinks.protection.outlook.com/?url=http%3A%2F%2Feasi.cc&data=01%7C01%7Cpmiltenoff%40STCLOUDSTATE.EDU%7C4afdbee13881489312d308d6383f541b%7C5e40e2ed600b4eeaa9851d0c9dcca629%7C0&sdata=O7nOVG8dbkDX7lf%2FR6nWJi4f6qyHklGKfc%2FaB8p4r5o%3D&reserved=0 and look for the link for webinars.
Those who register should get directions for joining sent late Wednesday or early on Thursday.

+++++++++++
more on AI in this IMS blog
http://blog.stcloudstate.edu/ims?s=artificial+intelligence

Limbic thought and artificial intelligence

Limbic thought and artificial intelligence

September 5, 2018  Siddharth (Sid) Pai

https://www.linkedin.com/pulse/limbic-thought-artificial-intelligence-siddharth-sid-pai/

An AI programme “catastrophically forgets” the learnings from its first set of data and would have to be retrained from scratch with new data. The website futurism.com says a completely new set of algorithms would have to be written for a programme that has mastered face recognition, if it is now also expected to recognize emotions. Data on emotions would have to be manually relabelled and then fed into this completely different algorithm for the altered programme to have any use. The original facial recognition programme would have “catastrophically forgotten” the things it learnt about facial recognition as it takes on new code for recognizing emotions. According to the website, this is because computer programmes cannot understand the underlying logic that they have been coded with.
Irina Higgins, a senior researcher at Google DeepMind, has recently announced that she and her team have begun to crack the code on “catastrophic forgetting”.
As far as I am concerned, this limbic thinking is “catastrophic thinking,” which is the only true antipode to AI’s “catastrophic forgetting”. It will be eons before AI thinks with a limbic brain, let alone has consciousness.
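To make “catastrophic forgetting” concrete, here is a minimal sketch, assuming scikit-learn is available; the digits data, the task split and the network size are illustrative choices, not anything from the article or from DeepMind’s work. A small network is trained incrementally on digits 0–4, then only on digits 5–9; its accuracy on the first task typically collapses after the second round of training, which is exactly the forgetting described above.

```python
# Minimal illustration of catastrophic forgetting (assumes scikit-learn).
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X = X / 16.0  # scale pixel values to [0, 1]

# Task A: digits 0-4, Task B: digits 5-9.
mask_a, mask_b = y <= 4, y >= 5
Xa_tr, Xa_te, ya_tr, ya_te = train_test_split(X[mask_a], y[mask_a], random_state=0)
Xb_tr, _, yb_tr, _ = train_test_split(X[mask_b], y[mask_b], random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(32,), random_state=0)

# Train incrementally on Task A only.
for _ in range(30):
    clf.partial_fit(Xa_tr, ya_tr, classes=np.arange(10))
print("Task A accuracy after training on A:", round(clf.score(Xa_te, ya_te), 3))

# Keep training on Task B only -- no Task A examples are revisited.
for _ in range(30):
    clf.partial_fit(Xb_tr, yb_tr)
print("Task A accuracy after training on B:", round(clf.score(Xa_te, ya_te), 3))
# The second number is typically far lower: the network has
# "catastrophically forgotten" Task A.
```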
++++++++++++++++++

Stephen Hawking warns artificial intelligence could end mankind

https://www.bbc.com/news/technology-30290540
++++++++++++++++++++
Thank you, Sarnath Ramnat (sarnath@stcloudstate.edu), for finding this.

An AI Wake-Up Call From Ancient Greece

  https://www.project-syndicate.org/commentary/artificial-intelligence-pandoras-box-by-adrienne-mayor-2018-10

++++++++++++++++++++
more on AI in this IMS blog
http://blog.stcloudstate.edu/ims?s=artifical+intelligence

Super Mario gets artificial intelligence

Researchers create ‘self-aware’ Super Mario with artificial intelligence

http://mashable.com/2015/01/19/super-mario-artificial-intelligence/

A team of German researchers has used artificial intelligence to create a “self-aware” version of Super Mario who can respond to verbal commands and automatically play his own game.

Artificial Intelligence helps Mario play his own game

Students at the University of Tübingen have used Mario as part of their efforts to find out how the human brain works.

The cognitive modelling unit claim their project has generated “a fully functional program” and “an alive and somewhat intelligent artificial agent”.

http://www.bbc.co.uk/newsbeat/30879456

Can Super Mario Save Artificial Intelligence?

The most popular approaches today focus on Big Data, or mimicking humans that already know how to do some task. But sheer mimicry breaks down when one gives a machine new tasks, and, as I explained a few weeks ago, Big Data approaches tend to excel at finding correlations without necessarily being able to induce the rules of the game. If Big Data alone is not a powerful enough tool to induce a strategy in a complex but well-defined game like chess, then that’s a problem, since the real world is vastly more open-ended, and considerably more complicated.

http://www.newyorker.com/tech/elements/can-super-mario-save-artificial-intelligence

intelligence measure

Intelligence: a history

Intelligence has always been used as a fig-leaf to justify domination and destruction. No wonder we fear super-smart robots

Stephen Cave

https://aeon.co/essays/on-the-dark-history-of-intelligence-as-domination

To say that someone is or is not intelligent has never been merely a comment on their mental faculties. It is always also a judgment on what they are permitted to do. Intelligence, in other words, is political.

The problem has taken an interesting 21st-century twist with the rise of Artificial Intelligence (AI).

The term ‘intelligence’ itself has never been popular with English-language philosophers. Nor does it have a direct translation into German or ancient Greek, two of the other great languages in the Western philosophical tradition. But that doesn’t mean philosophers weren’t interested in it. Indeed, they were obsessed with it, or more precisely a part of it: reason or rationality. The term ‘intelligence’ managed to eclipse its more old-fashioned relative in popular and political discourse only with the rise of the relatively new-fangled discipline of psychology, which claimed intelligence for itself.

Plato concludes, in The Republic, that the ideal ruler is ‘the philosopher king’, as only a philosopher can work out the proper order of things. This idea was revolutionary at the time. Athens had already experimented with democracy, the rule of the people – but to count as one of those ‘people’ you just had to be a male citizen, not necessarily intelligent. Elsewhere, the governing classes were made up of inherited elites (aristocracy), or by those who believed they had received divine instruction (theocracy), or simply by the strongest (tyranny).

Plato’s novel idea fell on the eager ears of the intellectuals, including those of his pupil Aristotle. Aristotle was always the more practical, taxonomic kind of thinker. He took the notion of the primacy of reason and used it to establish what he believed was a natural social hierarchy.

So at the dawn of Western philosophy, we have intelligence identified with the European, educated, male human. It becomes an argument for his right to dominate women, the lower classes, uncivilised peoples and non-human animals. While Plato argued for the supremacy of reason and placed it within a rather ungainly utopia, only one generation later, Aristotle presents the rule of the thinking man as obvious and natural.

The late Australian philosopher and conservationist Val Plumwood has argued that the giants of Greek philosophy set up a series of linked dualisms that continue to inform our thought. Opposing categories such as intelligent/stupid, rational/emotional and mind/body are linked, implicitly or explicitly, to others such as male/female, civilised/primitive, and human/animal. These dualisms aren’t value-neutral, but fall within a broader dualism, as Aristotle makes clear: that of dominant/subordinate or master/slave. Together, they make relationships of domination, such as patriarchy or slavery, appear to be part of the natural order of things.

Descartes rendered nature literally mindless, and so devoid of intrinsic value – which thereby legitimated the guilt-free oppression of other species.

For Kant, only reasoning creatures had moral standing. Rational beings were to be called ‘persons’ and were ‘ends in themselves’. Beings that were not rational, on the other hand, had ‘only a relative value as means, and are therefore called things’. We could do with them what we liked.

This line of thinking was extended to become a core part of the logic of colonialism. The argument ran like this: non-white peoples were less intelligent; they were therefore unqualified to rule over themselves and their lands. It was therefore perfectly legitimate – even a duty, ‘the white man’s burden’ – to destroy their cultures and take their territory.

The same logic was applied to women, who were considered too flighty and sentimental to enjoy the privileges afforded to the ‘rational man’.

Galton believed that intellectual ability was hereditary and could be enhanced through selective breeding. He decided to find a way to scientifically identify the most able members of society and encourage them to breed – prolifically, and with each other. The less intellectually capable should be discouraged from reproducing, or indeed prevented, for the sake of the species. Thus eugenics and the intelligence test were born together.

From David Hume to Friedrich Nietzsche, and Sigmund Freud through to postmodernism, there are plenty of philosophical traditions that challenge the notion that we’re as intelligent as we’d like to believe, and that intelligence is the highest virtue.

From 2001: A Space Odyssey to the Terminator films, writers have fantasised about machines rising up against us. Now we can see why. If we’re used to believing that the top spots in society should go to the brainiest, then of course we should expect to be made redundant by bigger-brained robots and sent to the bottom of the heap.

Natural stupidity, rather than artificial intelligence, remains the greatest risk.

++++++++++++++++++++++
more on intelligence in this IMS blog
http://blog.stcloudstate.edu/ims?s=intelligence

AI in education

https://www.edsurge.com/news/2019-01-23-how-much-artificial-intelligence-should-there-be-in-the-classroom

a two-day conference about artificial intelligence in education organized by a company called Squirrel AI.

he believes that having AI-driven tutors or instructors will help them each get the individual approach they need.

the Chinese government has declared a national goal of surpassing the U.S. in AI technology by the year 2030, so there is almost a Sputnik-like push for the tech going on right now in China.

+++++++++++++++++
more on AI in education in this IMS blog
http://blog.stcloudstate.edu/ims?s=Artificial+Intelligence+and+education

AI deep learning

Machine learning for sensors

June 3, 2019

https://phys.org/news/2019-06-machine-sensors.html

Researchers at the Fraunhofer Institute for Microelectronic Circuits and Systems IMS have developed AIfES, an artificial intelligence (AI) concept for microcontrollers and sensors that contains a completely configurable artificial neural network. AIfES is a platform-independent machine learning library which can be used to realize self-learning microelectronics requiring no connection to a cloud or to high-performance computers. The sensor-related AI system recognizes handwriting and gestures, enabling for example gesture control of input when the library is running on a wearable.

a machine learning library programmed in C that can run on microcontrollers, but also on other platforms such as PCs, Raspberry Pi and Android.
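AIfES itself is a C library and its real API is not reproduced here; the following is only a conceptual Python sketch, with made-up layer sizes, weights and input, of the kind of tiny, fully configurable feed-forward network that such a sensor-level library evaluates on-device, small enough for the parameters to fit in a microcontroller’s RAM.

```python
# Conceptual sketch only -- NOT the AIfES API. All layer sizes, weights and
# the sample input are made-up assumptions; it just shows how small a fully
# configurable dense network for a sensor task can be.
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def dense_forward(x, layers):
    """Feed-forward pass through a list of (weights, bias, activation) layers."""
    for w, b, act in layers:
        x = act(x @ w + b)
    return x

rng = np.random.default_rng(0)

# A 3-input, 8-hidden, 2-output network: about 50 parameters, i.e. roughly
# 200 bytes of float32 weights -- easily within a microcontroller's RAM.
layers = [
    (rng.normal(size=(3, 8)).astype(np.float32), np.zeros(8, np.float32), relu),
    (rng.normal(size=(8, 2)).astype(np.float32), np.zeros(2, np.float32), lambda z: z),
]

sample = np.array([0.2, -0.1, 0.7], dtype=np.float32)  # e.g. one accelerometer reading
print(dense_forward(sample, layers))                    # two raw class scores
```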

+++++++++++++++++
more about machine learning in this IMS blog
http://blog.stcloudstate.edu/ims?s=machine+learning

data interference

APRIL 21, 2019 Zeynep Tufekci

Think You’re Discreet Online? Think Again

Because of technological advances and the sheer amount of data now available about billions of other people, discretion no longer suffices to protect your privacy. Computer algorithms and network analyses can now infer, with a sufficiently high degree of accuracy, a wide range of things about you that you may have never disclosed, including your moods, your political beliefs, your sexual orientation and your health.

There is no longer such a thing as individually “opting out” of our privacy-compromised world.

In 2017, the newspaper The Australian published an article, based on a leaked document from Facebook, revealing that the company had told advertisers that it could predict when younger users, including teenagers, were feeling “insecure,” “worthless” or otherwise in need of a “confidence boost.” Facebook was apparently able to draw these inferences by monitoring photos, posts and other social media data.

In 2017, academic researchers, armed with data from more than 40,000 Instagram photos, used machine-learning tools to accurately identify signs of depression in a group of 166 Instagram users. Their computer models turned out to be better predictors of depression than humans who were asked to rate whether photos were happy or sad and so forth.

Computational inference can also be a tool of social control. The Chinese government, having gathered biometric data on its citizens, is trying to use big data and artificial intelligence to single out “threats” to Communist rule, including the country’s Uighurs, a mostly Muslim ethnic group.

+++++++++++++

Zeynep Tufekci and Seth Stephens-Davidowitz: Privacy is over

https://www.centreforideas.com/article/zeynep-tufekci-and-seth-stephens-davidowitz-privacy-over

+++++++++++

Zeynep Tufekci writes about security and data privacy for The New York Times, and about disinformation’s threat to democracy for WIRED.

++++++++++
more on privacy in this IMS blog
http://blog.stcloudstate.edu/ims?s=privacy

OLC Collaborate

OLC Collaborate

https://onlinelearningconsortium.org/attend-2019/innovate/

schedule:

https://onlinelearningconsortium.org/attend-2019/innovate/program/all_sessions/#streamed

Wednesday

++++++++++++++++
THE NEW PROFESSOR: HOW I PODCASTED MY WAY INTO STUDENTS’ LIVES (AND HOW YOU CAN, TOO)

Concurrent Session 1

https://onlinelearningconsortium.org/olc-innovate-2019-session-page/?session=6734&kwds=

+++++++++++++

Creating A Cost-Free Course

+++++++++++++++++

Idea Hose: AI Design For People
Date: Wednesday, April 3rd
Time: 3:30 PM to 4:15 PM
Conference Session: Concurrent Session 3
Streamed session
Lead Presenter: Brian Kane (General Design LLC)
Track: Research: Designs, Methods, and Findings
Location: Juniper A
Session Duration: 45min
Brief Abstract: What happens when you apply design thinking to AI? AI presents a fundamental change in the way people interact with machines. By applying design thinking to the way AI is made and used, we can generate an unlimited amount of new ideas for products and experiences that people will love and use.
https://onlinelearningconsortium.org/olc-innovate-2019-session-page/?session=6964&kwds=
Notes from the session:
design thinking: get out of old mental models; new narratives; get out of the sci-fi movies.
narrative generators: AI design for people stream
we need machines to make mistakes, AI even more than traditional software.
Lessons learned: don’t replace people
creativity engines – automated creativity.
trends:
https://www.androidauthority.com/nvidia-jetson-nano-966609/
https://community.infiniteflight.com/t/virtualhub-ios-and-android-free/142837?u=sudafly
http://bit.ly/VirtualHub
Thursday
Chatbots, Game Theory, And AI: Adapting Learning For Humans, Or Innovating Humans Out Of The Picture?
Date: Thursday, April 4th
Time: 8:45 AM to 9:30 AM
Conference Session: Concurrent Session 4
Streamed session
Lead Presenter: Matt Crosslin (University of Texas at Arlington LINK Research Lab)
Track: Experiential and Life-Long Learning
Location: Cottonwood 4-5
Session Duration: 45min
Brief Abstract: How can teachers utilize chatbots and artificial intelligence in ways that won’t remove humans from the education picture? Using tools like Twine and Recast.AI chatbots, this session will focus on how to build adaptive content that allows learners to create their own heutagogical educational pathways based on individual needs.
++++++++++++++++

This Is Us: Fostering Effective Storytelling Through EdTech & Student’s Influence As Digital Citizens
Date: Thursday, April 4th
Time: 9:45 AM to 10:30 AM
Conference Session: Concurrent Session 5
Streamed session
Lead Presenter: Maikel Alendy (FIU Online)
Co-presenter: Sky V. King (FIU Online – Florida International University)
Track: Teaching and Learning Practice
Location: Cottonwood 4-5
Session Duration: 45min
Brief Abstract: “This is Us” demonstrates how leveraging storytelling in learning engages students to effectively communicate their authentic story, transitioning from consumerism to become creators and influencers. Addressing responsibility as a digital citizen, information and digital literacy, online privacy, and strategies with examples using several edtech tools will be reviewed.
++++++++++++++++++

Personalized Learning At Scale: Using Adaptive Tools & Digital Assistants
Date: Thursday, April 4th
Time: 11:15 AM to 12:00 PM
Conference Session: Concurrent Session 6
Streamed session
Lead Presenter: Kristin Bushong (Arizona State University )
Co-presenter: Heather Nebrich (Arizona State University)
Track: Effective Tools, Toys and Technologies
Location: Juniper C
Session Duration: 45min
Brief Abstract: Considering today’s overstimulated lifestyle, how do we engage busy learners to stay on task? Join this session to discover current efforts in implementing ubiquitous educational opportunities through customized interests and personalized learning aspirations, e.g., adaptive math tools, AI support communities, and memory management systems.
+++++++++++++

High-Impact Practices Online: Starting The Conversation
Date: Thursday, April 4th
Time: 1:15 PM to 2:00 PM
Conference Session: Concurrent Session 7
Streamed session
Lead Presenter: Katie Linder (Oregon State University)
Co-presenter: June Griffin (University of Nebraska-Lincoln)
Track: Teaching and Learning Practice
Location: Cottonwood 4-5
Session Duration: 45min
Brief Abstract: The concept of High-impact Educational Practices (HIPs) is well-known, but the conversation about transitioning HIPs online is new. In this session, contributors from the edited collection High-Impact Practices in Online Education will share current HIP research, and offer ideas for participants to reflect on regarding implementing HIPs into online environments.
https://www.aacu.org/leap/hips
https://www.aacu.org/sites/default/files/files/LEAP/HIP_tables.pdf
+++++++++++++++++++++++

Human Skills For Digital Natives: Expanding Our Definition Of Tech And Media Literacy
Date: Thursday, April 4th
Time: 3:45 PM to 5:00 PM
Streamed session
Lead Presenter: Manoush Zomorodi (Stable Genius Productions)
Track: N/A
Location: Adams Ballroom
Session Duration: 1hr 15min
Brief Abstract: How can we ensure that students and educators thrive in increasingly digital environments, where change is the only constant? In this keynote, author and journalist Manoush Zomorodi shares her pioneering approach to researching the effects of technology on our behavior. Her unique brand of journalism includes deep-dive investigations into such timely topics as personal privacy, information overload, and the Attention Economy. These interactive multi-media experiments with tens of thousands of podcast listeners will inspire you to think creatively about how we use technology to educate and grow communities.

Friday

Anger Is An Energy
Date: Friday, April 5th
Time: 8:30 AM to 9:30 AM
Streamed session
Lead Presenter: Michael Caulfield (Washington State University-Vancouver)
Track: N/A
Location: Adams Ballroom
Position: 2
Session Duration: 60min
Brief Abstract: Years ago, John Lydon (then Johnny Rotten) sang that “anger is an energy.” And he was right, of course. Anger isn’t an emotion, like happiness or sadness. It’s a reaction, a swelling up of a confused urge. I’m a person profoundly uncomfortable with anger, but yet I’ve found in my professional career that often my most impactful work begins in a place of anger: anger against injustice, inequality, lies, or corruption. And often it is that anger that gives me the energy and endurance to make a difference, to move the mountains that need to be moved. In this talk I want to think through our uneasy relationship with anger; how it can be helpful, and how it can destroy us if we’re not careful.
++++++++++++++++

Improving Online Teaching Practice, Creating Community And Sharing Resources
Date: Friday, April 5th
Time: 10:45 AM to 11:30 AM
Conference Session: Concurrent Session 10
Streamed session
Lead Presenter: Laurie Daily (Augustana University)
Co-presenter: Sharon Gray (Augustana University)
Track: Problems, Processes, and Practices
Location: Juniper A
Session Duration: 45min
Brief Abstract: The purpose of this session is to explore the implementation of a Community of Practice to support professional development, enhance online course and program development efforts, and to foster community and engagement between and among full- and part-time faculty.
+++++++++++++++

It’s Not What You Teach, It’s HOW You Teach: A Story-Driven Approach To Course Design
Date: Friday, April 5th
Time: 11:45 AM to 12:30 PM
Conference Session: Concurrent Session 11
Streamed session
Lead Presenter: Katrina Rainer (Strayer University)
Co-presenter: Jennifer M McVay-Dyche (Strayer University)
Track: Teaching and Learning Practice
Location: Cottonwood 2-3
Session Duration: 45min
Brief Abstract: Learning is more effective and organic when we teach through the art of storytelling. At Strayer University, we are blending the principles of story-driven learning with research-based instructional design practices to create engaging learning experiences. This session will provide you with strategies to strategically infuse stories into any lesson, course, or curriculum.
