Searching for "artificial intelligence"

Inclusive Design of Artificial Intelligence

EASI Free Webinar: Inclusive Design of Artificial Intelligence, Thursday, October 25
Artificial Intelligence (AI) and accessibility: will it enhance or
impede accessibility for users with disabilities?
Artificial intelligence used to be all about the distant future, but it
has now become mainstream and is already affecting us in ways we may not
recognize. It is involved in search engines. It is involved in the
collecting and analyzing of big data. It is involved in all the arguments
about the way social media is being used to affect, or try to affect,
our thinking and our politics. How else might it play a role in the
future of accessibility?
The webinar presenter, Jutta Treviranus of the University of Toronto, will
explore these questions in the webinar on Thursday, October 25, at 11 a.m.
Pacific, noon Mountain, 1 p.m. Central, or 2 p.m. Eastern. You can register
now, but registration closes Wednesday, Oct. 24, at midnight Eastern.
You can register now on the web at http://easi.cc and look for the link
for webinars.
Those who register should receive directions for joining late Wednesday
or early Thursday.

+++++++++++
more on AI in this IMS blog
http://blog.stcloudstate.edu/ims?s=artificial+intelligence

Limbic thought and artificial intelligence

Limbic thought and artificial intelligence

September 5, 2018  Siddharth (Sid) Pai

https://www.linkedin.com/pulse/limbic-thought-artificial-intelligence-siddharth-sid-pai/

An AI programme “catastrophically forgets” what it learned from its first set of data and would have to be retrained from scratch with new data. The website futurism.com says a completely new set of algorithms would have to be written for a programme that has mastered face recognition if it is now also expected to recognize emotions. Data on emotions would have to be manually relabelled and then fed into this completely different algorithm for the altered programme to be of any use. The original facial recognition programme would have “catastrophically forgotten” what it learnt about facial recognition as it takes on new code for recognizing emotions. According to the website, this is because computer programmes cannot understand the underlying logic that they have been coded with.
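
A minimal sketch of the phenomenon, under assumed toy conditions (a generic linear classifier and two synthetic tasks, not the systems described in the article): a model trained sequentially on a second task loses much of its accuracy on the first.

```python
# Illustrative only: "catastrophic forgetting" with a simple linear classifier
# trained sequentially on two synthetic tasks (assumed toy setup).
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

def make_task(shift):
    # Two Gaussian blobs; `shift` moves the whole problem to a new region,
    # so the boundary that solves task B no longer solves task A.
    X0 = rng.normal(loc=[0.0 + shift, 0.0], scale=0.5, size=(500, 2))
    X1 = rng.normal(loc=[2.0 + shift, 2.0], scale=0.5, size=(500, 2))
    return np.vstack([X0, X1]), np.array([0] * 500 + [1] * 500)

Xa, ya = make_task(0.0)   # task A
Xb, yb = make_task(5.0)   # task B: same labels, different input region

clf = SGDClassifier(random_state=0)
clf.partial_fit(Xa, ya, classes=[0, 1])
print("accuracy on A after training on A:", clf.score(Xa, ya))

for _ in range(50):                      # keep training, but only on task B
    clf.partial_fit(Xb, yb)
print("accuracy on A after training on B:", clf.score(Xa, ya))
print("accuracy on B after training on B:", clf.score(Xb, yb))
```
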
Irina Higgins, a senior researcher at Google DeepMind, has recently announced that she and her team have begun to crack the code on “catastrophic forgetting”.
As far as I am concerned, this limbic thinking is “catastrophic thinking” which is the only true antipode to AI’s “catastrophic forgetting”. It will be eons before AI thinks with a limbic brain, let alone has consciousness.
++++++++++++++++++

Stephen Hawking warns artificial intelligence could end mankind

https://www.bbc.com/news/technology-30290540
++++++++++++++++++++
Thank you, Sarnath Ramnat (sarnath@stcloudstate.edu), for the find.

An AI Wake-Up Call From Ancient Greece

  https://www.project-syndicate.org/commentary/artificial-intelligence-pandoras-box-by-adrienne-mayor-2018-10

++++++++++++++++++++
more on AI in this IMS blog
http://blog.stcloudstate.edu/ims?s=artifical+intelligence

Super Mario gets artificial intelligence

Researchers create ‘self-aware’ Super Mario with artificial intelligence

http://mashable.com/2015/01/19/super-mario-artificial-intelligence/

A team of German researchers has used artificial intelligence to create a “self-aware” version of Super Mario who can respond to verbal commands and automatically play his own game.

Artificial Intelligence helps Mario play his own game

Students at the University of Tübingen have used Mario as part of their efforts to find out how the human brain works.

The cognitive modelling unit claim their project has generated “a fully functional program” and “an alive and somewhat intelligent artificial agent”.

http://www.bbc.co.uk/newsbeat/30879456

Can Super Mario Save Artificial Intelligence?

The most popular approaches today focus on Big Data, or mimicking humans that already know how to do some task. But sheer mimicry breaks down when one gives a machine new tasks, and, as I explained a few weeks ago, Big Data approaches tend to excel at finding correlations without necessarily being able to induce the rules of the game. If Big Data alone is not a powerful enough tool to induce a strategy in a complex but well-defined game like chess, then that’s a problem, since the real world is vastly more open-ended, and considerably more complicated.

http://www.newyorker.com/tech/elements/can-super-mario-save-artificial-intelligence

intelligence measure

Intelligence: a history

Intelligence has always been used as a fig leaf to justify domination and destruction. No wonder we fear super-smart robots.

Stephen Cave

https://aeon.co/essays/on-the-dark-history-of-intelligence-as-domination

To say that someone is or is not intelligent has never been merely a comment on their mental faculties. It is always also a judgment on what they are permitted to do. Intelligence, in other words, is political.

The problem has taken an interesting 21st-century twist with the rise of Artificial Intelligence (AI).

The term ‘intelligence’ itself has never been popular with English-language philosophers. Nor does it have a direct translation into German or ancient Greek, two of the other great languages in the Western philosophical tradition. But that doesn’t mean philosophers weren’t interested in it. Indeed, they were obsessed with it, or more precisely a part of it: reason or rationality. The term ‘intelligence’ managed to eclipse its more old-fashioned relative in popular and political discourse only with the rise of the relatively new-fangled discipline of psychology, which claimed intelligence for itself.

Plato concluded, in The Republic, that the ideal ruler is ‘the philosopher king’, as only a philosopher can work out the proper order of things. This idea was revolutionary at the time. Athens had already experimented with democracy, the rule of the people – but to count as one of those ‘people’ you just had to be a male citizen, not necessarily intelligent. Elsewhere, the governing classes were made up of inherited elites (aristocracy), or by those who believed they had received divine instruction (theocracy), or simply by the strongest (tyranny).

Plato’s novel idea fell on the eager ears of the intellectuals, including those of his pupil Aristotle. Aristotle was always the more practical, taxonomic kind of thinker. He took the notion of the primacy of reason and used it to establish what he believed was a natural social hierarchy.

So at the dawn of Western philosophy, we have intelligence identified with the European, educated, male human. It becomes an argument for his right to dominate women, the lower classes, uncivilised peoples and non-human animals. While Plato argued for the supremacy of reason and placed it within a rather ungainly utopia, only one generation later, Aristotle presented the rule of the thinking man as obvious and natural.

The late Australian philosopher and conservationist Val Plumwood has argued that the giants of Greek philosophy set up a series of linked dualisms that continue to inform our thought. Opposing categories such as intelligent/stupid, rational/emotional and mind/body are linked, implicitly or explicitly, to others such as male/female, civilised/primitive, and human/animal. These dualisms aren’t value-neutral, but fall within a broader dualism, as Aristotle makes clear: that of dominant/subordinate or master/slave. Together, they make relationships of domination, such as patriarchy or slavery, appear to be part of the natural order of things.

Descartes rendered nature literally mindless, and so devoid of intrinsic value – which thereby legitimated the guilt-free oppression of other species.

For Kant, only reasoning creatures had moral standing. Rational beings were to be called ‘persons’ and were ‘ends in themselves’. Beings that were not rational, on the other hand, had ‘only a relative value as means, and are therefore called things’. We could do with them what we liked.

This line of thinking was extended to become a core part of the logic of colonialism. The argument ran like this: non-white peoples were less intelligent; they were therefore unqualified to rule over themselves and their lands. It was therefore perfectly legitimate – even a duty, ‘the white man’s burden’ – to destroy their cultures and take their territory.

The same logic was applied to women, who were considered too flighty and sentimental to enjoy the privileges afforded to the ‘rational man’.

Galton believed that intellectual ability was hereditary and could be enhanced through selective breeding. He decided to find a way to scientifically identify the most able members of society and encourage them to breed – prolifically, and with each other. The less intellectually capable should be discouraged from reproducing, or indeed prevented, for the sake of the species. Thus eugenics and the intelligence test were born together.

From David Hume to Friedrich Nietzsche, and Sigmund Freud through to postmodernism, there are plenty of philosophical traditions that challenge the notion that we’re as intelligent as we’d like to believe, and that intelligence is the highest virtue.

From 2001: A Space Odyssey to the Terminator films, writers have fantasised about machines rising up against us. Now we can see why. If we’re used to believing that the top spots in society should go to the brainiest, then of course we should expect to be made redundant by bigger-brained robots and sent to the bottom of the heap.

Natural stupidity, rather than artificial intelligence, remains the greatest risk.

++++++++++++++++++++++
more on intelligence in this IMS blog
http://blog.stcloudstate.edu/ims?s=intelligence

shaping the future of AI

Shaping the Future of A.I.

Daniel Burrus

https://www.linkedin.com/pulse/shaping-future-ai-daniel-burrus/

Way back in 1983, I identified A.I. as one of 20 exponential technologies that would increasingly drive economic growth for decades to come.

Artificial intelligence applies to computing systems designed to perform tasks usually reserved for human intelligence using logic, if-then rules, decision trees and machine learning to recognize patterns from vast amounts of data, provide insights, predict outcomes and make complex decisions. A.I. can be applied to pattern recognition, object classification, language translation, data translation, logistical modeling and predictive modeling, to name a few. It’s important to understand that all A.I. relies on vast amounts of quality data and advanced analytics technology. The quality of the data used will determine the reliability of the A.I. output.
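
As a concrete illustration of the if-then rules and pattern recognition mentioned above, here is a minimal, generic sketch (assuming scikit-learn and its bundled iris dataset, neither of which is mentioned in the article): a decision tree learns its rules from labeled data and then classifies held-out examples.

```python
# Illustrative sketch: a decision tree learns if-then rules from labeled data.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", tree.score(X_test, y_test))
print(export_text(tree))   # the learned if-then rules, printed as text
```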

Machine learning is a subset of A.I. that utilizes advanced statistical techniques to enable computing systems to improve at tasks with experience over time. Chatbots like Amazon’s Alexa, Apple’s Siri, or any of the others from companies like Google and Microsoft all get better every year thanks to all of the use we give them and the machine learning that takes place in the background.

Deep learning is a subset of machine learning that uses advanced algorithms to enable an A.I. system to train itself to perform tasks by exposing multi-layered neural networks to vast amounts of data, then using what has been learned to recognize new patterns contained in the data. Learning can be human-supervised learning, unsupervised learning, and/or reinforcement learning, like Google used with DeepMind to learn how to beat humans at the complex game Go. Reinforcement learning will drive some of the biggest breakthroughs.
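
A minimal sketch of the multi-layered idea, using scikit-learn's MLPClassifier on a synthetic dataset (an assumed generic example, not any of the systems named above): the network's hidden layers let it learn a non-linear decision boundary directly from data.

```python
# Illustrative sketch: a small multi-layer neural network trained on toy data.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_moons(n_samples=1000, noise=0.2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Two hidden layers of 32 units each; the weights are learned from the data.
net = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
net.fit(X_train, y_train)
print("held-out accuracy:", net.score(X_test, y_test))
```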

Autonomous computing uses advanced A.I. tools such as deep learning to enable systems to be self-governing and capable of acting according to situational data without human command. A.I. autonomy includes perception, high-speed analytics, machine-to-machine communications and movement. For example, autonomous vehicles use all of these in real time to successfully pilot a vehicle without a human driver.

Augmented thinking: Over the next five years and beyond, A.I. will become increasingly embedded at the chip level into objects, processes, products and services, and humans will augment their personal problem-solving and decision-making abilities with the insights A.I. provides to get to a better answer faster.

Technology is not good or evil; it is how we as humans apply it. Since we can’t stop the increasing power of A.I., I want us to direct its future, putting it to the best possible use for humans.

++++++++++
more on AI in this IMS blog
http://blog.stcloudstate.edu/ims?s=artifical+intelligence

more on deep learning in this IMS blog
http://blog.stcloudstate.edu/ims?s=deep+learning

eLearning Trends To Treat With Caution

4 eLearning Trends To Treat With Caution

https://elearningindustry.com/instructional-design-models-and-theories

Jumping onboard to a new industry trend with insufficient planning can result in your initiative failing to achieve its objective and, in the worst case, even hinder the learning process. So which hot topics should you treat with care?

1. Virtual Reality, or VR

Ultimately, the key question to consider when adopting anything new is whether it will help you achieve the desired outcome. VR shouldn’t be incorporated into learning just because it’s a common buzzword. Before you decide to give it a go, consider how it’s going to help your learner, and whether it’s truly the most effective or efficient way to meet the learning goal.

2. Gamification

If you are considering introducing an interactive element to your learning, don’t let this deter you—just ensure that it’s relevant to the content and will aid the learning process.

3. Artificial Intelligence, or AI

If you are confident that a trend is going to yield better results for your learners, the ROI you see may well justify the upfront resources it requires.
Again, it all comes down to whether a trend is going to deliver in terms of achieving an objective.

4. Microlearning

The theory behind microlearning makes a lot of sense: organizing content into sections so that learning can fit easily with modern-day attention spans and learners’ busy lifestyles is not a bad thing. The worry is that the buzzword ‘microlearning’ has grown legs of its own, meaning the industry is losing sight of its founding principles.

+++++++++
more on elearning in this IMS blog
http://blog.stcloudstate.edu/ims?s=elearning

Does AI favor tyranny

Why Technology Favors Tyranny

Artificial intelligence could erase many practical advantages of democracy, and erode the ideals of liberty and equality. It will further concentrate power among a small elite if we don’t take steps to stop it.

https://www.theatlantic.com/magazine/archive/2018/10/yuval-noah-harari-technology-tyranny/568330/

YUVAL NOAH HARARI  OCTOBER 2018 ISSUE

Ordinary people may not understand artificial intelligence and biotechnology in any detail, but they can sense that the future is passing them by. In 1938 the common man’s condition in the Soviet Union, Germany, or the United States may have been grim, but he was constantly told that he was the most important thing in the world, and that he was the future (provided, of course, that he was an “ordinary man,” rather than, say, a Jew or a woman).

In 2018 the common person feels increasingly irrelevant. Lots of mysterious terms are bandied about excitedly in TED Talks, at government think tanks, and at high-tech conferences—globalization, blockchain, genetic engineering, AI, machine learning—and common people, both men and women, may well suspect that none of these terms is about them.

Fears of machines pushing people out of the job market are, of course, nothing new, and in the past such fears proved to be unfounded. But artificial intelligence is different from the old machines. In the past, machines competed with humans mainly in manual skills. Now they are beginning to compete with us in cognitive skills.

Israel is a leader in the field of surveillance technology, and has created in the occupied West Bank a working prototype for a total-surveillance regime. Already today whenever Palestinians make a phone call, post something on Facebook, or travel from one city to another, they are likely to be monitored by Israeli microphones, cameras, drones, or spy software. Algorithms analyze the gathered data, helping the Israeli security forces pinpoint and neutralize what they consider to be potential threats.

The conflict between democracy and dictatorship is actually a conflict between two different data-processing systems. AI may swing the advantage toward the latter.

As we rely more on Google for answers, our ability to locate information independently diminishes. Already today, “truth” is defined by the top results of a Google search. This process has likewise affected our physical abilities, such as navigating space.

So what should we do?

For starters, we need to place a much higher priority on understanding how the human mind works—particularly how our own wisdom and compassion can be cultivated.

+++++++++++++++
more on SCSU student philosophy club in this IMS blog
http://blog.stcloudstate.edu/ims?s=philosophy+student+club

AI fools fingerprint scanners

Artificial Intelligence Can Unlock Fingerprint Scanners

This week in security fails: AI can create artificial fingerprints that unlock fingerprint scanners

Posted by NowThis Future on Friday, November 16, 2018

++++++++++++++++
more on AI in this IMS blog
http://blog.stcloudstate.edu/ims?s=artificial+intelligence

data is the new oil in Industry 4.0

Why “data is the new oil” and what happens when energy meets Industry 4.0

By Nicholas Waller PUBLISHED 19:42 NOVEMBER 14, 2018

At the Abu Dhabi International Petroleum Exhibition and Conference (ADIPEC) this week, the UAE’s minister of state for Artificial Intelligence, Omar bin Sultan Al Olama, went so far as to declare that “Data is the new oil.”

According to Daniel Yergin, the Pulitzer Prize-winning author, economic historian, and one of the world’s leading experts on the oil & gas sector, there is now a “symbiosis” between energy producers and the new knowledge economy. The production of oil & gas and the generation of data are now, Yergin argues, “wholly inter-dependent”.

What does Oil & Gas 4.0 look like in practice?

the greater use of automation and collection of data has allowed an upsurge in the “de-manning” of oil & gas facilities

Thanks to a significant increase in the number of sensors being deployed across operations, companies can monitor what is happening in real time, which markedly improves safety levels.

in the competitive environment of the Fourth Industrial Revolution, no business can afford to be left behind by not investing in new technologies – so strategic discussions are important.

+++++++++++
more on big data in this IMS blog
http://blog.stcloudstate.edu/ims?s=big+data

more on industry 4.0 in this IMS blog
http://blog.stcloudstate.edu/ims?s=industry

deep learning revolution

Sejnowski, T. J. (2018). The Deep Learning Revolution. Cambridge, MA: The MIT Press.

How deep learning―from Google Translate to driverless cars to personal cognitive assistants―is changing our lives and transforming every sector of the economy.

The deep learning revolution has brought us driverless cars, the greatly improved Google Translate, fluent conversations with Siri and Alexa, and enormous profits from automated trading on the New York Stock Exchange. Deep learning networks can play poker better than professional poker players and defeat a world champion at Go. In this book, Terry Sejnowski explains how deep learning went from being an arcane academic field to a disruptive technology in the information economy.

Sejnowski played an important role in the founding of deep learning, as one of a small group of researchers in the 1980s who challenged the prevailing logic-and-symbol based version of AI. The new version of AI Sejnowski and others developed, which became deep learning, is fueled instead by data. Deep networks learn from data in the same way that babies experience the world, starting with fresh eyes and gradually acquiring the skills needed to navigate novel environments. Learning algorithms extract information from raw data; information can be used to create knowledge; knowledge underlies understanding; understanding leads to wisdom. Someday a driverless car will know the road better than you do and drive with more skill; a deep learning network will diagnose your illness; a personal cognitive assistant will augment your puny human brain. It took nature many millions of years to evolve human intelligence; AI is on a trajectory measured in decades. Sejnowski prepares us for a deep learning future.

A pioneering scientist explains ‘deep learning’

Artificial intelligence meets human intelligence

neural networks

Buzzwords like “deep learning” and “neural networks” are everywhere, but so much of the popular understanding is misguided, says Terrence Sejnowski, a computational neuroscientist at the Salk Institute for Biological Studies.

Sejnowski, a pioneer in the study of learning algorithms, is the author of The Deep Learning Revolution (out next week from MIT Press). He argues that the hype about killer AI or robots making us obsolete ignores exciting possibilities happening in the fields of computer science and neuroscience, and what can happen when artificial intelligence meets human intelligence.

Machine learning is a very large field and goes way back. Originally, people were calling it “pattern recognition,” but the algorithms became much broader and much more sophisticated mathematically. Within machine learning are neural networks inspired by the brain, and then deep learning. Deep learning algorithms have a particular architecture with many layers that flow through the network. So basically, deep learning is one part of machine learning and machine learning is one part of AI.
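
To make the "many layers" picture concrete, here is a minimal sketch (assumed, illustrative only, with untrained random weights) of data flowing through a stack of layers, each a linear map followed by a non-linearity:

```python
# Illustrative only: a forward pass through stacked layers with random,
# untrained weights, just to show the "flow" through a deep network.
import numpy as np

rng = np.random.default_rng(0)

def dense_relu(x, n_out):
    """One dense layer (random weights) followed by a ReLU non-linearity."""
    W = rng.normal(size=(x.shape[-1], n_out))
    return np.maximum(0.0, x @ W)

x = rng.normal(size=(1, 8))    # one input example with 8 features
h1 = dense_relu(x, 16)         # first hidden layer
h2 = dense_relu(h1, 16)        # second hidden layer
out = dense_relu(h2, 2)        # output layer: 2 scores
print(out)
```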

In December 2012, at the NIPS meeting, which is the biggest AI conference, [computer scientist] Geoff Hinton and two of his graduate students showed you could take a very large dataset called ImageNet, with 10,000 categories and 10 million images, and reduce the classification error by 20 percent using deep learning. Traditionally, on that dataset, error decreases by less than 1 percent in one year. In one year, 20 years of research was bypassed. That really opened the floodgates.

The inspiration for deep learning really comes from neuroscience.

AlphaGo, the program that beat the Go champion, included not just a model of the cortex, but also a model of a part of the brain called the basal ganglia, which is important for making a sequence of decisions to meet a goal. There’s an algorithm there called temporal differences, developed back in the ‘80s by Richard Sutton, that, when coupled with deep learning, is capable of very sophisticated plays that no human has ever seen before.
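
For a feel of what temporal-difference learning does, here is a minimal TD(0) sketch on the classic five-state random walk (a textbook toy problem, assumed here purely for illustration; AlphaGo's actual system is far more elaborate):

```python
# Illustrative TD(0) value estimation on a 5-state random walk (toy problem).
import random

n_states = 5                     # states 0..4; each episode starts in the middle
V = [0.0] * n_states             # value estimates, updated online
alpha, gamma = 0.1, 1.0          # step size and discount factor

for _ in range(5000):
    s = n_states // 2
    while True:
        s_next = s + random.choice([-1, 1])           # random walk step
        if s_next < 0:                                # off the left edge
            reward, v_next, done = 0.0, 0.0, True
        elif s_next >= n_states:                      # off the right edge
            reward, v_next, done = 1.0, 0.0, True
        else:
            reward, v_next, done = 0.0, V[s_next], False
        # TD(0): nudge V[s] toward the bootstrapped target r + gamma * V(s')
        V[s] += alpha * (reward + gamma * v_next - V[s])
        if done:
            break
        s = s_next

print([round(v, 2) for v in V])  # converges toward [1/6, 2/6, 3/6, 4/6, 5/6]
```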

there’s a convergence occurring between AI and human intelligence. As we learn more and more about how the brain works, that’s going to reflect back in AI. But at the same time, they’re actually creating a whole theory of learning that can be applied to understanding the brain and allowing us to analyze the thousands of neurons and how their activities are coming out. So there’s this feedback loop between neuroscience and AI

+++++++++++
deep learning revolution
http://blog.stcloudstate.edu/ims?s=deep+learning
