Understanding the 4 Types of Artificial Intelligence (AI)
https://www.linkedin.com/pulse/understanding-4-types-artificial-intelligence-ai-bernard-marr/
Reactive AI
Examples of reactive AI include:
- Deep Blue, the chess-playing IBM supercomputer that bested world champion Garry Kasparov
- Spam filters for our email that keep promotions and phishing attempts out of our inboxes
- The Netflix recommendation engine
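To make “reactive” concrete, here is a minimal sketch of the spam-filter example above (the keyword rules and the function name are invented for illustration, not any real filter): the system applies fixed rules to the message in front of it and keeps no memory of past messages or outcomes.

```python
# Minimal sketch of a "reactive" system: stateless rules applied to the
# current input only, with no memory of previous messages or results.
# The phrase list and function are illustrative, not a real spam filter.

SUSPICIOUS_PHRASES = ("free money", "act now", "verify your account", "winner")

def is_spam(message: str) -> bool:
    """Return True if the current message matches any hard-coded rule."""
    text = message.lower()
    return any(phrase in text for phrase in SUSPICIOUS_PHRASES)

if __name__ == "__main__":
    print(is_spam("Congratulations, you are a WINNER! Act now!"))  # True
    print(is_spam("Meeting notes attached for Thursday."))         # False
```

Because the function holds no state, the same input always produces the same output, which is what separates reactive AI from the memory-based types below.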
Limited Memory AI
For example, autonomous vehicles use limited memory AI to observe other cars’ speed and direction, helping them “read the road” and adjust as needed. This process for understanding and interpreting incoming data makes them safer on the roads.
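As a simplified illustration of that short-lived memory (the class, inputs, and numbers are hypothetical; real driving systems are far more elaborate), the sketch below keeps only the last few observations of a nearby car and estimates its speed and heading from them.

```python
from collections import deque
import math

class TrackedCar:
    """Keep only the last few (time, x, y) observations of a nearby car
    and estimate its speed and heading from that limited memory."""

    def __init__(self, window: int = 5):
        self.history = deque(maxlen=window)  # older observations fall out

    def observe(self, t: float, x: float, y: float) -> None:
        self.history.append((t, x, y))

    def speed_and_heading(self):
        if len(self.history) < 2:
            return None  # not enough memory yet
        (t0, x0, y0), (t1, x1, y1) = self.history[0], self.history[-1]
        dt = t1 - t0
        speed = math.hypot(x1 - x0, y1 - y0) / dt        # metres per second
        heading = math.degrees(math.atan2(y1 - y0, x1 - x0))
        return speed, heading

car = TrackedCar()
for t, x, y in [(0.0, 0.0, 0.0), (0.5, 7.5, 0.2), (1.0, 15.1, 0.5)]:
    car.observe(t, x, y)
print(car.speed_and_heading())  # ~ (15.1, 1.9): about 15 m/s, nearly straight ahead
```

Only the most recent observations are retained; anything older simply falls out of the buffer, which is the “limited” part of limited memory.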
Theory of Mind AI
The Kismet robot head, developed by Professor Cynthia Breazeal, could recognize emotional signals on human faces and replicate those emotions on its own face. Humanoid robot Sophia, developed by Hanson Robotics in Hong Kong, can recognize faces and respond to interactions with her own facial expressions.
Self-aware AI
The most advanced type of artificial intelligence is self-aware AI. When machines can be aware of their own emotions, as well as the emotions of others around them, they will have a level of consciousness and intelligence similar to human beings. This type of AI will have desires, needs, and emotions as well.
+++++++++++++++++++++
more on AI in this IMS blog
https://blog.stcloudstate.edu/ims?s=artificial+intelligence
Law is Code: Making Policy for Artificial Intelligence
Jules Polonetsky and Omer Tene January 16, 2019
https://www.ourworld.co/law-is-code-making-policy-for-artificial-intelligence/
Twenty years have passed since renowned Harvard Professor Larry Lessig coined the phrase “Code is Law”, suggesting that in the digital age, computer code regulates behavior much like legislative code traditionally did. These days, the computer code that powers artificial intelligence (AI) is a salient example of Lessig’s statement.
- Good AI requires sound data. One of the principles, some would say the organizing principle, of privacy and data protection frameworks is data minimization. Data protection laws require organizations to limit data collection to the extent strictly necessary and retain data only so long as it is needed for its stated goal.
- Preventing discrimination – intentional or not. When is a distinction between groups permissible or even merited, and when is it untoward? How should organizations address historically entrenched inequalities that are embedded in data? New mathematical theories such as “fairness through awareness” enable sophisticated modeling to guarantee statistical parity between groups (a short statistical-parity sketch follows this list).
- Assuring explainability – technological due process. In privacy and freedom of information frameworks alike, transparency has traditionally been a bulwark against unfairness and discrimination. As Justice Brandeis once wrote, “Sunlight is the best of disinfectants.”
- Deep learning means that iterative computer programs derive conclusions for reasons that may not be evident even after forensic inquiry.
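To ground the “fairness through awareness” point above, here is a minimal statistical-parity check on invented data (the group labels and decisions are purely illustrative): compare the rate of favourable decisions across two groups; a large gap is the kind of disparity such modeling is meant to detect or prevent.

```python
# Toy statistical-parity check: compare positive-outcome rates across groups.
# The group names and decisions below are invented for illustration.

decisions = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

def positive_rate(records, group):
    outcomes = [d for g, d in records if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = positive_rate(decisions, "group_a")
rate_b = positive_rate(decisions, "group_b")
print(f"group_a: {rate_a:.2f}, group_b: {rate_b:.2f}, gap: {abs(rate_a - rate_b):.2f}")
# Here the rates are 0.75 vs 0.25; a gap that large is what a parity audit flags.
```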
Yet even with code as law and a rising need for law in code, policymakers do not need to become mathematicians, engineers and coders. Instead, institutions must develop and enhance their technical toolbox by hiring experts and consulting with top academics, industry researchers and civil society voices. Responsible AI requires access to not only lawyers, ethicists and philosophers but also to technical leaders and subject matter experts to ensure an appropriate balance between economic and scientific benefits to society on the one hand and individual rights and freedoms on the other hand.
+++++++++++++
more on AI in this IMS blog
https://blog.stcloudstate.edu/ims?s=artificial+intelligence
EASI Free Webinar: Inclusive Design of Artificial Intelligence, Thursday, October 25
Artificial Intelligence (AI) and accessibility: will it enhance or impede accessibility for users with disabilities?
Artificial intelligence used to be all about the distant future, but it has become mainstream and is already affecting us in ways we may not recognize. It is involved in search engines, in collecting and analyzing big data, and in the arguments about how social media is being used to influence, or try to influence, our thinking and our politics. How else might it play a role in the future of accessibility?
Webinar presenter Jutta Treviranus of the University of Toronto will explore these questions on Thursday, October 25 at 11 a.m. Pacific, noon Mountain, 1 p.m. Central, or 2 p.m. Eastern. You can register now for the webinar, but registration closes Wednesday, Oct. 24 at midnight Eastern. Those who register should receive directions for joining late Wednesday or early Thursday.
+++++++++++
more on AI in this IMS blog
https://blog.stcloudstate.edu/ims?s=artificial+intelligence
Limbic thought and artificial intelligence
September 5, 2018 Siddharth (Sid) Pai
https://www.linkedin.com/pulse/limbic-thought-artificial-intelligence-siddharth-sid-pai/
It will be eons before AI thinks with a limbic brain, let alone has consciousness
AI programmes themselves generate additional computer programming code to fine-tune their algorithms—without the need for an army of computer programmers. In AI speak, this is now often referred to as “machine learning”.
An AI programme “catastrophically forgets” what it learnt from its first set of data and has to be retrained from scratch with new data. The website futurism.com says a completely new set of algorithms would have to be written for a programme that has mastered face recognition if it is now also expected to recognize emotions. Data on emotions would have to be manually relabelled and then fed into this completely different algorithm for the altered programme to be of any use. The original face-recognition programme would have “catastrophically forgotten” what it learnt about faces as it took on the new code for recognizing emotions. According to the website, this is because computer programmes cannot understand the underlying logic with which they have been coded.
Irina Higgins, a senior researcher at Google DeepMind, has recently announced that she and her team have begun to crack the code on “catastrophic forgetting”.
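A minimal sketch of the “catastrophic forgetting” described above, under toy assumptions (a single linear classifier and two invented tasks whose rules conflict; nothing here reflects DeepMind’s actual approach): train on task A, then keep training the same weights only on task B, and accuracy on task A collapses.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(n, flip):
    """Toy binary task: the label is the sign of feature 0 (flipped for task B)."""
    x = rng.normal(size=(n, 2))
    y = (x[:, 0] > 0).astype(float)
    return x, (1.0 - y) if flip else y

def train(w, x, y, lr=0.5, epochs=200):
    """Plain batch gradient descent on a logistic-regression loss."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(x @ w)))
        w = w - lr * (x.T @ (p - y)) / len(y)
    return w

def accuracy(w, x, y):
    return float(((x @ w > 0).astype(float) == y).mean())

x_a, y_a = make_task(500, flip=False)   # task A
x_b, y_b = make_task(500, flip=True)    # task B uses a conflicting rule

w = train(np.zeros(2), x_a, y_a)
print("task A accuracy after training on A:", accuracy(w, x_a, y_a))  # close to 1.0

w = train(w, x_b, y_b)                  # keep training the same weights on B only
print("task A accuracy after training on B:", accuracy(w, x_a, y_a))  # collapses toward 0
print("task B accuracy after training on B:", accuracy(w, x_b, y_b))  # close to 1.0
```

Work on this problem, including the research mentioned above, broadly aims to keep later training from overwriting what earlier training learned.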
As far as I am concerned, this limbic thinking is “catastrophic thinking” which is the only true antipode to AI’s “catastrophic forgetting”. It will be eons before AI thinks with a limbic brain, let alone has consciousness.
++++++++++++++++++
Stephen Hawking warns artificial intelligence could end mankind
By Rory Cellan-Jones, Technology correspondent, 2 December 2014
++++++++++++++++++++
Thank you, Sarnath Ramnat (sarnath@stcloudstate.edu), for finding this
++++++++++++++++++++
more on AI in this IMS blog
https://blog.stcloudstate.edu/ims?s=artificial+intelligence
http://mashable.com/2015/01/19/super-mario-artificial-intelligence/
A team of German researchers has used artificial intelligence to create a “self-aware” version of Super Mario who can respond to verbal commands and automatically play his own game.
Artificial Intelligence helps Mario play his own game
Students at the University of Tübingen have used Mario as part of their efforts to find out how the human brain works.
The cognitive modelling unit claim their project has generated “a fully functional program” and “an alive and somewhat intelligent artificial agent”.
http://www.bbc.co.uk/newsbeat/30879456
Can Super Mario Save Artificial Intelligence?
The most popular approaches today focus on Big Data, or mimicking humans that already know how to do some task. But sheer mimicry breaks down when one gives a machine new tasks, and, as I explained a few weeks ago, Big Data approaches tend to excel at finding correlations without necessarily being able to induce the rules of the game. If Big Data alone is not a powerful enough tool to induce a strategy in a complex but well-defined game like chess, then that’s a problem, since the real world is vastly more open-ended, and considerably more complicated.
http://www.newyorker.com/tech/elements/can-super-mario-save-artificial-intelligence
Intelligence: a history
Intelligence has always been used as a fig-leaf to justify domination and destruction. No wonder we fear super-smart robots
Stephen Cave
https://aeon.co/essays/on-the-dark-history-of-intelligence-as-domination
To say that someone is or is not intelligent has never been merely a comment on their mental faculties. It is always also a judgment on what they are permitted to do. Intelligence, in other words, is political.
The problem has taken an interesting 21st-century twist with the rise of Artificial Intelligence (AI).
The term ‘intelligence’ itself has never been popular with English-language philosophers. Nor does it have a direct translation into German or ancient Greek, two of the other great languages in the Western philosophical tradition. But that doesn’t mean philosophers weren’t interested in it. Indeed, they were obsessed with it, or more precisely a part of it: reason or rationality. The term ‘intelligence’ managed to eclipse its more old-fashioned relative in popular and political discourse only with the rise of the relatively new-fangled discipline of psychology, which claimed intelligence for itself.
Plato concluded, in The Republic, that the ideal ruler is ‘the philosopher king’, as only a philosopher can work out the proper order of things. This idea was revolutionary at the time. Athens had already experimented with democracy, the rule of the people – but to count as one of those ‘people’ you just had to be a male citizen, not necessarily intelligent. Elsewhere, the governing classes were made up of inherited elites (aristocracy), or by those who believed they had received divine instruction (theocracy), or simply by the strongest (tyranny).
Plato’s novel idea fell on the eager ears of the intellectuals, including those of his pupil Aristotle. Aristotle was always the more practical, taxonomic kind of thinker. He took the notion of the primacy of reason and used it to establish what he believed was a natural social hierarchy.
So at the dawn of Western philosophy, we have intelligence identified with the European, educated, male human. It becomes an argument for his right to dominate women, the lower classes, uncivilised peoples and non-human animals. While Plato argued for the supremacy of reason and placed it within a rather ungainly utopia, only one generation later Aristotle presented the rule of the thinking man as obvious and natural.
The late Australian philosopher and conservationist Val Plumwood has argued that the giants of Greek philosophy set up a series of linked dualisms that continue to inform our thought. Opposing categories such as intelligent/stupid, rational/emotional and mind/body are linked, implicitly or explicitly, to others such as male/female, civilised/primitive, and human/animal. These dualisms aren’t value-neutral, but fall within a broader dualism, as Aristotle makes clear: that of dominant/subordinate or master/slave. Together, they make relationships of domination, such as patriarchy or slavery, appear to be part of the natural order of things.
Descartes rendered nature literally mindless, and so devoid of intrinsic value – which thereby legitimated the guilt-free oppression of other species.
For Kant, only reasoning creatures had moral standing. Rational beings were to be called ‘persons’ and were ‘ends in themselves’. Beings that were not rational, on the other hand, had ‘only a relative value as means, and are therefore called things’. We could do with them what we liked.
This line of thinking was extended to become a core part of the logic of colonialism. The argument ran like this: non-white peoples were less intelligent; they were therefore unqualified to rule over themselves and their lands. It was therefore perfectly legitimate – even a duty, ‘the white man’s burden’ – to destroy their cultures and take their territory.
The same logic was applied to women, who were considered too flighty and sentimental to enjoy the privileges afforded to the ‘rational man’.
Galton believed that intellectual ability was hereditary and could be enhanced through selective breeding. He decided to find a way to scientifically identify the most able members of society and encourage them to breed – prolifically, and with each other. The less intellectually capable should be discouraged from reproducing, or indeed prevented, for the sake of the species. Thus eugenics and the intelligence test were born together.
From David Hume to Friedrich Nietzsche, and Sigmund Freud through to postmodernism, there are plenty of philosophical traditions that challenge the notion that we’re as intelligent as we’d like to believe, and that intelligence is the highest virtue.
From 2001: A Space Odyssey to the Terminator films, writers have fantasised about machines rising up against us. Now we can see why. If we’re used to believing that the top spots in society should go to the brainiest, then of course we should expect to be made redundant by bigger-brained robots and sent to the bottom of the heap.
Natural stupidity, rather than artificial intelligence, remains the greatest risk.
++++++++++++++++++++++
more on intelligence in this IMS blog
https://blog.stcloudstate.edu/ims?s=intelligence
7 Edtech Trends to Watch in 2022: a Startup Guide for Entrepreneurs
https://www.edsurge.com/news/2022-04-18-7-edtech-trends-to-watch-in-2022-a-startup-guide-for-entrepreneurs
1. Data is abundant and the key to today’s edtech solutions
2. Artificial intelligence (AI) and machine learning (ML) are powering the latest generation of edtechs
3. Game-based learning is transforming how students learn
4. Edtechs are at the forefront of digital transformation in the classroom
5. Workforce upskilling is being supplemented by edtech solutions
6. Edtechs are being called upon to help with student wellbeing
7. Augmented reality (AR) and virtual reality are top of mind