
Policy for Artificial Intelligence

Law is Code: Making Policy for Artificial Intelligence

Jules Polonetsky and Omer Tene January 16, 2019

https://www.ourworld.co/law-is-code-making-policy-for-artificial-intelligence/

Twenty years have passed since renowned Harvard Professor Larry Lessig coined the phrase “Code is Law”, suggesting that in the digital age, computer code regulates behavior much like legislative code traditionally did.  These days, the computer code that powers artificial intelligence (AI) is a salient example of Lessig’s statement.

  • Good AI requires sound data.  One of the principles,  some would say the organizing principle, of privacy and data protection frameworks is data minimization.  Data protection laws require organizations to limit data collection to the extent strictly necessary and retain data only so long as it is needed for its stated goal. 
  • Preventing discrimination – intentional or not.
    When is a distinction between groups permissible or even merited and when is it untoward?  How should organizations address historically entrenched inequalities that are embedded in data?  New mathematical theories such as “fairness through awareness” enable sophisticated modeling to guarantee statistical parity between groups (a minimal illustration of such a parity check appears after this list).
  • Assuring explainability – technological due process.  In privacy and freedom of information frameworks alike, transparency has traditionally been a bulwark against unfairness and discrimination.  As Justice Brandeis once wrote, “Sunlight is the best of disinfectants.”  Yet deep learning means that iterative computer programs derive conclusions for reasons that may not be evident even after forensic inquiry.
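
The “statistical parity” that fairness-through-awareness methods aim for has a concrete reading: a model’s positive-prediction rate should be roughly the same across groups. A minimal sketch of that check in Python follows; the data and function name are my own illustration, not from the article.

```python
# A minimal sketch (my own illustration, not from the article) of the
# "statistical parity" idea: compare a model's positive-prediction rate
# across two groups. All data here is hypothetical.
import numpy as np

def statistical_parity_gap(y_pred, group):
    """Difference in positive-prediction rates between group 1 and group 0."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_g1 = y_pred[group == 1].mean()   # P(prediction = 1 | group = 1)
    rate_g0 = y_pred[group == 0].mean()   # P(prediction = 1 | group = 0)
    return rate_g1 - rate_g0

# Toy example: predictions for ten applicants, five from each group.
predictions = [1, 0, 1, 1, 0, 1, 0, 0, 0, 0]
groups      = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
gap = statistical_parity_gap(predictions, groups)
print(f"statistical parity gap: {gap:+.2f}")  # 0.00 would be perfect parity
```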

Yet even with code as law and a rising need for law in code, policymakers do not need to become mathematicians, engineers and coders.  Instead, institutions must develop and enhance their technical toolbox by hiring experts and consulting with top academics, industry researchers and civil society voices.  Responsible AI requires access not only to lawyers, ethicists and philosophers but also to technical leaders and subject matter experts to ensure an appropriate balance between economic and scientific benefits to society on the one hand and individual rights and freedoms on the other.

+++++++++++++
more on AI in this IMS blog
http://blog.stcloudstate.edu/ims?s=artificial+intelligence

Inclusive Design of Artificial Intelligence

EASI Free Webinar: Inclusive Design of Artificial Intelligence – Thursday, October 25
Artificial Intelligence (AI) and accessibility: will it enhance or impede accessibility for users with disabilities?
Artificial intelligence used to be all about the distant future, but it has now become mainstream and is already affecting us in ways we may not recognize. It is involved in search engines, in the collection and analysis of big data, and in the arguments about the way social media is being used to affect, or try to affect, our thinking and our politics. How else might it play a role in the future of accessibility?
The webinar presenter, Jutta Treviranus of the University of Toronto, will explore these questions in the webinar on Thursday, October 25 at 11 Pacific, noon Mountain, 1 Central, or 2 Eastern. You can register now, but registration closes Wed., Oct. 24 at midnight Eastern.
You can register on the web at http://easi.cc and look for the link for webinars.
Those who register should get directions for joining late Wednesday or early Thursday.

+++++++++++
more on AI in this IMS blog
http://blog.stcloudstate.edu/ims?s=artificial+intelligence

Limbic thought and artificial intelligence

Limbic thought and artificial intelligence

September 5, 2018  Siddharth (Sid) Pai

https://www.linkedin.com/pulse/limbic-thought-artificial-intelligence-siddharth-sid-pai/

An AI programme “catastrophically forgets” the learnings from its first set of data and would have to be retrained from scratch with new data. The website futurism.com says a completely new set of algorithms would have to be written for a programme that has mastered face recognition, if it is now also expected to recognize emotions. Data on emotions would have to be manually relabelled and then fed into this completely different algorithm for the altered programme to have any use. The original facial recognition programme would have “catastrophically forgotten” the things it learnt about facial recognition as it takes on new code for recognizing emotions. According to the website, this is because computer programmes cannot understand the underlying logic that they have been coded with.
Irina Higgins, a senior researcher at Google DeepMind, has recently announced that she and her team have begun to crack the code on “catastrophic forgetting”.
As far as I am concerned, this limbic thinking is “catastrophic thinking” which is the only true antipode to AI’s “catastrophic forgetting”. It will be eons before AI thinks with a limbic brain, let alone has consciousness.
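
Catastrophic forgetting is easy to reproduce with off-the-shelf tools. Below is a minimal sketch (my own illustration, not from the article or DeepMind’s work): one scikit-learn model is trained incrementally on a first task, then only on a second task, and loses most of its accuracy on the first.

```python
# A minimal sketch (my own, not from the article) of "catastrophic forgetting":
# one model is trained incrementally on task A (digits 0-4), then only on
# task B (digits 5-9); its accuracy on task A typically collapses.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

task_a_train, task_b_train = y_train < 5, y_train >= 5
task_a_test = y_test < 5

clf = SGDClassifier(random_state=0)
clf.partial_fit(X_train[task_a_train], y_train[task_a_train],
                classes=np.arange(10))                      # learn task A
acc_before = clf.score(X_test[task_a_test], y_test[task_a_test])

for _ in range(20):                                         # retrain on task B only
    clf.partial_fit(X_train[task_b_train], y_train[task_b_train])
acc_after = clf.score(X_test[task_a_test], y_test[task_a_test])

print(f"task-A accuracy before task B: {acc_before:.2f}")
print(f"task-A accuracy after  task B: {acc_after:.2f}")    # usually near zero
```
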
++++++++++++++++++

Stephen Hawking warns artificial intelligence could end mankind

https://www.bbc.com/news/technology-30290540
++++++++++++++++++++
thank you Sarnath Ramnat (sarnath@stcloudstate.edu) for the finding

An AI Wake-Up Call From Ancient Greece

  https://www.project-syndicate.org/commentary/artificial-intelligence-pandoras-box-by-adrienne-mayor-2018-10

++++++++++++++++++++
more on AI in this IMS blog
http://blog.stcloudstate.edu/ims?s=artifical+intelligence

Super Mario gets artificial intelligence

Researchers create ‘self-aware’ Super Mario with artificial intelligence

http://mashable.com/2015/01/19/super-mario-artificial-intelligence/

A team of German researchers has used artificial intelligence to create a “self-aware” version of Super Mario who can respond to verbal commands and automatically play his own game.

Artificial Intelligence helps Mario play his own game

Students at the University of Tübingen have used Mario as part of their efforts to find out how the human brain works.

The cognitive modelling unit claim their project has generated “a fully functional program” and “an alive and somewhat intelligent artificial agent”.

http://www.bbc.co.uk/newsbeat/30879456

Can Super Mario Save Artificial Intelligence?

The most popular approaches today focus on Big Data, or mimicking humans that already know how to do some task. But sheer mimicry breaks down when one gives a machine new tasks, and, as I explained a few weeks ago, Big Data approaches tend to excel at finding correlations without necessarily being able to induce the rules of the game. If Big Data alone is not a powerful enough tool to induce a strategy in a complex but well-defined game like chess, then that’s a problem, since the real world is vastly more open-ended, and considerably more complicated.

http://www.newyorker.com/tech/elements/can-super-mario-save-artificial-intelligence

intelligence measure

Intelligence: a history

Intelligence has always been used as a fig-leaf to justify domination and destruction. No wonder we fear super-smart robots

Stephen Cave

https://aeon.co/essays/on-the-dark-history-of-intelligence-as-domination

To say that someone is or is not intelligent has never been merely a comment on their mental faculties. It is always also a judgment on what they are permitted to do. Intelligence, in other words, is political.

The problem has taken an interesting 21st-century twist with the rise of Artificial Intelligence (AI).

The term ‘intelligence’ itself has never been popular with English-language philosophers. Nor does it have a direct translation into German or ancient Greek, two of the other great languages in the Western philosophical tradition. But that doesn’t mean philosophers weren’t interested in it. Indeed, they were obsessed with it, or more precisely a part of it: reason or rationality. The term ‘intelligence’ managed to eclipse its more old-fashioned relative in popular and political discourse only with the rise of the relatively new-fangled discipline of psychology, which claimed intelligence for itself.

Plato concluded, in The Republic, that the ideal ruler is ‘the philosopher king’, as only a philosopher can work out the proper order of things. This idea was revolutionary at the time. Athens had already experimented with democracy, the rule of the people – but to count as one of those ‘people’ you just had to be a male citizen, not necessarily intelligent. Elsewhere, the governing classes were made up of inherited elites (aristocracy), or by those who believed they had received divine instruction (theocracy), or simply by the strongest (tyranny).

Plato’s novel idea fell on the eager ears of the intellectuals, including those of his pupil Aristotle. Aristotle was always the more practical, taxonomic kind of thinker. He took the notion of the primacy of reason and used it to establish what he believed was a natural social hierarchy.

So at the dawn of Western philosophy, we have intelligence identified with the European, educated, male human. It becomes an argument for his right to dominate women, the lower classes, uncivilised peoples and non-human animals. While Plato argued for the supremacy of reason and placed it within a rather ungainly utopia, only one generation later, Aristotle presents the rule of the thinking man as obvious and natural.

The late Australian philosopher and conservationist Val Plumwood has argued that the giants of Greek philosophy set up a series of linked dualisms that continue to inform our thought. Opposing categories such as intelligent/stupid, rational/emotional and mind/body are linked, implicitly or explicitly, to others such as male/female, civilised/primitive, and human/animal. These dualisms aren’t value-neutral, but fall within a broader dualism, as Aristotle makes clear: that of dominant/subordinate or master/slave. Together, they make relationships of domination, such as patriarchy or slavery, appear to be part of the natural order of things.

Descartes rendered nature literally mindless, and so devoid of intrinsic value – which thereby legitimated the guilt-free oppression of other species.

For Kant, only reasoning creatures had moral standing. Rational beings were to be called ‘persons’ and were ‘ends in themselves’. Beings that were not rational, on the other hand, had ‘only a relative value as means, and are therefore called things’. We could do with them what we liked.

This line of thinking was extended to become a core part of the logic of colonialism. The argument ran like this: non-white peoples were less intelligent; they were therefore unqualified to rule over themselves and their lands. It was therefore perfectly legitimate – even a duty, ‘the white man’s burden’ – to destroy their cultures and take their territory.

The same logic was applied to women, who were considered too flighty and sentimental to enjoy the privileges afforded to the ‘rational man’.

Galton believed that intellectual ability was hereditary and could be enhanced through selective breeding. He decided to find a way to scientifically identify the most able members of society and encourage them to breed – prolifically, and with each other. The less intellectually capable should be discouraged from reproducing, or indeed prevented, for the sake of the species. Thus eugenics and the intelligence test were born together.

From David Hume to Friedrich Nietzsche, and Sigmund Freud through to postmodernism, there are plenty of philosophical traditions that challenge the notion that we’re as intelligent as we’d like to believe, and that intelligence is the highest virtue.

From 2001: A Space Odyssey to the Terminator films, writers have fantasised about machines rising up against us. Now we can see why. If we’re used to believing that the top spots in society should go to the brainiest, then of course we should expect to be made redundant by bigger-brained robots and sent to the bottom of the heap.

Natural stupidity, rather than artificial intelligence, remains the greatest risk.

++++++++++++++++++++++
more on intelligence in this IMS blog
http://blog.stcloudstate.edu/ims?s=intelligence

data interference

APRIL 21, 2019 Zeynep Tufekci

Think You’re Discreet Online? Think Again

Because of technological advances and the sheer amount of data now available about billions of other people, discretion no longer suffices to protect your privacy. Computer algorithms and network analyses can now infer, with a sufficiently high degree of accuracy, a wide range of things about you that you may have never disclosed, including your moods, your political beliefs, your sexual orientation and your health.

There is no longer such a thing as individually “opting out” of our privacy-compromised world.

In 2017, the newspaper The Australian published an article, based on a leaked document from Facebook, revealing that the company had told advertisers that it could predict when younger users, including teenagers, were feeling “insecure,” “worthless” or otherwise in need of a “confidence boost.” Facebook was apparently able to draw these inferences by monitoring photos, posts and other social media data.

In 2017, academic researchers, armed with data from more than 40,000 Instagram photos, used machine-learning tools to accurately identify signs of depression in a group of 166 Instagram users. Their computer models turned out to be better predictors of depression than humans who were asked to rate whether photos were happy or sad and so forth.
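
For a sense of what such a pipeline looks like in code, here is a deliberately simplified, hypothetical sketch. It is not the researchers’ actual method, features, or data; it only pairs crude per-photo color statistics with a standard classifier to show the general shape of the approach.

```python
# A hypothetical sketch (not the study's actual pipeline): simple per-photo
# color features plus a standard classifier trained against depression labels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def color_features(image_rgb):
    """Mean and standard deviation of each RGB channel for one photo."""
    img = np.asarray(image_rgb, dtype=float)
    return np.concatenate([img.mean(axis=(0, 1)), img.std(axis=(0, 1))])

# Made-up data: 200 random "photos" with fake labels, for illustration only.
rng = np.random.default_rng(0)
photos = rng.integers(0, 256, size=(200, 32, 32, 3))
labels = rng.integers(0, 2, size=200)      # 1 = depressed, 0 = not (fake labels)

X = np.stack([color_features(p) for p in photos])
scores = cross_val_score(LogisticRegression(max_iter=1000), X, labels, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")  # ~0.5 on random data
```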

Computational inference can also be a tool of social control. The Chinese government, having gathered biometric data on its citizens, is trying to use big data and artificial intelligence to single out “threats” to Communist rule, including the country’s Uighurs, a mostly Muslim ethnic group.

+++++++++++++

Zeynep Tufekci and Seth Stephens-Davidowitz: Privacy is over

https://www.centreforideas.com/article/zeynep-tufekci-and-seth-stephens-davidowitz-privacy-over

+++++++++++

Zeynep Tufekci writes about security and data privacy for NY Times, disinformation’s threat to democracy for WIRED

++++++++++
more on privacy in this IMS blog
http://blog.stcloudstate.edu/ims?s=privacy

OLC Collaborate

OLC Collaborate

https://onlinelearningconsortium.org/attend-2019/innovate/

schedule:

https://onlinelearningconsortium.org/attend-2019/innovate/program/all_sessions/#streamed

Wednesday

++++++++++++++++
THE NEW PROFESSOR: HOW I PODCASTED MY WAY INTO STUDENTS’ LIVES (AND HOW YOU CAN, TOO)

Concurrent Session 1

https://onlinelearningconsortium.org/olc-innovate-2019-session-page/?session=6734&kwds=

+++++++++++++

Creating A Cost-Free Course

+++++++++++++++++

Idea Hose: AI Design For People
Date: Wednesday, April 3rd
Time: 3:30 PM to 4:15 PM
Conference Session: Concurrent Session 3
Streamed session
Lead Presenter: Brian Kane (General Design LLC)
Track: Research: Designs, Methods, and Findings
Location: Juniper A
Session Duration: 45min
Brief Abstract: What happens when you apply design thinking to AI? AI presents a fundamental change in the way people interact with machines. By applying design thinking to the way AI is made and used, we can generate an unlimited amount of new ideas for products and experiences that people will love and use.
https://onlinelearningconsortium.org/olc-innovate-2019-session-page/?session=6964&kwds=
Notes from the session:
design thinking: get away from old mental models; new narratives; get out of the sci-fi movies.
narrative generators: AI Design for People stream
we need machines to make mistakes – AI even more than traditional software.
lessons learned: don’t replace people
creativity engines – automated creativity
trends:
https://www.androidauthority.com/nvidia-jetson-nano-966609/
https://community.infiniteflight.com/t/virtualhub-ios-and-android-free/142837?u=sudafly
http://bit.ly/VirtualHub
Thursday
Chatbots, Game Theory, And AI: Adapting Learning For Humans, Or Innovating Humans Out Of The Picture?
Date: Thursday, April 4th
Time: 8:45 AM to 9:30 AM
Conference Session: Concurrent Session 4
Streamed session
Lead Presenter: Matt Crosslin (University of Texas at Arlington LINK Research Lab)
Track: Experiential and Life-Long Learning
Location: Cottonwood 4-5
Session Duration: 45min
Brief Abstract: How can teachers utilize chatbots and artificial intelligence in ways that won’t remove humans from the education picture? Using tools like Twine and Recast.AI chatbots, this session will focus on how to build adaptive content that allows learners to create their own heutagogical educational pathways based on individual needs.

++++++++++++++++

This Is Us: Fostering Effective Storytelling Through EdTech & Student’s Influence As Digital Citizens
Date: Thursday, April 4th
Time: 9:45 AM to 10:30 AM
Conference Session: Concurrent Session 5
Streamed session
Lead Presenter: Maikel Alendy (FIU Online)
Co-presenter: Sky V. King (FIU Online – Florida International University)
Track: Teaching and Learning Practice
Location: Cottonwood 4-5
Session Duration: 45min
Brief Abstract: “This is Us” demonstrates how leveraging storytelling in learning engages students to effectively communicate their authentic story, transitioning from consumerism to become creators and influencers. Addressing responsibility as a digital citizen, information and digital literacy, online privacy, and strategies with examples using several edtech tools, will be reviewed.

++++++++++++++++++

Personalized Learning At Scale: Using Adaptive Tools & Digital Assistants
Date: Thursday, April 4th
Time: 11:15 AM to 12:00 PM
Conference Session: Concurrent Session 6
Streamed session
Lead Presenter: Kristin Bushong (Arizona State University )
Co-presenter: Heather Nebrich (Arizona State University)
Track: Effective Tools, Toys and Technologies
Location: Juniper C
Session Duration: 45min
Brief Abstract: Considering today’s overstimulated lifestyle, how do we engage busy learners to stay on task? Join this session to discover current efforts in implementing ubiquitous educational opportunities through customized interests and personalized learning aspirations, e.g., adaptive math tools, AI support communities, and memory management systems.

+++++++++++++

High-Impact Practices Online: Starting The Conversation
Date: Thursday, April 4th
Time: 1:15 PM to 2:00 PM
Conference Session: Concurrent Session 7
Streamed session
Lead Presenter: Katie Linder (Oregon State University)
Co-presenter: June Griffin (University of Nebraska-Lincoln)
Track: Teaching and Learning Practice
Location: Cottonwood 4-5
Session Duration: 45min
Brief Abstract: The concept of High-Impact Educational Practices (HIPs) is well-known, but the conversation about transitioning HIPs online is new. In this session, contributors from the edited collection High-Impact Practices in Online Education will share current HIP research, and offer ideas for participants to reflect on regarding implementing HIPs into online environments.
https://www.aacu.org/leap/hips
https://www.aacu.org/sites/default/files/files/LEAP/HIP_tables.pdf

+++++++++++++++++++++++

Human Skills For Digital Natives: Expanding Our Definition Of Tech And Media Literacy
Date: Thursday, April 4th
Time: 3:45 PM to 5:00 PM
Streamed session
Lead Presenter: Manoush Zomorodi (Stable Genius Productions)
Track: N/A
Location: Adams Ballroom
Session Duration: 1hr 15min
Brief Abstract: How can we ensure that students and educators thrive in increasingly digital environments, where change is the only constant? In this keynote, author and journalist Manoush Zomorodi shares her pioneering approach to researching the effects of technology on our behavior. Her unique brand of journalism includes deep-dive investigations into such timely topics as personal privacy, information overload, and the Attention Economy. These interactive multi-media experiments with tens of thousands of podcast listeners will inspire you to think creatively about how we use technology to educate and grow communities.

Friday

Anger Is An Energy
Date: Friday, April 5th
Time: 8:30 AM to 9:30 AM
Streamed session
Lead Presenter: Michael Caulfield (Washington State University-Vancouver)
Track: N/A
Location: Adams Ballroom
Position: 2
Session Duration: 60min
Brief Abstract: Years ago, John Lydon (then Johnny Rotten) sang that “anger is an energy.” And he was right, of course. Anger isn’t an emotion, like happiness or sadness. It’s a reaction, a swelling up of a confused urge. I’m a person profoundly uncomfortable with anger, but yet I’ve found in my professional career that often my most impactful work begins in a place of anger: anger against injustice, inequality, lies, or corruption. And often it is that anger that gives me the energy and endurance to make a difference, to move the mountains that need to be moved. In this talk I want to think through our uneasy relationship with anger; how it can be helpful, and how it can destroy us if we’re not careful.

++++++++++++++++

Improving Online Teaching Practice, Creating Community And Sharing Resources
Date: Friday, April 5th
Time: 10:45 AM to 11:30 AM
Conference Session: Concurrent Session 10
Streamed session
Lead Presenter: Laurie Daily (Augustana University)
Co-presenter: Sharon Gray (Augustana University)
Track: Problems, Processes, and Practices
Location: Juniper A
Session Duration: 45min
Brief Abstract: The purpose of this session is to explore the implementation of a Community of Practice to support professional development, enhance online course and program development efforts, and to foster community and engagement between and among full- and part-time faculty.

+++++++++++++++

It’s Not What You Teach, It’s HOW You Teach: A Story-Driven Approach To Course Design
Date: Friday, April 5th
Time: 11:45 AM to 12:30 PM
Conference Session: Concurrent Session 11
Streamed session
Lead Presenter: Katrina Rainer (Strayer University)
Co-presenter: Jennifer M McVay-Dyche (Strayer University)
Track: Teaching and Learning Practice
Location: Cottonwood 2-3
Session Duration: 45min
Brief Abstract: Learning is more effective and organic when we teach through the art of storytelling. At Strayer University, we are blending the principles of story-driven learning with research-based instructional design practices to create engaging learning experiences. This session will provide you with strategies to strategically infuse stories into any lesson, course, or curriculum.

Peter Rubin Future Presence

p. 4 But all that “disruption,” as people love to call it, is overlooking the thing that’s the most disruptive of all: the way we relate to each other will never be the same. That’s because of something called presence.
Presence is the absolute foundation of virtual reality, and in VR, it’s the absolute foundation of connection-connection with yourself, with an idea, with another human, even connection with artificial intelligence.
p. 28 VR definition
Virtual reality is (1) an artificial environment that’s (2) immersive enough to convince you that you are (3) actually inside it.
1. “Artificial environment” could mean just about anything. A photograph is an artificial environment, a video game is an artificial environment, a Pixar movie is an artificial environment; the only thing that matters is that it’s not where you physically are.
p. 44 VR: putting the “it” in “meditation”
my note: it seems Rubin sees the 21st century VR as the equivalent of the drug experimentation in the 1960s US: p. 46 “VR is potentially going to become a direct interface to the subconscious”

p. 74 serious games, Carrie Heeter. p. 49

The default network in the brain in today’s society is the wandering mind. We are ruminating about the past, and we are worrying about the future, or maybe even planning for the future; there is some productive thinking. But in general, a wandering mind is an unhappy mind. And that is where we spend all of our waking time: not being aware of everything that we are experiencing in the moment.
Heeter’s own meditation practice had already led her to design apps and studies that investigated meditation’s ability to calm that wandering mind.
p. 51 Something called interoception. It is a term that has been gaining ground in psychology circles in recent years and basically means awareness of bodily sensations – like my noticing the fact that I was sitting awkwardly or that keeping my elbows on the chair’s armrests was making my shoulders hunch slightly. Not surprisingly, mindfulness meditation seems to heighten interoception. And that is exactly how Heeter and Allbritton structured the meditation I am doing on the Costa del Sol. First, I connect with the environment; then with my body; then I combine the two. The combination of VR and interoception leads to what she describes as “embodied presence”: not only do you feel like you are in the VR environment, but because you have consciously worked to integrate your bodily sensations into VR, it is a fuller, more vivid version of presence.

p. 52 guided meditation VR GMVR

p. 56 VVVR visual voice virtual reality

p. 57

Just as the ill-fated Google Glass immediately stigmatized all its wearers as “glassholes” – a.k.a. “techier-than-thou douche bags who dropped $1,500 to see an email notification appear in front of their face” – so too do some VR headsets still look like face TVs.

p. 61 Hedgehog Love
Engineering feelings with social presence. p. 64 Remember presence? This is the beginning of social presence. Mindfulness is cool, but making eye contact with Henry is the first step into the future.

p. 65 Back in 1992, our friend Carrie Heeter posited that presence – the sensation that you are really there in VR – had three dimensions. There was personal presence, environmental presence, and social presence, which she basically defined as being around other people who register your existence.
p. 66 the idea that emotion can be not a cause, as we’ve so often assumed, but a result of behavior
p. 72 In chapter 1, we explain the difference between mobile VR and PC-driven VR. The former is cheaper and easier; all you do is drop your smartphone into a headset, and it provides just about everything you need. Dedicated VR headsets rely on the stronger processors of desktop PCs and game consoles, so they can provide a more robust sense of presence – usually at the cost of being tethered to your computer with cables. (It’s also a cost in actual money: dedicated headset systems run hundreds of dollars, while mobile headsets like Samsung’s Gear VR or Google’s Daydream View can be had for mere tens of dollars.) There is one other fundamental distinction between mobile VR and high-end VR, though, and that is what you do with your hands – how you input your desires. When VR reemerged in the early 2010s, however, the question of input was open to debate. Actually, more than one debate. p. 73 Video game controllers are basically metaphors. Some, like steering wheels or pilot flight sticks, might look like the thing they’re supposed to be, but at their essence they are all just collections of buttons. p. 77 HTC sells small wearable trackers that you can affix to any object, or any body part, to bring it into the Vive’s VR.
p. 78 wait a second – you were talking about storytelling.
p. 79 Every Hollywood studio you can imagine – 21st Century Fox, Paramount, Warner Bros. – has already invested in virtual reality. They have made VR experiences based on their own movies, like Interstellar or Ghost in the Shell, and they have invested in other VR companies. Hollywood directors like Doug Liman (Edge of Tomorrow) and Robert Stromberg (Maleficent) have taken on VR projects. And the progress is exhilarating. Alejandro González Iñárritu, a four-time Oscar winner whose 2014 movie Birdman won best picture and best director, received a special achievement Academy Award in 2017 for a VR short he made. Yet Carne y Arena, which puts viewers inside a harrowing journey from Mexico to the United States, is nothing like a movie, or even a video game.

When it premiered at the Cannes Film Festival in early 2017, it was housed in an airplane hangar; viewers were ushered, barefoot, into a room with a sand-covered floor, where they could watch and interact with other people trying to make it over the border. Arrests, detention centers, dehydration – the extremity of the human condition happening all around you. In its announcement, the Academy of Motion Picture Arts and Sciences called the piece “deeply emotional and physically immersive.”

p. 83 empathy versus intimacy. Why good stories need someone else

p. 84 Chris Milk

http://www.thewildernessdowntown.com/

p. 85 empathy vs intimacy: appreciation vs emotion

Both of these words are fuzzy, to say the least. Both have decades of study behind them, but both have also appeared on more magazine covers than just about any other words, other than possibly “abs.”

Empathy: the ability to identify with and understand others, particularly on an emotional level. It involves imagining yourself in the place of another and, therefore, appreciating how they feel.

Intimacy: a complex sphere of ‘inmost’ relationships with self and others that are not usually minor or incidental (though they may be transitory) and which usually touch the personal world very deeply. They are our closest relationships with friends, family, children, lovers, but they are also the deep and important experiences we have with self.

Empathy necessarily needs to involve other people; intimacy doesn’t. Empathy involves emotional understanding; intimacy involves emotion itself. Empathy, at its base, is an act of getting outside yourself: you’re projecting yourself into someone else’s experience, which means that in some ways you are leaving your own experience behind, other than as a reference point. Intimacy, on the other hand, is at its base an act of feeling: you might be connecting with someone or something else, but you are doing so on the basis of the emotions you feel. p. 86 One type of VR experience perfectly illustrates the surprising gap between empathy and intimacy: live-action VR. p. 87 Unlike CGI-based storytelling, which falls somewhere in between game and movie, live-action VR feels much more like the conventional video forms that we are used to from television and movies. Like those media, people have been using VR to shoot everything from narrative fiction to documentary to sports.

Nonny de la Peña Hunger in Los Angeles at Sundance

p. 89 Clouds over Sidra Chris Milk

p. 90 SXSW south by southwest Austin Texas

p. 92 Every single story has only one goal at its base: to make you care. This holds true whether it is a tale told around a campfire at night, one related through a sequence of panels in a comic book, or the dialogue-heavy narrative of a television show. The story might be trying to make you laugh, or just scare you, or to make you feel sad or happy on behalf of one of the characters, but those are all just forms of caring, right? Your emotional investment – the fact that what happens in this tale matters to you – is the fundamental aim of the storyteller.

Storytelling, then, has evolved to find ways to draw you out of yourself, to make you forget that what you are hearing or seeing or reading isn’t real. It’s only at that point, after all, that our natural capacity for empathy can kick in. p. 93 Meanwhile, technology continues to evolve to detach us from those stories. For one, the frame itself continues to get smaller. Stranger still, this distraction has happened while stories continue to become more and more complex. Narratively, at least, stories are more intricate than they have ever been. p. 94 Now, with VR storytelling, the distracting power of multiple screens has met its match.

p. 101 experiencing our lives- together

What videos, too, cannot do, though, is bring people together inside VR, the way Ray McClure’s sing-multicolored-blobs-at-each-other tag-team project VVVR does. That’s why even VR filmmaking powerhouses like Within (https://www.with.in/get-the-app) are moving beyond mere documentary and narrative and trying to turn storytelling into a shared experience.

Make no mistake: storytelling has always been a shared experience. Being conscripted into the story, or even being the story.

https://www.linkedin.com/in/jess-engel-96421010/

https://medium.com/@Within/welcome-jess-aea620df0ca9

p. 103 Like so many VR experiences, Life of Us defies many of the ways we describe a story to each other. For one, it feels at once shorter and longer than its actual seven-minute runtime; although it seems to be over in a flash, that flash contains so many details that in retrospect it is as full and vivid as a two-hour movie.

There is another thing, though, that sets Life of Us apart from so many other stories – the fact that not only was I in the story, but someone else was in there with me. And that someone wasn’t a filmed character talking to a camera, or a video game creature that was programmed to look in ‘my’ direction, but a real person – a person who saw what I saw, a person who was present for each of those moments and who now is inextricably part of my own, shard-like memory of them.

p. 107 what to do and what to do it with. How social VR is reinventing everything from game night to online harassment.

Facebook Hires Altspace CEO Eric Romo

p. 110 VR isn’t even Romo’s first bet on the future. When he was finishing up his master’s degree in mechanical engineering, a professor emailed him on behalf of two men who were recruiting for a rocket company they were starting. One of those men was Elon Musk, which is how Romo became the 13th employee at SpaceX. Eventually, he started a company focused on solar energy, but when the bottom fell out of the industry, he shut down the company and looked for his next opportunity. Romo spent the next year and a half researching the technology and thinking about what kind of company might make sense in the new VR-enabled world. He had read Snow Crash, but he also knew that our hopes for the VR future could very well end up like the fabled flying car: defined – and limited – by an expectation that might not match perfectly with what we actually want.

https://www.amazon.com/Snow-Crash-Neal-Stephenson/dp/1491515058

p. 116 Back in the day, trolling just referred to pursuing a provocative argument for kicks. Today, the word is used to describe the actions of anonymous mobs like the one that, for instance, drove actor Leslie Jones off Twitter with an onslaught of racist and sexist abuse. Harassment has become one of the defining characteristics of the Internet as we use it today. But with the emergence of VR, our social networks have become, quite literally, embodied.

p. 116 https://medium.com/athena-talks/my-first-virtual-reality-sexual-assault-2330410b62ee 

p. 142 Increasing memory function by moving from being a voyeur to physically participating in the virtual activity: embodied presence – bringing not just your head and your hands, but your body into VR – strengthens memories in a number of ways.

p. 143 At the beginning of 2017, Facebook published some of its internal research about the potential of social VR. Neurons Inc., the agency, measured the eye movements, brain activity, and pulse of volunteers who were watching streaming video on smartphones and ultimately discovered that buffering and lag were significantly more stressful than waiting in line at a store, and even slightly more stressful than watching a horror movie.

p. 145 After the VR experience, more than 80% of introverts – as identified by a short survey participants took beforehand – wanted to become friends with the person they had chatted with, as opposed to less than 60% of extroverts.

p. 149 Rec Room Confidential: the anatomy and evolution of VR friendships

p. 165 reach out and touch someone; haptics, tactile presence and making VR physical.

https://www.digicert.com/ 

VOID: Vision of Infinite Dimensions p. 167

p. 169 the 4-D-effects: steam, cool air, moisture,

p. 170 Copresence


https://www.researchgate.net/profile/Shanyang_Zhao

https://www.researchgate.net/publication/2532682_Toward_A_Taxonomy_of_Copresence

https://astro.temple.edu/~bzhao001/Taxonomy_Copresence.pdf

p. 171 Zhao laid out two different criteria. The first was whether or not two people are actually in the same place – basically, are they or their stand-ins physically close enough to be able to communicate without any other tools? Two people, she wrote, can either have “physical proximity” or “electronic proximity,” the latter being some sort of networked connection. The second criterion was whether each person is corporeally there; in other words, is it their actual flesh-and-blood body? The second condition can have three outcomes: both people can be there corporeally; neither can be there corporeally, instead using some sort of stand-in like an avatar or a robot; or just one of them can be there corporeally, with the other using a stand-in.

“Virtual copresence” is when a flesh-and-blood person interacts physically with a representation of a human; if that sounds confusing, a good example is using an ATM, where the ATM is a stand-in for a bank teller.

p. 172 “Hypervirtual copresence” involves nonhuman devices that are interacting in the same physical space in a humanlike fashion. Social VR does not quite fit into any of these categories. Zhao refers to this sort of hybrid as a “synthetic environment” and claims that it is a combination of corporeal telecopresence (like Skyping) and virtual telecopresence (like Waze directions, https://www.waze.com/).
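
To keep these distinctions straight, here is a small sketch that crosses the two criteria – proximity type and whether each party is corporeally present. This is my own rough modeling of the notes above, not Zhao’s or Rubin’s code, and the labels are simplifications.

```python
# A rough sketch (my own modeling of the notes above, not Zhao's taxonomy code)
# crossing the two criteria: proximity type x corporeal presence of each party.
from dataclasses import dataclass

@dataclass
class Encounter:
    same_physical_place: bool  # True = physical proximity, False = electronic proximity
    a_corporeal: bool          # is person A there in the flesh?
    b_corporeal: bool          # is person B there in the flesh, or via a stand-in?

def copresence_type(e: Encounter) -> str:
    corporeal = int(e.a_corporeal) + int(e.b_corporeal)
    if e.same_physical_place:
        if corporeal == 2:
            return "corporeal copresence (both bodies in one place)"
        if corporeal == 1:
            return "virtual copresence (person + stand-in, e.g. a customer and an ATM)"
        return "hypervirtual copresence (devices interacting in a humanlike way)"
    if corporeal == 2:
        return "corporeal telecopresence (e.g. Skyping)"
    return "virtual telecopresence / synthetic environment (e.g. avatars in social VR)"

# Two avatars meeting over a network, neither body physically present:
print(copresence_type(Encounter(False, False, False)))
```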

p. 172 haptic tactics for tactile aptness

Of the five human senses, a VR headset can currently stimulate only two: vision and hearing. That leaves three others – and while smell and taste may come someday.
p. 174 Aldous Huxley (https://en.wikipedia.org/wiki/Aldous_Huxley), Brave New World: tactile “feelies”

p. 175 https://en.wikipedia.org/wiki/A._Michael_Noll, 1971

p. 177 https://www.pcmag.com/review/349966/oculus-touch

p. 178 haptic feedback accessories, gloves, full-body suits; p. 179 ultrasonics, low-frequency sound waves.

p. 186 the dating game: how touch changes intimacy.

p. 187 MIT Presence https://www.mitpressjournals.org/loi/pres

p. 186-190 questionnaire for the VRrelax project

p. 195 XXX-change program: turning porn back into people

p. 221 where we are going, we don’t need headsets. Let’s get speculative.

p. 225 Magic Leap. p. 227 Magic Leap calls its technology “mixed reality,” claiming that the three-dimensional virtual objects it brings into your world are far more advanced than the flat, static overlays of augmented reality. In reality, there is no longer any distinction between the two; in fact, there are by now so many terms being used in various ways by various companies that it’s probably worth a quick clarification.

definitions

Virtual reality: the illusion of an all-enveloping artificial world, created by wearing an opaque display in front of your eyes.

augmented reality: bringing artificial objects into the real world – these can be as simple as a “heads-up display,” like a speedometer projected onto your car’s windshield, or as complex as a seemingly virtual creature walking across your real-world living room, casting a realistic shadow on the floor.

mixed reality: generally speaking, this is synonymous with AR, or at least with the part of AR that brings virtual objects into the real world. However, some people prefer “mixed” because they think “augmented” implies that reality isn’t enough.

extended or synthetic reality (XR or SR): all of the above! These are both catch-all terms that encompass the full spectrum of virtual elements in digital settings.

p. 228 https://avegant.com/.

Edward Tang:

p. 231 in ten years, we won’t even have smartphones anymore.

p. 229 If VR is a toddler, though, AR/MR is a third-trimester fetus: it may be fully formed, but it is not quite ready to be out in the world yet. The headsets are large, the equipment is far more expensive than VR, and in many cases we don’t even know what a consumer product looks like.

p. 235 when 2020 is hindsight: what life in 2028 might actually look like.

++++++++++++

Machine Learning and the Cloud Rescue IT

How Machine Learning and the Cloud Can Rescue IT From the Plumbing Business

 FROM AMAZON WEB SERVICES (AWS)

By Andrew Barbour     Feb 19, 2019

https://www.edsurge.com/news/2019-02-19-how-machine-learning-and-the-cloud-can-rescue-it-from-the-plumbing-business

Many educational institutions maintain their own data centers.  “We need to minimize the amount of work we do to keep systems up and running, and spend more energy innovating on things that matter to people.”

what’s the difference between machine learning (ML) and artificial intelligence (AI)?

Jeff Olson: That’s actually the setup for a joke going around the data science community. The punchline? If it’s written in Python or R, it’s machine learning. If it’s written in PowerPoint, it’s AI.
machine learning is in practical use in a lot of places, whereas AI conjures up all these fantastic thoughts in people.

What is serverless architecture, and why are you excited about it?

Instead of having a machine running all the time, you just run the code necessary to do what you want—there is no persisting server or container. There is only this fleeting moment when the code is being executed. It’s called Function as a Service, and AWS pioneered it with a service called AWS Lambda. It allows an organization to scale up without planning ahead.
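
For readers who have not seen Function as a Service before, a handler in the style of AWS Lambda looks roughly like the sketch below (the event shape assumes an HTTP trigger such as API Gateway; the function body is purely illustrative).

```python
# A minimal sketch of a Function-as-a-Service handler in the style of AWS Lambda.
# There is no server to manage: the platform runs this function on demand and
# scales it automatically with traffic.
import json

def lambda_handler(event, context):
    """Echo back a greeting for the submitted name."""
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Nothing else is deployed or kept running; the platform invokes the function per request and bills only for execution time, which is what lets an organization "scale up without planning ahead."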

How do you think machine learning and Function as a Service will impact higher education in general?

The radical nature of this innovation will make a lot of systems that were built five or 10 years ago obsolete. Once an organization comes to grips with Function as a Service (FaaS) as a concept, it’s a pretty simple step for that institution to stop doing its own plumbing. FaaS will help accelerate innovation in education because of the API economy.

If the campus IT department will no longer be taking care of the plumbing, what will its role be?

I think IT will be curating the inter-operation of services, some developed locally but most purchased from the API economy.

As a result, you write far less code and have fewer security risks, so you can innovate faster. A succinct machine-learning algorithm with fewer than 500 lines of code can now replace an application that might have required millions of lines of code. Second, it scales. If you happen to have a gigantic spike in traffic, it deals with it effortlessly. If you have very little traffic, you incur a negligible cost.

++++++++
more on machine learning in this IMS blog
http://blog.stcloudstate.edu/ims?s=machine+learning
