Searching for "artificial intelligence"

OLC Collaborate

https://onlinelearningconsortium.org/attend-2019/innovate/

schedule:

https://onlinelearningconsortium.org/attend-2019/innovate/program/all_sessions/#streamed

Wednesday

++++++++++++++++
THE NEW PROFESSOR: HOW I PODCASTED MY WAY INTO STUDENTS’ LIVES (AND HOW YOU CAN, TOO)

Concurrent Session 1

https://onlinelearningconsortium.org/olc-innovate-2019-session-page/?session=6734&kwds=

+++++++++++++

Creating A Cost-Free Course

+++++++++++++++++

Idea Hose: AI Design For People
Date: Wednesday, April 3rd
Time: 3:30 PM to 4:15 PM
Conference Session: Concurrent Session 3
Streamed session
Lead Presenter: Brian Kane (General Design LLC)
Track: Research: Designs, Methods, and Findings
Location: Juniper A
Session Duration: 45min
Brief Abstract: What happens when you apply design thinking to AI? AI presents a fundamental change in the way people interact with machines. By applying design thinking to the way AI is made and used, we can generate an unlimited amount of new ideas for products and experiences that people will love and use.
https://onlinelearningconsortium.org/olc-innovate-2019-session-page/?session=6964&kwds=
Notes from the session:
design thinking: get out of old mental models. New narratives; get out of the sci-fi movies.
narrative generators: AI Design for People stream
we need machines to make mistakes; AI even more than traditional software.
Lessons learned: don’t replace people.
creativity engines: automated creativity.
trends:
AI Design for People stream
https://www.androidauthority.com/nvidia-jetson-nano-966609/
https://community.infiniteflight.com/t/virtualhub-ios-and-android-free/142837?u=sudafly
 http://bit.ly/VirtualHub
Thursday
Chatbots, Game Theory, And AI: Adapting Learning For Humans, Or Innovating Humans Out Of The Picture?
Date: Thursday, April 4th
Time: 8:45 AM to 9:30 AM
Conference Session: Concurrent Session 4
Streamed session
Lead Presenter: Matt Crosslin (University of Texas at Arlington LINK Research Lab)
Track: Experiential and Life-Long Learning
Location: Cottonwood 4-5
Session Duration: 45min
Brief Abstract: How can teachers utilize chatbots and artificial intelligence in ways that won’t remove humans from the education picture? Using tools like Twine and Recast.AI chatbots, this session will focus on how to build adaptive content that allows learners to create their own heutagogical educational pathways based on individual needs.
++++++++++++++++

This Is Us: Fostering Effective Storytelling Through EdTech & Student’s Influence As Digital Citizens
Date: Thursday, April 4th
Time: 9:45 AM to 10:30 AM
Conference Session: Concurrent Session 5
Streamed session
Lead Presenter: Maikel Alendy (FIU Online)
Co-presenter: Sky V. King (FIU Online – Florida International University)
Track: Teaching and Learning Practice
Location: Cottonwood 4-5
Session Duration: 45min
Brief Abstract: “This is Us” demonstrates how leveraging storytelling in learning engages students to effectively communicate their authentic story, transitioning from consumerism to become creators and influencers. Responsibility as a digital citizen, information and digital literacy, online privacy, and strategies with examples using several edtech tools will be reviewed.
++++++++++++++++++

Personalized Learning At Scale: Using Adaptive Tools & Digital Assistants
Date: Thursday, April 4th
Time: 11:15 AM to 12:00 PM
Conference Session: Concurrent Session 6
Streamed session
Lead Presenter: Kristin Bushong (Arizona State University )
Co-presenter: Heather Nebrich (Arizona State University)
Track: Effective Tools, Toys and Technologies
Location: Juniper C
Session Duration: 45min
Brief Abstract: Considering today’s overstimulated lifestyle, how do we engage busy learners to stay on task? Join this session to discover current efforts in implementing ubiquitous educational opportunities through customized interests and personalized learning aspirations (e.g., adaptive math tools, AI support communities, and memory management systems).
+++++++++++++

High-Impact Practices Online: Starting The Conversation
Date: Thursday, April 4th
Time: 1:15 PM to 2:00 PM
Conference Session: Concurrent Session 7
Streamed session
Lead Presenter: Katie Linder (Oregon State University)
Co-presenter: June Griffin (University of Nebraska-Lincoln)
Track: Teaching and Learning Practice
Location: Cottonwood 4-5
Session Duration: 45min
Brief Abstract: The concept of High-Impact Educational Practices (HIPs) is well-known, but the conversation about transitioning HIPs online is new. In this session, contributors from the edited collection High-Impact Practices in Online Education will share current HIP research, and offer ideas for participants to reflect on regarding implementing HIPs in online environments.
https://www.aacu.org/leap/hips
https://www.aacu.org/sites/default/files/files/LEAP/HIP_tables.pdf
+++++++++++++++++++++++

Human Skills For Digital Natives: Expanding Our Definition Of Tech And Media Literacy
Date: Thursday, April 4th
Time: 3:45 PM to 5:00 PM
Streamed session
Lead Presenter: Manoush Zomorodi (Stable Genius Productions)
Track: N/A
Location: Adams Ballroom
Session Duration: 1hr 15min
Brief Abstract: How can we ensure that students and educators thrive in increasingly digital environments, where change is the only constant? In this keynote, author and journalist Manoush Zomorodi shares her pioneering approach to researching the effects of technology on our behavior. Her unique brand of journalism includes deep-dive investigations into such timely topics as personal privacy, information overload, and the Attention Economy. These interactive multi-media experiments with tens of thousands of podcast listeners will inspire you to think creatively about how we use technology to educate and grow communities.

Friday

Anger Is An Energy
Date: Friday, April 5th
Time: 8:30 AM to 9:30 AM
Streamed session
Lead Presenter: Michael Caulfield (Washington State University-Vancouver)
Track: N/A
Location: Adams Ballroom
Position: 2
Session Duration: 60min
Brief Abstract: Years ago, John Lydon (then Johnny Rotten) sang that “anger is an energy.” And he was right, of course. Anger isn’t an emotion, like happiness or sadness. It’s a reaction, a swelling up of a confused urge. I’m a person profoundly uncomfortable with anger, and yet I’ve found in my professional career that often my most impactful work begins in a place of anger: anger against injustice, inequality, lies, or corruption. And often it is that anger that gives me the energy and endurance to make a difference, to move the mountains that need to be moved. In this talk I want to think through our uneasy relationship with anger; how it can be helpful, and how it can destroy us if we’re not careful.
++++++++++++++++

Improving Online Teaching Practice, Creating Community And Sharing Resources
Date: Friday, April 5th
Time: 10:45 AM to 11:30 AM
Conference Session: Concurrent Session 10
Streamed session
Lead Presenter: Laurie Daily (Augustana University)
Co-presenter: Sharon Gray (Augustana University)
Track: Problems, Processes, and Practices
Location: Juniper A
Session Duration: 45min
Brief Abstract: The purpose of this session is to explore the implementation of a Community of Practice to support professional development, enhance online course and program development efforts, and foster community and engagement between and among full- and part-time faculty.
+++++++++++++++

It’s Not What You Teach, It’s HOW You Teach: A Story-Driven Approach To Course Design
Date: Friday, April 5th
Time: 11:45 AM to 12:30 PM
Conference Session: Concurrent Session 11
Streamed session
Lead Presenter: Katrina Rainer (Strayer University)
Co-presenter: Jennifer M McVay-Dyche (Strayer University)
Track: Teaching and Learning Practice
Location: Cottonwood 2-3
Session Duration: 45min
Brief Abstract: Learning is more effective and organic when we teach through the art of storytelling. At Strayer University, we are blending the principles of story-driven learning with research-based instructional design practices to create engaging learning experiences. This session will provide you with strategies to strategically infuse stories into any lesson, course, or curriculum.

Peter Rubin Future Presence

p. 4 But all that “disruption,” as people love to call it, is overlooking the thing that’s the most disruptive of them all: the way we relate to each other will never be the same. That’s because of something called presence.
Presence is the absolute foundation of virtual reality, and in VR, it’s the absolute foundation of connection: connection with yourself, with an idea, with another human, even connection with artificial intelligence.
p. 28 VR definition
Virtual reality is an 1. artificial environment that’s 2. immersive enough to convince you that you are 3. actually inside it.
1. “Artificial environment” could mean just about anything. A photograph is an artificial environment; a video game is an artificial environment; a Pixar movie is an artificial environment. The only thing that matters is that it’s not where you physically are.
p. 44 VR: putting the “it” in “meditation”
my note: it seems Rubin sees 21st-century VR as the equivalent of the drug experimentation in the 1960s US: p. 46 “VR is potentially going to become a direct interface to the subconscious”

p. 74 serious games, Carrie Heeter. p. 49

The default network in the brain in today’s society is the wandering mind. We are ruminating about the past, and we are worrying about the future, or maybe even planning for the future; there is some productive thinking. But in general, a wandering mind is an unhappy mind. And that is where we spend all of our waking time: not being aware of everything that we are experiencing in the moment.
Heeter’s own meditation practice had already led her to design apps and studies that investigated meditation’s ability to calm that wandering mind.
p. 51 Something called interoception. It is a term that has been gaining ground in psychology circles in recent years and basically means awareness of bodily sensations, like my noticing the fact that I was sitting awkwardly or that keeping my elbows on the chair’s armrests was making my shoulders hunch slightly. Not surprisingly, mindfulness meditation seems to heighten interoception. And that is exactly how Heeter and Allbritton structured the meditation I am doing on Costa Del Sole. First, I connect with the environment; then with my body; then I combine the two. The combination of the VR and interoception leads to what she describes as “embodied presence”: not only do you feel like you are in the VR environment, but because you have consciously worked to integrate your bodily sensations into VR, it is a fuller, more vivid version of presence.

p. 52 guided meditation VR GMVR

p. 56 VVVR visual voice virtual reality

p. 57

Just as the ill-fated Google Glass immediately stigmatized all its wearers as “glassholes” (a.k.a. “techier-than-thou douche bags who dropped $1,500 to see an email notification appear in front of their face”), so too do some VR headsets still look like face TVs.

p. 61 Hedgehog Love
engineering feelings with social presence. p. 64 Remember presence? This is the beginning of social presence. Mindfulness is cool, but making eye contact with Henry is the first step into the future.

p. 65 back in 1992, our friend Carrie Heeter posited that presence (the sensation that you are really there in VR) had three dimensions: personal presence, environmental presence, and social presence, which she basically defined as being around other people who register your existence.
p. 66 the idea that emotion can be not a cause, as we so often assume, but a result of behavior
p. 72 In chapter 1, we explained the difference between mobile VR and PC-driven VR. The former is cheaper and easier; all you do is drop your smartphone into a headset, and it provides just about everything you need. Dedicated VR headsets rely on the stronger processors of desktop PCs and game consoles, so they can provide a more robust sense of presence, usually at the cost of being tethered to your computer with cables. (And at the cost of actual money: dedicated headset systems run hundreds of dollars, while mobile headsets like Samsung’s Gear VR or Google’s Daydream View can be had for mere tens of dollars.) There is one other fundamental distinction between mobile VR and high-end VR, though, and that is what you do with your hands: how you input your desires. When VR reemerged in the early 2010s, however, the question of input was open to debate. Actually, more than one debate. p. 73 Video game controllers are basically metaphors. Some, like steering wheels or pilot flight sticks, might look like the thing they’re supposed to be, but at their essence they are all just collections of buttons. p. 77 HTC sells small wearable trackers that you can affix to any object, or any body part, to bring it into the Vive’s VR.
p. 78 wait a second – you were talking about storytelling.
p. 79 Every Hollywood studio you can imagine (21st Century Fox, Paramount, Warner Bros.) has already invested in virtual reality. They have made VR experiences based on their own movies, like Interstellar or Ghost in the Shell, and they have invested in other VR companies. Hollywood directors like Doug Liman (Edge of Tomorrow) and Robert Stromberg (Maleficent) have taken on VR projects. And the progress is exhilarating. Alejandro González Iñárritu, a four-time Oscar winner whose 2014 movie Birdman won Best Picture, received a special achievement Academy Award in 2017 for a VR short he made. Yet Carne y Arena, which puts viewers inside a harrowing journey from Mexico to the United States, is nothing like a movie, or even a video game.

When it premiered at the Cannes Film Festival in early 2017, it was housed in an airplane hangar; viewers were ushered, barefoot, into a room with a sand-covered floor, where they could watch and interact with other people trying to make it over the border. Arrests, detention centers, dehydration: the extremity of the human condition happening all around you. In its announcement, the Academy of Motion Picture Arts and Sciences called the piece “deeply emotional and physically immersive.”

p. 83 empathy versus intimacy. Why good stories need someone else

p. 84 Chris Milk

http://www.thewildernessdowntown.com/

p. 85 empathy vs intimacy: appreciation vs emotion

Both of these words are fuzzy, to say the least. Both have decades of study behind them, but both have also appeared on more magazine covers than just about any other word, except possibly “abs.”

Empathy: the ability to identify with and understand others, particularly on an emotional level. It involves imagining yourself in the place of another and, therefore, appreciating how they feel.

Intimacy: a complex sphere of ‘inmost’ relationships with self and others that are not usually minor or incidental (though they may be transitory) and which usually touch the personal world very deeply. They are our closest relationships with friends, family, children, lovers, but they are also the deep and important experiences we have with self.

Empathy necessarily needs to involve other people; intimacy doesn’t. Empathy involves emotional understanding; intimacy involves emotion itself. Empathy, at its base, is an act of getting outside yourself: you’re projecting yourself into someone else’s experience, which means that in some ways you are leaving your own experience behind, other than as a reference point. Intimacy, on the other hand, is at its base an act of feeling: you might be connecting with someone or something else, but you are doing so on the basis of the emotions you feel. p. 86 One type of VR experience perfectly illustrates the surprising gap between empathy and intimacy: live-action VR. p. 87 Unlike CGI-based storytelling, which falls somewhere in between game and movie, live-action VR feels much more like the conventional video forms that we are used to from television and movies. Like those media, people have been using VR to shoot everything from narrative fiction to documentary to sports.

Nonny de la Peña Hunger in Los Angeles at Sundance

p. 89 Clouds over Sidra Chris Milk

p. 90 SXSW south by southwest Austin Texas

p. 92 Every single story has only one goal at its base: to make you care. This holds true whether it is a tale told around a campfire at night, one related in a sequence of panels in a comic book, or the dialogue-heavy narrative of a television show. The story might be trying to make you laugh, or to scare you, or to make you feel sad or happy on behalf of one of the characters, but those are all just forms of caring, right? Your emotional investment, the fact that what happens in this tale matters to you, is the fundamental aim of the storyteller.

Storytelling, then, has evolved to find ways to draw you out of yourself, to make you forget that what you are hearing or seeing or reading isn’t real. It’s only at that point, after all, that our natural capacity for empathy can kick in. p. 93 Meanwhile, technology continues to evolve to detach us from those stories. For one, the frame itself continues to get smaller. Stranger still, this distraction has happened while stories continue to become more and more complex. Narratively, at least, stories are more intricate than they have ever been. p. 94 Now, with VR storytelling, the distracting power of multiple screens has met its match.

p. 101 experiencing our lives- together

What video still cannot do, though, is bring people together inside VR, the way Ray McClure’s sing-multicolored-blobs-at-each-other tag-team project VVVR does. That’s why even VR filmmaking powerhouses like Within (https://www.with.in/get-the-app) are moving beyond mere documentary and narrative and trying to turn storytelling into a shared experience.

Make no mistake: storytelling has always been a shared experience. Being conscripted into the story, or even being the story.

https://www.linkedin.com/in/jess-engel-96421010/

https://medium.com/@Within/welcome-jess-aea620df0ca9

p. 103 Like so many VR experiences, Life of Us defies many of the ways we describe a story to each other. For one, it feels at once shorter and longer than its actual seven-minute runtime; although it seems to be over in a flash, that flash contains so many details that in retrospect it is as full and vivid as a two-hour movie.

There is another thing, though, that sets Life of Us apart from so many other stories: the fact that not only was I in the story, but someone else was in there with me. And that someone wasn’t a filmed character talking to a camera, or a video game creature that was programmed to look in ‘my’ direction, but a real person: a person who saw what I saw, a person who was present for each of those moments and who now is inextricably part of my own, shard-like memory of them.

p. 107 what to do and what to do it with. How social VR is reinventing everything from game night to online harassment.

Facebook Hires Altspace CEO Eric Romo

p. 110 VR isn’t even Romo’s first bet on the future. When he was finishing up his master’s degree in mechanical engineering, a professor emailed him on behalf of two men who were recruiting for a rocket company they were starting. One of those men was Elon Musk, which is how Romo became the 13th employee at SpaceX. Eventually, he started a company focused on solar energy, but when the bottom fell out of the industry, he shut it down and looked for his next opportunity. Romo spent the next year and a half researching the technology and thinking about what kind of company might make sense in the new VR-enabled world. He had read Snow Crash, but he also knew that our hopes for the VR future could very well end up like the Jetsons’ flying car: defined, and limited, by an expectation that might not match perfectly with what we actually want.

https://www.amazon.com/Snow-Crash-Neal-Stephenson/dp/1491515058

p. 116 Back in the day, trolling just referred to pursuing a provocative argument for kicks. Today, the word is used to describe the actions of anonymous mobs like the one that, for instance, drove actor Leslie Jones off Twitter with an onslaught of racist and sexist abuse. Harassment has become one of the defining characteristics of the Internet as we use it today. But with the emergence of VR, our social networks have become, quite literally, embodied.

p. 116 https://medium.com/athena-talks/my-first-virtual-reality-sexual-assault-2330410b62ee 

p. 142 increasing memory function by moving from being a voyeur to physically participating in the virtual activity. Embodied presence (bringing not just your head and your hands, but your body into VR) strengthens memories in a number of ways.

p. 143 At the beginning of 2017, Facebook published some of its internal research about the potential of social VR. Neurons Inc., the agency, measured eye movements, brain activity, and pulses of volunteers who were watching streaming video on smartphones and ultimately discovered that buffering and lag were significantly more stressful than waiting in line at a store, and even slightly more stressful than watching a horror movie.

p. 145 After the VR experience, more than 80% of introverts (as identified by a short survey participants took beforehand) wanted to become friends with the person they had chatted with, as opposed to less than 60% of extroverts.

p. 149 Rec Room Confidential: the anatomy and evolution of VR friendships

p. 165 reach out and touch someone: haptics, tactile presence, and making VR physical.


VOID: Vision of Infinite Dimensions p. 167

p. 169 the 4-D effects: steam, cool air, moisture

p. 170 Copresence

About

https://www.researchgate.net/profile/Shanyang_Zhao

https://www.researchgate.net/publication/2532682_Toward_A_Taxonomy_of_Copresence

https://astro.temple.edu/~bzhao001/Taxonomy_Copresence.pdf

p. 171 Zhao laid out two different criteria. The first was whether or not two people are actually in the same place: basically, are they or their stand-ins physically close enough to be able to communicate without any other tools? Two people, he wrote, can either have “physical proximity” or “electronic proximity,” the latter being some sort of networked connection. The second criterion was whether each person is corporeally there; in other words, is it their actual flesh-and-blood body? The second condition can have three outcomes: both people can be there corporeally; neither can be there corporeally, instead using some sort of stand-in like an avatar or a robot; or just one of them can be there corporeally, with the other using a stand-in.

“virtual copresence” is when a flesh-and-blood person interacts physically with a representative of a human; if that sounds confusing, a good example is using an ATM, where the ATM is a stand-in for a bank teller.

p. 172 “hypervirtual copresence,” which involves nonhuman devices that are interacting in the same physical space in a humanlike fashion. Social VR does not quite fit into any of these categories. Zhao refers to this sort of hybrid as a “synthetic environment” and claims that it is a combination of corporeal telecopresence (like Skyping) and virtual telecopresence (like Waze directions, https://www.waze.com/).

p. 172 haptic tactics for tactile aptness

Of the five human senses, a VR headset currently stimulates only two: vision and hearing. That leaves three others, and while smell and taste may come some day.
p. 174 https://en.wikipedia.org/wiki/Aldous_Huxley, Brave New World, tactile “feelies”

p. 175 https://en.wikipedia.org/wiki/A._Michael_Noll, 1971

p. 177 https://www.pcmag.com/review/349966/oculus-touch

p. 178 haptic feedback accessories, gloves, full-body suits; p. 179 ultrasonics, low-frequency sound waves.

p. 186 the dating game: how touch changes intimacy.

p. 187 MIT Presence https://www.mitpressjournals.org/loi/pres

p. 186-190 questionnaire for the VRrelax project

p. 195 XXX-change program: turning porn back into people

p. 221 where we are going, we don’t need headsets. Let’s get speculative.

p. 225 Magic Leap. p. 227 Magic Leap calls its technology “mixed reality,” claiming that the three-dimensional virtual objects it brings into your world are far more advanced than the flat, static overlays of augmented reality. In reality, there is no longer any distinction between the two; in fact, there are by now so many terms being used in various ways by various companies that it’s probably worth a quick clarification.

definitions

Virtual reality: the illusion of an all-enveloping artificial world, created by wearing an opaque display in front of your eyes.

augmented reality: bringing artificial objects into the real world. These can be as simple as a “heads-up display,” like a speedometer projected onto your car’s windshield, or as complex as seeing a virtual creature walk across your real-world living room, casting a realistic shadow on the floor.

mixed reality: generally speaking, this is synonymous with AR, or at least with the part of AR that brings virtual objects into the real world. However, some people prefer “mixed” because they think “augmented” implies that reality isn’t enough.

extended or synthetic reality (XR or SR): all of the above! These are both catch-all terms that encompass the full spectrum of virtual elements in visual settings.

p. 228 https://avegant.com/.

Edward Tang:

p. 231 in ten years, we won’t even have smartphones anymore.

p. 229 If VR is a toddler, though, AR/MR is a third-trimester fetus: it may be fully formed, but it is not quite ready to be out in the world yet. The headsets are large, the equipment is far more expensive than VR, and in many cases we don’t even know what a consumer product looks like.

p. 235 when 2020 is hindsight: what life in 2028 might actually look like.

++++++++++++

Machine Learning and the Cloud Rescue IT

How Machine Learning and the Cloud Can Rescue IT From the Plumbing Business

 FROM AMAZON WEB SERVICES (AWS)

By Andrew Barbour     Feb 19, 2019

https://www.edsurge.com/news/2019-02-19-how-machine-learning-and-the-cloud-can-rescue-it-from-the-plumbing-business

Many educational institutions maintain their own data centers.  “We need to minimize the amount of work we do to keep systems up and running, and spend more energy innovating on things that matter to people.”

what’s the difference between machine learning (ML) and artificial intelligence (AI)?

Jeff Olson: That’s actually the setup for a joke going around the data science community. The punchline? If it’s written in Python or R, it’s machine learning. If it’s written in PowerPoint, it’s AI.
Machine learning is in practical use in a lot of places, whereas AI conjures up all these fantastic thoughts in people.

What is serverless architecture, and why are you excited about it?

Instead of having a machine running all the time, you just run the code necessary to do what you want—there is no persisting server or container. There is only this fleeting moment when the code is being executed. It’s called Function as a Service, and AWS pioneered it with a service called AWS Lambda. It allows an organization to scale up without planning ahead.
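The shape of this model can be sketched in a few lines. Below is a minimal, hypothetical handler in the style AWS Lambda expects for Python; the event fields and message are invented for illustration. Locally we just call the function, while in production an AWS event source would invoke it:

```python
import json

def lambda_handler(event, context):
    # AWS invokes this function once per event; no server process
    # persists between invocations.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Local exercise of the handler; in production, API Gateway or another
# AWS event source would supply `event` and `context`.
response = lambda_handler({"name": "IMS"}, None)
print(response["statusCode"])  # prints 200
```

Because nothing persists between calls, the platform can run zero copies or thousands of the function, which is what makes scaling up without planning ahead possible.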

How do you think machine learning and Function as a Service will impact higher education in general?

The radical nature of this innovation will make a lot of systems that were built five or 10 years ago obsolete. Once an organization comes to grips with Function as a Service (FaaS) as a concept, it’s a pretty simple step for that institution to stop doing its own plumbing. FaaS will help accelerate innovation in education because of the API economy.

If the campus IT department will no longer be taking care of the plumbing, what will its role be?

I think IT will be curating the inter-operation of services, some developed locally but most purchased from the API economy.

As a result, you write far less code and have fewer security risks, so you can innovate faster. A succinct machine-learning algorithm with fewer than 500 lines of code can now replace an application that might have required millions of lines of code. Second, it scales. If you happen to have a gigantic spike in traffic, it deals with it effortlessly. If you have very little traffic, you incur a negligible cost.
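As a toy illustration of how compact a learned model can be (a sketch only; this nearest-centroid classifier and its data are invented, not drawn from the article), a couple of short functions can stand in for pages of hand-written rules:

```python
from collections import defaultdict

def train_centroids(samples):
    # Learn one centroid (mean feature vector) per label
    # from (features, label) pairs.
    sums = defaultdict(lambda: [0.0, 0.0])
    counts = defaultdict(int)
    for (x, y), label in samples:
        sums[label][0] += x
        sums[label][1] += y
        counts[label] += 1
    return {label: (s[0] / counts[label], s[1] / counts[label])
            for label, s in sums.items()}

def predict(centroids, point):
    # Assign the label of the nearest centroid (squared Euclidean distance).
    return min(centroids,
               key=lambda lab: (centroids[lab][0] - point[0]) ** 2
                             + (centroids[lab][1] - point[1]) ** 2)

data = [((0.1, 0.2), "low"), ((0.2, 0.1), "low"),
        ((0.9, 1.0), "high"), ((1.0, 0.8), "high")]
model = train_centroids(data)
print(predict(model, (0.95, 0.9)))  # prints high
```

The "algorithm" here is a handful of lines; all the behavior lives in the data the model was trained on, which is the trade the quote is describing.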

++++++++
more on machine learning in this IMS blog
http://blog.stcloudstate.edu/ims?s=machine+learning

AI in the classroom

How Much Artificial Intelligence Should There Be in the Classroom?

By Betsy Corcoran and Jeffrey R. Young     Jan 23, 2019

https://www.edsurge.com/news/2019-01-23-how-much-artificial-intelligence-should-there-be-in-the-classroom

We can build robot teachers, or even robot teaching assistants. But should we?

the Chinese government has declared a national goal of surpassing the U.S. in AI technology by the year 2030, so there is almost a Sputnik-like push for the tech going on right now in China. At the same time, China is also facing a shortage of qualified teachers in many rural areas, and there’s a huge demand for high-quality language teachers and tutors throughout the country.

+++++++++++
more on AI in this IMS blog
http://blog.stcloudstate.edu/ims?s=artificial+intelligence

American AI Initiative

Trump creates American AI Initiative to boost research, train displaced workers

The order is designed to protect American technology, national security, privacy, and values when it comes to artificial intelligence.

STEPHEN SHANKLAND,SEAN KEANE FEBRUARY 11, 2019

https://www.cnet.com/news/trump-to-create-american-ai-initiative-with-executive-order/

President Donald Trump on Monday directed federal agencies to improve the nation’s artificial intelligence abilities — and help people whose jobs are displaced by the automation it enables.

It’s good for the US government to focus on AI, said Daniel Castro, chief executive of the Center for Data Innovation, a technology-focused think tank that supports the initiative.

Silicon Valley has been investing heavily in AI in recent years, but the path hasn’t always been an easy one. In October, for instance, Google withdrew from competition for a $10 billion Pentagon cloud computing contract, saying it might conflict with its principles for ethical use of AI.

Trump this week is also reportedly expected to sign an executive order banning Chinese telecom equipment from US wireless networks by the end of February.

++++++++++++
more on AI in this IMS blog
http://blog.stcloudstate.edu/ims?s=artificial+intelligence

music literacy

The Tragic Decline of Music Literacy (and Quality)

Jon Henschen | August 16, 2018 |  529,478

https://www.intellectualtakeout.org/article/tragic-decline-music-literacy-and-quality

Both jazz and classical art forms require not only music literacy, but for the musician to be at the top of their game in technical proficiency, tonal quality and, in the case of the jazz idiom, creativity. Jazz masters like John Coltrane would practice six to nine hours a day, often cutting his practice short only because his inner lower lip would be bleeding from the friction caused by his mouthpiece against his gums and teeth. His ability to compose and create new styles and directions for jazz was legendary. With few exceptions, such as Wes Montgomery or Chet Baker, if you couldn’t read music, you couldn’t play jazz.

 

can you read music?

Besides the decline of music literacy and participation, there has also been a decline in the quality of music, which has been demonstrated scientifically by Joan Serrà, a postdoctoral scholar at the Artificial Intelligence Research Institute of the Spanish National Research Council in Barcelona. Serrà and his colleagues looked at 500,000 pieces of music recorded between 1955 and 2010, running songs through a complex set of algorithms examining three aspects of those songs:

1. Timbre: sound color, texture, and tone quality

2. Pitch: harmonic content of the piece, including its chords, melody, and tonal arrangements

3. Loudness: volume variance adding richness and depth
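Serra’s team used far more sophisticated feature extractors, but the flavor of the three measurements can be sketched in a few lines of pure Python. This is a toy stand-in, not the study’s method: a naive DFT gives a crude pitch estimate, and per-frame RMS energy gives a crude loudness-variance measure.

```python
import cmath
import math

def dft_magnitudes(signal):
    """Naive DFT magnitude spectrum (fine for short illustrative clips)."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]

def dominant_pitch(signal, sample_rate):
    """Crude 'pitch' proxy: frequency of the strongest spectral bin."""
    mags = dft_magnitudes(signal)
    peak = max(range(len(mags)), key=lambda k: mags[k])
    return peak * sample_rate / len(signal)

def loudness_variance(signal, frame=64):
    """Crude 'loudness' proxy: variance of per-frame RMS energy."""
    rms = [math.sqrt(sum(x * x for x in signal[i:i + frame]) / frame)
           for i in range(0, len(signal) - frame + 1, frame)]
    mean = sum(rms) / len(rms)
    return sum((r - mean) ** 2 for r in rms) / len(rms)

# A fading 440 Hz tone: the pitch proxy lands near 440 Hz, and the
# fade-out produces a nonzero loudness variance.
sr = 8000
tone = [math.sin(2 * math.pi * 440 * t / sr) * (1 - t / 2048)
        for t in range(2048)]
print(dominant_pitch(tone[:256], sr))  # near 440 Hz
print(loudness_variance(tone) > 0)
```

Timbre is the hardest of the three to quantify; real analyses use spectral descriptors far richer than a single peak.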

In an interview, Billy Joel was asked what has made him a standout. He responded that his ability to read and compose music made him unique in the music industry; as he explained, it is troubling for an industry when being musically literate makes you stand out. An astonishing amount of today’s popular music is written by just two people: Lukasz Gottwald of the United States and Max Martin of Sweden, who between them are responsible for dozens of songs in the top 100 charts. You can credit Max and Dr. Luke for most of the hits of these stars:

Katy Perry, Britney Spears, Kelly Clarkson, Taylor Swift, Jessie J., KE$HA, Miley Cyrus, Avril Lavigne, Maroon 5, Taio Cruz, Ellie Goulding, NSYNC, Backstreet Boys, Ariana Grande, Justin Timberlake, Nicki Minaj, Celine Dion, Bon Jovi, Usher, Adam Lambert, Justin Bieber, Domino, Pink, Pitbull, One Direction, Flo Rida, Paris Hilton, The Veronicas, R. Kelly, Zebrahead

++++++++++++++++
more on metaliteracies in this IMS blog
http://blog.stcloudstate.edu/ims?s=metaliteracies

shaping the future of AI

Shaping the Future of A.I.

Daniel Burrus

https://www.linkedin.com/pulse/shaping-future-ai-daniel-burrus/

Way back in 1983, I identified A.I. as one of 20 exponential technologies that would increasingly drive economic growth for decades to come.

Artificial intelligence applies to computing systems designed to perform tasks usually reserved for human intelligence using logic, if-then rules, decision trees and machine learning to recognize patterns from vast amounts of data, provide insights, predict outcomes and make complex decisions. A.I. can be applied to pattern recognition, object classification, language translation, data translation, logistical modeling and predictive modeling, to name a few. It’s important to understand that all A.I. relies on vast amounts of quality data and advanced analytics technology. The quality of the data used will determine the reliability of the A.I. output.
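As a minimal illustration of the “logic and if-then rules” flavor of AI described above, here is a hand-built decision tree. The domain and every threshold are invented for the example; real systems learn such rules from data.

```python
def classify_loan(income, debt_ratio, late_payments):
    """A tiny hand-built decision tree: if-then rules arranged as
    nested branches. All thresholds here are made up for illustration."""
    if late_payments > 2:          # history outweighs everything else
        return "deny"
    if income >= 50_000:
        return "approve" if debt_ratio < 0.4 else "review"
    return "review" if debt_ratio < 0.2 else "deny"

print(classify_loan(60_000, 0.3, 0))  # approve
print(classify_loan(60_000, 0.5, 0))  # review
print(classify_loan(20_000, 0.1, 3))  # deny
```

The point of the article’s definition is that even this trivial rule set only behaves sensibly if the input data (income, ratios, payment history) is accurate, which is why data quality determines the reliability of the output.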

Machine learning is a subset of A.I. that utilizes advanced statistical techniques to enable computing systems to improve at tasks with experience over time. Chatbots like Amazon’s Alexa, Apple’s Siri, or any of the others from companies like Google and Microsoft all get better every year thanks to all of the use we give them and the machine learning that takes place in the background.
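“Improving at tasks with experience” can be shown with a deliberately tiny sketch: a one-weight online learner whose prediction error shrinks as examples accumulate. This is a toy, not how Alexa or Siri actually work, but the shape of the loop (predict, measure error, adjust) is the same idea.

```python
def train_online(samples, lr=0.1):
    """One-weight online learner: predict y = w * x, then nudge w
    toward the observed answer after each example."""
    w = 0.0
    errors = []
    for x, y in samples:
        err = y - w * x          # how wrong was the prediction?
        errors.append(abs(err))
        w += lr * err * x        # adjust in proportion to the error
    return w, errors

# True relation: y = 2x. With experience, w converges toward 2
# and the per-example error shrinks.
data = [(x, 2 * x) for x in [1, 2, 1, 3, 2, 1, 2, 3] * 5]
w, errs = train_online(data)
print(round(w, 2))         # close to 2.0
print(errs[-1] < errs[0])  # True: later errors smaller than early ones
```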

Deep learning is a subset of machine learning that uses advanced algorithms to enable an A.I. system to train itself to perform tasks by exposing multi-layered neural networks to vast amounts of data, then using what has been learned to recognize new patterns contained in the data. Learning can be human-supervised learning, unsupervised learning, and/or reinforcement learning, like Google used with DeepMind to learn how to beat humans at the complex game Go. Reinforcement learning will drive some of the biggest breakthroughs.
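The reinforcement-learning idea (learning from reward rather than labeled examples) can be sketched with tabular Q-learning on a toy corridor. DeepMind’s Go systems combine this update rule with deep neural networks at vastly larger scale; this sketch keeps only the core update.

```python
import random

def q_learn(n_states=5, episodes=500, alpha=0.5, gamma=0.9, eps=0.2):
    """Tabular Q-learning on a tiny corridor: start at state 0, actions
    are left/right, reward 1.0 only on reaching the rightmost state."""
    random.seed(0)                              # deterministic demo
    q = [[0.0, 0.0] for _ in range(n_states)]  # q[state] = [left, right]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy action choice: mostly exploit, sometimes explore
            if random.random() < eps:
                a = random.randrange(2)
            else:
                a = 0 if q[s][0] > q[s][1] else 1
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # the Q-learning update: move toward reward + discounted future
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = q_learn()
# After training, moving right is preferred in every non-terminal state.
print(all(q[s][1] > q[s][0] for s in range(4)))  # True
```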

Autonomous computing uses advanced A.I. tools such as deep learning to enable systems to be self-governing and capable of acting according to situational data without human command. A.I. autonomy includes perception, high-speed analytics, machine-to-machine communications and movement. For example, autonomous vehicles use all of these in real time to successfully pilot a vehicle without a human driver.

Augmented thinking: Over the next five years and beyond, A.I. will become increasingly embedded at the chip level into objects, processes, products and services, and humans will augment their personal problem-solving and decision-making abilities with the insights A.I. provides to get to a better answer faster.

Technology is not good or evil; what matters is how we as humans apply it. Since we can’t stop the increasing power of A.I., I want us to direct its future, putting it to the best possible use for humans.

++++++++++
more on AI in this IMS blog
http://blog.stcloudstate.edu/ims?s=artifical+intelligence

more on deep learning in this IMS blog
http://blog.stcloudstate.edu/ims?s=deep+learning

eLearning Trends To Treat With Caution

4 eLearning Trends To Treat With Caution

https://elearningindustry.com/instructional-design-models-and-theories

Jumping onboard to a new industry trend with insufficient planning can result in your initiative failing to achieve its objective and, in the worst case, even hinder the learning process. So which hot topics should you treat with care?

1. Virtual Reality, or VR

Ultimately, the key question to consider when adopting anything new is whether it will help you achieve the desired outcome. VR shouldn’t be incorporated into learning just because it’s a common buzzword. Before you decide to give it a go, consider how it’s going to help your learner, and whether it’s truly the most effective or efficient way to meet the learning goal.

2. Gamification

If you are considering introducing an interactive element to your learning, don’t let this deter you; just ensure that it’s relevant to the content and will aid the learning process.

3. Artificial Intelligence, or AI

If you are confident that a trend is going to yield better results for your learners, the ROI you see may well justify the upfront resources it requires.
Again, it all comes down to whether a trend is going to deliver in terms of achieving an objective.

4. Microlearning

The theory behind microlearning makes a lot of sense: organizing content into sections so that learning fits easily into modern-day attention spans and learners’ busy lifestyles is not a bad thing. The worry is that the buzzword “microlearning” has grown legs of its own, meaning the industry is losing sight of its founding principles.

+++++++++
more on elearning in this IMS blog
http://blog.stcloudstate.edu/ims?s=elearning

Does AI favor tyranny

Why Technology Favors Tyranny

Artificial intelligence could erase many practical advantages of democracy, and erode the ideals of liberty and equality. It will further concentrate power among a small elite if we don’t take steps to stop it.

https://www.theatlantic.com/magazine/archive/2018/10/yuval-noah-harari-technology-tyranny/568330/

YUVAL NOAH HARARI  OCTOBER 2018 ISSUE

Ordinary people may not understand artificial intelligence and biotechnology in any detail, but they can sense that the future is passing them by. In 1938 the common man’s condition in the Soviet Union, Germany, or the United States may have been grim, but he was constantly told that he was the most important thing in the world, and that he was the future (provided, of course, that he was an “ordinary man,” rather than, say, a Jew or a woman).

In 2018 the common person feels increasingly irrelevant. Lots of mysterious terms are bandied about excitedly in TED Talks, at government think tanks, and at high-tech conferences: globalization, blockchain, genetic engineering, AI, machine learning. Common people, both men and women, may well suspect that none of these terms is about them.

Fears of machines pushing people out of the job market are, of course, nothing new, and in the past such fears proved to be unfounded. But artificial intelligence is different from the old machines. In the past, machines competed with humans mainly in manual skills. Now they are beginning to compete with us in cognitive skills.

Israel is a leader in the field of surveillance technology, and has created in the occupied West Bank a working prototype for a total-surveillance regime. Already today whenever Palestinians make a phone call, post something on Facebook, or travel from one city to another, they are likely to be monitored by Israeli microphones, cameras, drones, or spy software. Algorithms analyze the gathered data, helping the Israeli security forces pinpoint and neutralize what they consider to be potential threats.

The conflict between democracy and dictatorship is actually a conflict between two different data-processing systems. AI may swing the advantage toward the latter.

As we rely more on Google for answers, our ability to locate information independently diminishes. Already today, “truth” is defined by the top results of a Google search. This process has likewise affected our physical abilities, such as navigating space.

So what should we do?

For starters, we need to place a much higher priority on understanding how the human mind works—particularly how our own wisdom and compassion can be cultivated.

+++++++++++++++
more on SCSU student philosophy club in this IMS blog
http://blog.stcloudstate.edu/ims?s=philosophy+student+club
