The third Library 2.019 mini-conference, “Emerging Technology,” will be held online (and for free) on Wednesday, October 30th, from 12:00 – 3:00 pm US-Pacific Daylight Time (click for your own time zone).
Tomorrow’s technologies are shaping our world today, revolutionizing the way we live and learn. Virtual Reality, Augmented Reality, Artificial Intelligence, Machine Learning, Blockchain, Internet of Things, Drones, Personalization, the Quantified Self. Libraries can and should be the epicenter of exploring, building, and promoting these emerging technologies, ensuring that the better futures and opportunities they offer are accessible to everyone. Learn what libraries are doing right now with these cutting-edge technologies, what they’re planning next, and how you can implement these ideas in your own organization.
This is a free event, being held live online and also recorded. REGISTER HERE
Researchers at the Fraunhofer Institute for Microelectronic Circuits and Systems IMS have developed AIfES, an artificial intelligence (AI) concept for microcontrollers and sensors that contains a completely configurable artificial neural network. AIfES is a platform-independent machine learning library which can be used to realize self-learning microelectronics requiring no connection to a cloud or to high-performance computers. The sensor-related AI system recognizes handwriting and gestures, enabling, for example, gesture-based input when the library is running on a wearable.
AIfES is a machine learning library programmed in C that can run on microcontrollers, but also on other platforms such as PCs, the Raspberry Pi, and Android.
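AIfES itself is C code, but the core idea it implements, a small, fully configurable feedforward network that can run inference entirely on-device with no cloud connection, can be sketched in a few lines of plain Python. The layer sizes and the hand-picked XOR weights below are illustrative assumptions, not AIfES’s actual API:

```python
def relu(x):
    return max(0.0, x)

def dense(inputs, weights, biases, act=relu):
    """One fully connected layer: act(W.x + b) for each neuron."""
    return [act(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Hand-picked weights that make the tiny network compute XOR --
# a stand-in for weights that a training procedure would normally produce.
HIDDEN_W, HIDDEN_B = [[1.0, 1.0], [1.0, 1.0]], [0.0, -1.0]
OUT_W, OUT_B = [[1.0, -2.0]], [0.0]

def predict(x):
    hidden = dense(x, HIDDEN_W, HIDDEN_B)          # hidden layer, ReLU
    return dense(hidden, OUT_W, OUT_B, act=lambda v: v)[0]  # linear output

for a in (0.0, 1.0):
    for b in (0.0, 1.0):
        print((a, b), predict([a, b]))
```

Nothing here needs an operating system, a network stack, or floating-point-heavy libraries, which is why the same structure fits on a microcontroller.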
Because of technological advances and the sheer amount of data now available about billions of other people, discretion no longer suffices to protect your privacy. Computer algorithms and network analyses can now infer, with a sufficiently high degree of accuracy, a wide range of things about you that you may have never disclosed, including your moods, your political beliefs, your sexual orientation and your health.
There is no longer such a thing as individually “opting out” of our privacy-compromised world.
In 2017, the newspaper The Australian published an article, based on a leaked document from Facebook, revealing that the company had told advertisers that it could predict when younger users, including teenagers, were feeling “insecure,” “worthless” or otherwise in need of a “confidence boost.” Facebook was apparently able to draw these inferences by monitoring photos, posts and other social media data.
In 2017, academic researchers, armed with data from more than 40,000 Instagram photos, used machine-learning tools to accurately identify signs of depression in a group of 166 Instagram users. Their computer models turned out to be better predictors of depression than humans who were asked to rate whether photos were happy or sad and so forth.
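The kind of model those researchers describe can be approximated in miniature. The sketch below is a hand-rolled logistic-regression scorer over photo color statistics; the published study did use hue, saturation, and brightness among its features, but the weights here are invented for illustration and not fitted to any data:

```python
import math

# Invented weights: darker, grayer photos push the score up.
# These are NOT the study's fitted parameters, just a shape-of-the-idea demo.
WEIGHTS = {"brightness": -3.0, "saturation": -2.0}
BIAS = 2.0

def depression_score(features):
    """Logistic-regression-style probability from photo color features (each 0..1)."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

bright_photo = {"brightness": 0.9, "saturation": 0.8}
dark_photo = {"brightness": 0.2, "saturation": 0.1}
print(depression_score(dark_photo) > depression_score(bright_photo))
```

The point of the example is only that simple, automatically extracted features can be combined into a predictor, which is why such models can outperform human raters who see only “happy or sad.”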
Computational inference can also be a tool of social control. The Chinese government, having gathered biometric data on its citizens, is trying to use big data and artificial intelligence to single out “threats” to Communist rule, including the country’s Uighurs, a mostly Muslim ethnic group.
Zeynep Tufekci and Seth Stephens-Davidowitz: Privacy is over
Date: Wednesday, April 3rd Time: 3:30 PM to 4:15 PM Conference Session: Concurrent Session 3 Streamed session Lead Presenter: Brian Kane (General Design LLC) Track: Research: Designs, Methods, and Findings Location: Juniper A Session Duration: 45min
Brief Abstract: What happens when you apply design thinking to AI? AI presents a fundamental change in the way people interact with machines. By applying design thinking to the way AI is made and used, we can generate an unlimited amount of new ideas for products and experiences that people will love and use.
https://onlinelearningconsortium.org/olc-innovate-2019-session-page/?session=6964&kwds=
Notes from the session:
design thinking: get out of old mental models; build new narratives; get out of the sci-fi movies.
we need machines to be allowed to make mistakes, AI even more than traditional software.
Lessons learned: don’t replace people.
Date: Thursday, April 4th Time: 8:45 AM to 9:30 AM Conference Session: Concurrent Session 4 Streamed session Lead Presenter: Matt Crosslin (University of Texas at Arlington LINK Research Lab) Track: Experiential and Life-Long Learning Location: Cottonwood 4-5 Session Duration: 45min
Brief Abstract: How can teachers utilize chatbots and artificial intelligence in ways that won’t remove humans from the education picture? Using tools like Twine and Recast.AI chatbots, this session will focus on how to build adaptive content that allows learners to create their own heutagogical educational pathways based on individual needs.
Date: Thursday, April 4th Time: 9:45 AM to 10:30 AM Conference Session: Concurrent Session 5 Streamed session Lead Presenter: Maikel Alendy (FIU Online) Co-presenter: Sky V. King (FIU Online – Florida International University) Track: Teaching and Learning Practice Location: Cottonwood 4-5 Session Duration: 45min
Brief Abstract: “This is Us” demonstrates how leveraging storytelling in learning engages students to effectively communicate their authentic story, transitioning from consumerism to become creators and influencers. Responsibility as a digital citizen, information and digital literacy, online privacy, and strategies with examples using several edtech tools will be reviewed.
Date: Thursday, April 4th Time: 11:15 AM to 12:00 PM Conference Session: Concurrent Session 6 Streamed session Lead Presenter: Kristin Bushong (Arizona State University) Co-presenter: Heather Nebrich (Arizona State University) Track: Effective Tools, Toys and Technologies Location: Juniper C Session Duration: 45min
Brief Abstract: Considering today’s overstimulated lifestyle, how do we engage busy learners to stay on task? Join this session to discover current efforts in implementing ubiquitous educational opportunities through customized interests and personalized learning aspirations, e.g., adaptive math tools, AI support communities, and memory management systems.
Date: Thursday, April 4th Time: 1:15 PM to 2:00 PM Conference Session: Concurrent Session 7 Streamed session Lead Presenter: Katie Linder (Oregon State University) Co-presenter: June Griffin (University of Nebraska-Lincoln) Track: Teaching and Learning Practice Location: Cottonwood 4-5 Session Duration: 45min
Brief Abstract: The concept of High-impact Educational Practices (HIPs) is well-known, but the conversation about transitioning HIPs online is new. In this session, contributors from the edited collection High-Impact Practices in Online Education will share current HIP research, and offer ideas for participants to reflect on regarding implementing HIPs into online environments.
https://www.aacu.org/leap/hips
https://www.aacu.org/sites/default/files/files/LEAP/HIP_tables.pdf
Date: Thursday, April 4th Time: 3:45 PM to 5:00 PM Streamed session Lead Presenter: Manoush Zomorodi (Stable Genius Productions) Track: N/A Location: Adams Ballroom Session Duration: 1hr 15min
Brief Abstract: How can we ensure that students and educators thrive in increasingly digital environments, where change is the only constant? In this keynote, author and journalist Manoush Zomorodi shares her pioneering approach to researching the effects of technology on our behavior. Her unique brand of journalism includes deep-dive investigations into such timely topics as personal privacy, information overload, and the Attention Economy. These interactive multi-media experiments with tens of thousands of podcast listeners will inspire you to think creatively about how we use technology to educate and grow communities.
Friday
Date: Friday, April 5th Time: 8:30 AM to 9:30 AM Streamed session Lead Presenter: Michael Caulfield (Washington State University-Vancouver) Track: N/A Location: Adams Ballroom Position: 2 Session Duration: 60min
Brief Abstract: Years ago, John Lydon (then Johnny Rotten) sang that “anger is an energy.” And he was right, of course. Anger isn’t an emotion, like happiness or sadness. It’s a reaction, a swelling up of a confused urge. I’m a person profoundly uncomfortable with anger, but yet I’ve found in my professional career that often my most impactful work begins in a place of anger: anger against injustice, inequality, lies, or corruption. And often it is that anger that gives me the energy and endurance to make a difference, to move the mountains that need to be moved. In this talk I want to think through our uneasy relationship with anger; how it can be helpful, and how it can destroy us if we’re not careful.
Date: Friday, April 5th Time: 10:45 AM to 11:30 AM Conference Session: Concurrent Session 10 Streamed session Lead Presenter: Laurie Daily (Augustana University) Co-presenter: Sharon Gray (Augustana University) Track: Problems, Processes, and Practices Location: Juniper A Session Duration: 45min
Brief Abstract: The purpose of this session is to explore the implementation of a Community of Practice to support professional development, enhance online course and program development efforts, and foster community and engagement among full- and part-time faculty.
Date: Friday, April 5th Time: 11:45 AM to 12:30 PM Conference Session: Concurrent Session 11 Streamed session Lead Presenter: Katrina Rainer (Strayer University) Co-presenter: Jennifer M McVay-Dyche (Strayer University) Track: Teaching and Learning Practice Location: Cottonwood 2-3 Session Duration: 45min
Brief Abstract: Learning is more effective and organic when we teach through the art of storytelling. At Strayer University, we are blending the principles of story-driven learning with research-based instructional design practices to create engaging learning experiences. This session will provide you with strategies to strategically infuse stories into any lesson, course, or curriculum.
p. 4 But all that “disruption,” as people love to call it, overlooks the thing that’s the most disruptive of them all: the way we relate to each other will never be the same. That’s because of something called presence.
Presence is the absolute foundation of virtual reality, and in VR, it’s the absolute foundation of connection: connection with yourself, with an idea, with another human, even connection with artificial intelligence.
p. 28 VR definition
Virtual reality is (1) an artificial environment that’s (2) immersive enough to convince you that (3) you are actually inside it.
1. “Artificial environment” could mean just about anything. A photograph is an artificial environment; a video game is an artificial environment; a Pixar movie is an artificial environment. The only thing that matters is that it’s not where you physically are.
p. 44 VR: putting the “it” in “meditation.” My note: it seems Rubin sees 21st-century VR as the equivalent of the drug experimentation of the 1960s US. p. 46 “VR is potentially going to become a direct interface to the subconscious”
p. 74 serious games, Carrie Heeter. p. 49
The default network in the brain in today’s society is the wandering mind. We are ruminating about the past, and we are worrying about the future, or maybe even planning for the future; there is some productive thinking. But in general, a wandering mind is an unhappy mind. And that is where we spend all of our waking time: not being aware of everything that we are experiencing in the moment.
Heeter’s own meditation practice had already led her to design apps and studies that investigated meditation’s ability to calm that wandering mind.
p. 51 Something called interoception. It is a term that has been gaining ground in psychology circles in recent years and basically means awareness of bodily sensations, like my noticing the fact that I was sitting awkwardly or that keeping my elbows on the chair’s armrests was making my shoulders hunch slightly. Not surprisingly, mindfulness meditation seems to heighten interoception. And that is exactly how Heeter and Allbritton structured the meditation I am doing on Costa del Sole. First, I connect with the environment; then with my body; then I combine the two. The combination of VR and interoception leads to what she describes as “embodied presence”: not only do you feel like you are in the VR environment, but because you have consciously worked to integrate your bodily sensations into VR, it is a fuller, more vivid version of presence.
p. 52 guided meditation VR GMVR
p. 56 VVVR visual voice virtual reality
Just as the ill-fated Google Glass immediately stigmatized all its wearers as “glassholes” (a.k.a. “techier-than-thou douche bags who dropped $1,500 to see an email notification appear in front of their face”), so too do some VR headsets still look like face TVs to outsiders.
p. 61 Hedgehog Love
engineering feelings with social presence. p. 64 Remember presence? This is the beginning of social presence. Mindfulness is cool, but making eye contact with Henry is the first step into the future.
p. 65 Back in 1992, our friend Carrie Heeter posited that presence (the sensation that you are really there in VR) had three dimensions. There was personal presence, environmental presence, and social presence, which she basically defined as being around other people who register your existence.
p. 66 the idea that emotion can be not a cause, as we’ve so often assumed, but a result of behavior
p. 72 In chapter 1, we explained the difference between mobile VR and PC-driven VR. The former is cheaper and easier; all you do is drop your smartphone into a headset, and it provides just about everything you need. Dedicated VR headsets rely on the stronger processors of desktop PCs and game consoles, so they can provide a more robust sense of presence, usually at the cost of being tethered to your computer with cables. (Also at the cost of actual money: dedicated headset systems run hundreds of dollars, while mobile headsets like Samsung’s Gear VR or Google’s Daydream View can be had for mere tens of dollars.) There is one other fundamental distinction between mobile VR and high-end VR, though, and that is what you do with your hands: how you input your desires. When VR reemerged in the early 2010s, however, the question of input was open to debate. Actually, more than one debate. p. 73 Video game controllers are basically metaphors. Some, like steering wheels or pilot flight sticks, might look like the thing they’re supposed to be, but at their essence they are all just collections of buttons. p. 77 HTC sells small wearable trackers that you can affix to any object, or any body part, to bring it into the Vive’s VR.
p. 78 wait a second – you were talking about storytelling.
p. 79 Every Hollywood studio you can imagine (21st Century Fox, Paramount, Warner Bros.) has already invested in virtual reality. They have made VR experiences based on their own movies, like Interstellar or Ghost in the Shell, and they have invested in other VR companies. Hollywood directors like Doug Liman (Edge of Tomorrow) and Robert Stromberg (Maleficent) have taken on VR projects. And the progress is exhilarating. Alejandro González Iñárritu, a four-time Oscar winner whose 2014 movie Birdman won Best Picture, received a special achievement Academy Award in 2017 for a VR short he made. Yet Carne y Arena, which puts viewers inside a harrowing journey from Mexico to the United States, is nothing like a movie, or even a video game.
When it premiered at the Cannes Film Festival in early 2017, it was housed in an airplane hangar; viewers were ushered, barefoot, into a room with a sand-covered floor, where they could watch and interact with other people trying to make it over the border. Arrests, detention centers, dehydration: the extremity of the human condition happening all around you. In its announcement, the Academy of Motion Picture Arts and Sciences called the piece “deeply emotional and physically immersive.”
p. 83 empathy versus intimacy. Why good stories need someone else
p. 85 empathy vs intimacy: appreciation vs emotion
Both of these words are fuzzy, to say the least. Both have decades of study behind them, but both have also appeared on more magazine covers than just about any other words, with the possible exception of “abs.”
Empathy: the ability to identify with and understand others, particularly on an emotional level. It involves imagining yourself in the place of another and, therefore, appreciating how they feel.
Intimacy: a complex sphere of ‘inmost’ relationships with self and others that are not usually minor or incidental (though they may be transitory) and which usually touch the personal world very deeply. They are our closest relationships with friends, family, children, lovers, but they are also the deep and important experiences we have with self.
Empathy necessarily needs to involve other people; intimacy doesn’t. Empathy involves emotional understanding; intimacy involves emotion itself. Empathy, at its base, is an act of getting outside yourself: you’re projecting yourself into someone else’s experience, which means that in some ways you are leaving your own experience behind, other than as a reference point. Intimacy, on the other hand, is at its base an act of feeling: you might be connecting with someone or something else, but you are doing so on the basis of the emotions you feel. p. 86 One type of VR experience perfectly illustrates the surprising gap between empathy and intimacy: live-action VR. p. 87 Unlike CGI-based storytelling, which falls somewhere in between game and movie, live-action VR feels much more like the conventional video forms that we are used to from television and movies. Like those media, people have been using it to shoot everything from narrative fiction to documentary to sports.
p. 92 Every single story has only one goal at its base: to make you care. This holds true whether it is a tale told around a campfire at night, one related through a sequence of panels in a comic book, or the dialogue-heavy narrative of a television show. The story might be trying to make you laugh, or to scare you, or to make you feel sad or happy on behalf of one of the characters, but those are all just forms of caring, right? Your emotional investment, the fact that what happens in this tale matters to you, is the fundamental aim of the storyteller.
Storytelling, then, has evolved to find ways to draw you out of yourself, to make you forget that what you are hearing or seeing or reading isn’t real. It’s only at that point, after all, that our natural capacity for empathy can kick in. p. 93 Meanwhile, technology continues to evolve to detach us from those stories. For one, the frame itself continues to get smaller. Stranger still, this distraction has happened while stories continue to become more and more complex. Narratively, at least, stories are more intricate than they have ever been. p. 94 Now, with VR storytelling, the distracting power of multiple screens has met its match.
p. 101 experiencing our lives- together
What video still cannot do, though, is bring people together inside VR, the way McClure’s sing-multicolored-blobs-at-each-other tag-team project VVVR does. That’s why even VR filmmaking powerhouses like Within (https://www.with.in/get-the-app) are moving beyond mere documentary and narrative and trying to turn storytelling into a shared experience.
Make no mistake: storytelling has always been a shared experience. Being conscripted into the story, or even being the story.
p. 103 Like so many VR experiences, Life of Us defies many of the ways we describe a story to each other. For one, it feels at once shorter and longer than its actual seven-minute runtime; although it seems to be over in a flash, that flash contains so many details that in retrospect it is as full and vivid as a two-hour movie.
There is another thing, though, that sets Life of Us apart from so many other stories: not only was I in the story, but someone else was in there with me. And that someone wasn’t a filmed character talking to a camera, or a video game creature that was programmed to look in ‘my’ direction, but a real person, a person who saw what I saw, a person who was present for each of those moments and who now is inextricably part of my own, shared memory of them.
p. 107 what to do and what to do it with . How social VR is reinventing everything from game night to online harassment.
p. 110 VR isn’t Romo’s first bet on the future. When he was finishing up his master’s degree in mechanical engineering, a professor emailed him on behalf of two men who were recruiting for a rocket company they were starting. One of those men was Elon Musk, which is how Romo became the 13th employee at SpaceX. Eventually, he started a company focused on solar energy, but when the bottom fell out of the industry, he shut down the company and looked for his next opportunity. Romo spent the next year and a half researching the technology and thinking about what kind of company might make sense in the new VR-enabled world. He had read Snow Crash, but he also knew that our hopes for the VR future could very well end up like the famed flying car: defined, and limited, by an expectation that might not match perfectly with what we actually want.
p. 116 Back in the day, trolling just referred to pursuing a provocative argument for kicks. Today, the word is used to describe the actions of anonymous mobs like the one that, for instance, drove actor Leslie Jones off Twitter with an onslaught of racist and sexist abuse. Harassment has become one of the defining characteristics of the internet as we use it today. But with the emergence of VR, our social networks have become, quite literally, embodied.
p. 142 Increasing memory function by moving from being a voyeur to physically participating in the virtual activity. Embodied presence (bringing not just your head and your hands, but your body into VR) strengthens memories in a number of ways.
p. 143 At the beginning of 2017, Facebook quietly published some of its internal research about the potential of social VR. Neurons Inc., the agency it worked with, measured eye movements, brain activity, and pulse in volunteers who were watching streaming video on smartphones, and ultimately discovered that buffering and lag were significantly more stressful than waiting in line at a store, and even slightly more stressful than watching a horror movie.
p. 145 After the VR experience, more than 80% of introverts (as identified by a short survey participants took beforehand) wanted to become friends with the person they had chatted with, as opposed to less than 60% of extroverts.
p. 149 Rec Room Confidential: the anatomy and evolution of VR friendships
p. 165 reach out and touch someone; haptics, tactile presence and making VR physical.
p. 171 Zhao laid out two different criteria. The first was whether or not two people are actually in the same place: basically, are they or their stand-ins physically close enough to be able to communicate without any other tools? Two people, she wrote, can have either “physical proximity” or “electronic proximity,” the latter being some sort of networked connection. The second criterion was whether each person is corporeally there; in other words, is it their actual flesh-and-blood body? The second condition can have three outcomes: both people can be there corporeally; neither can be there corporeally, instead using some sort of stand-in like an avatar or a robot; or just one of them can be there corporeally, with the other using such a stand-in.
“Virtual copresence” is when a flesh-and-blood person interacts physically with a representative of a human; if that sounds confusing, a good example is using an ATM, where the ATM is a stand-in for a bank teller.
p. 172 “Hypervirtual copresence” involves nonhuman devices that are interacting in the same physical space in a humanlike fashion. Social VR does not quite fit into any of these categories. Zhao refers to this sort of hybrid as a “synthetic environment” and claims that it is a combination of corporeal telecopresence (like Skyping) and virtual telecopresence (like Waze directions).
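Zhao’s two criteria, proximity and corporeality, imply a small grid of categories that can be captured in a few lines of Python. This is a sketch of the taxonomy as described above (the function name and encoding are mine, not Zhao’s):

```python
def zhao_copresence(proximity, corporeal_count):
    """Map Zhao's two criteria to a copresence category.

    proximity: "physical" or "electronic" (the two kinds of proximity above).
    corporeal_count: how many of the two parties are present as
    flesh-and-blood bodies (2, 1, or 0 -- the three outcomes above).
    """
    mode = {2: "corporeal", 1: "virtual", 0: "hypervirtual"}[corporeal_count]
    kind = "copresence" if proximity == "physical" else "telecopresence"
    return f"{mode} {kind}"

print(zhao_copresence("physical", 1))    # the ATM example
print(zhao_copresence("electronic", 2))  # Skyping
print(zhao_copresence("physical", 0))    # nonhuman devices sharing a space
```

Laying the grid out this way also makes Rubin’s point visible: social VR, where distant people share one synthetic space through avatars, doesn’t land cleanly in any single cell.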
p. 172 haptic tactics for tactile aptness
Of the five human senses, a VR headset can currently stimulate only two: vision and hearing. That leaves three others; and while smell and taste may come some day, touch is the nearer frontier.
p. 174 Aldous Huxley’s Brave New World and its tactile “feelies” (https://en.wikipedia.org/wiki/Aldous_Huxley)
p. 195 XXX-change program: turning porn back into people
p. 221 Where we’re going, we don’t need headsets. Let’s get speculative.
p. 225 Magic Leap. p. 227 Magic Leap calls its technology “mixed reality,” claiming that the three-dimensional virtual objects it brings into your world are far more advanced than the flat, static overlays of augmented reality. In reality, there is no longer any distinction between the two; in fact, there are by now so many terms being used in various ways by various companies that it’s probably worth a quick clarification.
Virtual reality: the illusion of an all-enveloping artificial world, created by wearing an opaque display in front of your eyes.
Augmented reality: bringing artificial objects into the real world. These can be as simple as a “heads-up display,” like a speedometer projected onto your car’s windshield, or as complex as a seemingly real virtual creature walking across your real-world living room, casting a realistic shadow on the floor.
Mixed reality: generally speaking, this is synonymous with AR, or at least with the part of AR that brings virtual objects into the real world. However, some people prefer “mixed” because they think “augmented” implies that reality isn’t enough.
Extended or synthetic reality (XR or SR): all of the above! These are both catchall terms that encompass the full spectrum of virtual elements in visual settings.
p. 231 In ten years, we won’t even have smartphones anymore.
p. 229 If VR is a still-wobbly toddler, though, AR/MR is a third-trimester fetus: it may be fully formed, but it is not quite ready to be out in the world yet. The headsets are large, the equipment is far more expensive than VR, and in many cases we don’t even know what a consumer product looks like.
p. 235 when 2020 is hindsight: what life in 2028 might actually look like.
Many educational institutions maintain their own data centers. “We need to minimize the amount of work we do to keep systems up and running, and spend more energy innovating on things that matter to people.”
What’s the difference between machine learning (ML) and artificial intelligence (AI)?
Jeff Olson: That’s actually the setup for a joke going around the data science community. The punchline? If it’s written in Python or R, it’s machine learning. If it’s written in PowerPoint, it’s AI.
Machine learning is in practical use in a lot of places, whereas AI conjures up all these fantastic thoughts in people.
What is serverless architecture, and why are you excited about it?
Instead of having a machine running all the time, you just run the code necessary to do what you want—there is no persisting server or container. There is only this fleeting moment when the code is being executed. It’s called Function as a Service, and AWS pioneered it with a service called AWS Lambda. It allows an organization to scale up without planning ahead.
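The FaaS model described above can be illustrated with a minimal AWS Lambda-style handler in Python. The `lambda_handler(event, context)` signature is Lambda’s standard Python convention; the event payload field and the greeting logic are made-up placeholders:

```python
import json

def lambda_handler(event, context):
    """Entry point that Lambda invokes; no server process persists between calls.

    `event` carries the request payload; `context` carries runtime metadata.
    """
    name = event.get("name", "world")  # hypothetical payload field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Locally, the handler is just a function and can be called directly:
print(lambda_handler({"name": "OLC"}, None))
```

Because the code exists only for the duration of one invocation, the platform can run a thousand copies in parallel during a traffic spike and zero copies when idle, which is exactly the scale-without-planning property Olson describes.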
How do you think machine learning and Function as a Service will impact higher education in general?
The radical nature of this innovation will make a lot of systems that were built five or 10 years ago obsolete. Once an organization comes to grips with Function as a Service (FaaS) as a concept, it’s a pretty simple step for that institution to stop doing its own plumbing. FaaS will help accelerate innovation in education because of the API economy.
If the campus IT department will no longer be taking care of the plumbing, what will its role be?
I think IT will be curating the inter-operation of services, some developed locally but most purchased from the API economy.
As a result, you write far less code and have fewer security risks, so you can innovate faster. A succinct machine-learning algorithm with fewer than 500 lines of code can now replace an application that might have required millions of lines of code. It also scales: if you happen to have a gigantic spike in traffic, it deals with it effortlessly, and if you have very little traffic, you incur a negligible cost.
We can build robot teachers, or even robot teaching assistants. But should we?
the Chinese government has declared a national goal of surpassing the U.S. in AI technology by the year 2030, so there is almost a Sputnik-like push for the tech going on right now in China. At the same time, China is also facing a shortage of qualified teachers in many rural areas, and there’s a huge demand for high-quality language teachers and tutors throughout the country.
President Donald Trump on Monday directed federal agencies to improve the nation’s artificial intelligence abilities — and help people whose jobs are displaced by the automation it enables.
It’s good for the US government to focus on AI, said Daniel Castro, chief executive of the Center for Data Innovation, a technology-focused think tank that supports the initiative.
Silicon Valley has been investing heavily in AI in recent years, but the path hasn’t always been an easy one. In October, for instance, Google withdrew from competition for a $10 billion Pentagon cloud computing contract, saying it might conflict with its principles for ethical use of AI.