p. 4 But all that “disruption,” as people love to call it, overlooks the thing that’s most disruptive of them all: the way we relate to each other will never be the same. That’s because of something called presence.
Presence is the absolute foundation of virtual reality, and in VR, it’s the absolute foundation of connection: connection with yourself, with an idea, with another human, even connection with artificial intelligence.
p. 28 VR definition
Virtual reality is (1) an artificial environment that’s (2) immersive enough to convince you that (3) you are actually inside it.
1. “Artificial environment” could mean just about anything. A photograph is an artificial environment; a video game is an artificial environment; a Pixar movie is an artificial environment. The only thing that matters is that it’s not where you physically are.
p. 44 VR: putting the “it” in “meditation.” My note: it seems Rubin sees 21st-century VR as the equivalent of the drug experimentation of the 1960s US: p. 46 “VR is potentially going to become a direct interface to the subconscious”
p. 74 serious games, Carrie Heeter. p. 49
The default network of the brain in today’s society is the wandering mind. We are ruminating about the past, and we are worrying about the future, or maybe even planning for the future; there is some productive thinking. But in general, a wandering mind is an unhappy mind. And that is where we spend almost all of our waking time: not being aware of everything that we are experiencing in the moment.
Heeter’s own meditation practice had already led her to design apps and studies that investigated meditation’s ability to calm that wandering mind.
p. 51 Something called interoception. It is a term that has been gaining ground in psychology circles in recent years and basically means awareness of bodily sensations, like my noticing the fact that I was sitting awkwardly, or that keeping my elbows on the chair’s armrests was making my shoulders hunch slightly. Not surprisingly, mindfulness meditation seems to heighten interoception. And that is exactly how Heeter and Allbritton structure the meditation I am doing on Costa del Sol. First, I connect with the environment; then with my body; then I combine the two. The combination of VR and interoception leads to what she describes as “embodied presence”: not only do you feel like you are in the VR environment, but because you have consciously worked to integrate your bodily sensations into VR, it is a fuller, more vivid version of presence.
p. 52 guided meditation VR (GMVR)
p. 56 VVVR: visual voice virtual reality
Just as the ill-fated Google Glass immediately stigmatized all its wearers as “Glassholes” (a.k.a. “techier-than-thou douchebags who dropped $1,500 to see an email notification appear in front of their face”), so too do some VR headsets still look like face TVs to everyone else.
p. 61 Hedgehog Love
engineering feelings with social presence. p. 64 Remember presence? This is the beginning of social presence. Mindfulness is cool, but making eye contact with Henry is the first step into the future.
p. 65 Back in 1992, our friend Carrie Heeter posited that presence, the sensation that you are really there in VR, had three dimensions. There was personal presence, environmental presence, and social presence, which she basically defined as being around other people who register your existence.
p. 66 the idea that emotion can be not a cause, as we so often assume, but a result of behavior
p. 72 In chapter 1, we explained the difference between mobile VR and PC-driven VR. The former is cheaper and easier; all you do is drop your smartphone into a headset, and it provides just about everything you need. Dedicated VR headsets rely on the stronger processors of desktop PCs and game consoles, so they can provide a more robust sense of presence, usually at the cost of being tethered to your computer with cables. (There is also the cost of actual money: dedicated headset systems run hundreds of dollars, while mobile headsets like Samsung’s Gear VR or Google’s Daydream View can be had for mere tens of dollars.) There is one other fundamental distinction between mobile VR and high-end VR, though, and that is what you do with your hands: how you input your desires. When VR reemerged in the early 2010s, the question of input was open to debate. Actually, more than one debate. p. 73 Video game controllers are basically metaphors. Some, like steering wheels or pilot flight sticks, might look like the thing they’re supposed to be, but at their essence they are all just collections of buttons. p. 77 HTC sells small wearable trackers that you can affix to any object, or any body part, to bring it into the Vive’s VR.
p. 78 wait a second – you were talking about storytelling.
p. 79 Every Hollywood studio you can imagine (21st Century Fox, Paramount, Warner Bros.) has already invested in virtual reality. They have made VR experiences based on their own movies, like Interstellar or Ghost in the Shell, and they have invested in other VR companies. Hollywood directors like Doug Liman (Edge of Tomorrow) and Robert Stromberg (Maleficent) have taken on VR projects. And the progress is exhilarating. Alejandro González Iñárritu, a four-time Oscar winner whose 2014 movie Birdman won best picture and best director, received a special achievement Academy Award in 2017 for a VR short he made. Yet Carne y Arena, which puts viewers inside a harrowing journey from Mexico to the United States, is nothing like a movie, or even a video game.
When it premiered at the Cannes Film Festival in early 2017, it was housed in an airplane hangar; viewers were ushered, barefoot, into a room with a sand-covered floor, where they could watch and interact with other people trying to make it over the border. Arrests, detention centers, dehydration: the extremity of the human condition happening all around you. In its announcement, the Academy of Motion Picture Arts and Sciences called the piece “deeply emotional and physically immersive.”
p. 83 empathy versus intimacy. Why good stories need someone else
p. 85 empathy vs intimacy: appreciation vs emotion
Both of these words are fuzzy, to say the least. Both have decades of study behind them, but both have also appeared on more magazine covers than just about any other words, except possibly “abs.”
Empathy: the ability to identify with and understand others, particularly on an emotional level. It involves imagining yourself in the place of another and, therefore, appreciating how they feel.
Intimacy: a complex sphere of ‘inmost’ relationships with self and others that are not usually minor or incidental (though they may be transitory) and which usually touch the personal world very deeply. They are our closest relationships with friends, family, children, lovers, but they are also the deep and important experiences we have with the self.
Empathy necessarily needs to involve other people; intimacy doesn’t. Empathy involves emotional understanding; intimacy involves emotion itself. Empathy, at its base, is an act of getting outside yourself: you’re projecting yourself into someone else’s experience, which means that in some ways you are leaving your own experience behind, other than as a reference point. Intimacy, on the other hand, is at its base an act of feeling: you might be connecting with someone or something else, but you are doing so on the basis of the emotions you feel. p. 86 One type of VR experience perfectly illustrates the surprising gap between empathy and intimacy: live-action VR. p. 87 Unlike CGI-based storytelling, which falls somewhere between game and movie, live-action VR feels much more like the conventional video forms we are used to from television and movies. Like those media, people have been using VR to shoot everything from narrative fiction to documentary to sports.
p. 92 Every single story has only one goal at its base: to make you care. This holds true whether it is a tale told around a campfire at night, one related through a sequence of panels in a comic book, or the dialogue-heavy narrative of a television show. The story might be trying to make you laugh, or to scare you, or to make you feel sad or happy on behalf of one of the characters, but those are all just forms of caring, right? Your emotional investment (the fact that what happens in this tale matters to you) is the fundamental aim of the storyteller.
Storytelling, then, has evolved to find ways to draw you out of yourself, to make you forget that what you are hearing or seeing or reading isn’t real. It’s only at that point, after all, that our natural capacity for empathy can kick in. p. 93 Meanwhile, technology continues to evolve to detach us from those stories. For one, the frame itself continues to get smaller. Stranger still, this distraction has happened while stories continue to become more and more complex. Narratively, at least, stories are more intricate than they have ever been. p. 94 Now, with VR storytelling, the distracting power of multiple screens has met its match.
p. 101 experiencing our lives – together
What video alone cannot do, though, is bring people together inside VR, the way Ray McClure’s slinging-multicolored-blobs-at-each-other tag-team project VVVR does. That’s why even VR filmmaking powerhouses like Within (https://www.with.in/get-the-app) are moving beyond mere documentary and narrative and trying to turn storytelling into a shared experience.
Make no mistake: storytelling has always been a shared experience. Being conscripted into the story, or even being the story.
p. 103 Like so many VR experiences, Life of Us defies many of the ways we describe a story to each other. For one, it feels at once shorter and longer than its actual seven-minute runtime; although it seems to be over in a flash, that flash contains so many details that in retrospect it is as full and vivid as a two-hour movie.
There is another thing, though, that sets Life of Us apart from so many other stories: the fact that not only was I in the story, but someone else was in there with me. And that someone wasn’t a filmed character talking to a camera, or a video game creature that was programmed to look in ‘my’ direction, but a real person, a person who saw what I saw, a person who was present for each of those moments and who is now inextricably part of my own, shard-like memory of them.
p. 107 what to do and what to do it with. How social VR is reinventing everything from game night to online harassment.
p. 110 VR isn’t Eric Romo’s first bet on the future. When he was finishing up his master’s degree in mechanical engineering, a professor emailed him on behalf of two men who were recruiting for a rocket company they were starting. One of those men was Elon Musk, which is how Romo became the 13th employee at SpaceX. Eventually, he started a company focused on solar energy, but when the bottom fell out of the industry, he shut down the company and looked for his next opportunity. Romo spent the next year and a half researching the technology and thinking about what kind of company might make sense in the new VR-enabled world. He had read Snow Crash, but he also knew our hopes for the VR future could very well end up like the fabled flying car: defined, and limited, by an expectation that might not match perfectly with what we actually want.
p. 116 Back in the day, trolling just referred to pursuing a provocative argument for kicks. Today, the word is used to describe the actions of anonymous mobs like the one that, for instance, drove actor Leslie Jones off Twitter with an onslaught of racist and sexist abuse. Harassment has become one of the defining characteristics of the Internet as we use it today. But with the emergence of VR, our social networks have become, quite literally, embodied.
p. 142 increasing memory function by moving from being a voyeur to physically participating in the virtual activity. Embodied presence (bringing not just your head and your hands, but your body into VR) strengthens memories in a number of ways.
p. 143 At the beginning of 2017, Facebook published some of its internal research about the potential of social VR. Neurons Inc., the agency it worked with, measured the eye movements, brain activity, and pulse rates of volunteers who were watching streaming video on smartphones, and ultimately discovered that buffering and lag were significantly more stressful than waiting in line at a store, and even slightly more stressful than watching a horror movie.
p. 145 After the VR experience, more than 80% of introverts (as identified by a short survey participants took beforehand) wanted to become friends with the person they had chatted with, as opposed to less than 60% of extroverts.
p. 149 Rec Room Confidential: the anatomy and evolution of VR friendships
p. 165 reach out and touch someone: haptics, tactile presence, and making VR physical.
p. 171 Zhao laid out two different criteria. The first was whether or not two people are actually in the same place: basically, are they or their stand-ins physically close enough to be able to communicate without any other tools? Two people, she wrote, can have either “physical proximity” or “electronic proximity,” the latter being some sort of networked connection. The second criterion was whether each person is corporeally there; in other words, is it their actual flesh-and-blood body? The second condition can have three outcomes: both people can be there corporeally; neither can be there corporeally, instead using some sort of stand-in like an avatar or a robot; or just one of them can be there corporeally, with the other using a stand-in.
“Virtual copresence” is when a flesh-and-blood person interacts physically with a representative of a human; if that sounds confusing, a good example is using an ATM, where the ATM is a stand-in for a bank teller.
p. 172 “hypervirtual copresence,” which involves nonhuman devices interacting in the same physical space in a humanlike fashion. Social VR does not quite fit into any of these categories. Zhao refers to this sort of hybrid as a “synthetic environment” and claims that it is a combination of corporeal telecopresence (like Skyping) and virtual telecopresence (like Waze directions).
p. 172 haptic tactics for tactile aptness
Of the five human senses, a VR headset can currently stimulate only two: vision and hearing. That leaves three others; and while smell and taste may come some day, touch is the focus of haptics.
p. 174 Aldous Huxley, Brave New World: the tactile “feelies” (https://en.wikipedia.org/wiki/Aldous_Huxley)
p. 195 XXX-change program: turning porn back into people
p. 221 where we’re going, we don’t need headsets. Let’s get speculative.
p. 225 Magic Leap. p. 227 Magic Leap calls its technology “mixed reality,” claiming that the three-dimensional virtual objects it brings into your world are far more advanced than the flat, static overlays of augmented reality. In reality, there is no longer any distinction between the two; in fact, there are by now so many terms being used in various ways by various companies that it’s probably worth a quick clarification.
Virtual reality: the illusion of an all-enveloping artificial world, created by wearing an opaque display in front of your eyes.
augmented reality: bringing artificial objects into the real world. These can be as simple as a “heads-up display,” like a speedometer projected onto your car’s windshield, or as complex as a seemingly real virtual creature walking across your real-world living room, casting a realistic shadow on the floor.
mixed reality: generally speaking, this is synonymous with AR, or at least with the part of AR that brings virtual objects into the real world. However, some people prefer “mixed” because they think “augmented” implies that reality isn’t enough.
extended or synthetic reality (XR or SR): all of the above! These are both catchall terms that encompass the full spectrum of virtual elements in visual settings.
p. 231 in ten years, we won’t even have smartphones anymore.
p. 229 If VR is a toddler, though, AR/MR is a third-trimester fetus: it may be fully formed, but it is not quite ready to be out in the world yet. The headsets are large, the equipment is far more expensive than VR, and in many cases we don’t even know what a consumer product looks like.
p. 235 when 2020 is hindsight: what life in 2028 might actually look like.
The type of data: Wikipedia. The dangers of learning from Wikipedia. How individuals and organizations can mitigate some of these dangers. Wikidata, algorithms.
IBM Watson is an AI system that uses Wikipedia, with algorithms making sense of it.
YouTube videos: debunking conspiracy theories by using Wikipedia.
semantic relatedness, Word2Vec
How the algorithm works: it is trained on a large body of unstructured text and picks up how specific words are used in context.
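As a concrete illustration of the mechanics behind semantic relatedness: Word2Vec-style models map each word to a dense vector learned from co-occurrence patterns in a large corpus, and relatedness is then measured as the cosine of the angle between two vectors. A minimal sketch; the four-dimensional vectors below are made up for illustration (real embeddings are trained and have hundreds of dimensions):

```python
import math

# Hypothetical toy embeddings standing in for trained Word2Vec vectors.
vectors = {
    "king":  [0.9, 0.8, 0.1, 0.3],
    "queen": [0.9, 0.7, 0.2, 0.9],
    "apple": [0.1, 0.2, 0.9, 0.4],
}

def cosine_similarity(u, v):
    """Semantic relatedness score in [-1, 1]: higher means more related."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

king_queen = cosine_similarity(vectors["king"], vectors["queen"])
king_apple = cosine_similarity(vectors["king"], vectors["apple"])
# Related words end up closer together in the vector space than unrelated ones.
```

With real trained embeddings, the same comparison is what lets an AI system rank “king” as more related to “queen” than to “apple.”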
Lots of AI learns about the world from Wikipedia. The neutral point of view policy: Wikipedia asks editors to present views as proportionally as possible. Wikipedia biases: 1. gender bias (only 20–30% are women).
ConceptNet. Debiasing along different demographic dimensions.
Citation analysis also gives an idea about biases: the localness of sources cited in spatial articles; structural biases.
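The “localness” of a cited source can be made measurable, for example as the great-circle distance between the coordinates of an article’s subject and those of the source’s publisher. A minimal sketch, assuming coordinates are already known; the geocoding step and the 500 km threshold are illustrative assumptions, not part of any published method:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius, km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical example: an article about Nairobi citing a London-based source.
article = (-1.29, 36.82)   # Nairobi
source = (51.51, -0.13)    # London
distance = haversine_km(*article, *source)
# A citation could be flagged "local" if the distance falls under some threshold.
is_local = distance < 500  # arbitrary illustrative cutoff
```

Aggregating such distances over many articles is one way to quantify the structural bias toward far-away (often Western) sources.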
Geolocation on Twitter by county; predicting the people living in urban areas. Facebook wants to push more local news.
Danger (bias) #3: Wikipedia search results vs. the Wikipedia knowledge panel.
Collective action against tech: Reddit; boycotts of Facebook and Instagram.
Data labor: the primary resources these companies have are user-generated posts, images, reviews, etc.
Boycott vs. data strike (one’s data not being available for algorithms in the future). The GDPR in the EU covers all historical data, as does the CA Consumer Privacy Act. One can do a data strike without a boycott. General vs. homogeneous (a group with a shared identity) boycotts.
The Wikipedia spam policy obstructs new editors, and that hits communities such as women.
How to access data at different levels. Methods and methodological concerns; ethical concerns; legal concerns.
TweetDeck for advanced Twitter searches. Quoting and likes are relevant, but not enough; sometimes a screenshot is needed.
Social listening platforms: Crimson Hexagon, Parse.ly, Sysomos. These are not yet academic platforms; they are tools to set up queries and visualizations, but it is difficult to inspect the algorithms, the data samples, etc. Open-source tools (from Urbana: the Social Media Macroscope’s SMILE, the Social Media Intelligence and Learning Environment) collect data from Twitter and Reddit; within the platform users can query Twitter and create trend analyses and sentiment analyses. Voxgov (a subscription service for analyzing political social media).
Graduate-level and faculty research: accessing large-scale social media data via web scraping and APIs. Twitter APIs; JavaScript, Python, etc. Gnip Firehose API ($); Web Scraper Chrome plugin (an easy tool; Python- and R-based tools also exist); Twint (a Twitter scraper).
Facepager (open source) for those who are not Python or R coders; it structures and downloads the data sets.
TAGS: archiving to Google Sheets; uses the Twitter API. Anything older than 7 days is not available, so harvest every week.
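The weekly-harvest pattern can be sketched as follows. This is not TAGS’s actual code, just an illustration of the logic: because the Twitter search API reaches back only about 7 days, an archive is built up by appending each weekly harvest and de-duplicating by tweet ID, since consecutive harvests overlap:

```python
def merge_harvest(archive, new_tweets):
    """Append only tweets whose IDs are not already in the archive."""
    seen = {t["id"] for t in archive}
    for tweet in new_tweets:
        if tweet["id"] not in seen:
            archive.append(tweet)
            seen.add(tweet["id"])
    return archive

# Invented example data: week 2's harvest overlaps week 1's.
week1 = [{"id": 1, "text": "a"}, {"id": 2, "text": "b"}]
week2 = [{"id": 2, "text": "b"}, {"id": 3, "text": "c"}]
archive = merge_harvest([], week1)
archive = merge_harvest(archive, week2)
# archive now holds ids 1, 2, 3 exactly once each
```

Missing a weekly run leaves a permanent gap, which is why the harvest schedule matters more than the tooling.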
Social Feed Manager (GW University) – Justin Littman, with Stanford. Install it on a server, but it allows much more.
Legal concerns: copyright (public info, but not beyond what is copyrighted). The fair use argument is strong, but one cannot publish the data; one can analyze it under fair use. Contracts (terms of service/use) supersede copyright; licensed data comes through the library.
Methods: sampling concerns (Tufekci, 2014: questions for social media research). Social media data is a good sample for studying social media itself, but for other fields? Not according to her. Hashtag studies: self-selection bias. Twitter as a model organism: over-represented in academic studies.
Methodological concerns: scope of access (lack of historical data); mechanics of the platform and context (retweets are not necessarily endorsements).
Ethical concerns: public info, so IRBs require no informed consent; the right to be forgotten; anonymized data is often still traceable.
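A tiny illustration of why anonymized data is often still traceable: removing names is not enough when quasi-identifiers (ZIP code, birth year, gender) remain, because they can be joined against a public roster. This is the classic linkage attack; all records below are invented:

```python
# "Anonymized" release: names stripped, quasi-identifiers kept.
anonymized_release = [
    {"zip": "53703", "birth_year": 1985, "gender": "F", "diagnosis": "flu"},
    {"zip": "53703", "birth_year": 1990, "gender": "M", "diagnosis": "asthma"},
]
# Public roster (e.g., a voter file) with names and the same quasi-identifiers.
public_roster = [
    {"name": "Alice", "zip": "53703", "birth_year": 1985, "gender": "F"},
    {"name": "Bob", "zip": "60601", "birth_year": 1990, "gender": "M"},
]

def reidentify(release, roster):
    """Join the two data sets on quasi-identifiers (a linkage attack)."""
    matches = []
    for record in release:
        for person in roster:
            if all(record[k] == person[k] for k in ("zip", "birth_year", "gender")):
                matches.append((person["name"], record["diagnosis"]))
    return matches

matches = reidentify(anonymized_release, public_roster)
# Alice's "anonymous" health record is re-identified by the join.
```

The same logic applies to social media data sets: stripping handles does not protect users whose posts can be matched on timestamps, locations, or distinctive phrasing.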
Table discussion: digital humanities and journalism are interested, but too narrowly. Tools are still difficult to find and operate. Context of the visuals. How to spread across a variety of majors and classes. Controversial events are more likely to be deleted.
Takedowns, lies, and corrosion: what is a librarian to do? Trolls, takedowns.
Development kit circulation. Familiarity with the Oculus Rift resulted in fewer reservations. Downturn also.
An experience station. Cleaning up free apps.
Question: spherical video / 360 video.
Safety issues: policies? Instructional perspective: curating. WI people: user testing. Touch controllers are more intuitive than the Xbox controller. Retail Oculus Rift.
Apps: Sketchfab, a 3D model viewer (.obj or .stl files). Medium, Tilt Brush.
The College of Liberal Arts at the U has their VR and 3D-printing setup.
Penn State (Paul, librarian; kinesiology and anatomy programs), Information Sciences and Technology. An immersive-experiences lab for 360 video.
CALIPHA; part of it is XRLibraries. Libraries equal education. Content provider LifeLiqe, a STEM library of AR and VR objects. https://www.lifeliqe.com/
with Google ramping up sales of its Expeditions Kit, and Facebook giving away 500 free Oculus Rift headsets to schools in Arkansas, the number of teachers using VR tools in U.S. classrooms could jump to more than 15 percent by 2021, predicts Futuresource, a market research firm.
A recent study by Children and Virtual Reality, a collaboration between researchers, VR companies, universities, and health organizations, found that using VR tools could have significant health impacts on children.
What the researchers found in the third phase of the study, published last October, was that use of VR headsets could impact a child’s vision, balance, and spatial awareness.
Unimersiv is in the business of educational experiences. Its app works on both the Samsung Gear VR and the Oculus Rift, and takes viewers underwater to explore the Titanic, and to one of our closest planets through the “Mars: Curiosity Rover” app.
Discovery VR app. The app works on nearly every single VR platform: Google Daydream, Samsung Gear VR, HTC Vive, Oculus Rift and Google Cardboard viewers using both iOS and Android devices.
p. 5 a LibGuide was created that provided a better description of the available software for both the Microsoft Hololens and the HTC Vive and also discussed potential applications for the technology.
Both the HTC Vive and the Hololens were made bookable through the library’s LibCalendar booking system, streamlining the booking process and creating a better user experience.
When the decision was made to bring virtual and augmented reality into the McGill University Library, an important aspect of this project was to develop a collection of related software to be used alongside the technology. In building this software collection a priority was placed on acquiring software that could be demonstrated as having educational value, or that could potentially be used in relation to, or in support of, university courses.
For the Microsoft HoloLens, all software was acquired through Microsoft’s Online Store. The store has a number of educationally relevant HoloLens apps available for purchase. The app ARchitect, for example, gives a basic sense of how augmented reality could be used for viewing new building designs. The app Robotics BIW allows users to simulate robotic functions. A select number of apps, such as Land of the Dinosaurs and Boulevard, provide applications for natural history and art; others relate to science, mathematics, and medicine or have artistic applications. All of the HoloLens applications were free but, compared to what is available for virtual reality, the experiences were much smaller in size and scope.
For the HoloLens, a generic user account was created and shared with the person who booked the HoloLens at the time of their booking. After logging into this account – which could sometimes prove to be a challenge because typing is done using the headset’s gesture controls – the user could select a floating tile which would reveal a list of available software. An unresolved problem was that users would then need to refer to the HoloLens LibGuide for a detailed description of the software, or else choose software based on name alone, and the names were not always helpful.
For the Microsoft HoloLens, the three most popular software programs were Land of the Dinosaurs, Palmyra and Insight Heart. Insight Heart allowed users to view and manipulate a 3D rendering of a high-resolution human heart, Land of the Dinosaurs provided an augmented reality experience featuring 3D renderings of dinosaurs, and Palmyra gave an augmented reality tour of the ancient city of Palmyra.
p. 7 Though many students had ideas for research projects that could make use of the technology, there was no available software that would have allowed them to use augmented reality in the way they wanted. There were no students interested in developing their own software to be used with the technology either.
p. 8 Though we found that the Microsoft HoloLens received significant use from our patrons, we would recommend the purchase of one only for libraries serving researchers and developers.
Getting Real in the Library: A Case Study at the University of Florida
As an alternative, Microsoft offers a Hololens with enterprise options geared toward multiple users for $5000.
The transition from mobile app development to VR/AR technology also reflected the increased investment in VR/AR by some of the largest technology companies in the world. In the past four years, Facebook purchased the virtual reality company Oculus, Apple released the ARKit for developing augmented reality applications on iOS devices, Google developed Google Cardboard as an affordable VR option, and Sony released Playstation VR to accompany their gaming platform, just to name a few notable examples. This increase of VR/AR development was mirrored by a rise in student interest and faculty research in using and creating new VR/AR content at UF.
Arnhem, J.-P. van, Elliott, C., & Rose, M. (2018). Augmented and Virtual Reality in Libraries. Rowman & Littlefield.
Hammady, R., & Ma, M. (2018). Designing Spatial UI as a Solution of the Narrow FOV of Microsoft HoloLens: Prototype of Virtual Museum Guide. In Proceedings of the 4th International AR & VR Conference 2018. Springer. Retrieved from https://eprints.staffs.ac.uk/4799/
‘HoloMuse’ engages users with archaeological artefacts through gesture-based interactions (Pollalis, Fahnbulleh, Tynes, & Shaer, 2017). Another study utilised the HoloLens to provide in-situ assistance for users (Blattgerste, Strenge, Renner, Pfeiffer, & Essig, 2017). The HoloLens has also been used to provide magnification for low-vision users via a complementary finger-worn camera alongside the HMD (Stearns, DeSouza, Yin, Findlater, & Froehlich, 2017). Even in medical applications, the HoloLens has contributed to 3D visualisation using AR techniques (Syed, Zakaria, & Lozanoff, 2017) and provided optimised measurements in medical surgeries (Pratt et al., 2018; Adabi et al., 2017). Applications of the HoloLens extend to visualising prototype designs (DeLaOsa, 2017), and it has shown potential in the gaming industry (Volpe, 2015; Alvarez, 2015) and in engaging cultural visitors with gaming activities (Raptis, Fidas, & Avouris, 2017).
To assess the HoloLens’ potential for delivering AR assembly instructions, the cross-platform Unity 3D game engine was used to build a proof of concept application. Features focused upon when building the prototype were: user interfaces, dynamic 3D assembly instructions, and spatially registered content placement. The research showed that while the HoloLens is a promising system, there are still areas that require improvement, such as tracking accuracy, before the device is ready for deployment in a factory assembly setting.
Pollalis, C., Fahnbulleh, W., Tynes, J., & Shaer, O. (2017). HoloMuse: Enhancing Engagement with Archaeological Artifacts Through Gesture-Based Interaction with Holograms. In Proceedings of the Eleventh International Conference on Tangible, Embedded, and Embodied Interaction (pp. 565–570). New York, NY, USA: ACM. https://doi.org/10.1145/3024969.3025094
Gračanin, D., Ciambrone, A., Tasooji, R., & Handosa, M. (2017). Mixed Library — Bridging Real and Virtual Libraries. In S. Lackey & J. Chen (Eds.), Virtual, Augmented and Mixed Reality (pp. 227–238). Springer International Publishing.
We use Microsoft HoloLens device to augment the user’s experience in the real library and to provide a rich set of affordances for embodied and social interactions.We describe a mixed reality based system, a prototype mixed library, that provides a variety of affordances to support embodied interactions and improve the user experience.
Computer science as an engineering discipline has been spectacularly successful. Yet it is also a philosophical enterprise in the way it represents the world and creates and manipulates models of reality, people, and action. In this book, Paul Dourish addresses the philosophical bases of human-computer interaction. He looks at how what he calls “embodied interaction”—an approach to interacting with software systems that emphasizes skilled, engaged practice rather than disembodied rationality—reflects the phenomenological approaches of Martin Heidegger, Ludwig Wittgenstein, and other twentieth-century philosophers. The phenomenological tradition emphasizes the primacy of natural practice over abstract cognition in everyday activity. Dourish shows how this perspective can shed light on the foundational underpinnings of current research on embodied interaction. He looks in particular at how tangible and social approaches to interaction are related, how they can be used to analyze and understand embodied interaction, and how they could affect the design of future interactive systems.
HoloMuse, an AR application for the HoloLens wearable device, which allows users to actively engage with archaeological artifacts from a museum collection
pick up, rotate, scale, and alter a hologram of an original archeological artifact using in-air gestures. Users can also curate their own exhibit or customize an existing one by selecting artifacts from a virtual gallery and placing them within the physical world so that they are viewable only using the device. We intend to study the impact of HoloMuse on learning and engagement with college-level art history and archeology students.
At WMU, the Libraries is partnering with our central OIT to host a VR lab in the main library. My partnering co-director, Kevin, is really the subject matter expert but I’m managing a lot of the day-to-day operations. Kevin is programming and experimenting with all kinds of hardware but we decided to use Oculus Rifts in our lab primarily because of the greater durability of the hand controllers (compared especially to the Vive). We’re getting all of our games through the Oculus store and have plans to expand into Steam or another provider but haven’t done so yet. We currently have 40+ titles available for gaming and educational purposes. We also teach content creation using Unity, Maya, Blender, and a handful of other tools.
While it may take time to do this reflection, it can have many important benefits: 1) research shows that reflecting on experiences creates an environment in which insights and creativity can flourish; 2) taking a moment to consider the positive experiences (and to learn from the challenging ones) generates positive emotions which can benefit everyone during highly stressful moments in the semester; and 3) your experiences in narrative form provide insights to the committee beyond what is possible through surveys. This helps us to tailor the program in the future.
Here are a few questions/topics you should consider in your reflection:
How well is the program working for you so far?
I was not able to collaborate last year, but this year it has been a perfect match with my ID2ID buddy, Aura Lippincott. It is just marvelous to work with a like-minded and driven person.
What have you accomplished so far?
We are well underway with one of our two projects, VRrelax, in which each of us is teaming up with faculty and staff from our respective universities. We plan to roll out the test at the end of this month (October), do the research in November, and compare notes and results in December. The project aims to establish whether VR delivered via the Oculus Go can have a positive impact on stress reduction for students.
Our second project, on Open Learning, is also gathering speed; we intend to have a research topic determined by the end of the month, and we are gathering resources in the meantime.
What else do you need to do? Describe the progress you have made toward meeting your program goals.
What obstacles have you faced that you did not anticipate?
It is difficult to pinpoint obstacles, because with a determined ID2ID partner and team members, all obstacles start to seem minuscule. We had discussions about the video content of the VR sessions and the frequency of testing, and some of these issues are impossible to reconcile for two teams on different campuses, but again, they do not seem crucial when the team is driven by the conviction to finish the research.
What are your plans for working through them? What are your plans for the rest of the program? Many of you may have chosen to focus on one or more of the ELI Key Issues. If so, briefly summarize and reflect upon your discussions of these key issues.
I see our work falling neatly under digital and information literacy. The work through ID2ID feels like a breath of fresh air, since here digital and information literacy is not confined to the stagnant 1990s-era interpretation myopically imposed in the library where I work. Our project aims to assert digital literacy as understood by Educause.
To some degree, our work also falls under the ELI issue of "learning space design," since we advocate for virtual learning spaces, as well as under the ELI issue of "academic transformation and faculty development." Both XR and open learning are ambitious trends, which can inadvertently meet resistance because of their novelty and lack of track record compared with traditional methods of teaching and learning.
Case study: an undergraduate senior-project computer science course collaboration whose aim was to develop textual browsing experiences, among other library reference functionality, within the HTC Vive virtual reality (VR) headset. In this case study, readers are introduced to applied uses of VR in service to library-based learning through the research and development of a VR reading room app with multi-user support. Within the VR reading room prototype, users are able to collaboratively explore the digital collections of HathiTrust, highlight text for further searching and discovery, and receive consultative research support from a reference specialist through VR.
Library staff met with the project team weekly over the 16 weeks of both semesters to first scope out the functionality of the system and vet requirements.
The library research team further hypothesized that incorporating reference-like support in the VR environment can support library learning. There is ample evidence in the library literature that underscores the importance of reference interactions as learning and instructional experiences for university students.
Educational benefits of immersive worlds include a deeper sense of presence when engaging with rare or otherwise inaccessible artifacts. Sequeira and Morgado (2013, p. 2) describe their Virtual Archeology project as using "a blend of techniques and methods employed by historians and archaeologists using computer models for visualizing cultural artefacts and heritage sites".
The higher-end graphics cards include devices such as the NVIDIA GeForce GTX 1060 or AMD Radeon RX 480, equivalent or better. The desktop system that was built for this project used the GeForce GTX 1070, which was slightly above the required minimum specifications.
Collaboration: Library as client.
Specific to this course collaboration, computer science students in their final year of study are given the option of several client projects on which to work. The Undergraduate Library has been a collaborator with senior computer science course projects for several years, beginning in 2012-2013 with mobile application design and chat reference software re-engineering (Hahn, 2015). (My note: Mark Gill, this is where and how Mehdi Mekni, you and I can collaborate)
The hurdle the students had the most trouble with was code integration, i.e. combining various individual software parts toward the end of the semester. The students were also challenged by the public HathiTrust APIs: the system was developed to call the HathiTrust APIs from within the Unity programming environment, which meant developing the API calls in C#. This was a novel use of the HathiTrust search APIs for the students and a new area for the research team as well.
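The students' actual Unity C# integration is not reproduced in these notes. As a rough feel for the request shape, here is a sketch in Python (not the project's code) of building a call to HathiTrust's public Bibliographic API "brief volumes" endpoint; the helper name and example identifier are my own illustrations:

```python
# Sketch only: constructing a HathiTrust Bibliographic API request.
# In the Unity project this kind of call would be issued from C#
# (e.g. via UnityWebRequest); here we just build the URL.

BASE = "https://catalog.hathitrust.org/api/volumes"

# Identifier types the Bibliographic API accepts (per its public docs).
ID_TYPES = {"oclc", "lccn", "isbn", "issn", "htid", "recordnumber"}

def brief_volume_url(id_type: str, identifier: str) -> str:
    """Return the brief-record URL for a single identifier."""
    if id_type not in ID_TYPES:
        raise ValueError(f"unsupported identifier type: {id_type}")
    return f"{BASE}/brief/{id_type}/{identifier}.json"

# Fetching (kept as a comment to avoid a live network call here):
# import json, urllib.request
# record = json.load(urllib.request.urlopen(brief_volume_url("oclc", "424023")))
```

Parsing the JSON response and wiring the result into Unity scene objects is where the C#-side integration effort described above comes in.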
There are alternatives to Unity C# programming, notably WebVR, an open source specification for VR programming on the open web.
A-Frame has matured as a platform-agnostic and device-agnostic software programming environment. The WebVR webpage notes that the specification supports the HTC Vive, Oculus Rift, Samsung Gear VR, Google Daydream, and Google Cardboard (WebVR Rocks, 2018). Open web platforms are consistent with library values and with the educational goal of sharing work: they can be foundational for VR learning experiences that are usable both in headset environments and, shareably, on the open web.
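Part of A-Frame's appeal for library projects is that a scene is plain declarative HTML that runs in any modern browser and degrades to mouse-and-keyboard viewing on devices without a headset. A minimal sketch (not from the case study; the script URL/version and the scene contents are illustrative):

```html
<!-- Minimal A-Frame scene, adapted from A-Frame's "hello world" pattern. -->
<html>
  <head>
    <script src="https://aframe.io/releases/1.5.0/aframe.min.js"></script>
  </head>
  <body>
    <a-scene>
      <!-- Entities are declared as HTML tags with attribute components. -->
      <a-box position="-1 0.5 -3" rotation="0 45 0" color="#4CC3D9"></a-box>
      <a-text value="Reading Room" position="0 2 -3" align="center"></a-text>
      <a-plane position="0 0 -4" rotation="-90 0 0" width="8" height="8"
               color="#7BC8A4"></a-plane>
      <a-sky color="#ECECEC"></a-sky>
    </a-scene>
  </body>
</html>
```

A multi-user reading room like the one prototyped in Unity would still need networking and HathiTrust integration on top of this, but the same page could be shared as an ordinary URL.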