Hahn, J. (2018). Virtual reality learning environments: Development of multi-user reference support experiences. Information and Learning Science (ahead of print). Emerald Insight. Retrieved from https://www.emeraldinsight.com/eprint/AU2Q4SJGYQG5YTQ5A9RU/full
Case study: an undergraduate senior-projects computer science course collaboration whose aim was to develop textual browsing experiences, among other library reference functionality, within the HTC Vive virtual reality (VR) headset. In this case study, readers are introduced to applied uses of VR in service of library-based learning through the research and development of a VR reading room app with multi-user support. Within the VR reading room prototype, users can collaboratively explore the digital collections of HathiTrust, highlight text for further searching and discovery, and receive consultative research support from a reference specialist through VR.
Library staff met with the project team weekly over the 16 weeks of both semesters to first scope out the functionality of the system and vet requirements.
The library research team further hypothesized that incorporating reference-like support in the VR environment could support library learning. There is ample evidence in the library literature that underscores the importance of reference interactions as learning and instructional experiences for university students.
Educational benefits of immersive worlds include offering a deeper sense of presence when engaging with rare or otherwise inaccessible artifacts. Sequeira and Morgado (2013, p. 2) describe their Virtual Archeology project as using “a blend of techniques and methods employed by historians and archaeologists using computer models for visualizing cultural artefacts and heritage sites”.
The higher-end graphics cards include devices such as the NVIDIA GeForce™ GTX 1060 or AMD Radeon™ RX 480, equivalent or better. The desktop system built for this project used the GeForce GTX 1070, which was slightly above the required minimum specifications.
Collaboration: Library as client.
Specific to this course collaboration, computer science students in their final year of study are given the option of several client projects on which to work. The Undergraduate Library has collaborated with senior computer science course projects for several years, beginning in 2012-2013 with mobile application design and chat reference software re-engineering (Hahn, 2015). (My note: Mark Gill and Mehdi Mekni, this is where and how you and I can collaborate.)
The hurdle the students had the most trouble with was code integration – e.g., combining the various individual software parts toward the end of the semester. The students were also challenged by the public HathiTrust APIs, as the system was developed to call the HathiTrust APIs from within the Unity programming environment, which meant developing the API calls in C#. This was a novel use of the HathiTrust search APIs for the students and a new area for the research team as well.
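As an illustration of the kind of call the students had to build – sketched here in JavaScript rather than Unity C# – the helper below assembles a request URL for HathiTrust's public Bibliographic API. The endpoint pattern and identifier types come from HathiTrust's API documentation; the function name is my own.

```javascript
// Build a request URL for the public HathiTrust Bibliographic API.
// idType is one of the identifier types the API documents:
// oclc, lccn, issn, isbn, htid, recordnumber.
function hathiTrustBibUrl(idType, idValue, level = "brief") {
  const allowed = ["oclc", "lccn", "issn", "isbn", "htid", "recordnumber"];
  if (!allowed.includes(idType)) {
    throw new Error(`Unsupported identifier type: ${idType}`);
  }
  return `https://catalog.hathitrust.org/api/volumes/${level}/${idType}/${encodeURIComponent(idValue)}.json`;
}

// A caller (in Unity, the equivalent is an HTTP request issued from C#)
// would fetch the URL and read the JSON "records" and "items" fields:
// fetch(hathiTrustBibUrl("oclc", "424023"))
//   .then(r => r.json())
//   .then(data => console.log(data.records));
```

The same URL shape works from any HTTP client, which is part of what made wiring it into Unity a matter of learning the request plumbing rather than a special SDK.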
There are alternatives to Unity C# programming, notably WebVR, an open-source specification for VR programming on the open web.
A-Frame has matured into a platform- and device-agnostic software programming environment. The WebVR webpage notes that the specification supports the HTC Vive, Oculus Rift, Samsung Gear VR, Google Daydream and Google Cardboard (WebVR Rocks, 2018). Open web platforms are consistent with library values and with the educational goal of sharing work that can be foundational in implementing VR learning experiences, both within VR environments and shareable on the web.
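Part of A-Frame's appeal for libraries is that a scene is just declarative HTML, publishable like any other web page. As a rough sketch (the helper function and color values are my own; the `<a-scene>`, `<a-box>` and `<a-sky>` primitives are standard A-Frame elements), a minimal scene can be emitted as a markup string:

```javascript
// Emit a minimal A-Frame scene as a string of declarative HTML markup.
// A hosting page must also load the A-Frame library itself for this
// markup to render; only the scene body is generated here.
function aframeSceneMarkup(skyColor, boxColor) {
  return [
    "<a-scene>",
    `  <a-box position="-1 0.5 -3" rotation="0 45 0" color="${boxColor}"></a-box>`,
    `  <a-sky color="${skyColor}"></a-sky>`,
    "</a-scene>",
  ].join("\n");
}
```

Served as part of an ordinary web page, the same markup renders in a desktop browser, on a phone, or in a headset – the device-agnostic behavior noted above.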
Overview of the programmatic standards for general and special education, how these standards are integrated in special education curriculum, and e-portfolio requirements for documenting acquisition of the above standards.
Gaming and Gamification.
Why gaming and gamification? Vygotsky and the zone of proximal development (ZPD): immersive storytelling is a form of creative play.
from: https://cpb-us-e1.wpmucdn.com/blog.stcloudstate.edu/dist/d/10/files/2015/03/Gaming-and-Gamification-in-academic-and-library-settings-final-draft-1digudu.pdf
play >>> games >>> serious games >>> game-based learning >>> digital game-based learning
“Games are type of cooperative learning. Games embody the essence of constructivism, which for students/gamers means constructing their own knowledge while they interact (learn cooperatively). Learning can happen without games, yet games accelerate the process. Games engage. Games, specifically digital ones, relate to the digital natives, those born after 1976 – 80, who are also known as Generation Y, or Millennials”
Is it generational? Is it a fad? Is it counter-pedagogical?
What is the difference between GBL (game-based learning) and DGBL (digital GBL)? Share examples and opinions. Is one better / preferable than the other? Why?
Kahoot game: https://play.kahoot.it/#/k/1412b52c-da28-4507-b658-7dfeedf0864c
Hands-on assignment (10 min): split into groups and discuss your experience with games; identify your preferred mode (e.g. GBL vs DGBL) and draft a short plan for transitioning your current curriculum to one incorporating games.
What is gamification? Why gamification, if we have games? “Gamification takes game elements (such as points, badges, leaderboards, competition, achievements) and applies them to a non-game setting. It has the potential to turn routine, mundane tasks into refreshing, motivating experiences”
Hands-on assignment (10 min): split into groups and use your electronic devices (smartphones, tablets, laptops) to experience any of the following gamification tools:
From the moment you open the browser, you will be presented with immersive experiences that can be enjoyed on a VR headset directly from the Firefox Reality browser. We are working with creators around the world to bring an amazing collection of games, videos, environments, and experiences that can be accessed directly from the home screen.
Between the “dumb” fixed algorithms and true AI lies the problematic halfway house we’ve already entered with scarcely a thought and almost no debate, much less agreement as to aims, ethics, safety, best practice. If the algorithms around us are not yet intelligent, meaning able to independently say “that calculation/course of action doesn’t look right: I’ll do it again”, they are nonetheless starting to learn from their environments. And once an algorithm is learning, we no longer know to any degree of certainty what its rules and parameters are. At which point we can’t be certain of how it will interact with other algorithms, the physical world, or us. Where the “dumb” fixed algorithms – complex, opaque and inured to real time monitoring as they can be – are in principle predictable and interrogable, these ones are not. After a time in the wild, we no longer know what they are: they have the potential to become erratic. We might be tempted to call these “frankenalgos” – though Mary Shelley couldn’t have made this up.
Twenty years ago, George Dyson anticipated much of what is happening today in his classic book Darwin Among the Machines. The problem, he tells me, is that we’re building systems that are beyond our intellectual means to control. We believe that if a system is deterministic (acting according to fixed rules, this being the definition of an algorithm) it is predictable – and that what is predictable can be controlled. Both assumptions turn out to be wrong.

“It’s proceeding on its own, in little bits and pieces,” he says. “What I was obsessed with 20 years ago that has completely taken over the world today are multicellular, metazoan digital organisms, the same way we see in biology, where you have all these pieces of code running on people’s iPhones, and collectively it acts like one multicellular organism.

“There’s this old law called Ashby’s law that says a control system has to be as complex as the system it’s controlling, and we’re running into that at full speed now, with this huge push to build self-driving cars where the software has to have a complete model of everything, and almost by definition we’re not going to understand it. Because any model that we understand is gonna do the thing like run into a fire truck ’cause we forgot to put in the fire truck.”
Walsh believes this makes it more, not less, important that the public learn about programming, because the more alienated we become from it, the more it seems like magic beyond our ability to affect. When shown the definition of “algorithm” given earlier in this piece, he found it incomplete, commenting: “I would suggest the problem is that algorithm now means any large, complex decision making software system and the larger environment in which it is embedded, which makes them even more unpredictable.” A chilling thought indeed. Accordingly, he believes ethics to be the new frontier in tech, foreseeing “a golden age for philosophy” – a view with which Eugene Spafford of Purdue University, a cybersecurity expert, concurs. Where there are choices to be made, that’s where ethics comes in.
our existing system of tort law, which requires proof of intention or negligence, will need to be rethought. A dog is not held legally responsible for biting you; its owner might be, but only if the dog’s action is thought foreseeable.
model-based programming, in which machines do most of the coding work and are able to test as they go.
As we wait for a technological answer to the problem of soaring algorithmic entanglement, there are precautions we can take. Paul Wilmott, a British expert in quantitative analysis and vocal critic of high frequency trading on the stock market, wryly suggests “learning to shoot, make jam and knit”
The venerable Association for Computing Machinery has updated its code of ethics along the lines of medicine’s Hippocratic oath, to instruct computing professionals to do no harm and consider the wider impacts of their work.
Preliminary Plan for Monday, Sept 10, 5:45 PM to 8 PM
Introduction – who are the students in this class? About myself: http://web.stcloudstate.edu/pmiltenoff/faculty Contact info; “embedded” librarian idea – I am available to help during the semester with research and papers.
#FakeNews is a very timely and controversial issue. In 2-3 min choose your best source on this issue. 1. Mind the prevalence of resources in the 21st century. 2. Mind the necessity to evaluate a) the veracity of your sources and b) the quality of your sources (the fact that they are “true” does not mean that they are the best). Be prepared to name your source and defend its quality.
How do you determine your sources? How do you decide the reliability of your sources? Are you sure you can distinguish “good” from “bad?”
Compare this entry https://en.wikipedia.org/wiki/List_of_fake_news_websites
to this entry: https://docs.google.com/document/d/10eA5-mCZLSS4MQY5QGb5ewC3VAL6pLkT53V_81ZyitM/preview to understand the scope
Do you know any fact-checking sites? Can you spot sponsored content? Do you understand syndication? What do you understand by “media literacy,” “news literacy,” and “information literacy”? https://blog.stcloudstate.edu/ims/2017/03/28/fake-news-resources/
Why do we need to explore the “fake news” phenomenon? Do you find it relevant to your professional development?
So, how do we do academic research? Let’s play another Kahoot: https://play.kahoot.it/#/k/5e09bb66-4d87-44a5-af21-c8f3d7ce23de
If you were to structure this Kahoot, what questions would you ask? What are the main steps in achieving successful research for your paper?
Research using social media
What is social media (examples)? Why is it called SM? Why is it so popular? What makes it so popular?
Use SM tools for your research and education:
– Determining your topic. How to?
Digg http://digg.com/, Reddit https://www.reddit.com/ , Quora https://www.quora.com
Facebook, Twitter – hashtags (class assignment 2-3 min to search)
LinkedIn Groups
YouTube and Slideshare (class assignment 2-3 min to search)
Flickr, Instagram, Pinterest for visual aids (like YouTube, they are media repositories)
Embedded librarianship holds potential for immersive learning. Come learn how to promote your virtual world communities and the great work of educators in virtual worlds through networking. https://communityvirtuallibrary.wordpress.com/
Chris Luchs (SL: Abacus Capellini, WoW: Cheerwine)
What Can We Learn from the World of Warcraft?
Join us as we host a blended reality session featuring a live stream from the World of Warcraft (WoW) as we explore educational opportunities in a massive multiplayer online roleplaying game (MMORPG). We will have a YouTube live stream, a Discord channel for voice discussion, and an immersive event in WoW. Educators from the International Society for Technology in Education – Games and Simulations Network (ISTE G&SN) will host an immersive event & discuss learning in a multiuser virtual environment (MUVE).
Click Try for Free and download the Blizzard Launcher, which manages the download. You’ll need 52GB for the game. Create an account, select the Sisters of Elune realm, and create a troll if you are new to WoW and using a Free Trial account.
Location: In the World of Warcraft and for those who do not have the game, over a YouTube Live stream (available that day) and hosted after the event over https://www.youtube.com/user/gamesmooc/videos
Howard Gardner’s “Theory of Multiple Intelligences” explored through an Interactive, Immersive Experience in Second Life
Dr. Gardner has proposed 8 different types of intelligence, ranging from Interpersonal to Kinesthetic. Join us to discover your own most innate type. You may be surprised, like many of the teachers who have tried this challenge as part of our whole-brain training program.
This is a summary of various performance-based activities in Second Life and how performance studies can provide an insight into the experience of virtual worlds.
In his book, “Experience on Demand,” Jeremy Bailenson, the founding director of Stanford University’s Virtual Human Interaction Lab, writes, “No medium, of course, can fully capture the subjective experience of another person, but by richly evoking a real-seeming, first-person experience, virtual reality does seem to promise to offer new, empathy-enhancing qualities.” Bailenson contrasts experiencing virtual reality with reading news accounts and watching documentaries. Those latter activities, he writes, require “a lot of imaginative work,” whereas virtual reality can “convey the feeling” of, say, a refugee camp’s environment, and the “smallness of the living quarters, the size of the camp.”
Caldwell—who used Google Expeditions to deliver a virtual reality experience set in the Holocaust—says that when his students first put on the goggles, they viewed them as a novelty. But within a minute or two, the students became quiet, absorbed in what they were seeing; they realized the “reality of the horror of what was in front of them.” Questions ensued.
Ron Berger, the Chief Academic Officer of EL Education, points to another factor schools should consider. He thinks virtual reality can be a powerful way to introduce kids to situations that require empathy or adopting different perspectives. However, he thinks no one tool or experience will bring results unless it is “nested in a broader framework of a vision and goals and relationships.”
Berger says virtual reality experiences have to be accompanied by work beforehand and follow-up afterwards. Kids, he says, need to be reflective and think critically.
immersion experiences like virtual reality should be “embedded in positive” adult and peer relationships. He adds that ideally, there’s also a resulting action where kids do something productive with the information they’ve learned, to help their own growth and to help others. He mentions an example where students interviewed local immigrants and refugees, then wrote the stories they heard. They published the stories in a book, and the profits went to legal fees for local refugees.
saving virtual reality for “very special experiences,” keeping it “relatively short” and not getting students dizzy or disoriented. A report Bailenson co-authored for Common Sense Media highlights the research that has—and has not—explored the effects of virtual reality on children. It states that the “potentially negative outcomes of VR include impacts on children’s sensory systems and vision, aggression, and unhealthy amounts of escapism and distraction from the physical world.”
+++++++++++++++++++++++++
The Brain Science Is In: Students’ Emotional Needs Matter
What the neuro-, cognitive, and behavioral research says about social-emotional learning
Teachers, like parents, have always understood that children’s learning and growth do not occur in a vacuum, but instead at the messy intersection of academic, social, and emotional development.
• Malleability: Genes are not destiny. Our developing brains are largely shaped by our environments and relationships—a process that continues into adulthood.
• Context: Family, relationships, and lived experiences shape the physiological structure of our brains over time. Healthy amounts of challenge and adversity promote growth, but toxic stress takes a toll on the connections between the hemispheres of our brain.
• Continuum: While we’ve become familiar with the exponential development of the brain for young children, it continues throughout life. The explosion of brain growth into adolescence and early adulthood, in particular, requires putting serious work into much more intentional approaches to supporting that development than is common today.
Over the past year, interest in eXtended reality (XR) technologies (such as virtual, augmented, immersive, and mixed reality) has surged. New and more affordable XR technologies, along with voice activation and sophisticated visual display walls, provide promising directions and opportunities to immerse learners in the curriculum, offering deeper and more vivid learning experiences and extending the learning environment. But what’s the curricular reality with respect to these technologies? What is hype and what is substance? Specifically:
What practical applications do “XR technologies” have for teaching, learning, and research?
How are these technologies being applied to engage learners as consumers and creators of XR experiences?
What evidence is there to support XR technologies as effective tools in the learning environment?
How can these technologies be integrated into learning spaces?
What are the ethical questions we should consider as we explore XR?
Juxtapose JS from Knight Lab is a free tool for making and hosting side-by-side comparisons of images. The tool was designed to help people see before and after views of a location, a building, a person, or anything else that changes appearance over time. Juxtapose JS will let you put the images into a slider frame that you can embed into a webpage where viewers can use the slider to reveal more or less of one of the images.
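As a small sketch of how such a comparison is wired up programmatically: the slide-object shape (an image `src` plus a `label`) follows the Knight Lab documentation, while the helper function, the example labels, and the `#compare` selector are my own illustrations; the `JXSlider` options shown in the comment should be verified against the Juxtapose docs.

```javascript
// Build the two-slide list Juxtapose JS expects: each slide is an
// object with the image URL (src) and an optional caption (label).
function juxtaposeSlides(beforeUrl, afterUrl) {
  return [
    { src: beforeUrl, label: "Before" },
    { src: afterUrl, label: "After" },
  ];
}

// In the browser, after loading juxtapose.js and its CSS, a slider is
// created against a container element (hypothetical selector '#compare'):
// new juxtapose.JXSlider(
//   '#compare',
//   juxtaposeSlides("building-1990.jpg", "building-2018.jpg"),
//   { showLabels: true, animate: true }
// );
```

Most classroom users will instead paste the embed code the Juxtapose site generates, which produces the same slider without any hand-written JavaScript.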
Storyline JS is one of the newer Knight Lab offerings. This beta product is designed to help students tell stories with data. The basic purpose of Storyline JS is to help students create interactive line graphs. Students add annotations to the data points on their line graphs. Those annotations are used to tell the story of the data represented in the graph.
Scene VR is another beta product from Knight Lab. The purpose of Scene VR is to enable users to stitch together panoramic images and VR images to create an immersive photo story. Stories published through Scene VR can be embedded into websites and viewed on desktop computers as well as on tablets and mobile phones.
If you want to embed audio into a written story, Soundcite JS is the tool for you. Soundcite JS lets you add audio clips to a written story. When Soundcite JS is properly used, a play button appears where you specify in the text. For example, if I wrote a story that included a scene in which a dog barks, I could have “the dog barks at the stranger” be highlighted with a play button that when clicked plays the sound of a dog barking.
StoryMap JS lets you combine elements of timelines and maps to create mapped stories. On StoryMap JS you create slides that are matched to locations on your map. Each slide in your story can include images or videos along with text. As you scroll through your story there are simple transitions between each slide. StoryMap JS integrates with your Google Drive account. To get started with StoryMap JS you have to grant it access to your Google Drive account. StoryMap JS will create a folder in your Google Drive account where all of your storymap projects will be saved. With StoryMap JS connected to your Google Drive account, you will be able to pull images from your Google Drive account to use in your StoryMap JS projects. Here’s a good tutorial video made by the Jan Serie Center’s Digital Liberal Arts initiative at Macalester College.