Description: Join Marialice and Jaime as they join GlobalMindED to present Solving Problems in Local, Global, and Digital Communities in AR. During their webinar, they will identify and solve real problems in local, global, and digital communities using augmented reality. Participants will gain a better understanding of how digital citizens can confidently and positively engage with emerging technologies to think critically and act creatively. Examples of students using the Global Goals to make a positive impact on society will be shared.
IM 690 lab plan for March 31, online: Virtual Worlds
If at any point you are lost in the virtual worlds, please consider talking/chatting using our IM 690 Zoom link: https://minnstate.zoom.us/j/964455431 or calling 320 308 3072.
Readings: Currently, if you go to the SCSU online databases (if they are working at all), don't be surprised to see an error message when clicking on EBSCOhost Business Source Complete. And if you execute a search for "AltSpaceVR" + "education", you will find only a meager result or two.
Google Scholar, naturally, will yield a much greater number of results.
So, search for and find an article of interest to you using Google Scholar. I used "immersive learning" + "education" for my search.
I chose to read this article: https://journal.alt.ac.uk/index.php/rlt/article/view/2347/2657
since it addresses design principles for applying mixed reality in education. What article did you find, read, and choose to share your analysis of?
Tuesday, March 31, 5PM lab
As usual, we will meet at this Zoom link: https://minnstate.zoom.us/j/964455431 All of us will be online and will meet in the Zoom room. Please come 10 minutes early so we can check our equipment and make sure everything works. Since we will be exploring online virtual worlds, please be prepared for technical issues, especially with microphones.
For this lab, please download and install on your computers the AltSpaceVR (ASVR) software: https://www.microsoft.com/en-us/p/altspacevr/9nvr7mn2fchq?activetab=pivot:overviewtab Please note the impediment that Microsoft has made the 2D mode for PC available only for Windows. If you are a Mac user and don't have a PC available at home, please contact me directly for help.
In addition, please keep the link to the video tutorial handy: https://blog.stcloudstate.edu/ims/2020/03/13/im690-asvr-2d-tutorial/
Please be aware of the MediaSpace issues of the last two weeks, which can result in poor rendering of the video. If issues persist and you still need help downloading and installing the software, contact me directly. Please do your best to have ASVR installed on your computer before the lab starts on Tuesday, March 31, 5PM, so we can use our time during the lab for much more fun activities!
Intro to ASVR.
Please watch this 5-minute video anytime you feel a bit lost in ASVR.
Please consider the issues with MediaSpace and be patient if the video renders slowly or does not play right away. The video is meant to help you learn how to navigate your avatar in ASVR.
For the first 15-20 minutes of the lab, we will "meet" in ASVR and figure out how to work on our ASVR avatars and how to use the computer keyboard to move, communicate, and gain basic dexterity. We must learn to "make friends" with Mark Gill (ASVR name: MarkGill47), Dr. Park (ASVR name: dhk3600), and Dr. Miltenoff (ASVR name: Plamen), as well as with your class peers, who will be sharing their ASVR contact info in the Zoom chat session. Once we learn these skills, we are ready to explore ASVR.
Mark Gill will "lead" us through several virtual worlds, which you will observe and assess from the point of view of an instructional designer and an educator (e.g., how can these worlds accommodate learning? what type of teaching do these virtual worlds offer?).
Eventually, Mark Gill will bring us to the SCSU COSE space, which he created, where he will leave us to discuss.
Discussion in the COSE ASVR room
We will start our discussion with you sharing your analysis of the article you found in Google Scholar for today's class (see Readings above). How do your findings from the article match your impressions from the tour across virtual worlds in ASVR? How does learning happen?
Final projects
The rest of the lab time will be allocated to work on your final projects.
Dr. Park and Dr. Miltenoff will work individually with your groups to assist with ideas and questions regarding your projects.
As part of our involvement with the Extended Reality Community of Practice, InforMedia Services and SCSU VizLab are offering the following workshops / introductions in augmented and virtual reality:
–Wednesday, March 18, 3PM, MC 205 (directions to MC 205: https://youtu.be/jjpLR3FnBLI ) Intro to 360 Video: easy adoption of virtual reality in your classroom
Plamen Miltenoff will lead an exploration of resources, capturing 360 images and videos, and a hands-on session on creating virtual tours with existing and acquired imagery.
–Wednesday, March 25, 3PM, MC 205 (directions to MC 205: https://youtu.be/jjpLR3FnBLI ) Intro to Augmented Reality
Alan Srock and Mark Gill will demonstrate the use of the Merge Cube and other augmented reality tools in their courses.
Plamen Miltenoff will lead a hands-on session on creating basic AR content with Metaverse.
–Wednesday, April 1, 3PM, MC 205 (directions to MC 205: https://youtu.be/jjpLR3FnBLI ) Intro to Virtual Reality: Mark Gill, Alan Srock, and Plamen Miltenoff will demonstrate AltSpaceVR and Virbela.
Hands-on session on creating learning spaces in virtual reality.
These sessions will share ready-to-go resources as well as hands-on creation of materials suitable for most disciplines taught on this campus.
The event requires no registration and is virtual only, free, and open to the public. Platform access is required, so please install one of the above platforms to attend the International Summit. You may attend in 2D on a desktop or laptop computer with headphones and a microphone (a USB gaming headset is recommended), or with a virtual reality device such as the Oculus Go, Quest, or Rift, the Vive, or other mobile and tethered devices. Please note the specifications and requirements of each platform.
Charlie Fink, author, columnist for Forbes magazine, and adjunct faculty member at Chapman University, will be presenting "Setting the Table for the Next Decade in XR," discussing the future of this innovative and immersive technology, at the 2020 Educators in VR International Summit. He will be speaking in AltspaceVR on Tuesday, February 18, at 1:00 PM EST /
This workshop with Dr. Sarah Jones will focus on developing a new, relevant literacy for virtual reality, including the core competencies and skills needed to become an engaged user of the technology in a meaningful way. The workshop will feed into research for a forthcoming book on uncovering a literacy for VR, due to be published in 2020.
Sarah is listed as one of the top 15 global influencers within virtual reality. After nearly a decade in television news, Sarah began working in universities focusing on future media, future technology and future education. Sarah holds a PhD in Immersive Storytelling and has published extensively on virtual and augmented reality, whilst continuing to make and create immersive experiences. She has advised the UK Government on Immersive Technologies and delivers keynotes and speaks at conferences across the world on imagining future technology. Sarah is committed to diversifying the media and technology industries and regularly champions initiatives to support this agenda.
Currently there are limited ways to connect 3D VR environments to physical objects in the real world whilst simultaneously conducting communication and collaboration between remote users. Within the context of a solar power plant, the performance metrics of the site are invaluable for environmental engineers who are remotely located. Often two or more remotely located engineers need to communicate and collaborate on solving a problem. If a solar panel component is damaged, the repair often needs to be undertaken on-site, thereby incurring additional expenses. This triage of communication is known as inter-cognitive and intra-cognitive communication: inter-cognitive communication, where information transfer occurs between two cognitive entities with different cognitive capabilities (e.g., between a human and an artificially cognitive system); and intra-cognitive communication, where information transfer occurs between two cognitive entities with equivalent cognitive capabilities (e.g., between two humans) [Baranyi and Csapo, 2010]. Currently, non-VR solutions offer a comprehensive analysis of solar plant data, and a regular PC with a monitor has advantages over 3D VR. For example, sensors can be monitored using dedicated software such as EPEVER or via a web browser, as exemplified by the comprehensive service provided by Elseta. But when multiple users are able to collaborate remotely within a three-dimensional virtual simulation, the opportunities for communication, training, and academic education will be profound.
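To make the remote-monitoring scenario concrete, here is a minimal sketch (not part of the presenter's system) of how panel telemetry might be polled and fanned out to several remote collaborators at once, for example a VR session and a 2D dashboard. The endpoint URL, field names, and polling interval are hypothetical placeholders.

```python
# Minimal sketch: poll (hypothetical) solar-panel telemetry and relay each
# reading to every subscribed remote client (e.g., a VR session and a
# 2D dashboard). The URL below is illustrative only.
import queue
import threading
import time

import requests  # pip install requests

TELEMETRY_URL = "https://example.com/api/panels/telemetry"  # hypothetical endpoint
POLL_SECONDS = 5

subscribers: list[queue.Queue] = []  # one queue per connected client


def subscribe() -> queue.Queue:
    """Register a remote client (VR world, web dashboard, etc.)."""
    q: queue.Queue = queue.Queue()
    subscribers.append(q)
    return q


def poll_telemetry() -> None:
    """Fetch the latest readings and fan them out to all subscribers."""
    while True:
        try:
            reading = requests.get(TELEMETRY_URL, timeout=10).json()
        except requests.RequestException as err:
            reading = {"error": str(err)}
        for q in subscribers:
            q.put(reading)
        time.sleep(POLL_SECONDS)


if __name__ == "__main__":
    vr_client = subscribe()         # e.g., consumed by a 3D simulation
    dashboard_client = subscribe()  # e.g., consumed by a 2D web dashboard
    threading.Thread(target=poll_telemetry, daemon=True).start()
    # Each client drains its own queue; here we just print a few readings.
    for _ in range(3):
        print("VR client received:", vr_client.get())
```

In a real deployment, the subscriber queues would be replaced by network connections (for example WebSockets) into the 3D simulation and the web dashboard, but the fan-out pattern stays the same.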
Michael Vallance Ed.D. is a researcher in the Department of Media Architecture, Future University Hakodate, Japan. He has been involved in educational technology design, implementation, research, and consultancy for over twenty years, working closely with higher education institutes, schools, and media companies in the UK, Singapore, Malaysia, and Japan. His 3D virtual world design and tele-robotics research has been recognized and funded by the UK Prime Minister's Initiative (PMI2) and the Japan Advanced Institute of Science and Technology (JAIST). He has been awarded by the United States Army for his research on the collaborative programming of robots in a 3D virtual world.
Augmented reality Lenses are popular among young people thanks to Snapchat's invention. Businesses are losing money by not fully using social media to target young people (14-25). In this presentation, Dominique Wu will show how businesses can generate more leads through Spark AR (Facebook AR/Instagram AR) and Snapchat AR Lenses, and how to create strategic Snapchat and Instagram AR campaigns.
Dominique Wu is an XR social media strategist and an expert in UX/UI design. She has her own YouTube and Apple Podcasts show called "XReality: Digital Transformation," covering the technology and techniques of incorporating XR and AR into social media, marketing, and enterprise solutions.
Mark Christian, EVP, Strategy and Corporate Development, GIGXR
Mixed reality devices like the HoloLens are transforming education now. Mark Christian will discuss how the technology is not about edge use cases or proofs of concept, but real, usable products already at universities transforming the way we teach and learn. Christian will talk about the products of GIGXR, the story of how they were developed, and what the research says about their efficacy. It is time to move to adoption of XR technology in education. Learn how one team has made this a reality.
As CEO of forward-thinking virtual reality and software companies, Mark Christian employs asymmetric approaches to rapid, global market adoption, hiring, diversity and revenue. He prides himself on unconventional approaches to building technology companies.
Virtual reality is an effective medium for educating students only if it is done right. Whether VR is considered a gimmick or not comes down to how developers design and develop the software applications, not to hardware limitations. I will give insight into VR development for educational content specifically designed for lower secondary school students. I will also provide insights into developing games in the Unity3D game engine.
Avinash Gyawali is a game and VR developer with over 3 years of experience in game development. The developer of Zombie Shooter, winner of various national awards in the gaming and entertainment category, he also created EDVR, an immersive voice-controlled VR experience specially designed for children aged 10-18.
Virtual Reality Technologies for Learning Designers
Margherita Berti
Virtual Reality (VR) is a computer-generated experience that simulates presence in real or imagined environments (Kerrebrock, Brengman, & Willems, 2017). VR promotes contextualized learning, authentic experiences, critical thinking, and problem-solving opportunities. Despite the great potential and popularity of this technology, the latest two installments of the Educause Horizon Report (2018, 2019) have argued that VR remains "elusive" in terms of mainstream adoption. The reasons are varied, including the expense and the lack of empirical evidence for its effectiveness in education. More importantly, examples of successful VR implementations for instructors who lack technical skills are still scarce. Margherita Berti will discuss a range of easy-to-use educational VR tools, examples of VR-based activities, and the learning theories and instructional design principles utilized in their development.
Margherita Berti is a doctoral candidate in Second Language Acquisition and Teaching (SLAT) and Educational Technology at the University of Arizona. Her research specialization resides at the intersection of virtual reality, the teaching of culture, and curriculum and content development for foreign language education.
Amanda Fox, Creative Director of STEAMPunks/MetaInk Publishing, MetaInk Publishing
There is a barrier between an author and the readers of his or her books: the author's journey ends, and the reader's begins. But what if, as an author/trainer, you could use gamification and augmented reality (AR) to interact with and coach your readers as part of their learning journey? Attend this session with Amanda Fox to learn how the book Teachingland leverages augmented reality tools such as Metaverse to connect with readers beyond the text.
Amanda Fox is Creative Director of STEAMPunksEdu and author of Teachingland: A Teacher's Survival Guide to the Classroom Apocalypse and Zom-Be A Design Thinker. Check her out on the Virtual Reality Podcast, or connect with her on Twitter @AmandaFoxSTEM.
Christian Jonathan Angel Rueda specializes in didactic activities that use virtual reality/virtual worlds to teach the fundamentals of design. He shares the development of a course that includes recreating, in the three-dimensional environment, the fundamentals learned in class; a demonstration of all the works developed throughout the semester, using knowledge of design foundations to show them creatively; and a final project class scenario that connects the scenes of the students who showed their work throughout the semester.
Christian Jonathan Angel Rueda is a research professor at the Autonomous University of Queretaro in Mexico. With a PhD in educational technology, Christian has published several papers on the intersection of education, pedagogy, and three-dimensional immersive digital environments. He is also an edtech, virtual reality, and social media consultant at Eco Onis.
How can we bridge the gap between eLearning and XR? Richard Van Tilborg discusses combining brain insights with new technologies, along with training and education cases realised with the CoVince platform: journeys which start on your mobile and continue in VR. He will also cover the possibilities to earn from your creations and to have a central distribution place for learning and data.
Richard Van Tilborg works with the CoVince platform, a VR platform offering training and educational programs for central distribution of learning and data. He is an author and speaker focusing on computers and education in virtual reality-based tasks for delivering feedback.
During our lab work on Jan 28, we experienced 360 video cardboard movies.
let’s take 5-10 min and check out the following videos (select and watch at least three of them)
F2F students, please use the Google Cardboard.
Online students, please view on your computer or mobile devices if you don't have goggles at your house (you can now purchase goggles for $5-7 from second-hand stores such as Goodwill).
Both F2F and online students: here are directions on how to easily open the movies on your mobile devices:
Copy the URL and email it to yourself.
Open the email on your phone and click on the link.
If you have goggles, click on the appropriate icon in the lower right corner and insert the phone into the goggles.
Open your D2L course on your phone (you can use the mobile app).
Go to the D2L Content Module with these directions and click on the link.
After the link opens, insert the phone into the goggles to watch the video.
Videos: While watching the videos, consider the following objectives:
– Does this particular technology fit within the instructional design (ID) frameworks and theories covered, e.g., PBL, CBL, Activity Theory, the ADDIE Model, TIM, etc. (https://blog.stcloudstate.edu/ims/2020/01/29/im-690-id-theory-and-practice/ )? Can you connect the current state, as well as the potential, of this technology with any of these frameworks and theories? For example, how would Google Tour Creator or any of these videos fit into the Analysis – Design – Development – Implementation – Evaluation process? Or how do you envision your Google Tour Creator project or any of these videos fitting into the Entry – Adoption – Adaptation – Infusion – Transformation process?
– How does this particular technology fit within the instructional design (ID) frameworks and theories covered so far?
– What models and ideas from the videos you will see seem possible for you to replicate?
Find one F2F and one online peer to form a group.
Based on the questions/directions you considered before watching the videos (see above):
Exchange thoughts with your peers and make a plan to create a similar educational product.
Evaluate whether the game you watched could be incorporated into the educational process.
Assignment: In 10-15 minutes (mind your peers, since we have only one headset), do your best to evaluate one educational app (e.g., Labster) and one leisure app (a game).
Use the same questions as above to evaluate the Lenovo Daydream headset.
As the new year begins, it's important to look toward what's coming in 2020. The augmented and virtual reality landscape is shifting rapidly, and it's crucial to fully understand what place AR and VR have in your organization's L&D strategy. Take time at the start of the new year to come up with a clear and informed strategy for how AR and VR should, can, and will impact learning in your organization. Spend a few days taking a deeper dive into the topic by joining us March 31 – April 2, 2020, in Orlando at the Realities360 Conference and Expo.
FEATURED KEYNOTE: HOW XR WILL SHAPE THE FUTURE OF ENTERPRISE
We’ve just added a new keynote to the lineup! During this focused keynote, Stephanie Llamas will explore how XR is on its way to revolutionizing enterprise and what this means for the enterprise XR industry as a whole. She will look at the key opportunities for content creators, where commercial organizations are investing their money, and finally, the ROI. You’ll leave this keynote with insights into how to take advantage of XR’s enormous opportunities and how the market’s future will affect your bottom line.
Realities360 is co-located with the Learning Solutions Conference and Expo, the learning and development event that takes ideas beyond theory and into practice. Your Realities360 registration includes access to everything offered at the Learning Solutions Conference – including 120+ additional L&D-focused sessions to choose from, and much more!
Register and pay by NEXT FRIDAY, February 7, and save $150 with the Last Chance registration discount – in addition to all other discounts you may qualify for!
Plus, employees of academic institutions save an additional 25% off registration!
Until now, technology that readily identifies everyone based on his or her face has been taboo because of its radical erosion of privacy. Tech companies capable of releasing such a tool have refrained from doing so; in 2011, Google's chairman at the time said it was the one technology the company had held back because it could be used "in a very bad way." Some large cities, including San Francisco, have barred police from using facial recognition technology.
But without public scrutiny, more than 600 law enforcement agencies have started using Clearview in the past year, according to the company, which declined to provide a list.
Facial recognition technology has always been controversial. It makes people nervous about Big Brother. It has a tendency to deliver false matches for certain groups, like people of color. And some facial recognition products used by the police — including Clearview’s — haven’t been vetted by independent experts.
Clearview deployed current and former Republican officials to approach police forces, offering free trials and annual licenses for as little as $2,000. Mr. Schwartz tapped his political connections to help make government officials aware of the tool, according to Mr. Ton-That.
“We have no data to suggest this tool is accurate,” said Clare Garvie, a researcher at Georgetown University’s Center on Privacy and Technology, who has studied the government’s use of facial recognition. “The larger the database, the larger the risk of misidentification because of the doppelgänger effect. They’re talking about a massive database of random people they’ve found on the internet.”
Part of the problem stems from a lack of oversight. There has been no real public input into adoption of Clearview’s software, and the company’s ability to safeguard data hasn’t been tested in practice. Clearview itself remained highly secretive until late 2019.
The software also appears to explicitly violate policies at Facebook and elsewhere against collecting users’ images en masse.
While there is underlying code that could theoretically be used for augmented reality glasses that would identify people on the street, Ton-That said there were no plans for such a design.
In May of last year, San Francisco banned facial recognition; the neighboring city of Oakland soon followed, as did Somerville and Brookline in Massachusetts (a statewide ban may follow). In December, San Diego suspended a facial recognition program in advance of a new statewide law, which declared it illegal, coming into effect. Forty major music festivals pledged not to use the technology, and activists are calling for a nationwide ban. Many Democratic presidential candidates support at least a partial ban on the technology.
Facial recognition bans are the wrong way to fight against modern surveillance. Focusing on one particular identification method misconstrues the nature of the surveillance society we're in the process of building. Ubiquitous mass surveillance is increasingly the norm. In countries like China, a surveillance infrastructure is being built by the government for social control. In countries like the United States, it's being built by corporations in order to influence our buying behavior, and is incidentally used by the government.
People can be identified at a distance by their heartbeat or by their gait, using a laser-based system. Cameras are so good that they can read fingerprints and iris patterns from meters away. And even without any of these technologies, we can always be identified because our smartphones broadcast unique numbers called MAC addresses.
The data broker industry is almost entirely unregulated; there’s only one law — passed in Vermont in 2018 — that requires data brokers to register and explain in broad terms what kind of data they collect.
Companies like Microsoft, Google and the start-up Magic Leap have all released AR glasses over the years, but none have gained massive consumer adoption.