metaverse definition

What the metaverse will (and won’t) be, according to 28 experts

The metaverse (hopefully) won’t be the virtual world of ‘Snow Crash’ or ‘Ready Player One.’ It will likely be something more complex, diverse, and wild.

The metaverse concept clearly means very different things to different people. What exists right now is a series of embryonic digital spaces, such as Facebook’s Horizon, Epic Games’ Fortnite, Roblox’s digital space for gaming and game creation, and the blockchain-based digital world Decentraland–all of which have clear borders, different rules and objectives, and differing rates of growth.

TIFFANY ROLFE

different layers of realities that we can all be experiencing, even in the same environment or physical space. We’re already doing that with our phones to a certain extent—passively in a physical environment while mentally in a digital one. But we’ll see more experiences beyond your phone, where our whole bodies are fully engaged, and that’s where the metaverse starts to get interesting—we genuinely begin to explore and live in these alternate realities simultaneously.

RONY ABOVITZ, FOUNDER, MAGIC LEAP

Xverse

It will have legacy parts that look and feel like the web today, but it will have new nodes and capabilities that will look and feel like the Ready Player One Oasis (amazing gaming worlds), immersion leaking into our world (like my Magicverse concept), and every imaginable permutation of these. I feel that the Xverse will have gradients of sentience and autonomy, and we will have the emergence of synthetic life (things Sun and Thunder is working on) and a multitude of amazing worlds to explore. Building a world will become something everyone can do (like building a webpage or a blog) and people will be able to share richer parts of their external and inner lives at incredibly high-speed across the planet.

YAT SIU, COFOUNDER AND EXECUTIVE CHAIRMAN OF GAMING AND BLOCKCHAIN COMPANY ANIMOCA BRANDS

Reality will exist on a spectrum ranging from physical to virtual (VR), but a significant chunk of our time will be spent somewhere between those extremes, in some form of augmented reality (AR). Augmented reality will be a normal part of daily life. Virtual companions will provide information, commentary, updates and advice on matters relevant to you at that point in time, including your assets and activities, in both virtual and real spaces.

TIMONI WEST, VP OF AUGMENTED AND VIRTUAL REALITY, UNITY

I think we can all agree our initial dreams of a fully immersive, separate digital world is not only unrealistic, but maybe not what we actually want. So I’ve started defining the metaverse differently to capture the zeitgeist: we’re entering an era where every computer we interact with, big or small, is increasingly world-aware. They can recognize faces, voices, hands, relative and absolute position, velocity, and they can react to this data in a useful way. These contextually aware computers are the path to unlocking ambient computing: where computers fade from the foreground to the background of everyday, useful tools. The metaverse is less of a ‘thing’ and more of a computing era. Contextual computing enables a multitude of new types of interactions and apps: VR sculpting tools and social hangouts, self-driving cars, robotics, smart homes.

SAM HAMILTON, HEAD OF COMMUNITY & EVENTS FOR BLOCKCHAIN-BASED METAVERSE CREATOR THE DECENTRALAND FOUNDATION

NITZAN MEKEL-BOBROV, CHIEF AI OFFICER, EBAY

as carbon is to the organic world, AI will be both the matrix that provides the necessary structural support and the material from which digital representation will be made. Of all the ways in which AI will shape the form of the metaverse, perhaps most essential is the role it will play in the physical-digital interface. Translating human actions into digital input–language, eye movement, hand gestures, locomotion–these are all actions which AI companies and researchers have already made tremendous progress on.

HUGO SWART, VICE PRESIDENT AND GM OF XR, QUALCOMM

Qualcomm views the metaverse as an ever-present spatial internet complete with personalized digital experiences that spans the physical and virtual worlds, where everything and everyone can communicate and interact seamlessly.

IBRAHIM BAGGILI, FOUNDING DIRECTOR, CONNECTICUT INSTITUTE OF TECHNOLOGY AT UNIVERSITY OF NEW HAVEN

As an active researcher in the security and forensics of VR systems, I believe that, should the metaverse come into existence, we should explore and hypothesize about the ways it will be misused.

CHITRA RAGAVAN, CHIEF STRATEGY OFFICER AT BLOCKCHAIN DATA ANALYTICS COMPANY ELEMENTUS 

I picture [the metaverse] almost like The Truman Show. Only, instead of walking into a television set, you walk into the internet and can explore any number of different realities.

JOHN HANKE, CEO OF POKÉMON GO CREATOR NIANTIC

We imagine the metaverse as reality made better, a world infused with magic, stories, and functionality at the intersection of the digital and physical worlds.

CAROLINA ARGUELLES NAVAS, GLOBAL PRODUCT MARKETING, AUGMENTED REALITY, SNAP

Rather than building the “metaverse,” a separate and fully virtual reality that is disconnected from the physical world, we are focused on augmenting reality, not replacing it. We believe AR–or computing overlaid on the world around us–has a smoother path to mass adoption, but will also be better for the world than a fully virtual world.

URHO KONTTORI, COFOUNDER AND CTO OF AR/VR HEADSET MAKER VARJO

In the reality-based metaverse, we will be able to more effectively design products of the future, meet and collaborate with our colleagues far away, and experience any remote place in real-time.

CATHERINE ALLEN, CEO OF IMMERSIVE TECH RESEARCH CONSULTANCY LIMINA IMMERSIVE

I prefer to think of the metaverse as simply bringing our bodies into the internet.

BRANDS IN THE METAVERSE

VISHAL SHAH, VP OF METAVERSE, FACEBOOK

The metaverse isn’t just VR! Those spaces will connect to AR glasses and to 2D spaces like Instagram. And most importantly, there will be a real sense of continuity where the things you buy are always available to you.

SAYON DEB, MANAGER, MARKET RESEARCH, CONSUMER TECHNOLOGY ASSOCIATION

At its core will be a self-contained economy that allows individuals and businesses to create, own or invest in a range of activities and experiences.

NANDI NOBELL, SENIOR ASSOCIATE AT GLOBAL ARCHITECTURE AND DESIGN FIRM CALLISONRTKL

the metaverse experience can be altered from the individual’s point of view and shaped or curated by any number of agents—whether human or A.I. In that sense, the metaverse does not have an objective look beyond its backend. In essence, the metaverse, together with our physical locations, forms a spatial continuum.

NICK CHERUKURI, CEO AND FOUNDER OF MIXED REALITY GLASSES MAKER THIRDEYE

The AR applications of the metaverse are limitless and it really can become the next great version of the internet.

SAM TABAR, CHIEF STRATEGY OFFICER, BITCOIN MINING COMPANY BIT DIGITAL

It seems fair to predict that the actual aesthetic of any given metaverse will be determined by user demand. If users want to exist in a gamified world populated by outrageous avatars and fantastic landscapes, then the metaverse will respond to that demand. Like all things in this world, the metaverse will be market driven.

+++++++++++++++
more on the metaverse in this IMS blog
https://blog.stcloudstate.edu/ims?s=metaverse

NSF AI Institute for Adult Learning and Online Education (ALOE)

NSF investing $20 million in Georgia-led effort to transform online education for adults

Project centers on artificial intelligence; new National Institute in AI to be headquartered at Georgia Tech

https://gra.org/blog/209

“The goal of ALOE is to develop new artificial intelligence theories and techniques to make online education for adults at least as effective as in-person education in STEM fields,” says Co-PI Ashok Goel, Professor of Computer Science and Human-Centered Computing and the Chief Scientist with the Center for 21st Century Universities at Georgia Tech.

Research and development at ALOE aims to blend online educational resources and courses to make education more widely available, as well as use virtual assistants to make it more affordable and achievable. According to Goel, ALOE will make fundamental advances in personalization at scale, machine teaching, mutual theory of mind and responsible AI.

The ALOE Institute represents a powerful consortium of several universities (Arizona State, Drexel, Georgia Tech, Georgia State, Harvard, UNC-Greensboro); technical colleges in TCSG; major industrial partners (Boeing, IBM and Wiley); and non-profit organizations (GRA and IMS).

+++++++++++++++++++++++
more on AI in this IMS blog
https://blog.stcloudstate.edu/ims?s=artificial+intelligence

https://blog.stcloudstate.edu/ims?s=online+education

 

Computational Thinking

https://www.edsurge.com/news/2019-05-21-computational-thinking-is-critical-thinking-and-it-works-in-any-subject/

Computational thinking is one of the biggest buzzwords in education—it’s even been called the ‘5th C’ of 21st century skills.

Document-based questions have long been a staple of social studies classrooms.

Since the human brain is essentially wired to recognize patterns, computational thinking—somewhat paradoxically—doesn’t necessarily require the use of computers at all.

In a 2006 paper for the Association for Computing Machinery, computer scientist Jeanette Wing wrote a definition of computational thinking that used terms native to her field—even when she was citing everyday examples. Thus, a student preparing her backpack for the day is “prefetching and caching.” Finding the shortest line at the supermarket is “performance modeling.” And performing a cost-benefit analysis on whether it makes more sense to rent versus buy is running an “online algorithm.” “Computational thinking will have become ingrained in everyone’s lives when words like algorithm and precondition are part of everyone’s vocabulary,” she writes.
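Wing’s rent-versus-buy example can be made concrete. The sketch below is an illustrative take on the classic “ski rental” online algorithm, with hypothetical costs: it decides one day at a time, without knowing in advance how long the item will be needed, which is exactly what makes it “online.”

```python
def rent_or_buy(days_needed, rent_per_day, buy_price):
    """Break-even online strategy: keep renting until the rent already
    paid would match the purchase price, then buy.

    The decision is made day by day, with no knowledge of days_needed."""
    rent_paid = 0
    for _ in range(days_needed):
        if rent_paid + rent_per_day >= buy_price:
            return rent_paid + buy_price  # switch from renting to buying
        rent_paid += rent_per_day
    return rent_paid  # stopped needing it before the break-even point

# Short need: renting wins outright.
print(rent_or_buy(3, rent_per_day=10, buy_price=100))   # 30
# Long need: the strategy pays at most about twice the best hindsight choice.
print(rent_or_buy(30, rent_per_day=10, buy_price=100))  # 190
```

In the long-need case, the best choice in hindsight would have been buying for 100, so the online strategy pays at most roughly double the optimum no matter how long the need lasts.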

three main steps:

Looking at the data: Deciding what’s worth including in the final data set, and what should be left out. What are the different tools that can help manipulate this data—from GIS tools to pen and paper?

Looking for patterns: Typically, this involves shifting to greater levels of abstraction—or conversely, getting more granular.

Decomposition: What’s a trend versus what’s an outlier to the trend? Where do things correlate, and where can you find causal inference?
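The three steps above can be sketched without any special tooling. This is a minimal illustration using made-up rainfall readings (the numbers are purely hypothetical):

```python
import statistics

# Hypothetical yearly rainfall readings (inches) -- illustrative data only.
readings = [30.1, 29.8, 31.2, 30.5, 55.0, 29.9, 30.7]

# Step 1 -- Looking at the data: decide what belongs in the final data set.
data = [r for r in readings if r is not None and r > 0]

# Step 2 -- Looking for patterns: shift to a higher level of abstraction
# by summarizing the values.
mean = statistics.mean(data)
stdev = statistics.stdev(data)

# Step 3 -- Decomposition: separate the trend from the outliers.
outliers = [r for r in data if abs(r - mean) > 2 * stdev]
trend = [r for r in data if abs(r - mean) <= 2 * stdev]
```

Here the 55.0 reading lands in `outliers`, while the remaining values form the trend — the same filter/abstract/decompose sequence the steps describe, in a few lines of code.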

++++++++++++++++++++
more on critical thinking in this IMS blog
https://blog.stcloudstate.edu/ims?s=critical+thinking

accessibility in XR

Regine Gilbert talks about accessibility in XR

https://skarredghost-com.cdn.ampproject.org/c/s/skarredghost.com/2021/05/19/regine-accessibility-xr/amp/

What do you think is the current status of XR experiences? Are most of them accessible?

As with all technology, XR is evolving. The current status in terms of accessibility is that more folks need to be educated about accessibility in the VR space. In general, most experiences are not accessible, yet.

Oculus For Developers has some documentation for Designing accessible VR. You can find it here:
https://developer.oculus.com/learn/design-accessible-vr/

My current research with VEIL (Virtual Experience Interaction Lab https://www.veilab.org/) involves examining Design Patterns in VR. My future work involves research into inclusive and accessible XR. In addition, I am working on a book that will be related to XR and spatial computing.

How can people support you?

You can check out my book that is available online at this link: https://www.apress.com/gp/book/9781484250150

++++++++++++++++++
more on XR in this IMS blog
https://blog.stcloudstate.edu/ims?s=extended+reality

Facebook AR

Facebook Lab Reveals Direction Of AR Smartglasses

https://www.forbes.com/sites/charliefink/2021/03/18/facebook-lab-reveals-direction-of-ar-smartglasses/

FRL presented their concept of the “intelligent click,” a series of gestures, some large, some nearly unconscious nerve impulses, detected by a wristband. This would communicate intent to the operating AI, which would know, and anticipate, what the user needs to know before the user knows they need it.

its goal is a “human centered interface,” which will use preferences and surroundings to infer intent, creating an “ultra low friction” computing experience.

++++++++++++++++
more on AR in this IMS blog
https://blog.stcloudstate.edu/ims?s=Augmented+reality

AI data and infodemic

AI progress depends on us using less data, not more

A minimal-data practice will enable several AI-driven industries — including cyber security, which is my own area of focus — to become more efficient, accessible, independent, and disruptive.

1. AI has a compute addiction. The growing fear is that new advancements in experimental AI research, which frequently require formidable datasets supported by an appropriate compute infrastructure, might be stemmed due to compute and memory constraints, not to mention the financial and environmental costs of higher compute needs.

MIT researchers estimated that “three years of algorithmic improvement is equivalent to a 10 times increase in computing power.”

2. Big data can mean more spurious noise. 

++++++++++++++
more on infodemic in this IMS blog
https://blog.stcloudstate.edu/ims?s=infodemic

immersive and goggles

The tech industry is looking to replace the smartphone — and everybody is waiting to see what Apple comes up with

https://www.cnbc.com/2021/02/20/apple-facebook-microsoft-battle-to-replace-smartphone-with-ar.html

Apple’s working on solving this problem, too, according to a report in Nikkei Asia. The newspaper says that Apple is working with TSMC, its primary processor manufacturer, to develop a new kind of augmented reality display that’s printed directly on wafers, or the base layer for chips.

If Apple does eventually reveal a big leap forward in AR display technology—especially if the technology is developed and owned by Apple instead of a supplier—Apple could find itself with a multi-year head start in augmented reality, as it did when the iPhone vaulted it to the head of the smartphone industry.

Apple is also adding hardware to its iPhones that hints at a headset-based future. High-end iPhones released in 2020 include advanced Lidar sensors embedded in their cameras.

Microsoft has invested heavily in these kinds of technologies, purchasing AltspaceVR, a social network for virtual reality, in 2017. Before it launched HoloLens, it paid $150 million for intellectual property from a smartglasses pioneer.

Facebook CEO Mark Zuckerberg speaks the most in public about his hopes for augmented reality. Last year, he said, “While I expect phones to still be our primary devices through most of this decade, at some point in the 2020s, we will get breakthrough augmented reality glasses that will redefine our relationship with technology.”

+++++++++
more on immersive in this IMS blog
https://blog.stcloudstate.edu/ims?s=immersive

Musk’s brain-computer startup

Elon Musk’s brain-computer startup is getting ready to blow your mind

Musk reckons his brain-computer interface could one day help humans merge with AI, record their memories, or download their consciousness. Could he be right?

https://www.zdnet.com/article/elon-musks-brain-computer-startup-is-getting-ready-to-blow-your-mind/

The idea is to solve these problems with an implantable digital device that can interpret, and possibly alter, the electrical signals made by neurons in the brain.

the latest iteration of the company’s hardware: a small, circular device that attaches to the surface of the brain, gathering data from the cortex and passing it on to external computing systems for analysis.

Several different types of working brain-computer interfaces already exist, gathering data on electrical signals from the user’s brain and translating them into data that can be interpreted by machines.

++++++++++++

If we put computers in our brains, strange things might happen to our minds

Using a brain-computer interface can fundamentally change our grey matter, a view of ourselves and even how fast our brains can change the world.

https://www.zdnet.com/article/if-we-put-computers-in-our-brains-strange-things-might-happen-to-our-minds/

++++++++++++
more on AI in this IMS blog
https://blog.stcloudstate.edu/ims?s=artificial+intelligence

Hands-on is “goggles-on”

https://www.insidehighered.com/digital-learning/blogs/online-trending-now/hands-classes-distance-and-emerging-virtual-future

As we enter the Fourth Industrial Revolution (4IR), we must be vigilant to keep our classes relevant to the rapidly changing workplace and the emerging digital aspects of life in the 2020s.

deployment of 5G delivery to mobile computing

Certainly, 5G provides a huge upgrade in bandwidth, enabling better streaming of video and gaming. However, it is the low latency of 5G that enables the most powerful potential for distance learning. VR, AR and XR could not smoothly function in the 4G environment because of the lag in images and responses caused by a latency rate of 50 milliseconds (ms). The new 5G technologies drop that latency rate to 5 ms or less, which produces responses and images that our brains perceive as seamlessly instant.
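A back-of-the-envelope calculation shows why that drop matters, assuming a 72 Hz headset display (a common refresh rate; the exact figure varies by device):

```python
refresh_hz = 72
frame_ms = 1000 / refresh_hz  # about 13.9 ms of display time per frame

# How many whole frames a round-trip response lags behind the user's motion.
frames_behind_4g = 50 / frame_ms  # 4G: ~3.6 frames of visible lag
frames_behind_5g = 5 / frame_ms   # 5G: well under one frame

print(round(frames_behind_4g, 1), round(frames_behind_5g, 2))  # 3.6 0.36
```

At 50 ms, the rendered image trails head movement by several frames, which the brain notices (and which contributes to motion sickness); at 5 ms, the lag fits inside a single frame, so the response reads as instant.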

+++++++++++++
more on the 4IR in this IMS blog
https://blog.stcloudstate.edu/ims?s=industrial+revolution

XR anatomy

The EDUCAUSE XR (Extended Reality) Community Group Listserv <XR@LISTSERV.EDUCAUSE.EDU>

Greetings to you all! Presently, I am undertaking a masters course in “Instruction Design and Technology” which has two components: Coursework and Research. For my research, I would like to pursue it in the field of Augmented Reality (AR) and Mobile Learning. I am thinking of an idea that could lead to collaboration among students and directly translate into enhanced learning for students while using an AR application. However, I am having a problem with coming up with an application because I don’t have any computing background. This, in turn, is affecting my ability to come up with a good research topic.

I teach gross anatomy and histology to many students of health sciences at Mbarara University, and this is where I feel I could make a contribution to learning anatomy using AR since almost all students own smartphones. I, therefore, kindly request you to let me know which of the freely-available AR app authoring tools could help me in this regard. In addition, I request for your suggestions regarding which research area(s) I should pursue in order to come up with a good research topic.

Hoping to hear from you soon.

Grace Muwanga, Department of Anatomy, Mbarara University, Uganda (East Africa)

++++++++++++

matthew.macvey@journalism.cuny.edu

Dear Grace, a few augmented reality tools which I’ve found are relatively easy to get started with:

For iOS, iPhone, iPad: https://www.torch.app/ or https://www.adobe.com/products/aero.html

To create AR that will work on social platforms like Facebook and Snapchat (and will work on Android and iOS), try https://sparkar.facebook.com/ar-studio/ or https://lensstudio.snapchat.com/. You’ll want to look at the tutorials for plane tracking or target tracking: https://sparkar.facebook.com/ar-studio/learn/documentation/tracking-people-and-places/effects-in-surroundings/

https://lensstudio.snapchat.com/guides/general/tracking/tracking-modes/

One limitation with Spark and Snap is that file sizes need to be small.

If you’re interested in creating AR experiences that work directly in a web browser and are up for writing some markup code, look at A-Frame AR https://aframe.io/blog/webxr-ar-module/.

For finding and hosting 3D models you can look at Sketchfab and Google Poly. I think both have many examples of anatomy.

Best, Matt

+++++++++++

“Beth L. Ritter-Guth” <britter-guth@NORTHAMPTON.EDU>

I’ve been using Roar. They have a $99-a-year license.

++++++++++++

I have recently been experimenting with an AR development tool called Zappar, which I like because the end users do not have to download an app to view the AR content. Codes can be scanned either with the Zappar app or at web.zappar.com.

From a development standpoint, Zappar has an easy-to-use drag-and-drop interface called ZapWorks Designer that will help you build basic AR experiences quickly, but for a more complicated, more interactive use case such as learning anatomy, you will probably need ZapWorks Studio, which will have much more of a learning curve. The Hobby (non-commercial) license is free if you are interested in trying it out.

You can check out an AR anatomy mini-lesson with models of the human brain, liver, and heart using ZapWorks here: https://www.zappar.com/campaigns/secrets-human-body/. Even if you choose to go with a different development tool, this example might help nail down ideas for your own project.

Hope this helps,

Brighten

Brighten Jelke Academic Assistant for Virtual Technology Lake Forest College bjelke@lakeforest.edu Office: DO 233 | Phone: 847-735-5168

http://www.lakeforest.edu/academics/resources/innovationspaces/virtualspace.php

+++++++++++++++++
more on XR in education in this IMS blog
https://blog.stcloudstate.edu/ims?s=xr+education
