Extended reality – XR for short – is a term that crops up ever more frequently in blog posts, articles and social media, and is gaining what looks like irreversible traction. In a field marked by such rapid evolution, it is worth asking: what is the shape of XR today, and how can it shape our collective tomorrow?
XR describes a spectrum of interactive environments which combine physical and virtual space to varying degrees, with Virtual Reality sitting at one end and physical reality at the other.
The Virtual Reality end of this spectrum is a combination that can be clearly described and communicated. It refers to experiences in which the user's view of the physical world is entirely blocked and the user is instead enveloped in virtual content. Such experiences require a specialised device to be delivered: a virtual reality headset.
The headset is typically also equipped with speakers for spatial audio output, so that the resulting experience occupies two of the user's predominant senses for perceiving the environment. Virtual reality applications use the user's physical movement as an input modality for driving interactions, frequently assisted by controllers which can also provide haptic feedback to the sense of touch, thus situating the user in an interactive virtual environment in an immersive way.
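To make the role of movement as an input modality more concrete, here is a minimal sketch (in Python with NumPy, not tied to any particular headset SDK) of how a tracked head pose – a position and orientation reported by the headset's sensors – could be turned into the view matrix a renderer uses to draw the virtual scene from the user's point of view. All names and numbers below are illustrative assumptions.

```python
import numpy as np

def quat_to_rotation_matrix(q):
    """Convert a unit quaternion (w, x, y, z) into a 3x3 rotation matrix."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def view_matrix_from_head_pose(position, orientation):
    """Build a 4x4 view matrix from a tracked head position and orientation.

    The view matrix is the inverse of the head's world transform, so rendering
    with it makes the virtual camera follow the user's physical movement.
    """
    R = quat_to_rotation_matrix(orientation)
    view = np.eye(4)
    view[:3, :3] = R.T                          # inverse rotation
    view[:3, 3] = -R.T @ np.asarray(position)   # inverse translation
    return view

# Illustrative example: the headset reports the user standing at (0, 1.7, 0)
# metres, looking straight ahead (identity orientation).
print(view_matrix_from_head_pose([0.0, 1.7, 0.0], [1.0, 0.0, 0.0, 0.0]))
```

Every time new sensor readings arrive, the view matrix is rebuilt, which is what makes the rendered scene respond to head movement frame by frame.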
Augmented reality, on the other hand, is not as straightforward to describe. AR is defined as the overlay of virtual content on physical space, but executing this idea entails variations that end up offering a range of experiences with significant differences.
First, unlike the specialised devices required to run VR applications, the public has so far associated Augmented Reality with commonplace smartphones that carry the hardware and software components needed to support AR applications. In hardware terms, that means adequate computational power, one or more embedded cameras and one or more inertial measurement units (IMUs).
In software terms, it involves at a minimum algorithms for tracking translation in a relative coordinate system using input from the IMUs, image analysis with feature extraction and matching to correct positional drift, and depth estimation and mesh generation for understanding the geometry of the physical space.
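As an illustration of the image-analysis step, the following is a minimal sketch using OpenCV's ORB detector to extract and match features between two consecutive camera frames; correspondences like these are what a tracking pipeline can use to correct the positional drift accumulated from IMU integration. The file names are placeholders.

```python
import cv2

# Two consecutive frames from the device camera (placeholder file names).
frame_prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
frame_curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Detect keypoints and compute binary descriptors in each frame.
orb = cv2.ORB_create(nfeatures=1000)
kp_prev, des_prev = orb.detectAndCompute(frame_prev, None)
kp_curr, des_curr = orb.detectAndCompute(frame_curr, None)

# Match descriptors between frames; Hamming distance suits ORB's binary descriptors.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_prev, des_curr), key=lambda m: m.distance)

# The best matches give pixel correspondences from which the camera motion
# between the two frames can be estimated and used to correct IMU drift.
for m in matches[:10]:
    print(kp_prev[m.queryIdx].pt, "->", kp_curr[m.trainIdx].pt)
```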
The application can then overlay digital content on the feed from the smartphone’s camera in a way that respects geometry and device orientation, and display the composite on the smartphone device screen. Users are able to interact with the digital content by interacting with the device, e.g. with touches on the touchscreen, as in other non-XR mobile applications.
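The overlay step essentially comes down to projecting virtual 3D content into the camera image using the tracked device pose and the camera's intrinsic parameters. Below is a minimal, library-agnostic NumPy sketch of that projection; the intrinsics and pose are made-up numbers for illustration, not values from any real device.

```python
import numpy as np

def project_point(point_world, R, t, K):
    """Project a 3D point (world coordinates) onto the image plane.

    R and t describe the camera pose (world -> camera), and K holds the
    camera intrinsics (focal lengths and principal point) of a pinhole model.
    """
    p_cam = R @ point_world + t   # world -> camera coordinates
    p_img = K @ p_cam             # camera -> pixel coordinates (homogeneous)
    return p_img[:2] / p_img[2]   # perspective divide

# Illustrative intrinsics for a 1080x1920 portrait camera image.
K = np.array([[1400.0,    0.0,  540.0],
              [   0.0, 1400.0,  960.0],
              [   0.0,    0.0,    1.0]])

# Illustrative pose: camera at the origin, looking down the +Z axis.
R = np.eye(3)
t = np.zeros(3)

# A virtual object anchored half a metre in front of the camera projects to a
# pixel near the image centre; as the device (and thus R, t) moves, the pixel
# location changes so the overlay stays attached to the same physical spot.
anchor = np.array([0.05, 0.0, 0.5])
print(project_point(anchor, R, t, K))
```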
It follows that viewing and interacting with an AR application is, as a user experience, not innovative per se. AR technology in this context has so far powered very popular games, like Pokémon Go, as well as applications in the art space, e.g. AR art galleries like Artivive.
On another front, the public is slowly but steadily being introduced to devices called smart glasses, AR glasses or AR displays – the terminology is not yet standardised, since this is a rather new field for mass production. For the scope of this post, we will adopt the term AR displays.
AR displays are glasses frames with an embedded display, either on one lens only (monocular, for less obtrusion) or on both (binocular, for more immersion). AR displays cast a projection onto the surface of the lens so that the virtual content is interposed between the user's eye and physical space. In other words, AR displays can be thought of as portable monitors. Just as with traditional monitors, the size of the projection area and its resolution define its visualisation capabilities.
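A simple way to quantify that capability is angular resolution, i.e. how many pixels the display offers per degree of field of view. The short sketch below computes it for made-up example values; the numbers are assumptions for illustration, not specifications of any real product.

```python
# Angular resolution (pixels per degree) of an AR display: a rough proxy for
# how sharp text and imagery can look on the projected "virtual monitor".
def pixels_per_degree(horizontal_pixels: int, horizontal_fov_deg: float) -> float:
    return horizontal_pixels / horizontal_fov_deg

# Illustrative example: a panel 1920 pixels wide spread over a 45-degree
# horizontal field of view.
print(pixels_per_degree(1920, 45.0))   # ~42.7 px/deg

# The same panel stretched over a wider 90-degree field of view halves the
# angular resolution, so content looks coarser.
print(pixels_per_degree(1920, 90.0))   # ~21.3 px/deg
```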
AR displays may have little or no computational power of their own and usually depend on a connection to an external device. AR displays in the monocular category can usually connect wirelessly to a smartphone for increased portability, while AR displays in the binocular category (which typically support higher resolutions) can connect via cable to a variety of devices, notably laptops and handheld consoles.
Common use cases for monocular AR displays include short interactions on the go, like reading messages, previewing notifications, getting directions, viewing photos or watching videos. Common use cases for binocular AR displays include a great range of everyday tasks, like office work, multimedia editing or gaming. AR displays may also have a built-in camera, so that the user can take pictures of their surroundings in a less intrusive manner (as one would with, e.g., a GoPro camera).
In conclusion, it is important to note that although these devices do succeed in overlaying virtual content on physical space, the composite result is restricted to overlaying a flat 2D screen on 3D space, unlike the previously mentioned smartphone AR applications, where the virtual content is placed in a way that respects depth and perspective to simulate 3D objects.
The next question that sensibly comes to mind is: Is there an AR display with compositing capabilities on par with AR mobile applications? Here is where the waters become even more murky. It can be stated that there are currently no consumer-oriented devices that support such an experience. The lay of the land is shifting fast though, and change is coming from two opposing directions towards this desired middle ground.
On the one hand, the VR space is working towards lifting the impenetrable visual barrier between the user and physical space. Manufacturers are testing VR headsets with front-facing colour cameras of adequate resolution that record the physical environment and stream it back to the headset's display, so that the user can re-establish visual contact with their surroundings while wearing the headset.
This feature, called pass-through, can allow developers to produce applications which combine physical world data with virtual world data in novel ways. One could not only view 3D virtual objects positioned in their physical space in a more immersive way compared to a flat smartphone screen, but one could also manipulate said objects using controllers and/or their hands, as well as define how these objects interact with the rest of the physical world (such as collisions or context-aware behaviours). This combination is frequently called Mixed Reality.
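One simple way to illustrate that blend is occlusion-aware compositing: for each pixel, the virtual content is shown only where it is closer to the viewer than the physical surface estimated by the headset's depth sensing, so real objects can hide virtual ones. The sketch below performs this per-pixel comparison with NumPy on made-up depth maps; it is a conceptual illustration, not any vendor's actual pipeline.

```python
import numpy as np

def composite(passthrough_rgb, physical_depth, virtual_rgb, virtual_depth):
    """Blend virtual content into the pass-through image with occlusion.

    A virtual pixel is shown only where its depth is smaller (closer to the
    user) than the estimated depth of the physical scene at that pixel.
    """
    virtual_in_front = virtual_depth < physical_depth
    out = passthrough_rgb.copy()
    out[virtual_in_front] = virtual_rgb[virtual_in_front]
    return out

# Tiny made-up example: a 2x2 image where the physical scene is 1.0 m away
# everywhere, and a virtual object sits at 0.5 m in the right column but at
# 2.0 m in the left column (i.e. behind a real surface, so it stays hidden).
passthrough = np.zeros((2, 2, 3), dtype=np.uint8)      # camera feed (black)
virtual     = np.full((2, 2, 3), 255, dtype=np.uint8)  # virtual layer (white)
physical_d  = np.full((2, 2), 1.0)
virtual_d   = np.array([[2.0, 0.5],
                        [2.0, 0.5]])

print(composite(passthrough, physical_d, virtual, virtual_d))
```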
On the other hand, the AR space is also evolving rapidly, experimenting with AR devices that have a glasses-like form factor but enough sensors and computational power to support AR independently, as a smartphone would. Such devices can vary greatly in concept, and the related keywords used to communicate about them can become even more confusing.
Terms like head-mounted AR device, holographic device and Mixed Reality device are sometimes used interchangeably. Regardless of the choice of words, the decisive factor remains the same: these headsets allow for a more seamless and powerful blend between the virtual and the physical, offering more valuable experiences to their users.
It is worth noting that such devices are currently restricted by short battery life (approximately two hours, depending on intensity of use), may exhibit (over)heating issues due to their compact form factor, which also relate to the intensity and duration of use, and may be reported as uncomfortable by their users due to weight or fitting issues.
They may also conflict with, or require additional lens fittings for, individuals who wear vision-correction glasses. Due to their high price ranges, they are currently out of reach for the greater public and are mainly used by enterprises in research and industry. Nonetheless, popularisation is accelerating, as the VR industry has proven over the past five years.
Therefore, it is reasonable to expect decisive changes in the coming few years that will determine whether one tendency or the other takes hold, and which concepts will pass the experimental phase and establish themselves as the XR standards of the future. The future is promising and is unfolding fast!
Tareq Chihabi
Project Manager.
His projects mainly focus on the Medical and Health, IT, Cybersecurity, and XR/AR domains, at both the management and technical level.
He also has a deep background in business modelling and business planning, including constructing exploitation plans for specific products.