
VOXReality: Immersive XR experiences combining language and vision AI models

Authors: Apostolos Maniatis, Stavroula Bourou, Zacharias Anastasakis, Kostantinos Psychogios

In recent years, Artificial Intelligence (AI) technology has seen significant growth due to advancements in machine learning (ML) and data processing, as well as the availability of large amounts of data. The integration of AI with eXtended Reality (XR) technologies such as Virtual Reality (VR) and Augmented Reality (AR) can create innovative solutions and provide intuitive interactions and immersive experiences across various sectors, including education, entertainment and healthcare. 

This paper describes the innovative Voice-driven Interaction in XR Spaces (VOXReality) initiative, funded by the European Commission, which integrates language- and vision-based AI through unidirectional and bidirectional exchanges to drive AR and VR, enabling natural human interaction with XR systems and creating multi-modal XR experiences. It aligns the parallel progress of Natural Language Processing (NLP) and Computer Vision (CV) to design novel models and techniques that integrate language and visual understanding with XR, providing a holistic understanding of user goals, environment, and context.

VOXReality plans to validate its visionary approaches through three use cases: an XR personal assistant, real-time verbal communication in virtual conferences, and an immersive experience for audiences of theatrical plays.
