Voice-driven interaction in XR spaces
Challenge: Integration Challenge
Objectives
CrossSense aims to support the independence of individuals with Mild Cognitive Impairment (MCI) and early- to mid-stage dementia in their home environments. Our primary goal is to slow cognitive decline while improving daily functioning, emotional wellbeing, and quality of life.
Solution
We have developed an AI-powered augmented reality (AR) app for smartglasses that delivers real-time, context-aware cognitive support. Using multimodal machine learning, including object recognition, depth perception, natural language processing, and vision-language models, the system identifies people, objects, and environments in real time. It overlays personalised, multisensory cues (e.g., name-tags, audio prompts, visual highlights) directly onto the user’s field of view, reinforcing memory and guiding daily tasks. The digital assistant, Wispy, adapts to each user’s cognitive profile, helping them form and recall associations through gentle, adaptive prompts. The system is built on open-source models and deployed via OpenXR on DigiLens Argo and NTT QONOQ MiRZA smartglasses. We are also leveraging the NVIDIA Inception Program to enhance AI performance and scalability.
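To illustrate the kind of pipeline described above, the sketch below shows how object and person detections might be turned into personalised overlay cues. It is a minimal, simplified example: all class and function names (Detection, Cue, CognitiveProfile, detections_to_cues) are hypothetical and do not reflect the actual CrossSense codebase or models.

```python
# Minimal sketch of a detection-to-cue pipeline (hypothetical names throughout).
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Detection:
    """A single recognised entity from the vision models."""
    label: str           # e.g. "kettle", "front door", "Anna"
    kind: str            # "object" | "person" | "place"
    confidence: float    # model confidence, 0..1
    depth_m: float       # estimated distance from the depth model


@dataclass
class Cue:
    """A multisensory prompt to overlay in the user's field of view."""
    text: str
    modality: str        # "visual" | "audio" | "both"


@dataclass
class CognitiveProfile:
    """Per-user settings that shape how prompts are delivered."""
    known_people: Dict[str, str]   # name -> relationship, e.g. {"Anna": "daughter"}
    prefers_audio: bool
    min_confidence: float = 0.6


def detections_to_cues(detections: List[Detection],
                       profile: CognitiveProfile) -> List[Cue]:
    """Turn raw detections into gentle, personalised cues."""
    cues: List[Cue] = []
    modality = "audio" if profile.prefers_audio else "visual"
    for det in detections:
        if det.confidence < profile.min_confidence:
            continue  # skip uncertain detections to avoid confusing prompts
        if det.kind == "person" and det.label in profile.known_people:
            relation = profile.known_people[det.label]
            cues.append(Cue(f"This is {det.label}, your {relation}.", "both"))
        elif det.kind == "object" and det.depth_m < 2.0:
            cues.append(Cue(f"Nearby: {det.label}.", modality))
    return cues


if __name__ == "__main__":
    profile = CognitiveProfile(known_people={"Anna": "daughter"}, prefers_audio=False)
    frame = [
        Detection("Anna", "person", 0.92, 1.5),
        Detection("kettle", "object", 0.81, 1.2),
        Detection("remote", "object", 0.40, 0.8),  # filtered out: low confidence
    ]
    for cue in detections_to_cues(frame, profile):
        print(f"[{cue.modality}] {cue.text}")
```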
Impact
In our early evaluation, users reported increased confidence, and caregivers felt the solution could reduce the need for daily care. These qualitative outcomes, combined with our AI-driven personalisation framework, position CrossSense as a scalable, non-invasive intervention with the potential to transform care for people with cognitive impairment, helping them live independent, fulfilling lives for longer.
Cognitive Aid for Individuals with MCI and Dementia:
Designer of immersive interfaces for mental health and XR, focusing on channelling user requirements, streamlining interaction, and building visual identity.
Full stack developer and technical lead, excelling in integrating research with development and design to deliver robust, user-centred software solutions.
Astrophysicist and data scientist solving complex challenges involving multimodal machine learning, leading backend development and the tuning and deployment of AI and ML models.
Service designer and frontend developer specialising in co-design and accessibility, leading user-centred design and testing processes to ensure software meets user needs.
Finance, operations, and management specialist coordinating Animorph’s resources, controlling scope, streamlining communication, prioritising tasks, and mitigating obstacles for timely project delivery.
Jon is a developer, technologist, and artist. He creates dynamic experiences at the intersection of mind, body, and the digital world. From innovative apps to ground-breaking collaborations with dancers, his work pushes the boundaries of technology and creativity.
Clinical neurological researcher who ensures the software aligns with research evidence and ethical guidelines, and leads the review of evaluation findings.
Animorph
“Working with VOXReality gave us the AI tools to make our assistant Wispy truly smart. Our mentors shepherded us through technical challenges and consistently supported us in building the app that can help others in unique ways. This genuine drive to positively impact society connected us with the VOXReality team and is the invisible thread that will help us find each other in the future!”
CERTH