AIXTRA, one of the winners of the VOXReality Open Call, has successfully completed a major testing phase. From July 21st to 25th in Leipzig, 34 participants took part in both single-user and multi-user VR training scenarios. The goal was to assess the user experience and technical performance of AI-powered VR training environments. Special attention was given to the AI trainer and voice processing features, with the team gathering end-user perceptions, subjective feedback, and objective data to ensure statistically meaningful insights.
A Diverse Group of Participants
The study’s participants, aged 18 to 44, came from various backgrounds and had differing levels of familiarity with AI and XR technologies. While many had professional experience with technology, most were “not much experienced in XR.” Concerns about AI were common, with participants frequently citing “data privacy and security, inaccuracy of AI-generated answers, dependence on AI reducing human skills, ethical considerations, bias in AI decisions, and environmental impact.” Reflecting this apprehension, a significant portion of participants (41.67%) reported only “neutral trust” in AI systems.
Participants’ native languages included German, English, Ukrainian-Russian, Dutch, and Vietnamese. While English proficiency ranged from beginner to fluent, many users acknowledged their accent influenced how well they were understood. “Some noted a mixed or regional accent… responses ranged from ‘never’ to ‘sometimes’ when asked if they had difficulties being understood.”
Feedback from the testing sessions offered invaluable insights into the AIXTRA system’s real-world performance. In the single-user mode, participants found the AI assistant helpful, with one stating that “it felt intuitive and safe to work with the AI assistant” and that it “helped me manage tasks better.” However, technical challenges also emerged. A common issue was the need for users to “repeat or rephrase their inputs” to be understood. One participant experienced this firsthand, recounting, “I had difficulties pronouncing the English word ‘Barometer’ correctly, and the AI only recognised it after the 4th or 5th attempt.”
In the multi-user environment, users noted translation delays and occasional inaccuracies. One participant pointed out an issue with audio quality, stating, “Poor sound quality of the assistant was difficult”, highlighting how hardware can affect clarity and immersion. Another noted a lack of comprehensive audio cues, commenting, “There wasn’t audio feedback at every intermediate step”, which could impact the guided learning process.
Standout Feedback and Future Plans
The sessions yielded several memorable quotes that captured the dual nature of AI’s impact. One user in the single-user scenario noted the “Advantage: no fear of asking the soulless AI for advice and help without blaming myself”, but immediately followed with a perceived “Disadvantage: less independent thinking and reflection”, revealing a broader concern about “Over-Reliance and Complacency”. In the multi-user setting, a participant, despite pointing out a critical error, enthusiastically concluded, “Otherwise it was pretty cool”.
Following this successful testing phase, the AIXTRA project is now moving into its final stages. The team will use a structured user study, approved by an ethics committee, to combine user and developer feedback for deeper insights. The project will also make its demo application available as Open Access and has two more publications planned to increase public visibility. A scientific paper is also in the works to “evaluate the results and show future trends in the field of AI and XR environment”.