
Partner Journey #7 with UM

In this interview, we sit down with the project’s Scientific Coordinator, Maastricht University, to discuss how the consortium navigated the “turbo-fast” pace of AI development, the challenges of deploying models in noisy environments, and the ethical frameworks that ensured responsible innovation. From publishing 16 scientific papers to releasing dozens of open-access models, VOXReality provides a blueprint for the future of immersive, intelligent technology.

As the scientific coordinator of VOXReality, how would you summarize the project’s main scientific achievements, particularly in the areas of NLP, CV, and AI integration?

At its core, the project had two main goals: advancing the natural language processing literature and making sure these technologies actually work inside real XR applications. To do that, we not only worked on fundamental research but also developed applications to test it in real-world cases. We published 16 scientific papers introducing new methods and models. We also made our work available to the public by releasing 25 models and 6 datasets shaped by our scientific outcomes and the needs of the XR use cases. On top of that, we developed three optimization tools to make these models light and efficient enough to run in XR environments. From the end-users’ perspective, what really matters is how this work performs outside the lab. We demonstrated the capabilities of our models through three applications developed within the project and five more built by our external partners, each combining our models in different ways. These eight applications were tested with real end users in pilot studies, and the results gave us solid scientific evidence that the project’s achievements are not just academically sound, but also practical and valuable in real-world XR scenarios.
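The interview does not detail what the three optimization tools do, but as a purely illustrative sketch of the kind of technique used to make models light enough for XR hardware, post-training dynamic quantization shrinks a transformer’s linear layers to int8. The checkpoint and file names below are assumptions for the example, not the project’s actual tooling:

```python
# Illustrative sketch only: dynamic quantization is one common way to make a
# model lighter for edge/XR deployment; it is not claimed to be what
# VOXReality's three optimization tools actually do.
import torch
from transformers import AutoModelForSeq2SeqLM

# Assumed checkpoint, used only as an example of an MT model.
model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-en-el")
model.eval()

# Replace the Linear layers with int8-weight equivalents; activations stay
# in floating point, so no calibration data is needed.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

# Rough proxy for the memory footprint on a headset: compare on-disk sizes.
torch.save(model.state_dict(), "mt_fp32.pt")
torch.save(quantized.state_dict(), "mt_int8.pt")
```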

Did Maastricht University’s work on neural machine translation and automatic speech recognition evolve throughout the project? Were there any key breakthroughs or lessons learned?

For sure. It would be strange if it hadn’t. The project started in October 2022, ChatGPT launched two months later, and since then a new model has been released practically every week. While ASR and MT were already robust with the typical released models, we had to keep pace, updating our models and incorporating improvements such as handling context in MT and making ASR robust to noise. At the same time, we had to fine-tune models for some of the consortium languages (namely Greek) in order to adapt them to our use cases.
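As a rough illustration of what fine-tuning an MT model for a consortium language might look like, here is a minimal sketch using the Hugging Face transformers API. The checkpoint, dataset file, and column names are assumptions for the example, not VOXReality’s actual setup:

```python
# Minimal fine-tuning sketch (assumed setup, not the project's actual code):
# adapt a pretrained English->Greek MT checkpoint to in-domain parallel data.
from datasets import load_dataset
from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

checkpoint = "Helsinki-NLP/opus-mt-en-el"  # assumed base model
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# Hypothetical corpus: JSON lines with "en" (source) and "el" (target) fields.
dataset = load_dataset("json", data_files="domain_parallel_en_el.jsonl")["train"]

def preprocess(batch):
    # Tokenize source and target sides; the collator pads them later.
    inputs = tokenizer(batch["en"], truncation=True, max_length=128)
    labels = tokenizer(text_target=batch["el"], truncation=True, max_length=128)
    inputs["labels"] = labels["input_ids"]
    return inputs

tokenized = dataset.map(preprocess, batched=True,
                        remove_columns=dataset.column_names)

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(output_dir="mt-en-el-domain",
                                  per_device_train_batch_size=16,
                                  learning_rate=2e-5,
                                  num_train_epochs=3),
    train_dataset=tokenized,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```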

You played a central role in supervising the ethical aspects of AI within the project. How did the consortium approach ethics from both a research and deployment standpoint, and what frameworks or practices were most valuable?

Over the project’s lifetime, UM became the primary partner ensuring responsible data collection and management. We had three use cases with different requirements, so the first decision was to submit three separate ethics applications, which made things both easier (e.g. common issues could be shared across the three use cases) and harder (e.g. handling three applications across different locations). Ethical approval, which is necessary for all studies involving humans, is primarily a reflection of the protocols we use. Our impression is that the whole VOXReality consortium benefitted from assessing which procedures were necessary for running the three successful pilots and how we could be as unobtrusive as possible; in the end this standardized the pilots and helped us obtain valid results.

How did the pilot studies and open calls contribute to validating the developed technologies, and what insights did you gain regarding their scalability and future adoption?

Something that became very clear from both the pilot studies and the open calls was that it is difficult to disentangle a model from its deployed version. While the ASR and MT models worked perfectly in our “lab/research” setup, deploying them efficiently (dealing with multiple simultaneous users, many different types of devices, and noisy real-world conditions) proved challenging, and it was sometimes hard to pinpoint where an error came from. If anything, this error chasing made our models more robust to deploy on edge devices, which makes them more valuable for future applications. The same was true of the open calls: we were more than happy to see how the models could be extended both to languages we had not specifically tested (e.g. Latvian) and to use cases we had not accounted for.

Looking ahead, how do you envision NLP, CV, and AI advancing XR even further beyond the VOXReality scope? What directions or applications are you most excited about?

Both fields are going to keep moving at a “turbo-fast” pace (perhaps AI a little faster, given the huge interest). In VOXReality, as we said previously, we developed 25 models and 6 datasets, which are available for further experimentation. In the current climate, where the trend seems to be toward “closing” models so that each business exploits its in-house technology, our resources can play a big role by providing open access to researchers and small businesses. As XR applications become democratized (both in cost and in wider adoption), it is only natural to see more integration with AI. We also already have ideas and new directions for our consortium, such as education and collaborative virtual spaces (e.g. immersive language environments with real-time ASR and feedback), cultural heritage (e.g. digital museums), and accessible XR environments for people with disabilities.

Picture of Jerry Spanakis

Jerry Spanakis

Assistant Professor at Maastricht University

&

Picture of Yusuf Can Semerci

Yusuf Can Semerci

Assistant Professor at Maastricht University


The Future of Extended Reality: From Intelligent Interaction to Hybrid Intelligence

Extended Reality (XR) is rapidly evolving from visually immersive environments toward intelligent, responsive systems capable of adapting to human intent. A key milestone in this evolution has been achieved through the VOXReality project, which demonstrated how Artificial Intelligence (AI) can be effectively integrated into XR to enable more natural, voice-driven, and context-aware interactions. Building on these outcomes, future XR applications will offer more personalized and immersive experiences by understanding user needs and preferences.

Looking ahead, the integration of AI into XR applications is expected to increasingly rely on advanced human-AI collaborative frameworks, enabling the emergence of XR hybrid intelligent systems in which human expertise and AI capabilities are tightly coupled within immersive environments. Immersive spaces allow AI systems to observe user actions, gestures, speech, and attention in context, while users can directly perceive and interact with AI behavior. XR hybrid intelligent systems will facilitate not only intelligent interactions and immersive experiences, but also personalised content generation that takes into account users’ knowledge, skills, and performance. For example, a virtual training assistant could dynamically generate immersive scenarios that specifically target and bridge individual knowledge gaps. Beyond content personalisation, XR hybrid intelligent systems will leverage user feedback to continuously adapt and improve system performance in alignment with evolving user needs. A key feature of these systems is empowering users to guide, correct, and shape AI behavior over time.

In the context of XR hybrid intelligence, Explainable AI (XAI) and human-in-the-loop technologies become critical enablers. As XR systems grow more autonomous and intelligent, users should be able to understand why an AI agent behaves in a certain way. XAI techniques can make AI reasoning transparent within XR environments, for instance by visualising decision pathways, highlighting relevant contextual cues, or providing natural language explanations. Moreover, the integration of XAI into XR environments introduces new opportunities for sense-making and reflection. By embedding explanations directly into immersive experiences, users can not only observe system outcomes but also explore underlying reasoning processes in a spatial and interactive manner. This can transform XR from a passive visualization medium into an active cognitive workspace where users learn with AI, rather than merely from it. In parallel, human-in-the-loop approaches encourage user feedback, enabling continuous system adaptation, performance improvement, and trust calibration. Such transparency not only builds trust but also supports effective human-AI collaboration.

In conclusion, the VOXReality project represents an important step toward intelligent XR environments grounded in natural interaction. The future of XR will build upon these foundations by embracing hybrid intelligence, transforming XR into a space where humans and AI systems work together seamlessly, transparently, and creatively. This convergence will redefine how we interact with digital worlds: not as users of technology, but as partners within intelligent immersive environments.

Picture of Konstantia Zarkogianni

Konstantia Zarkogianni

Associate Professor of Human-Centered AI
Department of Advanced Computing Sciences
Maastricht University


VOXReality Final Newsletter: Celebrating Innovation and Impact

The final edition of the VOXReality newsletter is now available. As the project concludes its journey, this comprehensive update highlights the major milestones achieved through the integration of AI and Extended Reality (XR).

Key Highlights from the Final Edition

The newsletter provides an in-depth look at the project’s results and the legacy it leaves behind within the immersive technology landscape:

  • Completion of the €1 Million Open Call: Learn about the five innovative projects—AIXTRA, CrossSense, VAARHeT, WELD-E, and XR-CareerAssist—that successfully integrated advanced AI into XR environments to modernize sectors ranging from healthcare to heavy industry.

  • The Final General Assembly in Athens: A report on the consortium’s recent meeting hosted by Synelixis, where partners validated the completion of all project goals and prepared for the final review with the European Commission.

  • Use Case Pilot Success: Detailed findings from our real-world testing phases.

    • AR Training Assistant (Voxy): Highlighting a strong user preference for voice-driven AI interaction in industrial assembly.

    • VR Conference: Evidence that 80% of participants found AI-driven navigation and real-time translation more intuitive for professional events.

    • AR Theatre: Outstanding results from the Athens Epidaurus Festival, where 87% of participants expressed interest in future AI-enhanced performances.

  • Beyond VOXReality: Information on how the project’s vision continues through the BeyondXR Cluster, a collaboration of EU projects dedicated to driving long-term societal impact through XR, AI, and robotics.

A Legacy of Innovation

While the VOXReality project is reaching its conclusion, the work of our partners continues to evolve. This final newsletter serves as a record of the breakthroughs achieved and a guide to the future of inclusive, user-friendly technology!


VOXReality Wraps €1M Open Call: Five Success Stories Shaping the Future of XR and AI

The VOXReality Open Call has officially concluded, marking a year of innovation where €1 million in funding was transformed into five solutions. By integrating advanced AI into XR, these projects have moved beyond prototypes to deliver measurable real-world results across Europe.

The Human Impact: Voices from the Field

Beyond the technical milestones, the true success of the VOXReality programme lies in the stories of the people and organizations it transformed.

“Participating was transformative. We didn’t just strengthen our ecosystem; we gave our company a unique selling point in the enterprise sector. Our work has already been adopted by several clients in their upcoming projects.”: David Lähner, AIXTRA Project Manager, MindPort

“VOXReality gave us the AI tools to make our assistant truly smart. Their mentors shepherded us through technical challenges with a genuine drive to positively impact society. This is the invisible thread that will help us find each other in the future!”: Szczepan Orlowski, Animorph (CrossSense)

“As heritage professionals, we wanted to present history in a modern way for the younger generation. Testing this technology helped us gain a deeper understanding of our visitors’ expectations and what we can implement in the future.”: Eva Koljera, Āraiši Ezerpils Archaeological Parks (VAARHeT)

“RoboWeldAR changed how we approach exhausting, hazardous jobs. Workers can now focus on quality while the robot handles the fumes and heat. It shows how robotics makes even the toughest jobs safer and more sustainable.”: Shipbuilding Partner (WELD-E)

“The project pushed us to solve complex technical challenges around real-time multilingual processing and interactive data visualisation. The genuine enthusiasm from participants who found the platform both engaging and practically useful confirmed that we had created something that could meaningfully impact how people approach career planning.”: Nikolaos Tantaroudas, DASKALOS APPS (XR-CareerAssist)

Final Outcomes at a Glance

The final outcomes of the VOXReality Open Call demonstrate impact across diverse industries by successfully merging AI with immersive technology. In the education sector, AI training partners led to a 400% faster time to market for new training features, while in healthcare, smart AR assistants significantly increased the confidence and independence of individuals with cognitive impairments. Cultural heritage experienced a transformation through voice-activated AR, achieving an 87% satisfaction rate for multilingual museum tours. Industrial manufacturing saw a major shift with AI-driven robotic welding, which reduced health risks and enhanced precision in shipbuilding, and career development was modernized through AI mentorship providing real-time path visualization in under 10 minutes.

A Vision Realized

As these five projects conclude their 12-month journey with us, the results speak for themselves: faster development cycles, more inclusive healthcare tools, safer factories, and more engaging education. The VOXReality consortium is now preparing for our final review with the European Commission. We are excited to showcase how this €1M investment has not only funded five companies but has paved the way for a more immersive, interactive, and user-friendly XR landscape across Europe.

Picture of Ana Rita Alves

Ana Rita Alves

Ana Rita Alves is an International Project Manager and current Communication Manager at F6S. With a background in European project management and a Master’s in Psychology from the University of Minho, she excels in collaborative international teams and driving impactful dissemination strategies.


VOXReality Marks Final Milestone at Athens Consortium Meeting

The journey of the VOXReality consortium has reached a significant turning point! We gathered on December 10 and 11 in the historic city of Athens for our final consortium meeting, hosted by our partners at Synelixis. It was a productive and inspiring two days dedicated to celebrating our collective achievements and looking toward the future of XR.

Reflecting on Success and Innovation 

The meeting provided a platform for all partners to share the results and major breakthroughs from each work package. We are thrilled to announce that the project goals have been successfully achieved. Our highlights include:

  • Knowledge Sharing: A series of high-impact publications detailing our research on AI integration in XR environments

  • Pilot Success: Results from our use-case pilots, spanning AR Training, AR Theatre, and VR Conference applications, demonstrated incredible potential.

  • User Validation: End users provided glowing feedback, showing a high level of interest and engagement with the immersive experiences we’ve built together.

Preparing for the Grand Finale 

With our research and pilot phases concluded, the consortium is now hard at work preparing for the final review with the European Commission. This will be our opportunity to showcase the full scope of our work and the impact VOXReality has made on the XR landscape.

Stay Connected! 

The project may be approaching its review, but the innovation doesn’t stop here. Stay tuned for updates on the continued work of our partners and the legacy of VOXReality.

Don’t miss a beat, subscribe to our newsletter to get the latest news delivered straight to your inbox! 📩

Picture of Ana Rita Alves

Ana Rita Alves

Ana Rita Alves is an International Project Manager and current Communication Manager at F6S. With a background in European project management and a Master’s in Psychology from the University of Minho, she excels in collaborative international teams and driving impactful dissemination strategies.


Reimagining the Virtual Stage: How the VR Conference Room Powers Global Presentations

Presenting to a global audience in VR isn’t just about visuals and sound—it’s about making sure everyone, regardless of language or location, can follow and engage. In large, one-to-many settings like keynotes or panel discussions, translation needs to be seamless, scalable, and non-intrusive.

In VOXReality, we tackled this challenge by designing a dedicated VR Conference Room: a space built from the ground up to support real-time multilingual presentations, immersive slide sharing, and interactive Q&A, all within an acoustically optimized 3D environment.

Unlike physical spaces, where sound depends on proximity, the conference room was designed with equal audio clarity across all seats. No matter where a user sits—even in the back row—they can hear the presenter just as clearly as those at the front.

To ensure a smooth presentation flow, the speaker can share any window from their device, not just a file or a tab. This live window feed is then projected onto the blackboard surface of the auditorium—not a virtual screen—enhancing immersion and realism while maintaining focus.

Here’s how the system works:

  • When a speaker enters the stage, the system recognizes them as the presenter, activating automatic transcription and translation of their speech.
  • Subtitles appear clearly above the blackboard in each participant’s preferred language—up to six supported—ensuring everyone receives the message in real time.
  • Users can toggle subtitles on or off based on personal preference.

The conference room layout mimics a real-world auditorium, promoting attentiveness and focus. During the Q&A session, participants raise their hand virtually. The presenter then grants speaking permission to one participant at a time. As the audience member speaks, their voice is transcribed, translated, and shown in a movie-style subtitle panel directly in front of the presenter—making multilingual dialogue feel effortless and intuitive.

By limiting microphone access to a single speaker at any time and integrating with the VOXReality translation pipeline, the system avoids audio conflicts and reduces computational load. Translations are processed once and streamed to all relevant participants, making the experience scalable without sacrificing quality.
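To make the “process once, stream to many” idea concrete, here is a minimal sketch of a subtitle hub that translates each transcript once per target language and fans the result out to every subscribed participant. All names here are hypothetical; this is an assumed pattern, not the actual VOXReality pipeline:

```python
# Hypothetical sketch of the translate-once, stream-to-many pattern described
# above; names and structure are illustrative, not VOXReality's actual code.
import asyncio
from collections import defaultdict

class SubtitleHub:
    """Fans one speaker's captions out to listeners grouped by language."""

    def __init__(self, translate):
        # translate: async callable (text, target_lang) -> translated text
        self.translate = translate
        self.listeners = defaultdict(set)  # target_lang -> set of asyncio.Queue

    def subscribe(self, target_lang):
        """Register a participant; they read captions from the returned queue."""
        queue = asyncio.Queue()
        self.listeners[target_lang].add(queue)
        return queue

    async def publish(self, transcript):
        # Translate once per active language (at most six here), then fan out,
        # so cost scales with the number of languages, not the audience size.
        for lang, queues in self.listeners.items():
            caption = await self.translate(transcript, lang)
            for queue in queues:
                queue.put_nowait(caption)
```

Because only one microphone is open at a time, `publish` is only ever driven by a single speaker’s transcript stream, which is what keeps the pipeline free of audio conflicts.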

The result is a smooth, inclusive presentation environment—where users from different countries can sit side by side in a virtual room, hear the same talk, and even participate in the discussion. Whether they’re presenting or asking questions, the technology fades into the background, letting human connection take center stage.

Picture of Georgios Nikolakis

Georgios Nikolakis

Software Engineer @Synelixis


Partner Journey #6 with Gruppo Maggioli

In this edition of Partner Journey, we speak with Greta Ioli, EU Project Manager at Gruppo Maggioli, the coordinator of the VOXReality project. Maggioli, a leading provider of software and digital services for public administrations, guided the consortium in developing an advanced XR ecosystem that integrates linguistic AI, computer vision, and multimodal interaction. In this interview, Greta provides insights into the project’s objectives, key achievements, and how the diverse consortium of academic, industrial, and creative partners worked together to realize a vision of multilingual immersive experiences spanning theater, virtual conferences, and industrial training.

Can you provide an overview of the VOXReality project’s objectives and achievements, and reflect on how the consortium worked together to realize its vision?

The objectives of the VOXReality project were to develop an advanced XR ecosystem integrating linguistic AI, computer vision, and multimodal interaction to improve multilingual immersive experiences. The goals of the project were mainly related to three spheres. One was the creation of pre-trained linguistic and visual models for XR environments. The second sphere was the development of spatiotemporal digital assistants and real-time contextual translation. The last one was validation through the three use cases: multilingual immersive theater, virtual conferences, and industrial AR training.  

The main project outcomes were:

  • AI vision-language models for virtual and real environments;
  • Instruments for the visualisation of 3D content in Augmented Reality;
  • Multilingual speech translation models;
  • XR experiences for both the theatre and the conference use cases.

The consortium gathered partners from academic, industrial, and creative fields in order to combine scientific, technical, and artistic expertise. The cooperation was based on shared management of IPR and a common vision of collaborative exploitation and open innovation.

How has the VOXReality project helped to address key challenges or gaps in the XR field, and what do you consider its most significant contributions?

The VOXReality project addresses several key gaps in the XR sector. It enhances multilingualism and accessibility by developing contextual automatic translation and intelligent voice assistants that break linguistic and cultural barriers. It advances natural interaction by combining language and vision technologies to create XR assistants capable of understanding spatial and contextual information. It promotes scalability and reusability by providing open-source, pre-trained, and modular models that can be easily adapted to future applications. It also fosters innovation in collaboration, entertainment, and industrial training by offering concrete use cases that improve the transfer of technology to real-world environments. Overall, VOXReality makes a significant contribution to defining new European XR technologies, strengthening digitalisation and the competitiveness of the sector. 

How did the project approach the integration of ethical considerations alongside technical innovation in the development and deployment of XR applications?

The VOXReality project integrates ethical and legal considerations from the very beginning, in alignment with the AI Act and the GDPR. It ensures compliance with data privacy, security, and intellectual property regulations. The project promotes an ethical, inclusive, and sustainable approach to XR development, paying attention to social and gender impacts such as cybersickness and accessibility for different groups of users. It also emphasizes transparency and accountability in the development and deployment of AI technologies and immersive content. 

What were the main challenges faced in translating the project’s XR solutions to real-world contexts, and what opportunities have emerged for future adoption and scalability?

The VOXReality project faced several challenges in translating its XR solutions into real-world applications. These included technical complexity, high hardware costs, and difficulties in bridging the gap between research and industry, particularly in terms of patenting and commercialization. Despite these challenges, the project revealed significant opportunities for future growth. The XR market is expanding rapidly, with estimates suggesting a value between 35 and 65 billion euros in Europe in 2025. The development of 5G and 6G networks will further enhance performance and accessibility. There are also new possibilities for creating spin-offs, launching open calls, and fostering co-development with small and medium-sized enterprises. Moreover, VOXReality’s outcomes show strong potential for cross-sector scalability, extending from training and education to tourism, culture, and the arts. 

How do you see the VOXReality project influencing the broader trajectory of digital transformation and technological innovation, particularly within the rapidly evolving XR ecosystem?

The VOXReality project contributes significantly to Europe’s digital transformation by creating open-source AI and XR models that combine language and computer vision. It promotes collaborative ecosystems and transnational networks, accelerating the adoption of ethical and inclusive artificial intelligence within the metaverse. The project also influences new forms of human-machine interaction with meaningful impacts on education, entertainment, and remote work. Overall, VOXReality drives European XR toward a more human-centered, interoperable, and sustainable paradigm, serving as a model for the future convergence of language, AI, and extended reality. 

Picture of Greta Ioli

Greta Ioli

Greta Ioli is an EU Project Manager in the R&D department of Maggioli Group, one of Italy's leading companies providing software and digital services for Public Administrations. After earning a degree in International Relations – European Affairs from the University of Bologna, she specialized in European projects. Greta is mainly involved in drafting project proposals and managing dissemination, communication, and exploitation activities.


Partner Journey #5 with Synelixis

As VOXReality comes to a close, Synelixis reflects on a transformative journey at the intersection of artificial intelligence and immersive technologies. In this interview, Georgios Nikolakis discusses how the project pushed the boundaries of virtual agents, blending advanced language understanding, computer vision, and real-time XR interaction. He shares the breakthroughs achieved, the lessons learned, and the vision guiding Synelixis as it helps shape the next generation of intelligent, human-centred XR experiences.

As VOXReality reaches its conclusion, what do you consider the most significant breakthroughs your team achieved in developing intelligent virtual agents and XR applications?

Throughout VOXReality, one of our most important achievements was creating virtual agents capable of operating seamlessly in real‑time, human‑centred XR environments.

A major breakthrough was our ability to integrate these capabilities directly inside live XR scenes, so agents could perceive what users were asking, respond to their intent, and adapt their answers accordingly.

Your work combined Natural Language Processing and Computer Vision to enable more natural human–machine interaction. What new understanding did this convergence bring to your approach in designing responsive XR systems?

Bringing NLP and Computer Vision together fundamentally changed how we design XR experiences. We moved away from treating modalities as isolated components and began designing systems that interpret user behaviour holistically—language, gestures, spatial context, and emotional cues.

It also deepened our understanding of contextual grounding—the agent’s ability to relate linguistic meaning to visual, spatial, and social context—which is essential for immersive experiences.

Developing adaptable and context-aware agents is a complex challenge. What have you learned about balancing real-time performance, personalization, and inclusivity across different XR environments?

We learned that achieving this balance requires a flexible architecture where each module can scale according to user needs and device capabilities. Real‑time performance depends not only on efficient AI models but also on smart orchestration—sending only the necessary information at the right time.

As AI agents become more capable and autonomous, how do you see ethical design and user trust shaping their development and acceptance in immersive environments?

Ethical design and transparency are no longer optional—they are foundational. Users need to understand when an agent is listening, how data is processed, and how decisions are made.

In immersive environments, the boundary between human and machine can blur, making responsible AI practices even more critical. We believe the next generation of XR agents must:

  • Provide clear consent and control over data usage
  • Offer explainable behaviour to avoid unexpected or opaque actions
  • Be designed to avoid bias in language understanding and decision-making

Looking beyond VOXReality, how do you envision Synelixis building upon these results to influence the next generation of AI-driven experiences in XR and beyond?

VOXReality has positioned Synelixis to expand its role in the next wave of intelligent, interactive systems. Moving forward, we plan to:

  • Incorporate our multimodal agent capabilities into commercial XR solutions, such as remote training, education, collaborative workspaces, and virtual events.
  • Extend our real‑time translation and communication technologies to support cross‑platform, multilingual collaboration tools.
  • Push further into edge-AI architectures, enabling immersive experiences on lightweight devices with stronger privacy guarantees.

Picture of Georgios Nikolakis

Georgios Nikolakis

Software Engineer @Synelixis


Partner Journey #4 with ADAPTIT

In this interview, Georgios Kliafas from ADAPTIT reflects on the team’s experience integrating XR technologies into live theatre. Throughout the project, the team explored AR theatre, balancing technical innovation with the practical realities of performance. In this conversation, Kliafas shares insights on team collaboration, design strategies, key development milestones, and lessons learned from testing and deploying XR solutions in a live, dynamic environment. He also discusses the potential future applications of these technologies and methods in cultural settings and beyond.

Over the course of the VOXReality project, your team worked on integrating XR technologies into the live theatre environment. What were the most valuable insights you gained about aligning technical innovation with the needs of a real-world performance?

AR theatre is a new and rather unexplored, undocumented domain, and the team had little previous experience in implementing such a performance. Therefore, we had to develop our own way of working with each other. One lesson we faced repeatedly in this regard, and whose importance cannot be overstated, is that a shared language for coordinating across the team is crucial to clearly express expectations and requirements. That language may be not just textual but also visual: it can require low-fidelity mockups, role-playing, or interactive prototypes. Devoting time to this team-building process early in the project is a worthy investment. We also learned that it was important to understand more deeply the existing workflows of theatre and to incorporate the XR component as smoothly as possible into those pipelines, instead of suggesting an altogether new pipeline that was very robust but inspired more by software engineering than by, for example, the performing arts. Some tasks in the AR theatre pipeline overlapped with tasks typically performed with commercial professional applications, and users expected commonalities or even interoperability. We would have benefited from higher user engagement if we had prioritized such considerations.

You designed the AR Theatre application with modular and cross-platform principles in mind. How did these design choices prove beneficial during development and testing?

This approach shone during the rehearsal period, a short stretch of high intensity when many design decisions materialize and new information is revealed constantly. Our design approach helped us perform fast iterations and examine alternative setups very quickly during that phase. A concrete example of the cross-platform benefits is that we quickly and easily produced a non-AR version of the audience app for internal use, letting the production crew monitor the services in real time. Another example, this time regarding modularity, is that we could easily put components together to produce a new client in our system to handle spatialized audio, which was not planned initially yet arose as a need with the evolution of the play.

Across the different iterations of the AR Theatre application, which development phase or milestone do you feel best demonstrated the project’s potential or innovation?

The second iteration (Pilot two) was certainly the most important milestone, since it finally made future paths to adoption clear enough to lay out. We could concretely recognize needs for optimizing and streamlining the whole operation of AR theatre, not just the technical and user requirements on the software side. For example, observing users’ reactions and needs was instrumental in understanding how best to accommodate them during onboarding, during the play itself, and after the experience. We realized that some users had very low digital skills, even lower than we anticipated, while others seemed to fully master the application and even requested more functionality. That gap was hard to predict, and hard to cater for. This helped us re-evaluate the usefulness of each feature and prioritize potential extensions and upgrades to the system, while also sketching out new alternative designs for the clients.

Considering the challenges of synchronization, spatial tracking, and security, how did your team ensure that the final system performed reliably in the complex environment of a live production?

Exhaustive testing! Given the complexity and novelty of this experiment, it was through testing sessions under variable conditions that we revealed the limits of the technology. There were no best practices or clear guidelines to adopt; rather, through trial and error we figured out what worked and what didn’t, and most importantly, how reliably. We meticulously recorded our process and findings for future reference, but transferability is not guaranteed without a more rigid methodology. For example, we noticed that the AR devices lost their tracking when the ambient light dropped too low during dramatic scene changes. However, we did not have access to a sensor to record ambient luminosity, so we could only provide qualitative, observational tips for future reference rather than quantitative guidelines.

As the project concludes, how do you envision the technologies, methods, or lessons from VOXReality shaping your future XR work, either in cultural applications or in other sectors?

We are quite pleased with the technical side of the AR theatre project; we consider it a successful technological milestone, and we now see substantial potential and areas of investigation ahead of us on the practical implementation side. For example, we consider many user experience design questions still open, especially regarding usability and accessibility. This type of work would help unlock more of the existing value in the technical offering. We are also very excited to look into the scalability of the technology, as well as its transferability to other similar fields within the performing arts.

Picture of Georgios Kliafas

Georgios Kliafas


Partner Journey #3 with F6S

In this interview, Mateusz Kowacki, EU Project Manager at F6S, reflects on the pivotal role the team played in managing and delivering the VOXReality Open Call for Third Parties. From navigating tight timelines and unexpected challenges to supporting five innovative third-party projects through a full year of development, F6S brought resilience, collaboration, and a commitment to impact. Mateusz shares key lessons learned, the lasting influence of VOXReality on F6S’s future programs, and why he believes that the success of third parties is, ultimately, a shared success.

Looking back over the course of the VOXReality project, what do you consider F6S’s most significant achievement or contribution to its success?

F6S did a great job with the preparation and implementation of the Open Call opportunities for Third Parties. I really think we should be particularly proud of the great results achieved by the TPs, as we have been part of that process too. I must say it wasn’t always easy. We faced several challenges and issues, but my feeling was that no matter what, we would find a way out and, in the end, showcase great results. If you ask the TPs about their experience with us now, I believe the feedback will be humble and positive. And, to put it a bit poetically, the TPs’ success is our success, as we have been there for them.

What were some of the key challenges F6S faced during the open call process, and how did your team overcome them?

At the very beginning of implementation, we had some issues with the timeline. However, thanks to the motivation of all project partners and TPs, we managed to erase this initial delay. We also knew that some things during the first review didn’t go as planned. The solution was a co-creation session on what we could do better and how. It was teamwork throughout, without pointing fingers or assigning blame, and I was, and still am, very proud of this accomplishment.

In what ways has participating in VOXReality shaped or influenced F6S’s approach to future innovation programs or open call management?

I think the lesson is that you should be ready for everything. While preparing a program, we need to allocate time for risk management and potential mitigation; this will certainly facilitate our work in the future. Half-jokingly, we might always ask ourselves what the worst-case scenario could be, and shape our mitigation plan based on that. Obviously, this is something we have already introduced, but sometimes everyday situations can be even more surprising than those risk-log spreadsheets.

What impact do you think VOXReality has had on the third-party projects you supported, and how do you envision their growth beyond the project’s conclusion?

I know, because I spoke with all of them, that the impact has been significant. They really appreciate the opportunity created by the VOXReality project and the learning curve over the 12 months of implementation. The fact that these 5 TPs were selected from among dozens of applications already proves that their initial ideas were strong. After this implementation they are on a different level, with different challenges. I’m sure this is not the last we will hear of them, and that they will keep fine-tuning and developing their services and products.

As the VOXReality project wraps up, what are your main takeaways, and what advice would you give to other organizations participating in similar EU-funded initiatives?

Only by working with an ‘as a team’ mindset can we achieve great things and see great results. It’s not true that brilliant ideas are created by brilliant individuals working alone. Great things are always the result of an exchange of thoughts, opinions, and perspectives, which we can only get by working as a team.

Picture of Mateusz Kowacki

Mateusz Kowacki

EU Project Manager @ F6S
