Full-AAR

The Full-AAR project explores the narrative possibilities and technical practices of Audio Augmented Reality (AAR) using accurate head and body tracking in six degrees of freedom (6DoF). The aim is to create the illusion of virtual sounds coexisting with the real world, and to use that illusion to tell stories strongly connected to the surrounding environment.

The project is hosted by WHS, a contemporary circus and visual-theatre group based in Helsinki, Finland, and is financially supported by the European Union NextGenerationEU fund.

As a sub-genre of Augmented Reality (AR), AAR enhances the real world with virtual sounds instead of overlaid visual images. The Full-AAR project concentrates specifically on experiences where the user can move freely in a real-world space while the story, or other information, is mediated through headphones by virtual sounds embedded in the environment. Ideally, the sounds stay fixed to their positions relative to the environment regardless of the user's head and body movements, and thus appear to coexist with reality. In addition, the user's location, movements, and gaze direction can be used to trigger interactive cues, advancing a non-linear narrative.
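To make the principle concrete, below is a minimal Unity-style C# sketch of the core transform, assuming the tracking system drives a 'head' transform; the class and field names are hypothetical, not our actual code. The key idea is that a source's fixed world position is re-expressed in the head's coordinate frame every frame, which is exactly the quantity a binaural spatializer renders from.

```csharp
using UnityEngine;

// Hypothetical sketch: a "world-locked" sound stays put as the listener moves
// because its fixed world position is re-expressed in the tracked head's
// coordinate frame each frame. A binaural spatializer renders direction and
// distance from this head-relative vector.
public class WorldLockedSource : MonoBehaviour
{
    public Transform head;           // head pose from the 6DoF tracking system
    public Vector3 sourceWorldPos;   // where the sound should appear to be

    // Head-relative source position; since sourceWorldPos never changes, the
    // sound appears fixed in the room however the head translates or rotates.
    public Vector3 HeadRelative { get; private set; }

    void Update()
    {
        HeadRelative = Quaternion.Inverse(head.rotation)
                       * (sourceWorldPos - head.position);
    }
}
```

In practice a game engine does this implicitly when the audio listener is parented to the tracked head and the sources sit at fixed world positions; the sketch merely makes the underlying transform explicit.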

Full-AAR: Two visitors

6DoF AAR carries many intriguing possibilities for storytelling and immersive experiences. For example, it can convey an alternative narrative of a place through virtual sounds interplaying with the real world. The medium is also potentially powerful in creating plausible illusions of something happening out of the user's sight, for instance behind or inside an object. Unlike traditional visual AR, AAR does not obstruct the user's vision at all. Beyond the artistic possibilities this opens, it may be beneficial in places where situational awareness is important, such as museums, shopping centres, and other urban environments.

Full-AAR: A normal day at the office

With a hands-on approach, the Full-AAR project contributes to the development of the 'language' of the new medium of '6DoF AAR', a medium still without a more convenient name. Any outcomes of the project, including findings, best practices, toolkits, software, and manuals, will be shared publicly.

During the two-year project period (2022–2023) we are also preparing a series of demos and narrative experiences open to the public. The first of these will premiere this year at the Unika gallery space at the WHS Teatteri Union in Helsinki. The story is based on the history of the venue and exploits the fact that the users experience the surrounding real environment with all their senses.

Focus areas

There are no ready-made technical solutions or artistic tools yet available for this medium. Therefore, in the Full-AAR project we are testing different technical approaches and setups to find optimal means of content creation for 6DoF AAR. We are particularly interested in the following topics:

1. Use of spatialisation and auralisation of virtual audio to enable plausible acoustic illusions

2. Use of accurate 6DoF head and body tracking to enable plausible acoustic illusions and kinesthetic interaction (a sketch of such interaction follows this list)

3. Creation of interactive stories and search for useful and characteristic narrative techniques for the 6DoF AAR medium

4. Letting simultaneous users experience the same story with different narrative viewpoints and alternate audio content

5. Finding usable and fluid workflows and methods for content creation
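As a sketch of the kinesthetic interaction mentioned in point 2, the hypothetical Unity component below fires a narrative cue once the user's tracked gaze has rested on a target for a moment. The angle and dwell-time thresholds are illustrative values, not tuned ones from the project.

```csharp
using UnityEngine;
using UnityEngine.Events;

// Hypothetical sketch: trigger a narrative cue when the user looks at a
// target long enough. 'head' is assumed to be driven by the tracking system.
public class GazeCue : MonoBehaviour
{
    public Transform head;          // tracked head pose
    public Transform target;        // object the cue is attached to
    public float maxAngle = 15f;    // gaze tolerance in degrees
    public float dwellTime = 1.5f;  // seconds of sustained gaze required
    public UnityEvent onTriggered;  // e.g. start the next audio scene

    float gazeTimer;
    bool fired;

    void Update()
    {
        if (fired) return;

        // Angle between where the head points and where the target lies.
        float angle = Vector3.Angle(head.forward,
                                    target.position - head.position);

        gazeTimer = angle < maxAngle ? gazeTimer + Time.deltaTime : 0f;

        if (gazeTimer >= dwellTime)
        {
            fired = true;
            onTriggered.Invoke();   // advance the non-linear narrative
        }
    }
}
```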

Game engine and virtual audio

Full-AAR: Computers

The experience content runs on a game engine, currently Unity. We can support two simultaneous users, but will be aiming at 10 to 20 for later demos. In the current setup, the virtual audio processing is handled by the dearVR plugin from Dear Reality. While dearVR provides rather good externalisation and a natural sonic quality, we are also researching alternatives that would enable authentic simulation of sound propagation together with, e.g., selectable HRTF (Head-Related Transfer Function) profiles.
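Leaving dearVR's own components aside, the following generic sketch shows roughly how a source is routed through whichever spatializer plugin has been selected in Unity's audio settings; the component name is hypothetical.

```csharp
using UnityEngine;

// Generic sketch (not dearVR-specific): once a spatializer plugin is chosen
// under Project Settings > Audio, each AudioSource opts in like this.
[RequireComponent(typeof(AudioSource))]
public class SpatializedVoice : MonoBehaviour
{
    void Awake()
    {
        var source = GetComponent<AudioSource>();
        source.spatialize = true;   // route through the spatializer plugin
        source.spatialBlend = 1f;   // fully 3D, no plain-stereo bleed-through
        source.loop = true;
        source.Play();
    }
}
```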

The current prototype uses an external computer from which the audio is fed to the users over wireless headphones. However, we are also looking into running the experience on personal mobile devices, or even on powerful SBCs (Single-Board Computers), should their performance suffice for the required virtual audio processing at a small enough size and weight.

Tracking system

Currently, in the quest for an optimal positional tracking system, we have constructed a setup combining multiple Stereolabs depth cameras running body-tracking algorithms, a Quuppa indoor positioning system (IPS) using BLE (Bluetooth Low Energy) tags and an array of AoA (Angle of Arrival) antennas, and an IMU (Inertial Measurement Unit) mounted on the headphones for orientation tracking. This solution follows an outside-in principle and seems close to optimal for us during the current phase of the project. However, inside-out tracking options are also on the table, e.g. installing cameras on the user's headphones and estimating their position and orientation in the same manner as standalone VR and AR headsets do.
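To illustrate how such heterogeneous inputs can be combined, here is a simplified, hypothetical fusion sketch: smoothed absolute position from the outside-in trackers, responsive orientation from the headphone IMU, and occasional absolute-heading samples correcting the IMU's slow yaw drift. It outlines the general approach only; all names, rates, and gains are illustrative, not our actual pipeline.

```csharp
using UnityEngine;

// Hypothetical fusion of outside-in position (depth cameras / Quuppa IPS)
// with headphone-IMU orientation, plus slow correction of IMU yaw drift.
public class TrackedHeadPose : MonoBehaviour
{
    [Range(0f, 1f)] public float positionSmoothing = 0.3f; // per-sample blend
    public float yawCorrection = 0.05f; // yaw error fraction absorbed per heading sample

    Vector3 fusedPosition;
    Quaternion imuOrientation = Quaternion.identity;
    float yawOffset;   // accumulated estimate of IMU yaw drift, in degrees

    // Called whenever the IPS / body-tracking pipeline delivers a position.
    public void OnPositionSample(Vector3 absolutePosition) =>
        fusedPosition = Vector3.Lerp(fusedPosition, absolutePosition, positionSmoothing);

    // Called at IMU rate with the latest orientation.
    public void OnImuSample(Quaternion orientation) => imuOrientation = orientation;

    // Called when the outside-in system can estimate an absolute heading.
    public void OnHeadingSample(float absoluteYawDegrees)
    {
        float fusedYaw = (Quaternion.Euler(0f, yawOffset, 0f) * imuOrientation).eulerAngles.y;
        yawOffset += Mathf.DeltaAngle(fusedYaw, absoluteYawDegrees) * yawCorrection;
    }

    void LateUpdate() => transform.SetPositionAndRotation(
        fusedPosition, Quaternion.Euler(0f, yawOffset, 0f) * imuOrientation);
}
```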

Full-AAR: Avatars on a map

Narrative techniques

Many storytelling approaches and narrative ideas using the possibilities of 6DoF AAR have already been implemented and tested within the project. Still, it will be important to get the first demo ready and test it with a real audience in order to learn which narrative techniques work and which don't. It will also be extremely interesting to see how the interaction and emotional connection between multiple users plays out in practice.

I will be updating this page once in a while during the project, and will be sharing any public material as soon as we have some!