The Full-AAR project explores the narrative possibilities and technical practices of Audio Augmented Reality (AAR) using accurate head and body tracking in six-degrees-of-freedom (6DoF). The idea is to obtain an illusion of virtual sounds coexisting with the real world and use that illusion to tell stories with a strong connection to the surrounding environment.
The project concentrates mainly on indoor experiences and the challenges of indoor positional tracking, rather than on outdoor experiences that can benefit from the satellite navigation technology embedded in modern mobile devices.
Full-AAR is hosted by WHS Theatre Union, a contemporary circus and visual-theatre group based in Helsinki, Finland, and is financially supported by the European Union NextGenerationEU fund.
As a sub-genre of Augmented Reality (AR), AAR enhances the real world with virtual sounds instead of overlaid visual images. The Full-AAR project concentrates specifically on experiences where the user can move freely in a real-world space while the story, or other information, is mediated through headphones by virtual sounds embedded in the environment. Ideally, regardless of the user's head and body movements, the sounds stay fixed in position relative to the environment and thus appear to coexist with reality. In addition, the user's location, movements, and gaze direction can be used to trigger interactive cues, thereby advancing a non-linear narrative.
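To make the "world-locked" idea concrete: keeping a sound fixed to the environment essentially means re-expressing its static world position in the listener's head frame every time the tracked head pose updates. The sketch below is a minimal, hypothetical illustration (not the project's actual code), assuming a 2D horizontal plane and yaw-only head rotation; all names are ours:

```python
import math

def world_to_head(sound_xz, head_xz, head_yaw):
    """Re-express a world-fixed sound position in the listener's head
    frame. Coordinates are (x, z) on the horizontal plane; head_yaw is
    the heading in radians, measured from +z towards +x. The returned
    (right, forward) offset is what a binaural renderer needs to keep
    the source anchored to the room as the head moves."""
    dx = sound_xz[0] - head_xz[0]
    dz = sound_xz[1] - head_xz[1]
    # Rotate the world offset by the inverse of the head yaw.
    right = dx * math.cos(head_yaw) - dz * math.sin(head_yaw)
    forward = dx * math.sin(head_yaw) + dz * math.cos(head_yaw)
    return right, forward

def azimuth_and_distance(right, forward):
    """Convert the head-relative offset into the azimuth/distance pair
    that most spatialisers accept (0 rad = straight ahead)."""
    return math.atan2(right, forward), math.hypot(right, forward)
```

For example, a source one metre to the listener's right ends up straight ahead once the listener turns 90° towards it, while its world position never changes. A full implementation would of course use quaternions for full 3DoF orientation, as a game engine such as Unity does internally.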
6DoF AAR carries many intriguing possibilities for storytelling and immersive experiences. For example, it can convey an alternative narrative of a place through virtual sounds interplaying with the real world. The medium is also potentially powerful in creating plausible illusions of something happening out of the user's sight, for instance behind or inside an object. Unlike in traditional visual AR, in AAR the user's sight is not disrupted at all. Beyond the artistic possibilities this opens, it may be beneficial in places where situational awareness is important, such as museums, shopping centres, and other urban environments.
With a hands-on approach, the Full-AAR project contributes to the development of the 'language' of this new medium of '6DoF AAR', a medium still without a more convenient name. Any outcomes of the project, including findings, best practices, toolkits, software, and manuals, will be shared publicly.
During the two-year project period (2022−2023) we are also preparing a series of demos and narrative experiences open to the public. The first of these will premiere later this year (2022) at the Unika gallery space at WHS Teatteri Union in Helsinki. The story is based on the history of the venue and utilises the fact that users will be experiencing the surrounding real environment with all their senses.
So far, we have encountered only a handful of headphone-based indoor AAR projects utilising six degrees of freedom. The list below gathers a few works we are aware of and have been inspired by. After them, a few interesting location-based outdoor examples are listed, utilising the GNSS (satellite navigation) capabilities of mobile phones, even though our current focus is on tackling the challenges of indoor tracking.
Sound of Things by Holger Förterer: Two simultaneous users hearing virtual sounds produced by items on a table. Nice, poetic sound design with accurate 6DoF optical tracking using infrared LEDs mounted on wireless headphones.
Sounds of Silence by Idee und Klang at the Bern Museum of Communication (2018−19): Dozens of simultaneous users moving freely in a large exhibition space with head tracking and 2D location tracking. The content consists mainly of location-triggered, head-locked linear audio scenes with some environment-embedded, interactive augmented sounds. Using the Usomo system with UWB and IMU.
University of Florida research project on enhancing museum exhibitions with 3D audio. Using ultrasonic location and orientation tracking by Marvelmind.
Growl Patrol by Queen's University in Ontario, Canada (2011): A geolocative audio game in a park utilising head tracking and spatialised audio on a horizontal 2D plane.
Sonic Traces by Scopeaudio at Heldenplatz in Vienna: A large-scale location-based AAR experience about the history and future of Vienna. The experience supports headphones with head-tracking capability.
The Planets by Sofilab UG and Münchner Philharmoniker: An interactive audio walk through the orchestral suite 'The Planets' by Gustav Holst. The app-based experience is available in multiple parks in European and US cities. Dramaturgy and sound design by Mathis Nitschke.
There are no ready-made technical solutions or artistic tools yet available for this medium. Therefore, in the Full-AAR project, we're testing and using different technical approaches and setups to find optimal means for content-creation for AAR with 6DoF. We're particularly interested in these topics:
1. Use of spatialisation and auralisation of virtual audio to enable plausible acoustic illusions
2. Use of accurate 6DoF head and body tracking to enable plausible acoustic illusions and kinesthetic interaction
3. Creation of interactive stories and search for useful and characteristic narrative techniques for the 6DoF AAR medium
4. Letting simultaneous users experience the same story with different narrative viewpoints and alternate audio content
5. Finding useable and fluent workflows and methods for content-creation
Many storytelling approaches and narrative ideas exploiting the possibilities of 6DoF AAR have already been implemented and tested within the project. One key subject seems to be the relationship of virtual sounds to their physical counterparts: there may be many narrative consequences depending on whether a sound is attached to a real-world object or not, whether it matches the object or not, and so on. Further, the acoustic space around the listener may or may not match the real one, which appears to be a powerful storytelling tool in this medium. We are also utilising the interactive capabilities the tracking technology offers us; those possibilities, combined with virtual spatial audio, bring a whole new subset of narrative techniques to be explored.
Still, it will be important to get the first demo ready and test it with a real audience in order to deduce which narrative techniques work and which don't. It will also be extremely interesting to see how the interaction and emotional connection between multiple users works in practice.
The experience content runs on a game engine, currently Unity. We can support two simultaneous users but are aiming at 10 to 20 for later demos. In the current setup, the virtual audio processing is handled by the dearVR plugin from Dear Reality. While it provides rather good externalisation and natural sonic quality, we are also actively researching alternatives that enable plausible room-acoustic and sound-propagation simulations together with, e.g., selectable HRTF (Head-Related Transfer Function) profiles.
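As a rough indication of what sound propagation simulation involves at its simplest, the direct path from a source to the listener already implies a distance-dependent delay and an inverse-distance gain. The snippet below is only an illustrative back-of-the-envelope sketch, none of it drawn from dearVR or our actual signal chain:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C

def direct_path(distance_m, ref_distance=1.0):
    """Return (delay in seconds, linear gain) for the direct sound path.
    Gain follows the 1/r law, clamped at the reference distance so it
    never exceeds 1.0 for very close sources."""
    delay_s = distance_m / SPEED_OF_SOUND
    gain = ref_distance / max(distance_m, ref_distance)
    return delay_s, gain
```

A source 3.43 m away thus arrives about 10 ms late at roughly 29% of its reference amplitude; a real auralisation engine adds early reflections, late reverberation, occlusion, and air absorption on top of this.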
The current prototype uses an external computer from which the audio is transmitted wirelessly to the users' headphones. However, we are also exploring the possibility of running the experience on personal mobile devices or even on powerful SBCs (single-board computers), should their performance suffice for the required virtual audio processing with small enough size and weight.
In the quest for an optimal positional tracking system, we have constructed a setup combining the use of multiple depth cameras by Stereolabs with body-tracking algorithms, a Quuppa indoor positioning system (IPS) using BLE (Bluetooth Low Energy) tags and an array of AoA (Angle of Arrival) antennas, and an IMU (Inertial Measurement Unit) installed on the headphones for orientation tracking. Apart from orientation tracking, this solution follows an outside-in principle and seems rather optimal for us during the current phase of the project. However, inside-out tracking options are also on the table, e.g. installing cameras on the user's headphones and estimating their location and orientation in the same manner as standalone VR and AR headsets do.
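For the orientation part, a common way to combine a fast-but-drifting gyroscope with a slow-but-absolute heading reference (e.g. a magnetometer or an optical fix) is a complementary filter. The following is a generic textbook sketch under that assumption, not our production fusion code:

```python
class ComplementaryYaw:
    """Fuse integrated gyro rate (responsive, but drifts over time) with
    an absolute yaw reference (noisy or slow, but drift-free). An alpha
    close to 1.0 trusts the gyro in the short term, while (1 - alpha)
    slowly pulls the estimate towards the absolute reference."""

    def __init__(self, alpha=0.98, initial_yaw=0.0):
        self.alpha = alpha
        self.yaw = initial_yaw

    def update(self, gyro_rate, dt, absolute_yaw):
        predicted = self.yaw + gyro_rate * dt  # integrate angular rate
        self.yaw = self.alpha * predicted + (1.0 - self.alpha) * absolute_yaw
        return self.yaw
```

Note that this naive form ignores angle wrap-around at ±180°; a real implementation would blend the angular difference instead of the raw angles, and a full 3DoF tracker would fuse quaternions rather than a single yaw value.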
I will be updating this page once in a while during the project, and will be sharing any public material as soon as we have some!
Matias Harju - design, research, story, programming
Ville Walo - design, story, bureaucracy
Miranda Kastemaa - programming
Mikko Honkanen - programming
Anne Jämsä - narrative research
Emilia Lehtinen - script consultancy