Invisible Voices is an audio-only augmented reality (AR) installation. The listener wears wireless headphones equipped with a head-tracking sensor (described here) and is tracked by a Kinect depth/motion camera.
While moving around the space, the listener discovers hidden sounds and voices that are not really there but can be heard through the headphones. Thanks to head tracking, position tracking and authentic binaural spatialisation, the room magically comes to life.
The installation and the technical solutions behind it have been developed as a part of my studies at Aalto University MediaLab. Invisible Voices was first set up at the MediaLab Demo Day in Dipoli, Espoo, in December 2017, and I will continue to improve it and take it further.
The installation is influenced by The Sound of Things by Holger Förterer.
The head-tracking headphones are built with the following components (a small sketch for checking the orientation stream follows the list):
Adafruit BNO055 9-axis absolute orientation sensor
SparkFun Fio (Arduino-compatible) microcontroller
Digi XBee radio units
USB power bank
Sennheiser HDR 120 wireless headphones
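The Fio reads the orientation from the BNO055 and streams it over the XBee link, which shows up on the computer as an ordinary serial port. As a rough illustration (not the installation's exact firmware protocol), assuming the orientation arrives as comma-separated heading, pitch and roll values at 57600 baud, a small Processing sketch can verify the incoming stream:

```
import processing.serial.*;

Serial xbee;                    // serial port of the XBee receiver
float heading, pitch, roll;     // latest orientation values

void setup() {
  size(400, 200);
  // Assumes the XBee receiver is the first serial port; adjust as needed.
  xbee = new Serial(this, Serial.list()[0], 57600);
  xbee.bufferUntil('\n');
}

void serialEvent(Serial s) {
  // Assumed line format: "heading,pitch,roll\n" (a hypothetical choice).
  String line = s.readStringUntil('\n');
  if (line == null) return;
  String[] parts = splitTokens(trim(line), ",");
  if (parts.length == 3) {
    heading = float(parts[0]);
    pitch   = float(parts[1]);
    roll    = float(parts[2]);
  }
}

void draw() {
  background(0);
  fill(255);
  text("heading: " + nf(heading, 1, 1), 20, 60);
  text("pitch:   " + nf(pitch, 1, 1),   20, 90);
  text("roll:    " + nf(roll, 1, 1),    20, 120);
}
```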
Position tracking uses the following (see the sketch after this list for how the Kinect data can be fed to Wekinator):
Microsoft Kinect depth/motion camera
Processing sketch with the SimpleOpenNI library
Wekinator machine learning software
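Wekinator receives its input features over OSC (by default on port 6448 at the address /wek/inputs), so the Processing sketch's job is to extract a position from the Kinect data and send it on. The sketch below is a simplified illustration of that idea, using SimpleOpenNI's per-user centre of mass as the feature vector; the installation's actual feature set may differ:

```
import SimpleOpenNI.*;
import oscP5.*;
import netP5.*;

SimpleOpenNI kinect;
OscP5 osc;
NetAddress wekinator;

void setup() {
  size(640, 480);
  kinect = new SimpleOpenNI(this);
  kinect.enableDepth();
  kinect.enableUser();                            // user tracking gives a centre of mass per person
  osc = new OscP5(this, 12001);                   // local listening port (not used in this sketch)
  wekinator = new NetAddress("127.0.0.1", 6448);  // Wekinator's default input port
}

void draw() {
  kinect.update();
  image(kinect.depthImage(), 0, 0);

  int[] users = kinect.getUsers();
  if (users.length > 0) {
    PVector com = new PVector();
    if (kinect.getCoM(users[0], com)) {           // centre of mass of the first user, in mm
      // Send the raw Kinect-space position to Wekinator as three input features.
      OscMessage msg = new OscMessage("/wek/inputs");
      msg.add(com.x);
      msg.add(com.y);
      msg.add(com.z);
      osc.send(msg, wekinator);
    }
  }
}
```

Wekinator can then be trained to map these raw Kinect coordinates to whatever position representation the Unity scene expects, and it sends the trained outputs onward as /wek/outputs OSC messages (to port 12000 by default).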
Orientation and position data are combined in the Unity game engine, which runs a scene with audio sources placed in 3D space. The dearVR spatialiser is used to render a binaural stereo image, and the audio output is fed to the headphones through the wireless headphone base station.
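One straightforward way to get the tracking data into Unity is over OSC: assuming the scene runs an OSC receiver plugin, a small relay sketch can forward Wekinator's output (and, in the same fashion, the parsed head orientation) to Unity, where the values drive the listener's position and rotation before dearVR renders the binaural mix. The address and port below are placeholder choices, not the installation's actual configuration:

```
import oscP5.*;
import netP5.*;

OscP5 osc;          // listens for Wekinator's trained outputs
NetAddress unity;   // OSC receiver inside the Unity scene (hypothetical address/port)

void setup() {
  osc   = new OscP5(this, 12000);             // Wekinator sends /wek/outputs to port 12000 by default
  unity = new NetAddress("127.0.0.1", 9000);  // placeholder; depends on the Unity OSC plugin's setup
}

void oscEvent(OscMessage in) {
  if (in.checkAddrPattern("/wek/outputs")) {
    // Relay the learned listener position (here assumed to be two values, e.g. x and z) to Unity.
    OscMessage out = new OscMessage("/listener/position");
    out.add(in.get(0).floatValue());
    out.add(in.get(1).floatValue());
    osc.send(out, unity);
  }
}

void draw() { }  // headless relay; nothing to render
```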