Emotive VR, part 1

Since the end of January I’ve been working on a neuro-interactive 360° film project, “Emotive VR”, in Nantes and Le Mans, France. This seems like a good moment to reflect on some of my experiences so far. As my responsibilities are in post-production sound and interactive music, I will focus on those areas here.


Freud – La dernière hypnose

Two years ago the French filmmaker Marie-Laure Cazin realised an interactive film project, Cinéma émotif, in which the storyline of her film “Mademoiselle Paradis” changed according to the spectators’ emotions. A few selected audience members wore EEG headsets, and their valence (positive–negative) and excitement levels were measured as they watched the film. This data was then used to alter the course of the film in real time.

Emotive VR is a similar project; this time, however, the spectator will be inside a 360° film called “Freud – La dernière hypnose”. The film takes the audience into Sigmund Freud’s last hypnosis session with his young patient Karl. Unlike in Cinéma émotif, the emotions won’t change the storyline; instead they will affect the soundscape, the music and some visual elements of the film. The spectator is placed in either Freud’s or Karl’s subjective position, and occasionally at an objective vantage point.

The topics of hypnosis, psychoanalysis, self-knowledge and psychology in general are extremely interesting in the context of VR, and exploring how they relate to sound design will be a new and intriguing challenge for me.

The core components of Emotive VR are two 10-minute cinematic 360° monoscopic videos with actors and dialogue. These linear film sequences will be complemented with spatialised sound design. The videos will run in the Unity game engine, which receives control parameters from the EEG system and other input devices. The audio, in 2nd-order ambisonics, will run separately but (hopefully) in sync in the Wwise sound engine. Wwise is used so that the interactive music and sound effects are easy to create and implement. I will talk about these topics in more detail in later posts.
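To give an idea of what those control parameters might look like, here is a minimal sketch in Python of an EEG-to-engine bridge over OSC. The addresses, port and update rate are my own placeholders for illustration, not the project’s actual protocol, and the fake data generator stands in for a real headset reader.

```python
# Minimal sketch of streaming EEG-derived control parameters over OSC.
# Requires the python-osc package; addresses and port are hypothetical.
import time
import random
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9001)  # assumed game-engine-side OSC listener

def send_emotion_state(valence: float, excitement: float) -> None:
    """Send normalised (0..1) valence and excitement as separate OSC messages."""
    client.send_message("/eeg/valence", max(0.0, min(1.0, valence)))
    client.send_message("/eeg/excitement", max(0.0, min(1.0, excitement)))

if __name__ == "__main__":
    # Fake data for illustration; a real bridge would read the EEG system here.
    while True:
        send_emotion_state(random.random(), random.random())
        time.sleep(0.1)  # arbitrary 10 Hz update rate
```

On the receiving side, values like these would then be mapped to game parameters (for example Wwise RTPCs) that steer the music, the soundscape and the visual elements.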

Emotive VR is a joint research project by the University of Nantes / Polytech Nantes, RFI Ouest Industries Créatives, Le Laboratoire des Sciences du Numérique de Nantes (LS2N), the École supérieure des beaux-arts TALM (ESBA-TALM), the film production company Le Crabe Fantôme in Nantes, and the VR production company DVgroup in Paris. My own involvement in the project is through an internship programme between ESBA-TALM and Aalto University.

Location sound recording for a 360 film

The 360 video sequences were shot over three days in a small château in Nantes. The actors were alone in one of the rooms decorated to match the film’s narrative, and the crew were behind the wall in an adjacent room.

The camera rig was a custom-built array of eight GoPros. An additional small Samsung Gear 360 camera was used to provide real-time monitoring for the director.

SoundField ambisonic microphone and audio bag placed under the 360 camera.

As no cables, mics or crew members were allowed to be visible in any direction (except directly below the camera, an area that was refilmed after each camera position to mask the tripod and other equipment), booming the actors was not an option. The dialogue was therefore recorded only with wireless lavalier mics, and perhaps not surprisingly there were a lot of problems with clothing noise. Otherwise the dialogue came through clean for the most part. The production sound mixer Martin Gracineu used Wisycom transmitters and receivers, Sanken COS-11D capsules, and a Sound Devices 633 recorder.

After each scene Martin recorded wild foleys with a “real” microphone (a Schoeps CMC 6 with an MK41 supercardioid capsule). The Schoeps was also used for some off-screen dialogue tracks, as well as for one special-effects scene where only a small portion of the 360 camera image was to be used, so the mic boom and the rest of the crew didn’t ruin the shot.

My task was to record the scenes in ambisonics using a SoundField ST450 MKII microphone. The original idea was to use ambisonics only for the two audio-only scenes: in these prologue and epilogue sequences Freud walks around the spectator in darkness while delivering his monologue. The boom mic would capture the voice, and the ambisonic mic would capture the spatial ambience. However, as we had the SoundField mic at our disposal, we decided to try it on all the scenes just in case. So for each shot I carefully rigged the mic and audio bag under the GoPro array, in the limited space between the battery packs, cables and the Samsung camera.

Me and Martin. Photo by Jérôme Fihey.

The usefulness of the ambisonic material remains to be seen, but judging by early test mixes it looks like I can actually use it behind the dialogue to provide perspective and a sense of space with authentic room reverberation. I would prefer that approach over trying to recreate the same thing with plugins.

The four ambisonic B-format channels from the ST450 control unit were recorded into a Sonosax SX-R4+. It has built-in WiFi for remote control and metering via a web browser on a computer or phone. That turned out to be very useful, as we had to stay in another room and running in and out of the set between takes would have been impractical. However, the wireless connection was extremely unreliable through the wall, and for one or two scenes I was forced to operate the recorder manually. There was also no way to monitor the audio over the WiFi, so I had to “fly on instruments” and let the recorder roll without listening. A wireless IEM system would have solved the problem, but we didn’t have an extra set available. Then again, as the mic was stationary the whole time and I had set the gains quite conservatively (the Sonosax, by the way, has no limiters), there was not much to monitor.

Remote control for the Sonosax SX-R4+ running in a web browser window. The headphones are for the director and script supervisor listening to the lav mics through wireless IEM sets.

In the end we didn’t have time to record the prologue and epilogue, so those recordings will be done separately at the end of March. The producer is trying to find a quiet room with a nice-sounding wooden floor!

Pre-stitching and editing the videos

After the shooting was completed, the material from the eight GoPros needed to be stitched together to create the 360 videos. During the following weeks a fellow intern, Christophe, together with another intern at DVgroup in Paris, pre-stitched the material using Kolor’s Autopano software. Audio from the internal mic of one of the GoPros was selected as the guide audio track for the video clips.

However, no audio from the production sound recorders was synced to the stitched videos. Editing was done with the GoPro sound, so for example any parts of the dialogue recorded off the set were not audible. Only after the editing had started, once the director knew which takes would be used, was I asked to deliver a mixdown of the dialogue tracks for each stitched video clip, to be synced on the Premiere Pro timeline. I prepared the mixdowns in Reaper, syncing them to the slate, and exported them as mono files. (Later I realised that we lost some valuable metadata in the process that would have been useful once the sound editing started.)
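My guess is that the most useful of that lost metadata would have been the field recorder’s BWF time stamps and track names (that is my assumption of what went missing; the mono mixdowns simply carried none of it). As a rough sketch, the original start time of a broadcast WAV can be read straight from its bext chunk, which would make a later re-sync script possible:

```python
# Sketch: read the BWF time stamp (bext TimeReference) from a field recording.
# Assumes a standard RIFF/WAVE file with a bext chunk; for illustration only.
import struct

def read_bwf_time_reference(path: str):
    """Return (time_reference_samples, sample_rate), or None if there is no bext chunk."""
    with open(path, "rb") as f:
        riff, _, wave = struct.unpack("<4sI4s", f.read(12))
        if riff != b"RIFF" or wave != b"WAVE":
            raise ValueError("not a RIFF/WAVE file")
        sample_rate = None
        time_ref = None
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            chunk_id, chunk_size = struct.unpack("<4sI", header)
            data = f.read(chunk_size + (chunk_size & 1))  # chunks are word-aligned
            if chunk_id == b"fmt ":
                sample_rate = struct.unpack("<I", data[4:8])[0]
            elif chunk_id == b"bext":
                # TimeReference (64-bit sample count since midnight) starts at byte 338:
                # Description(256) + Originator(32) + OriginatorReference(32)
                # + OriginationDate(10) + OriginationTime(8).
                time_ref = struct.unpack("<Q", data[338:346])[0]
    if time_ref is None:
        return None
    return time_ref, sample_rate

if __name__ == "__main__":
    # "field_recording.wav" is a placeholder file name.
    result = read_bwf_time_reference("field_recording.wav")
    if result:
        samples, rate = result
        print(f"recording starts at sample {samples}"
              + (f" ({samples / rate:.3f} s since midnight)" if rate else ""))
```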

This gave Christophe some extra work, as he had to take additional steps to attach external audio to already-edited video clips. My other mistake was delivering him only the mono mixdown rather than multichannel audio files. Even though the mixdown served the picture edit fine, it caused me extra work later on, as I had to sync the individual mic tracks manually in the DAW when starting the dialogue edit.

Although it wouldn’t have solved my syncing problems in sound editing, taking the mixdown track from the field recorder and aligning that with the video in Autopano would have been a quick way to get at least nice and clean audio for the picture edit. 
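For anyone stuck with the same manual re-sync, a cross-correlation estimate of the offset between the picture-synced mixdown and each individual mic track can at least give a starting point before fine-tuning by ear in the DAW. A minimal sketch, with placeholder file names and not part of our actual pipeline:

```python
# Sketch: estimate a mic track's offset relative to the picture-synced mixdown
# using cross-correlation. Requires numpy, scipy and soundfile.
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

def estimate_offset_seconds(reference_path: str, track_path: str) -> float:
    """Positive result: the track's recording started later than the reference."""
    ref, sr_ref = sf.read(reference_path)
    trk, sr_trk = sf.read(track_path)
    if sr_ref != sr_trk:
        raise ValueError("sample rates differ; resample first")
    # Collapse to mono and remove DC so the correlation peak is meaningful.
    ref = np.atleast_2d(ref.T).mean(axis=0)
    trk = np.atleast_2d(trk.T).mean(axis=0)
    ref -= ref.mean()
    trk -= trk.mean()
    # Cross-correlation computed as FFT convolution with the reversed track.
    corr = fftconvolve(ref, trk[::-1], mode="full")
    lag = int(np.argmax(np.abs(corr))) - (len(trk) - 1)
    return lag / sr_ref

if __name__ == "__main__":
    # Placeholder file names, for illustration only.
    offset = estimate_offset_seconds("dialogue_mixdown.wav", "lav_track_01.wav")
    print(f"place the track {offset:+.3f} s relative to the mixdown start")
```

In practice this only gets the tracks into the right neighbourhood; gaps between takes and any drift still have to be checked against the slate.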

During these first weeks, while Marie-Laure and Christophe were editing, I spent my days digging deeper into 3D audio workflows, software and plugins, and I also started creating the music material. Christophe and I also spent a few days installing SteamVR and an HTC Vive on a Mac Pro and checking that the VR headset and the 360 videos worked on all the computers and with all the required software. I will talk more about these issues later, as they will take on a concrete role further into the production.

Christophe checking edits using Premiere Pro with GoPro VR Player plugin.

Sound editing begins

At the beginning of March the rough edits were finally done. The easiest way for me to start the dialogue and sound editing would have been to take AAF or OMF exports from Premiere and open them in Reaper. That way I would have had all the raw dialogue audio synced and trimmed to the picture edit while preserving its original length, so I could easily manipulate or replace any audio, even go back to the slate frame if needed.

But Reaper does not understand AAF or OMF, which is unbelievable. The AATranslator converter software should do the trick, but it’s not a cheap option. With Vordio it’s possible to convert a Premiere XML export into a Reaper project, but for some reason our Premiere failed to export XML and instead produced a list of errors.

So as I write this I’m still weighing the options. I could start editing in Pro Tools, which would suit me well as I’m much more familiar with it and faster in it than in Reaper, but at some point I need to start working on the spatialisation, and for that I need Reaper with its multichannel tracks and VST plugins.

EDIT: We managed to get the XML export to work, so Vordio is now making the transfers easy. However, as I already mentioned, there are no original mic tracks in the video edit project, only the mixdown, so I still need to sync the individual tracks manually before I can start the dialogue editing. This must be streamlined for the next project!

Next posts

In the next posts I will share my experiences of sound editing and 3D spatialisation, as well as of creating the film’s musical atmosphere and making it interactive in Wwise. At some point I will also describe how we and the University of Nantes team are getting on with integrating everything into Unity and adding the EEG interaction. That’s going to be interesting!
