Game audio

Thanks to the New Media studies at Aalto University I’ve gotten myself into several small game projects as an audio designer and composer. It’s nice to be exposed to a new working culture and a range of new software tools and techniques in a short period of time.

“Unison” was a small game jam project made during the first week of our studies in August 2017. It was made for an exhibition with the theme “love”, so we tried to create a playable exhibition item with the noble message of universal love! The idea of the game is to find musical fragments and collectively build a song by joining the pieces together. I was responsible for the audio and music, which was a somewhat challenging task, as all the musical elements had to fit with each other and create meaningful combinations in all possible permutations.

In the short demo video below the heartbeat sound is quite harsh: in the exhibition it was played back through a subwoofer hidden inside a wooden booth box, and due to the difficult acoustic properties of the box I had to do some radical EQ to get it sounding nice.

Project management: Helena Sorva
Programming: Xiaoxiao Ma, Yuanqi Shan, Juhani Halkomäki
Art: Veera Hokkanen, Veera Krouglov
Audio and music: Matias Harju

The next project was a clone of the indie horror game Slender. I was again the audio designer and composer. This was a very educational project in terms of project management and the use of audio middleware (FMOD) together with Unity integration and some C# scripting. I got to produce many basic sound design elements, from footsteps to a rainy forest ambience – and of course several creepy sounds, from distant wood-chopping to a generative music bed using slowed-down and reversed clips of Sean Spicer’s speech (don’t ask where that idea came from).
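To give a flavour of the FMOD–Unity workflow, here is a minimal sketch of triggering an FMOD event from a Unity C# script. The event path `event:/Footsteps` and the class name are hypothetical examples, and the snippet assumes the FMOD for Unity integration package is installed in the project:

```csharp
using UnityEngine;
// Requires the FMOD for Unity integration package (FMODUnity namespace).

public class FootstepAudio : MonoBehaviour
{
    // Hypothetical event path; it must match an event built in FMOD Studio.
    [SerializeField] private string footstepEvent = "event:/Footsteps";

    // Call this from animation events or from the character movement code.
    public void PlayFootstep()
    {
        // Fire-and-forget playback at the character's current position.
        FMODUnity.RuntimeManager.PlayOneShot(footstepEvent, transform.position);
    }
}
```

The nice part of this split is that the designer can keep iterating on the footstep variations and mixing inside FMOD Studio without touching the Unity project at all.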

Currently I’m working on an experimental VR game for HTC Vive where the player moves inside IFS fractals while solving a mystery presented through audio cues. The project is still taking shape and has some programming challenges that still need to be solved, but the prototypes have already proven to be extremely fascinating.

This time I’m using Wwise as the audio middleware, with the Google Resonance Audio spatializer plugin. The music will be generative, interacting with the transforming fractals.
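In Wwise the equivalent hook from Unity is posting an event and driving game parameters (RTPCs). A minimal sketch, where the event name `Play_Music` and the RTPC `FractalDepth` are hypothetical placeholders, assuming the Wwise Unity integration is installed:

```csharp
using UnityEngine;
// Requires the Wwise Unity integration (AkSoundEngine API).

public class FractalMusic : MonoBehaviour
{
    void Start()
    {
        // Hypothetical event name; must match an event in the Wwise project.
        AkSoundEngine.PostEvent("Play_Music", gameObject);
    }

    // Hypothetical hook: called as the fractal transforms, so the
    // generative music can react to the geometry via an RTPC.
    public void OnFractalDepthChanged(float depth)
    {
        AkSoundEngine.SetRTPCValue("FractalDepth", depth, gameObject);
    }
}
```

Driving the music through an RTPC like this keeps the mapping from geometry to sound inside Wwise, where it can be tuned without rebuilding the game.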
