Maximilian Treitler

Maximilian Treitler is an Austrian multimedia content creator with an emphasis on Audio Design. Rooted in music, having played the drums since the age of eight, he has realised projects in a wide variety of fields. This cross-disciplinary approach drives continuous personal development, leading to the constant acquisition of new software and hardware tools as well as novel creative ways of producing content.

1st Semester Project

No Drummer Drums – NDD

No Drummer Drums turns a drum set into an unusual loudspeaker. So-called exciters are used to set the drumheads of a drum set into vibration; in other words, the drumheads serve as the speaker membranes. This makes it possible to move around and within the playing drum set, resulting in a unique, immersive and novel experience.

Software & Hardware used:
DAW: Ableton Live 9 Suite
Mixer: Behringer X32 Producer (serving as an interface and a mixer)
Amplification: Denon DN-A100 Hifi Stereo Amplifier
Exciter: Visaton EX60s, 8 Ohm
Drumset: Pearl Chad Smith Signature Drumset: Tom 1: 12″, Tom 2: 14″, Kick drum: 20″
Drumheads: Remo Ambassador Coated

2nd Semester Project

Mood into Sine – MiS

The project “Mood Into Sine” describes an unconventional mirror. Instead of reflecting the face visually, it reflects the mood of the person in front of it, expressed through their face, sonically. The emotions that can be triggered are: happiness, sadness, anger, surprise and disgust. In more technical terms: the data derived from the facial expression is processed by an algorithm in MAX MSP, which generates procedural, synthetic music in real time.

The facial data is generated and tracked by Kyle McDonald’s open-source software FaceOSC and passed into MAX MSP via the OSC protocol. An easy-to-use support vector machine from the ml.lib library processes the data and predicts the mood. This prediction of the emotional state is then sent to the matrix.
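Inside MAX the classification itself is handled by ml.lib’s trained support vector machine, and the patch is graphical. Purely as a language-agnostic illustration of this step, the C++ sketch below classifies a hypothetical FaceOSC feature vector by its distance to per-emotion centroids; the feature choice and the nearest-centroid rule are assumptions for illustration, not the actual training setup.

#include <array>
#include <cstddef>
#include <string>
#include <vector>

// Hypothetical feature vector derived from FaceOSC gesture data
// (e.g. mouth width, mouth height, left and right eyebrow height).
using Features = std::array<float, 4>;

struct Emotion {
    std::string name;   // "happiness", "sadness", ...
    Features centroid;  // mean feature values from labelled examples
};

// Nearest-centroid stand-in for the ml.lib SVM: pick the emotion
// whose training centroid is closest to the current face features.
std::string classify(const Features& f, const std::vector<Emotion>& emotions) {
    std::string best = "neutral";
    float bestDist = 1e30f;
    for (const auto& e : emotions) {
        float d = 0.0f;
        for (std::size_t i = 0; i < f.size(); ++i) {
            const float diff = f[i] - e.centroid[i];
            d += diff * diff;
        }
        if (d < bestDist) { bestDist = d; best = e.name; }
    }
    return best;
}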

The matrix is best described as a two-dimensional emotional space. One area is placed in each of the four corners and one directly in the middle. At the centre of each area the emotion is strongest, and the further away from an area’s centre, the weaker the emotion gets; this is a fundamental quality of the algorithm and is implemented in the sonification. Between these areas lies “neutral space”. This neutral space serves one main function: just as in real life, emotions do not change from, for example, happy to angry in the blink of an eye; there has to be an emotionally neutral state in between. Within this space a point is moved by the predicted emotional state sent by the support vector machine, which provides information about the triggered emotion and its intensity.
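A minimal C++ sketch of this distance falloff (the area coordinates, radius and the linear falloff curve are assumptions, not values taken from the patch):

#include <algorithm>
#include <cmath>

struct Area { float x, y; };  // centre of one emotion area in the matrix

// Intensity of an emotion for the current matrix point (px, py):
// 1.0 at the area's centre, falling off linearly to 0.0 at its edge.
// Points where every emotion reads 0.0 lie in the neutral space.
float emotionIntensity(const Area& a, float px, float py, float radius) {
    const float dist = std::hypot(px - a.x, py - a.y);
    return std::max(0.0f, 1.0f - dist / radius);
}

// Example: a "happiness" area in one corner of a unit-square matrix.
// Area happiness{0.0f, 0.0f};
// float v = emotionIntensity(happiness, 0.1f, 0.1f, 0.4f); // ~0.65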

Finally, the information from the matrix is translated into sound. From the moment the program is started, music is always playing. When an emotion is triggered, the music changes to evoke how that emotional state might sound. These changes result from differences in rhythm, tempo, chord composition and progression, melody and, of course, the sounds themselves.
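How the patch maps the matrix data onto these musical parameters is not documented here; purely as an illustration of the idea, an emotion and its intensity could scale a handful of parameters. Every name and value below is invented:

#include <string>

struct MusicParams {
    float bpm;        // tempo
    bool  minorMode;  // rough stand-in for chord/scale choice
    float brightness; // 0..1, rough stand-in for timbre
};

// Invented mapping: each emotion pulls the music towards its own
// character, and the intensity (0..1) blends away from a neutral default.
MusicParams paramsFor(const std::string& emotion, float intensity) {
    MusicParams p{100.0f, false, 0.5f}; // neutral default
    if (emotion == "happiness")     { p.bpm += 30.0f * intensity; p.brightness += 0.4f * intensity; }
    else if (emotion == "sadness")  { p.bpm -= 30.0f * intensity; p.minorMode = true; p.brightness -= 0.3f * intensity; }
    else if (emotion == "anger")    { p.bpm += 50.0f * intensity; p.minorMode = true; }
    else if (emotion == "surprise") { p.brightness += 0.5f * intensity; }
    else if (emotion == "disgust")  { p.bpm -= 10.0f * intensity; p.brightness -= 0.4f * intensity; }
    return p;
}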

For additional visual feedback, the waveforms of the instruments and the dots generated by the tracked face are displayed on a screen.

Software used:

FaceOSC by Kyle McDonald:
https://github.com/kylemcdonald/ofxFa…

MAX 8.0 by Cycling ’74
Additional libraries used:
ml.lib (machine learning)
HISSTools Impulse Response Toolbox (HIRT) (Spectrogram)

3rd Semester Project

Blind Beggar

Within the scope of this project, a 3D audio-only game was created. The game algorithm is written in MAX MSP and the sound is spatialised using the IEM Plugin Suite. The player interacts with the game through a small basket equipped with sensors; this sensor data is processed by an Arduino programmed in the Arduino IDE.
In its final form, the game is played on a loudspeaker array arranged in the shape of a sphere, but a binaural decoding option also enables the game to be played with headphones.

Introduction

What is it about? 

The protagonist of the game is a blind beggar in a medieval marketplace. The player sits on the floor with covered eyes and can therefore rely only on auditory information. The interactive game object, placed in the player’s hand, is a small basket equipped with sensors that report the basket’s motion.

How is it played?

Basically, the game follows a simple principle: sit, wait and listen as different characters pass by. These characters can be divided into two groups: one group is generous and increases the score; the other group can, under certain circumstances, decrease it. Whether a passing character changes the score depends on the player’s interaction with the basket. The player can either lift the basket, which can increase the score, or hide it by placing a hand on top of it. Which motion is correct depends on the character passing by. If the basket is raised, or simply not hidden, while an ungenerous character passes, the score decreases and guards come around; failing to hide the basket while the guards are around leads to a large decrease in the score. The game is subdivided into three levels. In level two the wind begins to blow; in level three it starts to rain. These additional sound sources are meant to add extra difficulty to the game.
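The concrete point values are not documented here; as a C++ sketch of the decision structure described above (all score deltas invented), the rules could be expressed like this:

enum class BasketState   { Raised, Idle, Hidden };
enum class CharacterType { Generous, Ungenerous, Guard };

// Score change when a character passes through the critical range.
// The deltas are invented; only the decision structure follows the rules.
// An exposed basket during an ungenerous pass-by also summons the guards.
int scoreDelta(CharacterType c, BasketState b) {
    switch (c) {
        case CharacterType::Generous:
            return b == BasketState::Raised ? +10 : 0;  // coins need a raised basket
        case CharacterType::Ungenerous:
            return b == BasketState::Hidden ? 0 : -5;   // hide it in time
        case CharacterType::Guard:
            return b == BasketState::Hidden ? 0 : -20;  // guards punish an exposed basket hard
    }
    return 0;
}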

TECHNICAL

The algorithm of the game is written in the software MAX 8.0 by Cycling ’74. More precisely, considering only the software component, Blind Beggar exists in its final form as a MAX Project. In addition, an Arduino MKR NB 1500 and a basket fitted with sensors make up the hardware component.

SENSORS & ARDUINO

Two interactions are relevant to the game: lifting the basket and hiding it. For the first interaction, the lifting, sensor data from an accelerometer and a gyroscope, combined in a single device, the MPU-6050, is used. For the hiding, which is equivalent to placing a hand on the basket, a photoresistor (LDR) providing information about the incidence of light is used.

The data is read by the Arduino, which runs code written in the Arduino IDE. This code is based on the MPU6050_light library by Romain JL. Fetick and Takuto Sato; a GitHub link is provided below. I additionally adapted the code to read the light-dependent resistor. The interface into MAX is provided by the [serial] object, through which all the sensor data is received and processed further.
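The adapted sketch itself is not reproduced here; a minimal version built on the MPU6050_light API could look like the following. The pin assignment, baud rate and the space-separated line format are assumptions for illustration:

#include <Wire.h>
#include <MPU6050_light.h>

MPU6050 mpu(Wire);
const int ldrPin = A0;  // assumed analog pin for the KY-018 photoresistor

void setup() {
    Serial.begin(115200);   // must match the [serial] object in MAX
    Wire.begin();
    mpu.begin();            // initialise the MPU-6050
    mpu.calcOffsets();      // calibrate; keep the basket still meanwhile
}

void loop() {
    mpu.update();
    // One space-separated line per reading, easy to split in MAX.
    Serial.print(mpu.getAccX());  Serial.print(' ');
    Serial.print(mpu.getAccY());  Serial.print(' ');
    Serial.print(mpu.getAccZ());  Serial.print(' ');
    Serial.print(mpu.getGyroX()); Serial.print(' ');
    Serial.print(mpu.getGyroY()); Serial.print(' ');
    Serial.print(mpu.getGyroZ()); Serial.print(' ');
    Serial.println(analogRead(ldrPin)); // changes sharply when a hand covers the basket
    delay(10); // roughly 100 Hz
}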

MAX 8.0

The MAX Project consists of two main patches. The first patch’s job is to process the sensor data and send out the game-specific predicted motions. This follows a defined set of steps: first, the data stream is translated into readable, interpretable groups of data; these groups are then routed accordingly and the data behaviour is visualised. Next, the movement is predicted by a support vector machine from the ml.lib library and finally sent via OSC to the second main patch.
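Within MAX this grouping is done graphically with native objects after [serial]; purely as an illustration of the step, splitting one sensor line (in the format assumed in the Arduino sketch above) could look like this in C++:

#include <sstream>
#include <string>
#include <vector>

// Split one serial line ("ax ay az gx gy gz ldr") into numeric fields.
// Returns an empty vector if the line is incomplete or malformed.
std::vector<float> parseSensorLine(const std::string& line) {
    std::istringstream in(line);
    std::vector<float> fields;
    float value = 0.0f;
    while (in >> value) fields.push_back(value);
    if (fields.size() != 7) fields.clear(); // expect exactly 7 values
    return fields;
}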

In the second main patch, the game algorithm is defined. This comprises all the information and processing behind the game’s logic: the score (and therefore the levels), the triggering of characters, the randomised ambience (marketplace), audible movements, and all encoding and decoding of audio sources.

The spatialisation of the sound is based on third-order Ambisonics. For this, the open-source IEM Plugin Suite from the Institute of Electronic Music and Acoustics in Graz is used. The game can be played either on a loudspeaker array or with headphones; this is made possible by the different decoders provided by the plugin suite and integrated into the algorithm.
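Third-order Ambisonics represents the sound field with (3 + 1)² = 16 spherical-harmonic channels; the IEM encoder and decoder plugins handle this internally. As an illustration of the underlying encoding principle only, here is a first-order (ACN/SN3D) panning sketch in C++; azimuth and elevation are hypothetical source angles:

#include <array>
#include <cmath>

// First-order Ambisonic (ACN/SN3D) encoding gains for a source at
// azimuth az (counter-clockwise from the front) and elevation el,
// both in radians. ACN channel order: 0 = W, 1 = Y, 2 = Z, 3 = X.
std::array<float, 4> encodeFirstOrder(float az, float el) {
    return {
        1.0f,                          // W: omnidirectional component
        std::sin(az) * std::cos(el),   // Y: left-right
        std::sin(el),                  // Z: up-down
        std::cos(az) * std::cos(el)    // X: front-back
    };
}
// Third order extends this to 16 channels of higher-degree spherical
// harmonics, giving sharper source localisation in the sphere.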

Dividing the CPU Load

The division of the algorithm into a sensor-processing patch and a game-algorithm patch stems from the fact that Ambisonics can demand a lot of CPU power. A second device can therefore be used to process the sensor data and send the predicted motion to the game algorithm via OSC. This makes it possible to outsource a considerable amount of processing in an efficient way.
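In MAX this handoff is a matter of [udpsend] and [udpreceive]; to illustrate what actually travels between the two devices, here is a minimal hand-rolled OSC message with a single int32 argument in C++. The address and the motion code are hypothetical:

#include <cstdint>
#include <string>
#include <vector>

// Append a string with OSC padding: null-terminated, total length
// rounded up to a multiple of four bytes.
static void appendPadded(std::vector<uint8_t>& buf, const std::string& s) {
    for (char c : s) buf.push_back(static_cast<uint8_t>(c));
    buf.push_back(0);
    while (buf.size() % 4 != 0) buf.push_back(0);
}

// Build an OSC message such as "/beggar/motion 1" (address and motion
// codes are invented). Send the returned bytes over UDP to the game patch.
std::vector<uint8_t> oscIntMessage(const std::string& address, int32_t value) {
    std::vector<uint8_t> buf;
    appendPadded(buf, address);
    appendPadded(buf, ",i"); // type tag: one int32 argument
    buf.push_back(static_cast<uint8_t>((value >> 24) & 0xFF)); // big-endian
    buf.push_back(static_cast<uint8_t>((value >> 16) & 0xFF));
    buf.push_back(static_cast<uint8_t>((value >> 8) & 0xFF));
    buf.push_back(static_cast<uint8_t>(value & 0xFF));
    return buf;
}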

Arrangement of the sound

The sound in the game is divided into two main layers. The foundation of the audible scene is provided by the ambience, which consists of constant sound sources such as a crowd of people, the rain or the wind. It also contains sound objects, such as a church bell, different animals and a forge. These sound objects have a defined position in space and are partly randomised in their movement and audibility. The bell, for example, follows an artificial clock, expressed through an increasing or decreasing number of strokes over time.
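As a tiny sketch of such an artificial clock (the actual timing in the patch is not documented here), the number of bell strokes could simply follow the in-game hour:

// The bell marks an artificial in-game time: the stroke count rises
// hour by hour and resets after twelve, like a church clock.
int bellStrokes(int gameHour) {
    const int h = gameHour % 12;
    return h == 0 ? 12 : h; // 1..12 strokes
}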

In the second layer, the characters are defined. These characters are also based on randomised parameters; moreover, which character is triggered follows a randomisation process as well. The characters “pass by” the player, meaning they are moved from left to right, passing in front of the player, or from right to left. Within the movement parameters there is a critical range. This critical range is defined as the area in which the corresponding motion has to be made by the player in order to gain points or avoid losing any. It is the area in which a character is directly in front of the player.
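A minimal C++ sketch of such a pass-by and its critical range (the angles, the linear interpolation and the window width are assumptions):

#include <cmath>

// A character "passes by": its azimuth is interpolated from one side to
// the other. progress runs from 0.0 (appearing) to 1.0 (gone);
// leftToRight flips the direction of travel.
float characterAzimuth(float progress, bool leftToRight) {
    const float az = -90.0f + 180.0f * progress; // degrees, -90 left .. +90 right
    return leftToRight ? az : -az;
}

// The critical range: the character counts as "in front" of the player,
// so the basket motion is only evaluated inside this window.
bool inCriticalRange(float azimuthDeg) {
    return std::fabs(azimuthDeg) < 20.0f; // assumed +/- 20 degree window
}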

The sound sources are based on recordings I made myself, a sound library and procedurally produced sounds. The arrangement of these sounds is defined within the algorithm.

FUTURE WORK

Future work might consist of bringing the game into a more accessible format. This could be achieved not only by creating a smartphone app based on binaural rendering, but also by using the smartphone’s own sensor data, so that the smartphone, instead of the basket, serves as the interaction component of the game.

DOCUMENTATION

Hardware

*) Arduino MKR NB 1500

*) GY-521 MPU-6050 3-Axis Gyroscope and Acceleration Sensor for Arduino

*) KY-018 Photoresistor (LDR) Sensor Module for Arduino

Software

*) Arduino IDE

*) Cycling ’74 MAX MSP 
-) ml.lib library (Support Vector Machine) 

*) IEM PlugIn Suite: https://plugins.iem.at

ARDUINO SOURCE CODE

MPU6050_light by Romain JL. Fetick and Takuto Sato: https://github.com/rfetick/MPU6050_light