Maximilian Treitler is an Austrian multimedia content creator with an emphasis on audio design. With his origins in music, having played the drums since the age of eight, he has realized projects in a wide variety of fields. This cross-disciplinary approach drives a continuous personal development, leading to the constant acquisition of new software and hardware tools as well as novel creative ways of producing content.
1st Semester Project
No Drummer Drums – NDD
No Drummer Drums turns a drum set into an unusual loudspeaker. So-called exciters are used to set the drumheads of a drum set into vibration; in other words, the drumheads are used as the speaker membranes. This makes it possible to move inside and around the playing drum set, resulting in a unique, immersive and novel experience.
Software & Hardware used:
DAW : Ableton Live 9 Suite
Mixer: Behringer X32 Producer (serving as an audio interface and a mixer)
Amplification: Denon DN-A100 Hifi Stereo Amplifier
Exciter: Visaton EX60s, 8 Ohm
Drumset: Pearl Chad Smith Signature Drumset: Tom 1: 12″, Tom 2: 14″, Kickdrum: 20″
Drumheads: Remo Ambassador Coated
2nd Semester Project
Mood into Sine – MiS
The project “Mood Into Sine” describes an unconventional mirror. Instead of reflecting the face visually, it reflects the mood of the person in front of it – expressed through their facial expression – sonically. The emotions that can be triggered are: happiness, sadness, anger, surprise and disgust. In more technical terms: the data from the facial expression is processed by an algorithm in Max MSP, which creates procedural, synthetic music in real time.
The facial data is generated and tracked by Kyle McDonald’s open-source software FaceOSC and passed into Max MSP via the OSC protocol. Using an easy-to-handle support vector machine from the ml.lib library, the data is processed and the mood is predicted. This prediction of the emotional state is then sent to the matrix.
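As a rough illustration of this classification step, the sketch below uses a minimal nearest-centroid classifier in plain Python as a stand-in for the ml.lib support vector machine; the feature names and training values are invented for the example and do not come from the actual patch.

```python
import math

# Hypothetical training examples: a few FaceOSC-style features
# (eyebrow height, mouth width, mouth height) per labelled emotion.
TRAINING = {
    "happiness": [(7.0, 16.0, 3.0), (7.5, 15.0, 4.0)],
    "sadness":   [(5.0, 12.0, 1.0), (4.5, 11.0, 1.5)],
    "anger":     [(3.0, 13.0, 2.0), (3.5, 12.5, 2.5)],
}

def centroid(samples):
    """Mean of each feature across the given samples."""
    return tuple(sum(values) / len(samples) for values in zip(*samples))

CENTROIDS = {label: centroid(s) for label, s in TRAINING.items()}

def predict(features):
    """Return the emotion whose centroid lies closest to the feature vector."""
    return min(CENTROIDS, key=lambda label: math.dist(features, CENTROIDS[label]))

print(predict((7.2, 15.5, 3.5)))  # close to the happiness examples
```

A real SVM draws more flexible decision boundaries than this, but the interface is the same: a feature vector goes in, an emotion label comes out.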
The matrix is best described as a two-dimensional emotional space. One area is placed in each of the four corners and one directly in the middle, one per emotion. At the centre of each area the emotion is at its strongest, and the further away from an area's centre, the weaker the emotion gets – this is a fundamental quality of the algorithm and is reflected in the sonification. Between these areas lies “neutral space”. This neutral space serves one main function: just as in real life, emotions do not switch from, say, happy to angry in the blink of an eye; there has to be an emotionally neutral state in between. Inside this space a point is moved by the predicted emotional state sent by the support vector machine, indicating both the triggered emotion and its intensity.
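The distance-based weakening described above could be modelled roughly as follows; the unit-square coordinates, the area radius and the linear falloff are assumptions for illustration, not the exact curve used in the Max MSP patch.

```python
import math

# Hypothetical centres of the five emotion areas in a unit square:
# four corners plus the middle, as described above.
AREAS = {
    "happiness": (0.0, 0.0),
    "sadness":   (1.0, 0.0),
    "anger":     (0.0, 1.0),
    "disgust":   (1.0, 1.0),
    "surprise":  (0.5, 0.5),
}
RADIUS = 0.3  # assumed area radius; everywhere else is neutral space

def emotion_at(x, y):
    """Return (emotion, intensity) at a point, or ("neutral", 0.0).
    Intensity is 1.0 at an area's centre and falls off linearly
    to 0.0 at the area's edge."""
    for emotion, (cx, cy) in AREAS.items():
        d = math.dist((x, y), (cx, cy))
        if d < RADIUS:
            return emotion, 1.0 - d / RADIUS
    return "neutral", 0.0

print(emotion_at(0.5, 0.5))   # centre of the "surprise" area
print(emotion_at(0.5, 0.05))  # between two corner areas: neutral space
```

Because the areas do not touch, any path from one emotion to another necessarily passes through the neutral zone, which is exactly the gradual transition the matrix is designed to enforce.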
Finally, the information from the matrix is translated into sound. As soon as the program is started, there is always music playing. When an emotion is triggered, the music changes, sounding the way that emotional state might sound. These changes result from differences in rhythm, tempo, chord composition and progression, melody and, of course, timbre.
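How such parameter changes might be organised can be sketched as a simple lookup; all tempo and scale values here are invented for illustration, since the actual mapping lives inside the Max MSP patch.

```python
# Hypothetical mapping from emotion to musical parameters.
EMOTION_MUSIC = {
    "happiness": {"tempo_bpm": 128, "scale": "major",    "density": "busy"},
    "sadness":   {"tempo_bpm": 70,  "scale": "minor",    "density": "sparse"},
    "anger":     {"tempo_bpm": 140, "scale": "phrygian", "density": "busy"},
    "surprise":  {"tempo_bpm": 110, "scale": "lydian",   "density": "medium"},
    "disgust":   {"tempo_bpm": 90,  "scale": "locrian",  "density": "sparse"},
    "neutral":   {"tempo_bpm": 100, "scale": "major",    "density": "medium"},
}

def music_params(emotion, intensity):
    """Blend the neutral parameters towards the triggered emotion's
    parameters according to intensity (0.0 = fully neutral, 1.0 = full)."""
    base = EMOTION_MUSIC["neutral"]
    target = EMOTION_MUSIC.get(emotion, base)
    tempo = base["tempo_bpm"] + intensity * (target["tempo_bpm"] - base["tempo_bpm"])
    return {
        "tempo_bpm": round(tempo),
        "scale": target["scale"],
        "density": target["density"],
    }

print(music_params("sadness", 0.5))  # halfway between neutral and sad
```

Blending from the neutral parameters by intensity mirrors the matrix behaviour described above: a weakly triggered emotion only nudges the music, while a point at an area's centre transforms it fully.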
For additional visual feedback, the waveforms of the instruments and the dots generated by the tracked face are displayed on a screen.
FaceOSC by Kyle McDonald
MAX 8.0 by Cycling ’74
Additional libraries used:
ml.lib (machine learning)
HISSTools Impulse Response Toolbox (HIRT) (Spectrogram)
3rd Semester Project
In the scope of this project, a 3D audio-only game will be created. The game algorithm will be written in Max MSP, and the sound will be spatialized using the IEM Plugin Suite. Player interaction will happen through a small basket equipped with sensors; the sensor data will be processed by an Arduino programmed in the Arduino IDE.
In its final form, the game will be played inside a speaker array arranged in the shape of a sphere.
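One way the sensor-to-sound pipeline could work is sketched below in Python (the actual game logic is planned for Max MSP): a hypothetical tilt sensor in the basket reports three accelerometer axes, which are converted to the azimuth and elevation angles that Ambisonics encoders such as those in the IEM Plugin Suite expect.

```python
import math

def tilt_to_direction(ax, ay, az):
    """Map raw accelerometer axes (hypothetical basket sensor) to
    azimuth and elevation in degrees, the coordinate convention used
    by Ambisonics panners like the IEM Plugin Suite encoders."""
    azimuth = math.degrees(math.atan2(ay, ax))
    elevation = math.degrees(math.atan2(az, math.hypot(ax, ay)))
    return azimuth, elevation

# Basket lying flat: gravity only on the z axis, so the sound source
# would be placed straight overhead in the spherical speaker array.
print(tilt_to_direction(0.0, 0.0, 1.0))
```

In the finished setup these two angles would be streamed from the Arduino into Max MSP and onward to the spatializer, so tilting the basket steers the sound around the listener.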