My name is Maximilian Treitler and I am an Amstetten-based audio engineer, composer and musician.
In 2019 I finished my Bachelor’s degree with a specialization in Audio & Video.
Since 2015 I’ve been an audio engineer at Graveyard Studio Amstetten and have been part of numerous album and EP productions in the genres of rock, punk & metal.
My passion is playing the drums. I play in several bands writing our own music, as well as in cover bands performing at weddings and parties of all kinds.
1st Semester Project
No Drummer Drums – NDD
No Drummer Drums turns a drum set into an unusual loudspeaker. So-called exciters are used to set the drumheads into vibration – in other words, the drumheads serve as the speaker membranes.
Two different kinds of signals are used:
First, NDD is used as a “drummer”: percussive rhythms and sounds serve as the source signal.
Second, it is used as a full-range speaker consisting of a tweeter, a midrange driver and a woofer. The signal in this case is the rock song “Inertia” by the band Why Goats Why.
Software & Hardware used:
DAW: Ableton Live 9 Suite
Mixer: Behringer X32 Producer (serving as an audio interface and a mixer)
Amplification: Denon DN-A100 Hifi Stereo Amplifier
Exciter: Visaton EX60s, 8 Ohm
Drumset: Pearl Chad Smith Signature Drumset: Tom 1: 12″, Tom 2: 14″, Kickdrum: 20″
Drumheads: Remo Ambassador Coated
2nd Semester Project
Mood into Sine – MiS
The project “Mood Into Sine” describes an unconventional mirror. Instead of reflecting the face visually, it reflects the mood of the person in front of it – expressed through their face – sonically. The emotions that can be triggered are: happiness, sadness, anger, surprise and disgust. In more technical terms: the data from the facial expression is processed by an algorithm in MAX MSP, which generates procedural, synthetic music in real time.
The face data is generated and tracked by Kyle McDonald’s open-source software FaceOSC and passed into MAX MSP via the OSC protocol. Using an easy-to-handle support vector machine from the ml.lib library, the data is processed and the mood is predicted. This prediction of the emotional state is sent to the matrix.
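The classification step can be sketched outside of Max as well. The following minimal Python sketch uses scikit-learn’s `SVC` as a stand-in for the ml.lib support vector machine; the feature vectors and training data are invented placeholders for recorded FaceOSC gesture values (mouth width/height, eyebrow positions, and so on), not the actual patch data.

```python
# Sketch of the mood-prediction step. scikit-learn's SVC stands in for the
# ml.lib SVM used in the real Max patch; all data here is synthetic.
import numpy as np
from sklearn.svm import SVC

EMOTIONS = ["happiness", "sadness", "anger", "surprise", "disgust"]

# Hypothetical training set: 50 recorded expressions with 6 face features each,
# labelled by index into EMOTIONS.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(50, 6))
y_train = rng.integers(0, 5, size=50)

clf = SVC(probability=True)   # probability estimates double as an intensity value
clf.fit(X_train, y_train)

face_features = rng.normal(size=(1, 6))     # one incoming FaceOSC frame
probs = clf.predict_proba(face_features)[0]
mood = EMOTIONS[int(np.argmax(probs))]
print(mood, float(np.max(probs)))
```

In the installation itself this prediction, together with its strength, is what drives the point inside the matrix described next.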
The matrix is best described as a two-dimensional emotional space. One area is placed in each of the four corners and one directly in the middle. At the centre of each area the emotion is strongest, and the further away from that centre, the weaker the emotion gets – this is a fundamental quality of the algorithm and is implemented in the sonification. Between these areas lies “neutral space”. This neutral space serves one main function: just as in real life, emotions do not change from happy to angry within the blink of an eye – there has to be an emotionally neutral state in between. Within this space a point is moved by the predicted emotional state sent by the support vector machine, which gives information about the triggered emotion and its value.
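The matrix logic described above can be sketched in a few lines. The coordinates, the area radius and the linear falloff below are assumptions for illustration, not values taken from the Max patch; which emotion sits in which corner is likewise invented.

```python
# Minimal sketch of the 2-D emotional space: five areas (four corners plus
# the middle), intensity strongest at each area's centre and falling off
# linearly to zero at its edge; everywhere else is "neutral space".
import math

AREAS = {                      # hypothetical placement of the emotions
    "happiness": (0.0, 0.0),
    "sadness":   (1.0, 0.0),
    "anger":     (0.0, 1.0),
    "disgust":   (1.0, 1.0),
    "surprise":  (0.5, 0.5),
}
RADIUS = 0.3                   # assumed size of each emotional area

def read_matrix(x, y):
    """Return (emotion, intensity) for the moving point at (x, y)."""
    for emotion, (cx, cy) in AREAS.items():
        d = math.hypot(x - cx, y - cy)
        if d < RADIUS:
            return emotion, 1.0 - d / RADIUS   # strongest at the centre
    return "neutral", 0.0

print(read_matrix(0.05, 0.05))   # deep inside the happiness area
print(read_matrix(0.5, 0.0))     # neutral space between areas
```

The neutral return value is what enforces the “no instant jump from happy to angry” behaviour: the point has to cross neutral space before another area can trigger.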
Finally, the information from the matrix is translated into sound. As soon as the program starts, music is always playing. When an emotion is triggered, the music changes – sounding the way that emotional state might sound. These changes result from differences in rhythm, tempo, chord composition and sequence, melody and, of course, sound.
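One way such a mapping can work is to give each emotion target musical parameters and let the emotion’s intensity blend the music between a neutral default and that target. The tempi and chord sequences below are invented for illustration; the actual patch’s parameter values are not documented here.

```python
# Hypothetical sonification mapping: each emotion picks a target tempo and
# chord sequence; intensity (0..1) blends the tempo from a neutral default
# toward that target. All concrete values are invented.
NEUTRAL_BPM = 90

PARAMS = {
    "happiness": {"bpm": 128, "chords": ["C", "G", "Am", "F"]},
    "sadness":   {"bpm": 60,  "chords": ["Am", "F", "C", "G"]},
    "anger":     {"bpm": 150, "chords": ["Em", "C5", "D5", "Em"]},
    "surprise":  {"bpm": 110, "chords": ["Cmaj7", "Ebmaj7", "Gmaj7"]},
    "disgust":   {"bpm": 75,  "chords": ["Bdim", "Fm", "Bdim"]},
}

def sonify(emotion, intensity):
    """Blend tempo toward the emotion's target as its intensity rises."""
    if emotion == "neutral":
        return NEUTRAL_BPM, []
    p = PARAMS[emotion]
    bpm = NEUTRAL_BPM + intensity * (p["bpm"] - NEUTRAL_BPM)
    return round(bpm), p["chords"]

print(sonify("happiness", 1.0))   # full happiness: target tempo reached
print(sonify("happiness", 0.5))   # weaker emotion: tempo partway to neutral
```

Because intensity comes from the point’s distance to an area’s centre, the music automatically softens back toward the neutral state as the face relaxes.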
For additional visual feedback, the waveforms of the instruments and the dots created by the tracked face are displayed on a screen.
FaceOSC by Kyle McDonald
MAX 8.0 by Cycling ’74
Additional libraries used:
ml.lib (machine learning)
HISSTools Impulse Response Toolbox (HIRT) (Spectrogram)