Research & Results

So, I figured out some really interesting things around generative/algorithmic/interactive audio and music coding this year. I used FMOD Studio and its API for a lot of my video game work. It's a different way of creating music, based on events and parameters, defining things in terms of other things. Once I learnt some programming and saw the flow, it became very creative for me, and I went deep into this world: I created lots of really cool DSP and generative approaches, and it's a big part of my life now. I also researched a lot of tools and libraries in the open-source world.

There's this program called SuperCollider that works by using UGens (unit generators). They're a bit like VSTi plug-ins, but it's all pure code, which means you don't need to compile anything (you can if you want). Everything is done with objects, expressions and functions that talk to internal and external libraries of thousands of different types, from signal processing to sound generators and modulation control: I/O, sequencers, patterns, data processing and so on. The actual physical sound/signal is made by the server, which is like a DAW engine that talks straight to your audio hardware, whatever that may be. It also likes talking to MIDI, OSC, data streams and, as mentioned, just about any other I/O currently out there.
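To give a flavour of what wiring UGens together looks like, here's a minimal generic sketch (not code from my projects): a SynthDef describes the UGen graph, `.add` sends it to the server, and `Synth` spawns a voice with whatever parameter values you like.

```supercollider
// Minimal UGen graph sketch: two detuned saws through a
// resonant low-pass filter, shaped by a percussive envelope.
(
SynthDef(\pad, { |out = 0, freq = 220, amp = 0.2, cutoff = 1200|
    var sig, env;
    sig = Saw.ar(freq * [1, 1.01]);                    // slightly detuned stereo pair
    sig = RLPF.ar(sig, cutoff, 0.3);                   // resonant low-pass filter
    env = EnvGen.kr(Env.perc(0.05, 2), doneAction: 2); // free the synth when done
    Out.ar(out, sig * env * amp);
}).add;
)

// Spawn voices with different parameter values:
Synth(\pad, [\freq, 330, \cutoff, 800]);
```

Every argument in the SynthDef becomes a live parameter you can set from code, patterns, MIDI or OSC.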

It means infinite music that can change based on whatever is going on: weather, clouds, humidity, location. You define your parameters and functions around that. Say C major is sunny and A minor is rain; scales intermingle through relatives and harmonic major/minor, with all the different layers, types, colours and moods. You could change the scale, tone or cycle based on country and location, or make a time-of-day sound. It also has physical models (sound objects), so you could assign the resonance of a pretend piano to something like a virtual space changing. Or you could take live data streams from webcams, movement, PoseNet (browser-based motion capture) or anything really. The Raspberry Pi has so many sensor hardware add-ons too, so those could be all your data inputs: infrared, motion, AI camera vision, pretty much the whole digital and real-world electromagnetic spectrum. You can also stream data from predefined or real-time internet/computer sources such as JSON feeds.
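Here's a hypothetical sketch of the "C major is sunny, A minor is rain" idea. The `~weather` value is a stand-in for whatever live data source you're actually reading; the pattern re-reads the scale on every event, so it can change underneath the music:

```supercollider
// Hypothetical sketch: pick a scale from a (pretend) weather reading
// and let a pattern generate notes from it indefinitely.
(
~weather = \rain;   // imagine this being updated live from a weather feed
~scale = switch(~weather,
    \sunny, Scale.major,   // sunny feel
    \rain,  Scale.minor,   // rainy feel
    Scale.dorian           // default colour
);
Pbind(
    \scale,  Pfunc { ~scale },           // re-read the scale every event
    \degree, Prand((0..7), inf),         // wander through the scale
    \dur,    Prand([0.25, 0.5, 1], inf), // varied rhythm
    \amp,    0.2
).play;
)
```

Setting `~weather = \sunny` while it plays would flip the mood on the next note.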

There are millions of datasets all over the world, so you could take those values, parameters and scales and make music out of, say, minimum wage in Europe since 1999: assign the values to notes, sounds, modulations, effects and levels (amplitudes), and let it create itself in real time from those definitions. You could even update it live. Nuts, huh?
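As a hypothetical sketch of that sonification idea: take a series of numbers (these are made up for illustration, not real wage figures), normalise them, and map them onto scale degrees that a pattern loops through as a melody.

```supercollider
// Hypothetical sketch: sonify a data series by mapping values to pitches.
(
~data = [555, 580, 610, 640, 700, 736, 750, 790];  // made-up yearly values
~lo = ~data.minItem;
~hi = ~data.maxItem;
// Map each value into 0..14 scale degrees (two octaves of a major scale):
~degrees = ~data.collect { |v| v.linlin(~lo, ~hi, 0, 14).round };
Pbind(
    \scale,  Scale.major,
    \degree, Pseq(~degrees, inf),  // loop through the data as melody
    \dur,    0.5,
    \amp,    0.2
).play;
)
```

Swap in a live feed and re-assign `~data` on the fly and the melody updates itself.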

Here are a bunch of cool public datasets that could be interesting as sources of modulation and parameters.

Anyway, I've made blocks and blocks of my own reusable code: effects, instruments, sequences, modulation, GUI controls and so on. You can even make unlimited-channel virtual audio/MIDI mixers, or anything really. It's a bit buzzy: browser, Android, iOS, HTML5, mobile devices, computers, with no WAV files unless I want them. Very interesting. It would work well for pretty much anything: games, films, live performances, albums...
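The mixer idea is simpler than it sounds. This is just a minimal sketch of the principle, not my actual code: one channel-strip SynthDef with level and pan, instanced once per channel, each reading from its own private bus.

```supercollider
// Minimal sketch of a virtual mixer: one channel strip per private bus.
(
SynthDef(\mixChannel, { |in, out = 0, level = 0.8, pan = 0|
    var sig = In.ar(in, 1);                 // read this channel's input bus
    Out.ar(out, Pan2.ar(sig, pan, level));  // pan + level to the main output
}).add;
)

// Make as many channels as you like, each with its own input bus:
~buses = 8.collect { Bus.audio(s, 1) };
~chans = ~buses.collect { |b| Synth(\mixChannel, [\in, b]) };

// Route any source synth to ~buses[n] and ride ~chans[n] live:
~chans[0].set(\level, 0.5, \pan, -0.3);
```

Because channels are just synths, "unlimited channels" really is just making more of them.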

Here is some code other people have jammed with. You can see how you can create blocks of different types and build your own bag of tricks to talk to the UGens, or to their parameters, and change the value of anything. It's made me look at normal music, software and hardware very strangely. Working on that...

Here's me again, showcasing a bit of music and sound design I did with it. Digital nature!

Here's Edward Loveall talking about it on YouTube.
