The idea for Chromata came after a friend showed me this rather wonderful video...
It shows colourful, colour-changing pigment cells - chromatophores - in squid skin, reacting to an audio signal fed into a nerve in the skin. Nature's audio visualiser. The authors of the experiment do note that the cells only responded to frequencies under 100Hz, so it seems bassy songs are best for lively squid-skin responses.
A squid skin audio visualiser?
Well, I liked the look of it, and wanted to create something inspired by the behaviour. Not a simulation of squid skin, but an appropriation of the aesthetic: something to visualise music, or any audio signal, using a colourful field of interconnected, audio-reactive cells.
In particular, the shape and distribution of the cells in the video reminded me of voronoi tessellation, so that's where I started experimenting with this project.
Despite working fairly slowly, mostly on quiet evenings, the core parts of the program seemed to come together quite quickly.
At this point, the list of features I wanted to add was still quite sizeable, and I knew it would be a challenge to keep everything running smoothly. Here's a summary of what's going on in the final version...
- Process the audio buffer with FFT analysis and beat detection.
- Generate / update a point-cloud and the voronoi field.
- Apply audio input to drive the animation of voronoi cells.
- Apply audio input to drive vertex and fragment shader parameters.
- Send data to GPU and render.
- Selectable audio input: Local MP3, Soundcloud, Microphone.
- Scrubbable timeline, track info, Soundcloud search.
- Audio waveform rendering.
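The first item on that list, beat detection, can be done surprisingly simply. The following is an illustrative Python sketch (not Chromata's actual Haxe code) of a common energy-based approach: flag a beat when the current buffer's energy spikes above the recent average. The class name, history length, and threshold are all assumptions for the example.

```python
from collections import deque

class BeatDetector:
    """Energy-based beat detection sketch: a beat is flagged when the
    current buffer's energy exceeds the recent average by a factor.
    History length and threshold here are illustrative guesses."""

    def __init__(self, history=43, threshold=1.4):
        # ~1 second of history at roughly 43 buffers per second
        self.history = deque(maxlen=history)
        self.threshold = threshold

    def feed(self, samples):
        # Instantaneous energy of this buffer (mean of squared samples)
        energy = sum(s * s for s in samples) / len(samples)
        avg = sum(self.history) / len(self.history) if self.history else energy
        self.history.append(energy)
        return energy > self.threshold * avg
```

In practice the threshold is one of those values that needs tuning per genre; a fixed constant works for bass-heavy tracks but tends to over-trigger on noisy material.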
Nothing jars quite like a stuttering framerate or audio buffer underrun, so performance was always in mind during development. Luckily I was in no rush, so had the time to do a bit of planning and R&D to see what would work, and how much I could get away with.
As is often the case, the greatest source of performance drain at runtime was object allocation and garbage collection. Maintaining a stable framerate was only possible with a bit of planning, object pooling, and the use of a pre-allocated chunk of memory with fast access. At runtime, the data for the GPU, audio processing, and perlin noise all live in different regions of this shared memory block.
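The two techniques above can be sketched in a few lines. This is a hedged Python illustration, not the original Flash/Haxe code: a `memoryview` over one pre-allocated block stands in for the shared fast-access memory, and a simple free-list acts as the object pool. The region sizes and names are invented for the example.

```python
# One up-front allocation, partitioned into regions (sizes are
# illustrative; the original used a single shared memory block).
POOL_BLOCK = bytearray(64 * 1024)
gpu_region = memoryview(POOL_BLOCK)[0:32768]        # vertex data
audio_region = memoryview(POOL_BLOCK)[32768:49152]  # FFT scratch
noise_region = memoryview(POOL_BLOCK)[49152:65536]  # perlin tables

class Pool:
    """Minimal object pool: recycle instances instead of allocating
    per frame, so the GC has nothing to collect at runtime."""

    def __init__(self, factory, size):
        self._free = [factory() for _ in range(size)]

    def acquire(self):
        # Returns None when exhausted; a real pool might grow instead.
        return self._free.pop() if self._free else None

    def release(self, obj):
        self._free.append(obj)
```

The point of both is the same: do all the allocation at startup, so the per-frame loop never gives the garbage collector anything to do.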
I was also using Haxe, which (besides generally being a nice, fast language to work with) helped with function inlining, dead-code elimination, and the option to target different languages. Indeed, some of the earliest work was testing perlin noise and voronoi decomposition in a win32 application that rendered huge images of randomly generated fields. All later builds use the same core code from those experiments.
So how does it work?
As mentioned above, rendering of the field is performed on the GPU; each polygon is built from a fan of triangles radiating from the voronoi centre of each cell. I worked on the rendering code separately and added it to Chromata after everything was working and performing well. There's a version of it on github, from back in 2012.
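Building a triangle fan from a cell centre is mechanical: every triangle shares the centre point, with consecutive boundary vertices as the other two corners, wrapping around at the end. A small Python sketch of the idea (function and parameter names are my own, not from the project):

```python
def triangle_fan(centre, ring):
    """Build a triangle fan for one voronoi cell.

    `centre` is the cell's voronoi centre; `ring` is the cell's
    boundary vertices in winding order. Each triangle shares the
    centre, pairing consecutive ring vertices (wrapping at the end).
    """
    cx, cy = centre
    n = len(ring)
    tris = []
    for i in range(n):
        x1, y1 = ring[i]
        x2, y2 = ring[(i + 1) % n]  # wrap back to the first vertex
        tris.append(((cx, cy), (x1, y1), (x2, y2)))
    return tris
```

On the GPU side this maps naturally onto a triangle-fan (or indexed triangle-list) draw call, and a cell with n boundary vertices always costs exactly n triangles.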
Making the display react to the audio in a pleasing way involved many hours of tweaking values and wrangling random numbers into usable ranges. Though, to be honest, some of the time I would forget what I was doing and end up watching it for a while... I guess those are the times when you know something is right.
It wasn't just a case of fiddling with numbers, though; there's quite a lot going on to manage and animate the state, scale, and colour of each cell.
In general, the size (area) of a cell influences how it reacts to the audio: larger cells respond more to low frequencies, smaller ones to the high end. Cells are also interconnected; there's a constant exchange of information between them, and one activated cell can pass energy on to excite its neighbours. Colour sequences are picked and interpolated from a large bank of preset inputs and then mapped across the range of cell sizes and energies.
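The two behaviours described above - size-to-frequency mapping and energy passing between neighbours - might look something like this. It's a Python sketch under my own assumptions (a linear area-to-band mapping and a fixed leak rate; the post doesn't specify either), not the project's actual code:

```python
def band_for_cell(area, min_area, max_area, n_bands):
    """Map a cell's area onto an FFT band index: the largest cells
    listen to the lowest band, the smallest to the highest.
    (Linear mapping is an assumption; the post doesn't give the curve.)"""
    t = (area - min_area) / (max_area - min_area)  # 0 = smallest, 1 = largest
    return round((1.0 - t) * (n_bands - 1))

def spread_energy(cells, neighbours, rate=0.1):
    """One step of inter-cell exchange: each cell leaks a fraction of
    its energy, split evenly among its neighbours. `cells` maps cell
    id -> energy; `neighbours` maps cell id -> list of neighbour ids."""
    out = dict(cells)
    for cid, energy in cells.items():
        share = energy * rate
        out[cid] -= share
        for n in neighbours.get(cid, []):
            out[n] += share / len(neighbours[cid])
    return out
```

A nice property of this kind of leaky exchange is that total energy is conserved within the field, so an excited region decays into a ripple rather than simply vanishing.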
There are a few more parts of the project that could be discussed, like the vertex and fragment shaders, waveform rendering, and MP3 track-image loader - but I'll leave those for another time.
After looking back at it for this write-up, I am a bit tempted to revisit or re-use parts of the project. Though it would be for WebGL or a native C++ target. Handily, much of the code could be ported with little or no change.
Since I started work on it, the use and relevance of Flash as a browser plugin has continued to decline. On top of that, Soundcloud changed their crossdomain policy to only allow access from Flash files hosted on their servers. So the Soundcloud API will now only work when running the SWF locally or when packaged as an AIR application.
There are three versions of the project below, taken from different stages of development.
03 is the final build, and has Soundcloud integration. But, as mentioned, that feature won't work in the browser any more; you'll need to run the SWF locally in a standalone Flash player, or download and install the AIR version, to play with the Soundcloud stuff.