14 Nov 2011 · Patter

Loom: a generative music platform

UPDATE: Since this post was written, the project has been released under the name Patter. Read more and come play!

Over the past year of generative music experiments in performances and installations, I’ve been chipping away at a homebrew, Ruby-based platform for Ableton Live which I call Loom—named for the textile pattern-generating ancestor of the computer. In hopes of getting more ears on it, I’ve recently distilled it all down to a lean and modular (albeit very alpha) core, and published the source on GitHub, where you’ll also find a slightly more technical introduction than the pontificating, hyperlinking, and screencasting below.

For the uninitiated, you can think of it as computer-aided composition: you describe the music in high-level terms of characteristics and motifs, do your sound design, and let the computer generate the specific MIDI patterns (melodies, rhythms, etc.). You can surface parameters to tweak for a live performance, or you can just let it run for hours (days, weeks…) as an installation or radio station. But, ideally, you’ll always be surprised (maybe even pleasantly!) by the music that emerges.
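
To make that concrete, here's a toy sketch in plain Ruby of what "describe, then generate" might look like. None of these names are Loom's actual API; it's just the shape of the idea: a couple of high-level knobs (root, density) in, specific notes out.

```ruby
# Toy sketch of "describe, then generate" -- illustrative only, not Loom's API.
# High-level description in (scale, density), concrete note events out.

class PatternSketch
  PENTATONIC = [0, 2, 4, 7, 9].freeze  # scale degrees as semitone offsets

  def initialize(root: 60, density: 0.75)
    @root    = root     # MIDI note number of the tonic (60 = middle C)
    @density = density  # probability that any sixteenth-note step sounds
  end

  # Yields step, pitch, velocity for each sounding step in one bar of sixteenths.
  def each_note
    16.times do |step|
      next if rand > @density                        # rest this step
      pitch = @root + PENTATONIC.sample + [0, 12].sample
      yield step, pitch, 64 + rand(48)
    end
  end
end

PatternSketch.new(density: 0.6).each_note do |step, pitch, velocity|
  puts format("step %2d: note %d, vel %d", step, pitch, velocity)
end
```

You'd surface `root` and `density` (or richer characteristics) as Live parameters and leave the note-by-note decisions to the generator.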

Loom’s architecture has been emerging from the application of Ruby idioms to generative music. At its core, it’s just a smattering of Ruby APIs hooked into a Max event loop. Writing a Loom module, for example, is just a matter of monkeypatching right in. But these idioms—mixin modularity, the teasing out of randomness from controller logic (kinda inspired by MVC)—have started to suggest a deeper architecture that is only now coming into focus. Compared to the excellent, more established generative music platforms below, Loom is shaping up to be: more opaque than diagrammatic; more emergent than hierarchical; neither a sonification of a visualized/physical process, nor a live-coding environment. Loom is actually defiantly oblivious to the medium of sight. (I tend to agree with artist Eva Schindling’s criticism of music visualizers: once the visual cortex is engaged, musical listening may suffer.) I’d like to get a little more visual feedback out of those Live knobs, though!
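
Since "monkeypatching right in" is doing a lot of work in that sentence, here's a minimal sketch of the mixin idiom, with illustrative names rather than Loom's real ones. A module wraps the host's behavior via `super`, and all randomness is funneled through a single chokepoint so the controller logic stays deterministic:

```ruby
# Illustrative sketch of mixin modularity -- not Loom's actual API.

module Transposer
  OFFSETS = [-12, 0, 0, 0, 7, 12].freeze

  # Wrap the host's pitch generation with an occasional octave/fifth shift.
  def next_pitch
    super + pick(OFFSETS)
  end
end

class Player
  def initialize(seed = nil)
    @rng = seed ? Random.new(seed) : Random.new
  end

  def next_pitch
    60  # stand-in for whatever the underlying generator produces
  end

  private

  # The single chokepoint for randomness: seed it and a whole
  # performance becomes reproducible.
  def pick(choices)
    choices[@rng.rand(choices.size)]
  end
end

player = Player.new(2011)
player.extend(Transposer)  # mix the module right into a live object
4.times { puts player.next_pitch }
```

Strictly speaking, `extend` here is mixing in rather than monkeypatching, but the spirit is the same: behavior gets layered onto a running object without touching its class.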

The screencast mainly explains modules and generators, and doesn’t even touch Ruby. (Nor does it include early experiments I have yet to factor into the new alpha architecture, like just intonation or exponential rhythms.) I typically stay in the Live UI while music-making, as dropping into Ruby means engaging a whole abstract-symbol-manipulation brain thing that can stunt my musical thinking. I like this separation: the depth and opacity of code with high-level parameters at the surface. There are many depths to swim at.

But, theory aside, there is something thrilling in the unpredictability of working with Loom. Electronic music made composition more of a textural, quantitative, alchemical process, in which you “mix” amounts of known quantities rather than labor away at the production of each sound. (That said, there’s no substitute for musicianship. I typically turn to physical instruments as tools for thought to work out nascent generative ideas!) Generative music pushes this even further, so that you’re not just mixing volumes or textures, but qualities, characteristics, motifs, as well as their underlying quantities, parameters, Gaussian random-number distributions. It’s a whole other mode of composition, more like steering than performing any kind of athletic feat, and it brings with it all the attendant dangers of automation. But, so so fun.
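
As a taste of what mixing a distribution can mean in practice: Ruby has no built-in normal distribution, so a Gaussian spread (here humanizing note velocities; the function and parameter names are made up for illustration) is just a few lines of Box-Muller:

```ruby
# Box-Muller transform: turn two uniform random numbers into one
# normally distributed sample. Names here are illustrative.
def gaussian(mean, stddev)
  theta = 2 * Math::PI * rand
  rho   = Math.sqrt(-2 * Math.log(1 - rand))  # 1 - rand avoids log(0)
  mean + stddev * rho * Math.cos(theta)
end

# Surface `spread` as a performance knob: 0 is robotically flat,
# larger values loosen the dynamics.
def humanize(velocity, spread)
  gaussian(velocity, spread).round.clamp(1, 127)
end

8.times { print humanize(96, 12), " " }
puts
```

Steering the music then means nudging `spread` (and dozens of knobs like it) rather than placing individual notes.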

Many thanks to Mark Trayle, my advisor for the early life of this project.