08 Oct 2015 · Patter · Work

Patter: realtime / freeform generative music

Today I’m proud to announce that Patter, the product and productization of my long-running rumination and experimentation in generative music, is available as a Max For Live device in the Ableton store.

Beginning as a series of prototypes for performing live, improvisatory electronic music during my MFA at CalArts, the project metamorphosed, first into the grandiose (an open-source platform! a universal syntax for music!), and then, through a process of conceptual whittling and technological overhauling, into the succinct plug-in form it takes today.

Here are a few somewhat didactic examples of music generated by Patter: [embedded audio examples]

The elemental question animating those earliest experiments was how to model a truly free-as-in-free-jazz approach to musical improvisation in software. A coder’s first job is to build a mental model, and so I set my sights on the psychic attitudes and socially agreed-upon forms of the immensely talented improvising musicians I found myself surrounded by at CalArts, such as the creative musician Wadada Leo Smith.

That radical experimentalism with musical form stands in contrast to the strictly quantized electronic “beats” idiom I grew up inhabiting, though its leanings can be subtly detected in the off-kilter grooves of recent genres (“trap”, with its twitching spasms, and “juke”, with its psychedelic vortices, to name a couple). Rather than apportion and subdivide time in fixed units, my experiments would always invert that model: they begin with a single pulse, generating one or a few notes at a time, and build up, accumulatively, from there.
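In scheduling terms, the inversion amounts to this: rather than slicing a bar into grid cells, the generator emits one pulse-relative duration at a time and lets the clock accumulate. Here is a minimal C sketch of that idea; the tempo constant and names are hypothetical, not Patter’s actual code:

    /* Minimal sketch of the "accumulate from a pulse" model; illustrative
     * only, not Patter's actual scheduler. */
    #include <stdio.h>

    #define PULSE_MS 500.0 /* one global pulse = 500 ms (hypothetical tempo) */

    int main(void) {
        /* Duration of each successive note, as a ratio of the pulse. */
        double ratios[] = { 1.0, 0.5, 0.5, 1.0/3, 1.0/3, 1.0/3 };
        double now_ms = 0.0; /* time accumulates; there is no bar grid */

        for (int i = 0; i < 6; i++) {
            double dur_ms = ratios[i] * PULSE_MS;
            printf("note at %8.2f ms for %6.2f ms\n", now_ms, dur_ms);
            now_ms += dur_ms; /* build up, one note at a time */
        }
        return 0;
    }

Nothing here constrains the ratios to halves or quarters, which is the whole point: the line grows forward from a pulse instead of filling in a predetermined grid.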

I was aided toward this insight by my composition professor, the late Mark Trayle, who asked if my software would be able to produce tuplets, subdivisions of time by a number other than two. The very suggestion raised two further questions: How much time does one allow oneself at a time (when improvising freely), and how might it be arranged internally?

At that point, I consulted the library to learn how musicologists rationalize and explain rhythmic groupings. I was especially struck by authors who explained musical accent in terms of prosody: accented and unaccented syllables, trochees and iambs, a model in which I found an intuitively human, linguistic musicality that informed my experiments. I was also pointed to James Tenney, Trayle’s forebear, who, in his thesis on the phenomenology of contemporary music, applied Gestalt theory to musical form, suggesting that rhythmic groupings may be manifested in the ear of the listener. That in fact lifted a burden for me: if my computer-generated music turned out to be too abstruse, a forgiving ear would make sense of it.

To the essential question of real-time generativity (how much music does an improviser plan at a given moment in performance?) I had a workable answer: one grouping. I called these groupings “gestures” at first, to identify the smallest unit of musical intent: just a few notes, or a few seconds, a volley in a group improvisation. I later renamed them “segments” as I nailed down their form in the software model: a segment would be a series of one or more notes of equal duration (specified as a ratio of the global pulse), with rests of equal duration occurring before or after. The broader notion of a gesture, which may exhibit more lyrical nuances, exponential rhythms, and so on, I set aside for now.
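As a data structure, that description reduces to a handful of numbers. The following C sketch is my own reading of it, with hypothetical field names rather than Patter’s shipping representation:

    /* Hypothetical sketch of a "segment": one or more notes of equal
     * duration, plus rests of equal duration before or after. Not
     * Patter's actual representation. */
    typedef struct {
        double ratio;       /* duration of each note and rest, as a
                               ratio of the global pulse */
        int    note_count;  /* one or more notes of equal duration */
        int    rest_count;  /* zero or more rests of equal duration */
        int    rests_first; /* nonzero: the rests precede the notes */
    } Segment;

    /* Total span of the segment, measured in pulses. */
    static double segment_pulses(const Segment *s) {
        return (s->note_count + s->rest_count) * s->ratio;
    }

A triplet figure, for instance, is simply a segment with a ratio of 1/3 and three notes, which answers Trayle’s tuplet question without ever subdividing a bar.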

As I applied this musical model in solo laptop performances, generating melodies and percussion in irregular, unending lines, I grew eager to model the other, social part of improvisation. How might these lines intersect or interact? In performance, one musician may “cue” another with eye contact, or may be moved to respond to another intuitively; in software terms, “push” or “pull” events, respectively. Patter would enable the same: a device generating notes for one instrument could cede control to another device, driving another instrument (or even the same one), and so on, so that one could create large systems of decentralized control, with musical agents passing spontaneous, bounding rhythms back and forth and around and back again. In software terms again, this is called “agent-based modeling”, as I learned in deeper research dives.
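A toy version of that handoff, under my own naming rather than Patter’s API, might look like the following: each agent plays one segment, then cedes control by cueing a successor, so rhythm circulates with no central conductor:

    /* Toy agent-based handoff: each agent generates one segment, then
     * "pushes" control to the agent it cues. Illustrative only. */
    #include <stdio.h>

    typedef struct Agent Agent;
    struct Agent {
        const char *name;
        Agent      *cue; /* the agent this one cedes control to */
    };

    static void play_segment(const Agent *a) {
        printf("%s improvises a segment\n", a->name);
    }

    int main(void) {
        Agent melody = { "melody", NULL };
        Agent drums  = { "drums",  NULL };
        Agent bass   = { "bass",   NULL };
        melody.cue = &drums;  /* melody cues the drums... */
        drums.cue  = &bass;   /* ...which cue the bass... */
        bass.cue   = &melody; /* ...which cues the melody again */

        Agent *active = &melody;
        for (int volley = 0; volley < 6; volley++) {
            play_segment(active);
            active = active->cue; /* the "push" event: cede control */
        }
        return 0;
    }

Here the cues form a fixed ring for brevity; the decentralized systems described above arise once each agent can choose whom to cue, or respond to a pull, on the fly.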

I decided to productize these experiments in some gleeful moment when I felt that they were too fun to keep to myself—a moment I only dimly remember after the ensuing years of agonizing about minimum viability, UI layout, and of course, cross-platform native C development. I am told by my beta testers that the product is indeed still fun!

Patter is my own volley into the expanding market of generative music frameworks, which range from attractive, graphically-oriented mobile apps, to the cerebral feats of “live coding”, to the “Randomize” button still occasionally found on sequencers. I’ve personally found Patter useful as a composition tool, to produce variations on a theme; as an improvisation partner, to practice an instrument; but especially in performance, where one can have the rare pleasure of being alternately gratified and deceived by one’s own software onstage. I recommend you take the chance yourself.