Update, January 2016: This project has now been released on Bandcamp.

It’s a lovely word but a little overused by self-help and marketing gurus: serendipity.

Those familiar with talks on my slippery chicken algorithmic composition software are no doubt aware of the claims I make for the explorative potential of such systems. I argue that experimenting with the expression of musical ideas in software can lead to unimagined realms, sometimes by stumbling on a lovely yet serendipitous combination of materials (yes, the adjective is even lovelier than the noun).

This project is a good example of such claims: whilst working on my piano and computer piece altogether disproportionate in 2010, I stumbled upon the sound world for this app while experimenting with sample libraries, in particular of gongs. I made a mental note to explore material that had a lot of potential but wasn’t applicable to altogether disproportionate, and later created this app using recycled MIDI data from the piano piece.

The sound example above was recorded directly from the app. Anyone interested in downloading the app should get in touch.

My students will be aware of my generally negative views of random processes in algorithmic composition. Of course there are exceptions (Xenakis springs to mind, naturally); it’s the lazy use of random processes that I take issue with. My survey paper for the Communications of the ACM goes into detail, so suffice it to say that in this project I go against my usual preference for deterministic algorithms, allowing controlled random choices of materials for the sake of a never-ending reconfiguration of a basic 16-part structure.

Music via Apps

[Image: mpc-app]

This is music for delivery primarily as a computer application. This is an exciting new arena for music dissemination. With more and more people playing their music from computers, it’s now reasonable to ask why we should limit ourselves to playing fixed audio recordings, especially at older/lower sampling rates and quantisation depths. Out of this arises another question: can we not deliver musical forms with varied but coherently defined structures? Brian Eno has been saying this for years. About his Discreet Music (1975), Eno said: “Since I have always preferred making plans to executing them, I have gravitated towards situations and systems that, once set into operation, could create music with little or no intervention on my part. That is to say, I tend towards the roles of planner and programmer, and then become an audience to the results”.

Yet more technical questions arise: How best to deliver these music-generating apps? As application downloads for computers? As phone or tablet apps? As a streaming service? If streaming, then how do we deliver the highest possible quality without hitting bandwidth limits?

My first solution is a Macintosh app made with MaxMSP. This is, then, for home, personal, or perhaps gallery usage—as an installation, for example. It is also, however, quite self-consciously ‘background music’, despite occasional bombast, hence the title Music for Parallel Consumption. Of course, careful listening will reveal all kinds of detail, and I perhaps flatter myself to think that it will repay considerable attention, but the piece generated, being endless, does not have an unfolding form such as we are used to in Western concert music. In this way it is more akin to sound art than composition in the traditional sense.

On the other hand, the piece the app generates is significantly, though not fundamentally, different each time it is played. There is a tension here between composition and open-ended generative systems, between studio production and automatic layering. The solution in this app is a studio piece with various alternative layers or mixes: it is a never-ending set of variations and/or repetitions of a basic c. 15-minute piece.

The foreground layer of the mix uses standard Kontakt sample libraries. This is a first for me. Part of the reason for using these at the time was didactic: my students use them, and I often hear significant problems in pieces where they are used, so I wanted to investigate the pitfalls and advantages, and develop techniques to improve their integration in complete musical projects, as opposed to sample-rendered demos. The main tool I used to improve the sample libraries was quite simple: good (analogue) EQ with some compression. It was mainly the mid spectrum (300–600 Hz) that I found lacking in most cases. I also found careful use of room simulation very useful; you can hear this extensively in this project. In particular I used the Lexicon PCM91, the TC Electronic Fireworx, and SSL’s X-Verb plugin for reverb and strong distance effects. Detuning effects were also invaluable, especially with the gongs, as limited pitches were available and I wanted to avoid constant pitch repetition. When generating the mixes for the 16 sections of this project I drew detuning curves over the whole structure.

Main properties of the piece

Once started, the piece runs until you stop it. It constantly reconfigures itself and recycles its sonic materials in an infinite variety of combinations, yet retains its design and sonic characteristics throughout.

There are four structural layers, each containing its own set of pre-mixed and rendered sound files. The app itself only mixes sound files; there is no real-time synthesis or signal processing. The mixing, and therefore to an extent the compositional challenge, lies in pre-setting levels and selecting textures that work well with each other no matter which combination of sounds occurs.

The piece is made of samples, mainly of gongs, but also many other sources, as well as some virtual analogue synthesiser sounds made using UltraAnalog modelling software.

Although the app was created with MaxMSP, the programming logic and compositional design were implemented in JavaScript, after sound file definition and test mixing were performed in Nuendo. Sequencing within the app is performed by timed JavaScript task callbacks.
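
Something like the following sketch captures the shape of that sequencing logic inside Max’s js object, whose Task API provides the timed callbacks. The variable names, durations, and outlet message are placeholders of mine, not the app’s actual code.

```javascript
// Sketch of timed section sequencing inside Max's [js] object.
// Durations and messages are illustrative placeholders.
var sectionDurMs = [42000, 38000, 45000]; // one entry per section (invented values)
var section = 0;
var tick = new Task(nextSection, this);   // Task fires nextSection after a delay

function bang() { // start the piece
    section = 0;
    nextSection();
}

function nextSection() {
    outlet(0, "section", section);        // real app: trigger this section's sound files
    tick.schedule(sectionDurMs[section]); // call again when the current section ends
    section = (section + 1) % sectionDurMs.length;
}
```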

The unfolding of the structure

There are 16 sections in the piece: 1 2 3(2,10,11,12) 4 5(10,15,16) 6 7 8(5) 9 10 11 12 13(2,5) 14 15 16

Bracketed numbers refer to possible section inserts: when we arrive at a section with inserts, we first play the main section and then a randomly chosen number of its inserts, from none to all of them.
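
A minimal sketch of this logic, with the section plan transcribed from above; whether the chosen inserts play in listed or shuffled order isn’t specified, so the sketch simply takes them in order:

```javascript
// The 16 sections, with the bracketed inserts from the plan above.
var sections = [
    {n: 1}, {n: 2}, {n: 3, inserts: [2, 10, 11, 12]}, {n: 4},
    {n: 5, inserts: [10, 15, 16]}, {n: 6}, {n: 7}, {n: 8, inserts: [5]},
    {n: 9}, {n: 10}, {n: 11}, {n: 12}, {n: 13, inserts: [2, 5]},
    {n: 14}, {n: 15}, {n: 16}
];

// After the main section, play anywhere from none to all of its inserts.
function chooseInserts(sec) {
    if (!sec.inserts) return [];
    var howMany = Math.floor(Math.random() * (sec.inserts.length + 1));
    return sec.inserts.slice(0, howMany); // order assumed as listed
}
```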

There is a wait time of 20 seconds at the end of section 16, before the piece restarts. This value is programmer-configurable.

The four layers

  1. Three versions of a gong foreground layer, split into 16 sections. The app plays two of these simultaneously, one starting at 0 dB and the other at -10 dB; over the 16 sections the first fades out (to -10 dB) whilst the other fades in (to 0 dB). When the cycle restarts, the as-yet-unused third gong version enters at -10 dB and starts its fade-in whilst the other fades out. See the Gongs section below, and the sketch after this list, for more details.
  2. A background ambience layer, split into six longer sections, each of around 3.5 minutes. A new ambience is triggered together with the first gong section to start at least 2.5 minutes after the previous ambience was triggered. The order of the ambiences is determined by a random shuffle of the six available sound files. Each of these is itself, however, a complex mix of automated cross-fades of various input sounds (see the Multi-fades section below for more details). Many of the input sounds were recordings of percussion (e.g. bowed bell plates and Chinese cymbals) made in 2006 at ZKM, Karlsruhe, Germany, but some outdoor ambiences were also used. Onset and pitch detection of these recordings was also used to drive highly transposed additive synthesis in parallel, adding spectral depth in the extreme highs and lows.
  3. Three versions of an analogue synth background ghost layer, split into the same 16 sections as the gong layer. One of the three versions is chosen at random (via a shuffle) at each of the 16 section starts. The pitch and rhythm data for this layer are the same as for the gong layer, but the MIDI data was edited by hand so that the density is much lower; this layer stays very much in the background most of the time.
  4. 16 short interrupt sounds. These are out-of-context sounds that may play at fixed times during each of the 16 foreground sections (16 interruption times are specified, though not necessarily one per section). There is only ever a 50/50 chance of an interruption happening at one of these specified times, and interruptions only begin once we have reached section 7 (see the sketch after this list). Their function is to momentarily distract or confuse but then refocus attention on the main structure: short sonic irritations sounding outside the main sonic realm.
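
The dB values and timings in the list translate into some simple bookkeeping at each section boundary. A rough sketch, assuming “reached section 7” includes section 7 itself (the function names are mine, not the app’s):

```javascript
// Layer 1: over one 16-section cycle the foreground gong version fades
// from 0 dB to -10 dB while the background version fades the other way.
function gongGainsDb(sectionIndex) {         // sectionIndex: 0..15
    var t = sectionIndex / 15;               // 0 at section 1, 1 at section 16
    return { fadingOut: -10 * t, fadingIn: -10 + 10 * t };
}

// Layer 2: a new ambience starts with the first gong section beginning
// at least 2.5 minutes after the previous ambience was triggered.
function ambienceDue(nowMs, lastAmbienceMs) {
    return nowMs - lastAmbienceMs >= 150000; // 2.5 minutes
}

// Layer 4: a 50/50 chance at each specified interruption time, active
// only from section 7 onwards.
function playInterrupt(sectionNumber) {
    return sectionNumber >= 7 && Math.random() < 0.5;
}
```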

Gongs

Of the gong structures, there are three versions of each of the 16 sections: one with large gong samples, another with small gongs, and a third with tuned gongs. No matter which samples are being used, each of the sections has the same pitch and rhythmic structure: this is clearly recognisable by listening to, for example, small-gongs02.aif and comparing it with tuned-gongs02.aif or gongs02.aif: each of these files starts with three repeated notes.

The ordering, and thus the entry, of the gong versions is controlled by a random shuffling process, taking care not to allow repeats such as 1,3,2,2… When we start the piece, two of the gong versions begin simultaneously, one in the foreground and the other 10 dB lower. The former fades down very slowly over a complete cycle of 16 sections whilst the latter fades up. In the next cycle, the previous background version becomes the foreground, and the third, as yet unused version enters as the background, beginning its fade towards the foreground.
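
In code, avoiding such repeats amounts to re-shuffling whenever the new ordering would begin with the version that has just ended. A small sketch (the names are mine):

```javascript
// Fisher-Yates shuffle of the three gong-version numbers.
function shuffle(a) {
    for (var i = a.length - 1; i > 0; i--) {
        var j = Math.floor(Math.random() * (i + 1));
        var tmp = a[i]; a[i] = a[j]; a[j] = tmp;
    }
    return a;
}

// Reject orderings that would repeat the last-played version, e.g. 1,3,2,2...
function nextOrder(previousOrder) {
    var order;
    do {
        order = shuffle([1, 2, 3]);
    } while (previousOrder && order[0] === previousOrder[previousOrder.length - 1]);
    return order;
}
```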

Multi-fades

The automatic multi-fade algorithm used in the ambience layer was programmed in Bill Schottstaedt’s Common Lisp Music. It can auto-fade an arbitrary number of sound files, calculating start times, durations, and even repetitions according to a user-given output duration. It can perform one or two passes; if the latter, it mixes the two resulting layers. It can also use a loudness (RMS) function to match the loudness of the fading files.

The multi-fade algorithm starts from the premise that a linear fade is unsatisfactory: it uses exponential envelopes instead. The fade envelope exponent is calculated automatically, dependent on the sound file durations. More importantly perhaps, the fade itself is not straight up and down; rather, it is more (exponentially) sawtooth-like. In my experience this approach, perhaps counterintuitively, produces a perceptually more linear effect, as the entering sound file breaks through to the listener’s consciousness in chunks, as it were.

It’s not just a matter of controlling amplitude, though. The amplitude curves themselves are accompanied by time-varying filters. I use a Moog low-pass filter emulation made by Tim Stilson. During a fade-in the cutoff frequency moves from 20 Hz to 15000 Hz. For the fade-out, however, I use a state-variable high-pass filter (after Hal Chamberlin’s “Musical Applications of Microprocessors”) with the same frequency sweep.
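
The actual multi-fade code is Common Lisp Music and its exact envelope calculation isn’t reproduced here, but a rough JavaScript illustration of the two ideas, an exponentially rising, sawtooth-like fade whose progress also drives the cutoff sweep, might look like this (tooth count and exponent are invented placeholder values):

```javascript
// Sawtooth-like exponential fade-in: the ceiling rises exponentially, and
// within each "tooth" the level ramps from silence up to that ceiling and
// drops back, so the entering sound breaks through in chunks.
function sawtoothFadeIn(t, teeth, expt) {        // t: fade position 0..1
    var ceiling = Math.pow(t, expt);             // exponentially rising trend
    var phase = (t >= 1) ? 1 : (t * teeth) % 1;  // position within current tooth
    return ceiling * Math.pow(phase, expt);      // exponential ramp per tooth
}

// Exponential cutoff sweep from 20 Hz to 15000 Hz over the course of a fade.
function cutoffHz(t) {
    return 20 * Math.pow(15000 / 20, t);
}
```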
