The title gold im bach arose from a discussion I had with Karin Schistek during a trip in summer 2019 to Camogli, near Genoa, Italy. I had made a mock-up of this new piece with my algorithmic composition software slippery chicken and we’d been listening to it in the background over a cheap bluetooth speaker whilst reading (now there’s a test environment for your latest composition).

Karin is a pianist and synaesthete for whom all objects, sounds, and experiences in general conjure up associated colours. I rarely if ever experience music in such a manner or even have extra-musical associations with sonic structure, but I told Karin that if I had to characterise it in an associative or metaphorical way, then I experience this piece as a long and fast-flowing river. She retorted that for her it was more like a ‘Bach’ (‘stream’ in German). Now, she might have merely been taking me down with this comment but she claims that she meant a stream in the sense that she could see detailed objects underneath the flowing water, and that amongst the stones and other objects were flashes of gold.

The title also conjures up Bach’s Goldberg Variations, of course, and insofar as my piece transforms repeated structures over a significant duration (c. 34 minutes), the connection to Goldberg is not amiss, no matter how insignificant my work might be in comparison to Bach’s. However, the extra-musical and poetic thrust of this new composition is much more connected to an astounding short poem by the Beat Generation American poet Gregory Corso found on his headstone in the Protestant Cemetery in Rome:

Spirit
is Life
It flows thru
the death of me
endlessly
like a river
unafraid
of becoming
the sea

installation context

gold im bach developed from the four-disklavier (MIDI piano) installation I did with Dirk Reith, Günter Steinke, and Thomas Neuhaus as part of the NOW! Transit festival at the Essen Philharmonic in October 2019.

The RWE Pavilion, Philharmonic Essen; Photo: Rebecca ter Braak

For this installation, we agreed to provide several short pieces for combination and rotation in a potentially endless run of music for the four spatialised pianos. I tried several approaches and, whilst working on an offshoot of my jitterbug algorithm, I found it started to take on a life of its own and move in a direction I really had no context for: a consistently quiet, fast but slowly developing piece lasting more than half an hour and consisting of a single ‘melodic’ line, i.e. no counterpoint at all. Hmm. This seemed at odds with the potential of four spatially distributed instruments with massive capacity for dynamic contrast and contrapuntal weavings. However, in this context as in general, I was happy to take a different and perhaps divergent path from the one the installation immediately suggested. So my contributions did indeed remain quiet throughout, and only one piano plays a single note or chord at any time. In the context of my co-composers’ music this contrast worked well, I believe.
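For readers curious how the one-piano-at-a-time idea might look in code, here is a minimal sketch in Python. This is purely illustrative, not slippery chicken itself (which is written in Common Lisp and does far more); the note data and channel numbers are invented.

```python
import itertools

# a single melodic line as (midi_note, duration_in_beats) pairs;
# purely illustrative data, not material from the piece
line = [(72, 0.25), (74, 0.25), (71, 0.5), (76, 0.25), (69, 0.75)]

# four spatialised pianos, identified here by MIDI channel
piano_channels = [0, 1, 2, 3]

# deal the line out round-robin so that no two pianos ever sound
# at the same time: strictly one note (or chord) somewhere at once
events = []
beat = 0.0
for (note, dur), channel in zip(line, itertools.cycle(piano_channels)):
    events.append({"channel": channel, "note": note,
                   "onset": beat, "duration": dur})
    beat += dur  # sequential: the next note starts when this one ends

for e in events:
    print(e)
```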

algorithm development

I followed the algorithm’s internal suggestions, moulding and shaping it towards the proportions it seemed to demand as well as, of course, towards my own musical goals for the piece. This was done almost exclusively via code-based interventions, it must be said, i.e. no score editing here: like “pure gold” itself, the piece is 99.99% algorithmic, with only half a dozen or so notes out of 20,000 changed by hand in a sequencer. Along the way I developed a two-pronged strategy for realising the material’s potential. For the installation, I was convinced that I could place the material in context by extracting four 4-6 minute chunks; I was equally convinced that, allowed to occupy its extended length, the piece would be aesthetically interesting as a production in and of itself. I was reminded once again of what Brian Eno said about his Discreet Music (1975):

Since I have always preferred making plans to executing them, I have gravitated towards situations and systems that, once set into operation, could create music with little or no intervention on my part. That is to say, I tend towards the roles of planner and programmer, and then become an audience to the results.

Two paths were then open for the long version: 1) a concert performance by humans from a generated and, at some point, edited score; 2) a studio production, perhaps with human pianists but more realistically via synthesis. For a human performance, c. 34 minutes is most certainly enough of a demand. For a “production release” I favoured the idea of extending the work with a second part to take it beyond an hour in duration.

To this end I generated and explored a reversed version of the first part, not merely (and cheaply) reversing all the notes, but (nevertheless rather cheaply) reversing the material mapping, transposing the harmonic material downwards as the piece proceeds (in order to achieve some low notes, which are almost entirely missing from the first part), then progressively and, of course, algorithmically thinning out the number of notes played so that by the end the texture is really quite sparse. I’d be happy to hear from anyone who notices upon listening that such a simple modification of the first part was used in order to generate the second part. Any takers?
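The gist of those transformations can be sketched in a few lines of Python, though only as an illustration: the real processing lives in slippery chicken, operates on the material mapping rather than on raw MIDI notes, and none of the numbers below come from the piece.

```python
import random

def derive_second_part(notes, max_transposition=-24, final_sparsity=0.8,
                       seed=1):
    """Sketch of the second-part derivation: reverse the material,
    transpose it progressively downwards and thin out the texture.
    `notes` is a flat list of MIDI note numbers; every parameter
    here is invented for illustration."""
    rng = random.Random(seed)
    source = list(reversed(notes))
    n = max(len(source) - 1, 1)
    result = []
    for i, note in enumerate(source):
        progress = i / n  # 0.0 at the start, 1.0 at the end
        # slide downwards as the piece proceeds, reaching the low
        # notes almost entirely missing from the first part
        transposed = note + round(progress * max_transposition)
        # progressively thin the texture: by the end it is quite sparse
        if rng.random() < 1.0 - final_sparsity * progress:
            result.append(transposed)
    return result

# a toy 'first part' of 40 notes yields a sparser, descending sequel
first_part = [60 + (i * 7) % 24 for i in range(40)]
print(derive_second_part(first_part))
```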

human performance

A performance by human pianists is not impossible but it would be a huge undertaking. The first part, lasting c. 34 minutes, is by itself 1943 bars long and has an unwavering tempo of 176 BPM. Here is an extract of the unedited score after being imported into Dorico (with all its note spelling horrors still intact):

It is, on the face of it, by no means the most complex work written for piano, but the speed of some of the repeated notes generated by the algorithms, at the beginning especially, would be all but impossible for a human pianist. These are thus modified algorithmically to octaves and major or minor ninths in the score version, and the interplay between these intervals takes on a life of its own (a rough code sketch of the substitution principle follows below):

The biggest barrier to performance is the sheer number of notes. A potentially pleasant and social solution to this would be to involve one or more pairs of pianists. Two pianos could be used and the pianists would alternate at points they chose in the score, overlapping and interjecting to create a ‘hand-off’ back and forth over the whole duration of the work.
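Returning to the octave-and-ninth substitution mentioned above: the principle, if not the actual implementation, might be sketched as follows. The speed threshold and the cycling interval pattern are invented for illustration.

```python
import itertools

OCTAVE, MINOR_NINTH, MAJOR_NINTH = 12, 13, 14
FASTEST_REPEAT = 0.12  # seconds; invented threshold for 'unplayable'

def revoice_repeats(events):
    """`events` is a time-ordered list of (onset_seconds, midi_note)
    tuples. A repetition of the same pitch arriving faster than the
    threshold is transposed up by a cycling octave/ninth pattern."""
    intervals = itertools.cycle([OCTAVE, MAJOR_NINTH, MINOR_NINTH])
    last_onset = {}  # pitch -> most recent onset time
    out = []
    for onset, note in events:
        if note in last_onset and onset - last_onset[note] < FASTEST_REPEAT:
            out.append((onset, note + next(intervals)))
        else:
            out.append((onset, note))
        last_onset[note] = onset
    return out

# three rapid middle Cs become C, C+octave, C+major ninth
print(revoice_repeats([(0.0, 60), (0.1, 60), (0.2, 60), (0.6, 60)]))
```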

As an aside: the rhythmic complexity is significantly reduced from that which my jitterbug algorithm usually produces. This was a conscious decision on my part, so as to make it at least conceivable for human performance over this significant duration. To achieve more rhythmic simplicity, instead of importing the slippery-chicken generated MusicXML, with all its nested-tuplet complexities—see Durchhaltevermögen (also a jitterbug-generated piece) for instance:

—I imported the slippery-chicken generated MIDI file into Dorico instead, allowing the software to quantise the rhythms to achieve something much simpler. But still, given those 20,000 notes at the given tempo, even if undertaken by pairs of pianists, I’m not holding my breath waiting for willing interpreters.
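Quantisation as Dorico performs it is of course far more sophisticated, but the basic principle, snapping onsets to the nearest position on a simple grid, fits in a few lines (grid size invented for illustration):

```python
def quantise(onsets_in_beats, grid=0.25):
    """Snap onsets to the nearest grid position; grid=0.25 beats is
    a semiquaver grid. Dorico's import does far more than this."""
    return [round(onset / grid) * grid for onset in onsets_in_beats]

# nested-tuplet-ish onsets collapse onto a simple semiquaver grid
print(quantise([0.0, 0.332, 0.667, 1.01, 1.254]))
# [0.0, 0.25, 0.75, 1.0, 1.25]
```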

synthesised production

The production file offered in the audio player at the top of this post belongs to my category of Music for Parallel Consumption. As such, it is a companion piece to the composition and app of that name from 2015. Though of course one could sit focused and meditate upon the whole one-hour-plus structure of gold im bach, I don’t really intend the digital production to be listened to that attentively (!); rather, it is self-consciously background music. That is perhaps a little strange coming from a composer of my ilk, background, and focus. Nevertheless, there it is.

The production was made via physical modelling synthesis using pianoteq instruments. I considered two options: 1) a purely acoustic piano version where, once established, the synthesis remains unchanged throughout; 2) a version that varies timbrally over the duration of the production. I opted for the latter, but not without misgivings. (For a while I considered making a new app allowing the user to make hybrid decisions interpolating between these two approaches. I looked into making a web app this time, using the JavaScript Web Audio API. But preliminary investigations led me to conclude that there are still too many differences between browsers and operating systems to make something truly robust and reliable there.)

With regard to implementing the timbrally varied option, I approached the programmers of pianoteq to see if it would be possible to slowly modulate, or rather morph, timbres (say, from a Steinway model to a Rhodes electric piano) using automation of the physical models’ parameter sets. This isn’t yet achievable but it’s something the programmers are apparently going to look into next year. So I was left with the only viable option: automating a mix of several instruments playing the same musical data. This was not something I was completely convinced by (a morph would potentially be much more interesting, especially in terms of digital sound synthesis and the potentially unheard timbres it might achieve) but, seeing no clearly better alternative, I went ahead all the same, using the Steinway, Erard, Cimbalom, Vibraphone, Electric Piano, and Harpsichord models.
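To give a flavour of what automating such a mix means in practice: each instrument layer gets a slowly varying gain envelope whose peak drifts through the piece. A hedged sketch, with curve shapes and timings invented (and, as described below, this kind of calculated approach ultimately gave way to recording fader moves by ear):

```python
import math

INSTRUMENTS = ["steinway", "erard", "cimbalom",
               "vibraphone", "electric_piano", "harpsichord"]
DURATION = 62 * 60 + 40  # 1:02:40 in seconds

def gain_at(instrument_index, t):
    """Raised-cosine gain envelope for one layer, its peak drifting
    through the piece so each instrument takes a turn in the
    foreground. Shapes and timings invented for illustration."""
    peak = DURATION * (instrument_index + 0.5) / len(INSTRUMENTS)
    width = DURATION / len(INSTRUMENTS)  # span of each layer's dominance
    distance = abs(t - peak)
    if distance > width:
        return 0.0
    return 0.5 * (1.0 + math.cos(math.pi * distance / width))

# the balance of all six layers twenty minutes in:
for i, name in enumerate(INSTRUMENTS):
    print(f"{name:15s} {gain_at(i, 20 * 60):.3f}")
```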

Given the significant duration of the whole piece (1:02:40) I came up with and implemented a mix plan, moving from instrument to instrument and back again using slowly developing automation. Interestingly, given the durations involved and, I believe, the spectral/dynamic characteristics of the different physical models, I didn’t find the calculated, programmed results very satisfactory. I opted instead for automation recording: listening to the development of the piece over its whole duration and nudging the balance of sounds using my ears and physical faders (a PreSonus FaderPort 8: aye, my wise mixing masters, mix with your ears (and fingers), not your eyes).

analogue mastering

Along the way I upgraded from pianoteq version 5 to version 6. The sonic differences are quite significant. There’s something in the new Steinway model, for instance, that really adds weight, substance, and ‘believability’ to the hammers. It also gives a strong impression of a more realistic soundboard, especially with the damper pedal engaged.

By the mastering stage, however, I felt I had to dip out of the digital and into the analogue realm in order to get some extra juice and depth. To this end, I passed the signal through Mytek DA converters into a DAV BG6 compressor (not for dynamic compression but, rather, for analogue gain), then did some spectral processing with the inimitable Neve 8803 Dual EQ, before finally passing back into the digital realm through a trusty old Lexicon PCM91. The latter still manages to amaze me and in some contexts proves itself quite irreplaceable, despite the built-in reverberator of the pianoteq instruments being not too shoddy at all, and the new version of SSL’s FlexVerb (also used in this project) being quite dreamy in many sonic situations. The main question, though, is: how do those pianos sound?


Featured image: Karin Schistek
