Algorithmic Improvisation:
The Primacy of NOW

Robert Willey
School of Music
University of Louisiana at Lafayette
willey@louisiana.edu

This paper explains the technical aspects of Saudades de Ouro Preto, a piece performed at the festival by the author. Examples of several "programmable instruments" will be played. An approach to algorithmic improvisation is proposed in which the performer determines the timing and dynamic of events while the computer supplies the timbre and pitch.

Algorithmic improvisation combines two of the themes of the festival: algorithmic composition and new technology for performance. The approach taken evolved out of interests in prepared pianos and randomness (John Cage), gating (Joji Yuasa), and influence rather than control (Gordon Mumma).

One of the similarities among the instruments demonstrated here will be the way in which the performer controls the timing and dynamic of events while the computer determines the timbre and pitch.


The Primacy of Now

For a highly skilled player there is usually a one-to-one or one-to-few correspondence between actions performed on an instrument and the sounds that are heard as a result.  Less skilled players have significantly less control over what audio phenomena result from physical manipulation of an acoustic body.  With the advent of controller instruments such as keyboards there can be a decoupling of cause and effect, since a variety of different systems can be connected to the same layout of white and black keys, such as strings hit with hammers or plucked with leather, sets of pipes of different dimensions, bells, and most recently, microprocessor-driven synthesizers.

[Figure: Simple setup for computer-mediated performance]

With the convergence of the MIDI specification, personal computers, and digital synthesizers in the early 1980s, it became possible to greatly increase the types of interactions performers could have with sound.  Instruments that can be "listened to" by computers (such as keyboards or other controllers) can now have arbitrary processing interposed between the physical object and the sounds that result.  What especially excited me about these environments were the new situations in which improvisers could be placed, since surprises and novelty often led to freshness of playing.  In these systems a computer is inserted between the controller and the synthesizer.  A great variety of synthesizer responses to keyboard performance can then be created by changing the programs running on the computer and their parameters.
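The kind of mediation involved can be sketched roughly in Python; the receive_from_keyboard and send_to_synth functions below are hypothetical stand-ins for real MIDI input and output, and the pass-through process function is the place where any of the mappings described later would be substituted.

    # A computer interposed between a MIDI controller and a synthesizer.
    # receive_from_keyboard() and send_to_synth() are hypothetical stand-ins
    # for real MIDI input and output; the mediation lives in process().

    import random

    def receive_from_keyboard():
        """Stand-in for a blocking MIDI read: returns (note, velocity)."""
        return random.randint(36, 84), random.randint(20, 110)

    def send_to_synth(note, velocity, channel=1):
        """Stand-in for a MIDI note-on sent to the synthesizer."""
        print(f"note {note}  velocity {velocity}  channel {channel}")

    def process(note, velocity):
        """The interposed program: a plain pass-through here, but any mapping
        of pitch, timbre, or timing could be substituted."""
        return [(note, velocity)]

    for _ in range(8):                                # a few events for demonstration
        note, velocity = receive_from_keyboard()
        for out_note, out_velocity in process(note, velocity):
            send_to_synth(out_note, out_velocity)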

It is my opinion that the performer is fundamentally involved in the timing and dynamic of the notes produced, more so than in their pitch and timbre.  This may be offensive to some musicians, so I will qualify it: this is at least one way in which music can be approached, and it has been useful in the design of interactive environments.  When I play in these situations I am mostly concerned with deciding when something is going to happen (the "primacy of now") and how big it is going to be.  Quite probably this results from a background in keyboard playing, which does not allow for much pitch or timbral control.  As part of this work I did performance analysis experiments in which people sight-read random twelve-tone rows.  At least in that style, a delay in the response of the instrument proved to disturb players more than simply having the wrong note come out.  It is also a reaction to the early days of computer music environments in which real-time performance was not possible, and later to performance systems in which players are observed making adjustments to controls with little noticeable relationship to the resulting sound.

My algorithms are simple, and are meant to avoid entering into musical decisions.  I let humans do what they want to do--many of us like to compose, rehearse, and play with others.  I don't want to replace those activities.  What we don't like to do, and aren't necessarily good at, is, for example, remembering accurately long series of pitches or timbres during an improvisation.  Let the computers do what they are good at, in areas that are less appealing to humans, and let players make contributions in ways they enjoy and in which they are skilled.

I have learned to improvise in many different musical microworlds created with computer-mediated systems.  I find most satisfying those in which I am responsible for timing and dynamics, and then design strategies for the computer to participate in the selection of pitch and timbre.  In the systems discussed below, pitch is controlled algorithmically in various ways and to different degrees: the computer's random choices can be constrained, or serialized patterns can be used.  Combinations of parameters controlling pitch and timbre can be set up in advance during composition and rehearsal and then recalled in performance.  Other composers are interested in just the opposite; for them, pitch and/or timbre are of primary importance.  However, considering MIDI's weakness in timbral control, it seems wise to let the performer concentrate on timing and dynamics.  This work began from the perspective of a keyboard player in the 1980s and is still suited to the current state of MIDI.  As computers get faster they will be able to exercise more timbral control in the synthesis of sound, at which point we can connect the new synthesis capabilities with the performance strategies we are developing now.

What follows is a discussion of a series of algorithms developed in conjunction with a number of new compositions.  The design of the algorithms and the choice of preset parameters and instruments are comparable to the pre-compositional decisions made by other composers working with paper scores and acoustic instruments.

Gated and pitch-shifted audio recordings

I use gating to incorporate various degrees of randomness and to inexpensively increase timbral resources.  The prepared piano extends the timbral resources of the piano, but the preparation for any particular key usually remains constant for the duration of the piece, since some preparations can be time-consuming to change during performance.  I will go on to show a variety of preparations, which I call "programmable instruments".  These preparations can be instantly and precisely reconfigured by the computer at the press of a button or some other cue.  As a performer improvises using these instruments, a computer program augments what is being played with notes that it triggers, in order to provide new instrumental experiences.  Each time a key is depressed the computer opens a gate to let a brief bit of radio through.  Later in the excerpt, the releasing of keys opens another gate.  At the end, the pitch of the played note controls the pitch shifting of the radio.  The performer does not know exactly what sound will come through, but can control when it will be heard and how loud it will be.  The surprising sounds that result may provoke further responses from the performer.
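A rough sketch of this kind of gating, in Python; the preset numbers, gate times, and function names are illustrative assumptions rather than values from any of the pieces:

    # Key-triggered gating: each key press opens a gate on a continuous source
    # (e.g., a radio signal) for a preset time; the key's velocity sets the level.

    GATE_TIMES = {1: 0.1, 2: 0.5, 3: 2.0}   # gate-open durations in seconds, by preset

    def open_gate(level, duration):
        """Stand-in for the audio gate; a real version would unmute the source here."""
        print(f"gate open at level {level:.2f} for {duration} s")

    def key_down(velocity, preset):
        """Each key press opens the gate at a level set by the key's velocity."""
        open_gate(level=velocity / 127.0, duration=GATE_TIMES[preset])

    key_down(velocity=96, preset=2)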

The opening section of Frontera Norte takes place over a background of other random elements.  The keyboard opens the gate on a stereo broadcast from a radio station in Buenos Aires: depressing a key opens one channel of the stereo field, and releasing the same key opens the other side.  Presets were called up from the keyboard controller by pressing program change buttons.  When the keyboard is played normally, this changes the timbre of the notes heard in response to the keys being played.  In this piece the keys are not usually heard; instead, the keyboard is used to control the effects processor.  The gate stays open for different lengths of time, recalled by presets (in this case by sending program changes to a Yamaha SPX-90).  Another group of patches used a pitch-shifting algorithm, and notes on the keyboard changed the interval--low notes on the keyboard transposed the sounds on the radio down, while higher pitches raised the frequency of the radio passing through the processor.
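One way that mapping could be realized is sketched below; the reference key (middle C) and the one-semitone-per-key scaling are assumptions for illustration, not the settings used in the piece:

    # Mapping keyboard pitch to a pitch-shift interval: keys below the reference
    # transpose the radio signal down, keys above transpose it up.

    def shift_interval(note, reference=60):
        """Return the transposition in semitones to apply to the processed signal."""
        return note - reference

    for note in (48, 60, 72):
        print(f"key {note} -> shift {shift_interval(note):+d} semitones")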

There is quite a bit of gating driven by the acoustic violinist in Unprepared Music for Prepared Violins, performed by Päivikki Nykter and János Négyesy.  At one point certain strings enable male Hungarian speech, while the others pass female Finnish.  Later, notes on the acoustic violin let the electric violin be heard.

playbobw

The idea for this algorithm grew out of a conversation with Gordon Mumma, in which he talked about influencing music rather than controlling it.  In this mode the computer triggers random notes whenever a note is played on the keyboard, with the notes that have been played becoming more likely to be picked by the computer in the future.  The computer remembers the last twelve notes played on the keyboard, and keys that are struck harder become more likely to be picked by the computer for its notes.  The computer picks the pitch class, and the octave is taken from the note being played on the keyboard.  Presets are arranged and called up using the program changes of the keyboard controller; these determine how many notes the computer will trigger, allowing for single notes or chords/clusters of different sizes.  The notes the computer triggers take on the same timing characteristics as the notes played on the keyboard, allowing the performer to determine when, how loud, and in what register the notes will come out, leaving it to the computer to pick which pitches to play.  The choice of pitches is influenced, not controlled, by the player.  This was used in a number of pieces, including my part at the beginning of Strange Attractors, improvised with Rick Bidlack and Robert Thompson.
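A minimal sketch of this behavior, assuming that the selection weights are simply the key velocities and that the memory is a sliding window of twelve notes:

    # playbobw sketch: the computer remembers the last twelve notes, weights them
    # by how hard they were struck, and answers each key press with pitch classes
    # drawn from that memory, placed in the octave of the played note.

    import random
    from collections import deque

    memory = deque(maxlen=12)          # (pitch class, velocity) of the last twelve notes

    def key_played(note, velocity, notes_per_trigger=1):
        """Answer a key press with pitches drawn from the weighted memory."""
        memory.append((note % 12, velocity))
        pitch_classes = [pc for pc, _ in memory]
        weights = [vel for _, vel in memory]           # harder keys -> more likely
        octave_base = (note // 12) * 12                # register follows the player
        chosen = random.choices(pitch_classes, weights=weights, k=notes_per_trigger)
        return [octave_base + pc for pc in chosen]

    for played in (60, 64, 67, 72):
        print(played, "->", key_played(played, velocity=90, notes_per_trigger=3))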

cereal

The performer controls the timing, dynamic, and register.  The computer keeps track of what the next note should be according to the row.  Using pedals, the performer selects whether the row is presented in its normal, retrograde, inverted, or retrograde-inverted form.  The performer controls register; that is, the next note of the row will be played in the same octave as the played note.  In the PC version the dynamic of each note could be serialized, or be controlled by the performer.  Timbres were also serialized, within a range set by the performer.

In the Max/Macintosh version, panning and reverberation/chorus are also serializable.
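The row mechanism itself can be sketched as follows, with an arbitrary example row and the serialization of dynamics and timbres left out:

    # cereal sketch: step through a twelve-tone row; pedals select the form,
    # the performer's note supplies the octave.

    ROW = [0, 11, 7, 8, 3, 1, 2, 10, 6, 5, 4, 9]       # pitch classes of an example row

    def row_form(row, inverted=False, retrograde=False):
        """Return the normal, inverted, retrograde, or retrograde-inverted form."""
        form = [(2 * row[0] - pc) % 12 for pc in row] if inverted else list(row)
        return list(reversed(form)) if retrograde else form

    step = 0

    def next_note(played_note, inverted=False, retrograde=False):
        """Advance the row; the octave comes from the note the performer played."""
        global step
        pc = row_form(ROW, inverted, retrograde)[step % len(ROW)]
        step += 1
        return (played_note // 12) * 12 + pc

    for played in (60, 62, 65, 59):
        print(played, "->", next_note(played))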

keith

This patch was written for Australian pianist and composer Keith Humble.  It was capable of changing the pitch and/or timing of the output, by making louder notes come out higher and/or sooner than softer notes. The "refraction" capabilities of the program are demonstrated in The Last Turkey in the Straw.
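One plausible form of that mapping is sketched below; the maximum transposition and maximum delay are assumptions chosen for illustration, not values from the original patch:

    # "Refraction": louder notes come out higher and/or sooner than softer ones.

    def refract(note, velocity, max_shift=12, max_delay=0.5):
        """Map velocity to an upward transposition and a shorter delay."""
        amount = velocity / 127.0
        out_note = note + round(amount * max_shift)    # louder -> higher
        delay = (1.0 - amount) * max_delay             # louder -> sooner
        return out_note, delay

    for vel in (30, 70, 120):
        print(vel, "->", refract(60, vel))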

Each time the performer plays a note the computer chooses a pitch, influenced by the notes that have been played in the past.  The performer controls the timing and dynamic.  The harder pitches are played, the more likely they become to be chosen in the future, allowing the performer to influence the harmony through insistence.  Timbres are randomly chosen by the computer.
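This differs from playbobw in that the influence can accumulate over the whole performance rather than over a twelve-note window; a sketch under that assumption:

    # Influence by insistence: each key press adds its velocity to a running
    # weight for that pitch, and the computer draws its next pitch from the weights.

    import random
    from collections import defaultdict

    weights = defaultdict(int)           # accumulated "insistence" per pitch

    def key_played(note, velocity):
        """Add the key's velocity to its weight, then draw the computer's pitch."""
        weights[note] += velocity
        pitches = list(weights)
        return random.choices(pitches, weights=[weights[p] for p in pitches], k=1)[0]

    for played, vel in [(60, 100), (60, 110), (67, 30), (64, 80)]:
        print(played, "->", key_played(played, vel))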

aleatoria

The dynamic of the performer's note controls the level of the triggered notes that follow, played by the computer.  The duration of the performer's last note determines the computer's pulse for the notes that follow, and the performer's pitch determines the computer's register: the computer chooses notes of random pitch around the pitch of the last note played by the performer.

Here is a bit of the "patch" in the MAX environment:

[Figure: aleatoria patch]

The wide box with the little dots represents presets; at this point in the piece we are on preset number three.  This calls up parameters for minimum duration, spread, and the number of notes.  Each time the performer releases a note, the dynamic with which the note was played replaces the last value of velocity, in this case 49.  This becomes the dynamic at which the computer will play the notes that follow.  The computer will then proceed to play five notes randomly chosen within a range of ten semitones above and below the last note released by the performer.  These values can be changed by selecting a different preset.  In this example from en route, János Négyesy plays electric violin; his harmonics leave a trail of random residue created by the patch.
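The behavior of such a preset can be sketched as follows, mirroring the values in the example (five notes within ten semitones of the released pitch, at the released note's dynamic); the data representation is an assumption:

    # aleatoria sketch: on key release, play a preset number of notes chosen at
    # random within a preset spread around the released pitch, at its dynamic.

    import random

    PRESETS = {3: {"num_notes": 5, "spread": 10, "min_duration": 0.1}}   # durations assumed

    def note_released(note, velocity, preset=3):
        """Return the computer's events as (pitch, dynamic, duration) tuples."""
        p = PRESETS[preset]
        pitches = [note + random.randint(-p["spread"], p["spread"])
                   for _ in range(p["num_notes"])]
        return [(pitch, velocity, p["min_duration"]) for pitch in pitches]

    print(note_released(note=72, velocity=49))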

The aleatoria patch was also used in the second section of Un tiro de dados ("A Roll of the Dice").  In the first section of the piece the player has to play notes in order to prevent bells from ringing.  During the movement the rate at which notes must be played increases, arriving at a point at which the bells will ring incessantly no matter what.  In the last section, notes played by the performer are delayed and echoed with different timbres.  Presets allow only certain pitch classes to be echoed.  The idea for this came from vampire lore: supposedly you are not able to see a vampire reflected in a mirror.
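The pitch-class filter of the last section could be sketched like this; the particular pitch classes, delay time, and output channel are illustrative assumptions:

    # Pitch-class filter: only listed pitch classes cast an "echo"; the rest,
    # like vampires, leave no reflection.

    ECHOED_PITCH_CLASSES = {0, 4, 7}     # e.g., only C, E, and G are reflected

    def note_played(note, velocity, delay=0.75, echo_channel=2):
        """Return a delayed echo event, or None if this pitch class is not echoed."""
        if note % 12 in ECHOED_PITCH_CLASSES:
            return {"note": note, "velocity": velocity,
                    "delay": delay, "channel": echo_channel}
        return None

    for n in (60, 61, 64, 66, 67):
        print(n, "->", note_played(n, 90))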

mathews

Max Mathews has been working on conducting programs on and off since 1975.  One version, the "sequential drum", involved the performer playing the notes of a score one at a time by hitting a drum--each time the drum was hit, the next note of the score would be played.

I used this idea in a patch, but allowed the performer to record the series of notes during the performance rather than reading from a score, allowing more freedom for improvisation.  One of the sustain pedals under the keyboard controls the recording of pitches: while it is held down, the notes that are played are recorded, appended to the end of the list of any notes previously recorded.  If the pedal is pressed twice without playing anything, the recorded notes are erased, allowing the performer to start over.

Again, the performer controls when and how loud the notes will be, by the velocity of the key depressions and by controlling the overall volume with foot pedals, while the computer keeps track of the order of pitches.  Timbre combinations for the input and output notes and their relative volumes are set by preset.
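A sketch of the record-and-trigger mechanism just described; the pedal bookkeeping and variable names are assumptions about one way it could be implemented:

    # While the pedal is held, played pitches are appended to a list; two pedal
    # presses with nothing played in between erase the list; afterwards each key
    # press triggers the next recorded pitch at the velocity of the key press.

    recorded = []              # pitches captured while the pedal is held
    position = 0               # index of the next pitch to trigger
    pedal_held = False
    note_since_press = True    # has anything been played since the last pedal press?

    def pedal_press():
        """Hold to record; two presses with nothing played in between erase the list."""
        global pedal_held, note_since_press, position
        if not note_since_press:
            recorded.clear()
            position = 0
        pedal_held = True
        note_since_press = False

    def pedal_release():
        global pedal_held
        pedal_held = False

    def key_played(note, velocity):
        """Record while the pedal is held; otherwise trigger the next stored pitch."""
        global note_since_press, position
        note_since_press = True
        if pedal_held:
            recorded.append(note)          # the performer's own note handling is omitted
            return None
        if recorded:
            pitch = recorded[position % len(recorded)]
            position += 1
            return (pitch, velocity)       # stored pitch, performer's dynamic
        return None

    pedal_press()                          # record a short phrase...
    for n in (60, 64, 67, 72):
        key_played(n, 80)
    pedal_release()
    for n in (48, 50, 52, 53):             # ...then trigger it back, one note per key press
        print(key_played(n, 100))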

This algorithm is used extensively in the composition Saudades de Ouro Preto.  Volume pedals allow the performer to control the dynamics of what is played and of what is triggered as a result.

The opening section of The War Prayer also makes use of the same technique.  One volume pedal controls the dynamic of a Bach chorale, a second controls the dynamic of an organ patch playing the synchronized, recycled pitches.  The keyboards then cut out and a computer-processed voice intones Mark Twain's text, the words of which are gradually resonated more and more by Bach's harmony ("A Mighty Fortress is Our God").


THRU notes only:

[Figure: THRU notes only preset]

Before the piece starts, the "recording" box is checked and the first phrase is played.  Since the "thru" box is not checked, the audience does not hear anything.  Now the "thru" box can be checked, and only the notes that are played on the keyboard are heard.  They go through the computer with the pitches unchanged.  They are now set to be played on channel 3 of the synthesizer, in this case with a harpsichord sound panned to the right audio channel.  The theme alone could then be played.


TRIGGERED notes only:

[Figure: TRIGGERED notes only preset]

Notes that are played on the keyboard are not heard (since the "thru" box is not checked), but each time a note is played the computer triggers a note from the pre-recorded sequence, one note at a time.  When the last note of the sequence is triggered, the computer prepares to start over at the beginning of the 16-note sequence.  Notes that the computer plays are heard here with a guitar sound from the left channel.


THRU and TRIGGERED notes:

[Figure: THRU and TRIGGERED notes preset]

In this state both the notes that are played on the keyboard and the triggered notes are passed through.  The volume levels of each part can be independently controlled with foot pedals, allowing, for example, the triggered notes to be of equal volume, or to be faded in only at the ends of phrases.
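The three preset states can be summarized as two independent switches, each routed to its own channel and scaled by its own volume pedal; in this sketch the second channel number and the short stand-in sequence are assumptions:

    # THRU and TRIGGERED as independent switches with independent volumes.

    sequence = [60, 64, 67, 72]    # stand-in for the pre-recorded theme
    position = 0

    def key_played(note, velocity, thru, triggered, thru_volume=1.0, trig_volume=1.0):
        """Route the played note and/or the next stored note, each on its own channel."""
        global position
        events = []
        if thru:        # played note passed unchanged (harpsichord on channel 3 in the piece)
            events.append(("thru", note, int(velocity * thru_volume), 3))
        if triggered:   # next note of the stored sequence (guitar sound; channel 4 assumed)
            pitch = sequence[position % len(sequence)]
            position += 1
            events.append(("triggered", pitch, int(velocity * trig_volume), 4))
        return events

    print(key_played(62, 90, thru=True, triggered=True, trig_volume=0.5))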

Another, subtler application of this algorithm is used in the improvisation The Good Part.  By using foot pedals the volumes of the triggered parts can be turned up and down, adding textural variety without resorting to overdubbing.


Links

Programmable Instruments paper

©2003 Robert Willey

http://willshare.com/willeyrk/creative/papers/ALGIMP/