Villeneuve-lès-Avignon Conferences,
Centre Acanthes, 9–11 and 13 July
1992
Tristan Murail
To cite this article: Tristan Murail (2005): Villeneuve-lès-Avignon Conferences, Centre Acanthes,
9–11 and 13 July 1992, Contemporary Music Review, 24:2-3, 187-267
Contemporary Music Review
Vol. 24, No. 2/3, April/June 2005, pp. 187 – 267
Tristan Murail (translated by Aaron Berkowitz & Joshua Fineberg)
The following conference text was created from a transcription made by Dominic
Garant and revised by Pierre Michel. I would like to cordially thank both of them for
having taken on this onerous and thankless job. I thought it necessary, nevertheless, to
rewrite these texts rather substantially. The conferences were essentially improvisatory,
based loosely on a pre-established plan (I do not like to read conference texts: it
reminds me of a professor of civil law who—in what he called a course—read the
‘lecture notes’ that one could buy in advance at the book store across from the
university). The oral style seemed to me annoying to read; in addition, these
conferences were accompanied by numerous sonic and visual examples, without the
help of which they would have certainly become incomprehensible. Their subjects (and
the order in which they are discussed) were determined in relation to the concert
programme at the Centre Acanthes, where Désintégrations, Territoires de l’Oubli and
Allégories were featured.
I have endeavoured to compile these texts in such a way as to make them clearer and
easier to read, while still attempting to stay as close as possible to speech-like writing,
without stylistic pretence. I chose not to retain the division into four days, since it did
not correspond to a significant formal division; however, I did conserve the order of the
subjects discussed, even though it may seem a bit arbitrary outside of the context of the
Centre Acanthes. Finally, over the course of this rewriting, I tried to stay as faithful as
possible to the ideas expressed at that time—even if today I might formulate certain
things rather differently.
T.M., Monroe, New York, May 2003
musical sound while the sound of a violinist impolitely warming up in the wings
during the performance might just as easily lose its usual designation as ‘musical
sound’—since it is not integrated into the discourse.1
The instrumental sound can nevertheless serve as a paradigm for a broader
category of musical sounds. The reason for this is relatively simple: instrumental
sounds have attained their current forms through our attempts to modify and
‘improve’ them over centuries. We have, by now, reached the point where these
sounds are often judged more or less perfect—at least, for their intended usages. We
can thus embrace the hypothesis that instrumental sounds, in their contemporary
form, are closely related to the very foundations of our culture.
It would be interesting to analyse why instrumental sounds suit us so well. Perhaps
from this analysis we could derive a model for organizing music more generally? This
hypothesis, though certainly a bit bold, allowed nonetheless for the realization of a
certain number of pieces during the 1970s. I am thinking in particular of the Espaces
acoustiques cycle by Gérard Grisey. Of course, this idea is far from sufficient to
account for the totality of the work’s musical organization, but we can consider it as
one of the points of departure for the composition’s formal construction.
Timbre
Let us now examine the phenomenon of timbre in occidental music. In observing the
historical evolution of this music, it is easy to see that timbre takes on an increasingly
important role in musical discourse. In the music of the 16th and 17th centuries,
timbre was not really taken into account and was often not explicitly notated. Many
pieces could be played equally well on the oboe as on the violin, with accompaniment
provided by either a harpsichord or a lute; pieces were played with the available
means, without attaching much importance to the specific sonic character of the
resultant sounds. Later, timbres started to be more precisely indicated: the
Brandenburg Concerti, for example, are specifically written for certain types of
timbres. The melodic lines themselves begin to take on specific characteristics
depending on the instruments. The use of idiomatic language for the instruments is
beginning. Progressively, the concept of orchestration starts to emerge in the late 18th
and early 19th centuries. Little by little, orchestral timbre is refined either by
‘synthesis’ (adding instruments together, one of the fundamental principles of ‘classical
orchestration’), or through increasing precision in defining specific, often
unconventional instrumental techniques. This latter approach has become especially
significant in the 20th century, in particular on string instruments, where the sonority
can easily be modulated (ponticello, tasto, col legno, etc.). At present, the possibilities
of instruments have been explored to the extreme, permitting us, at least in principle,
to define and notate instrumental timbre with great precision, while the technical and
virtuosic possibilities of instrumental performance continually expand. This,
however, does not necessarily signify that classical instruments, in their current
state, respond to all our needs and expectations.
way, if you focus your ear so as to dissect the contents, you can distinguish different
harmonics of this sound quite well and thereby understand that it is made up of a
group of components—all of which have their own lives. We are accustomed to
considering this group of components as a single object, and calling it the ‘sound’,
but it is equally possible to dissociate them: allowing unitary timbre to burst into
multi-dimensional harmony. This concept serves as the foundation for certain
fascinating vocal techniques. In Mongolia and in the Tuvan Republic5, the technique
of diphonic singing allows the dissociation of the voice into two perceptible entities:
the fundamental and its harmonics. While the fundamental frequency stays fixed, the
singer’s voice (by strongly accenting one or another harmonic, like an exaggerated
vowel) creates a succession of formant peaks that in turn create a sort of melody. In
contrast to a traditional melody, which consists of a succession of complete multi-
dimensional sound objects (the succession of ‘notes’ emitted by a classical singer),
here the melody situates itself in the very midst of a single sonic object that is
modulated over time. One can consider these diphonic (khöömi) songs of Mongolia
and of Tuva as the first known examples of ‘spectral composition’.
In Figure 1, the horizontal axis represents time (in seconds), the vertical axis
represents frequency (in hertz). The intensities of the component harmonics are
represented by marks that are more or less thick and dark. The numbers 1–10
correspond to the ranks of the harmonics. One sees that harmonics 1–5 are stable
(they make the fundamental perceptible), whereas harmonics 6–9 evolve markedly in
intensity. It is this succession of intensity peaks that creates the perceptible melodic
contour, notated below the sonogram in traditional pitch notation.
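The mechanics of this formant melody are easy to sketch: every pitch available to the singer is a whole-number multiple of the drone. A minimal illustration in Python, assuming a hypothetical drone of 104 Hz (roughly G#2) rather than the actual fundamental of the sonogram in Figure 1:

```python
# Sketch: the frequencies available to an overtone singer above a fixed drone.
# The drone frequency is hypothetical, not read from Figure 1.
def harmonic_frequencies(fundamental_hz, ranks):
    """Return the frequency of each harmonic rank over a fixed fundamental."""
    return [rank * fundamental_hz for rank in ranks]

drone = 104.0  # Hz, assumed
# Reinforcing harmonics 6-9 in turn yields a four-step "formant melody",
# while harmonics 1-5 keep the fundamental perceptible underneath:
melody = harmonic_frequencies(drone, [6, 7, 8, 9])
print(melody)  # [624.0, 728.0, 832.0, 936.0]
```

Each step of the melody is locked to the harmonic grid of the drone, which is why the result sounds like a melody carved out of a single sound rather than a succession of notes.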
Let’s now examine a mundane piano sound. The analysis shown in Figure 2
corresponds to a brief instant of sound, just after the attack, of the note C1 played on
a modern piano (C1 is the lowest C on a standard piano). We are not interested here
in how the sound changes over time, as we were in the preceding example, but only in
its vertical (harmonic) structure. In this sound, the analysis program detected 118
harmonics, which is a rather large number. After eliminating the least important
components (i.e. those with close to zero intensity), 91 remain. Most of the low
instruments of the orchestra possess an enormous number of harmonics; however,
the piano remains an unusual case. Zones where the intensities of the components are
relatively louder than surrounding components are called ‘formants’. In the case of
the piano’s sound, we find formantic zones around the harmonics 27, 28, 29 and 30,
for example, or again around harmonics 35, 36, 37 and 38, which is extremely high in
the spectral scale.
In Figure 2, the numbers on the left in each column indicate the harmonic rank,
the numbers on the right give the intensity of each harmonic. The harmonics with the
most amplitude—which create the formants—are in bold (analysis carried out at
IRCAM in the 1980s).
Note that while the fundamental should normally be given the rank of number 1,
there is no component in this analysis with that rank. In fact, the 1st harmonic—the
fundamental—is totally absent. This means that the note C1, which we write in the
score, is in fact not heard at all. No frequency in the analysis of the piano’s C1
corresponds to the note C1. Therefore, at least in certain situations, what we think we
are hearing can be an illusion. In the case of the piano note, this illusion is called a
‘virtual fundamental’: we have the impression of hearing a fundamental sound when
we hear the entire ensemble of harmonics of a fundamental even if that fundamental
is itself absent. But, in reality, if you hear the sound C1 on the piano without bias, it
does not really resemble a C very much, nor does it resemble any other precise note.
It is actually a very complex sound that is barely harmonic and which does not really
fit the definition of a traditional instrumental sound. When this note is played at the
same time as a C major triad in the middle register, it sounds just like a real C—the
fundamental of the chord—because its normal harmonic contents are reinforced by
the chord of C major (and, inversely, the resonance of this chord will be magnified by
the harmonics of C1). On the other hand, if you play this very low C at the same time
spectrum. It is, of course, very obvious in the case of diphonic singing as well as for
certain similar sounds (such as those in the family of Jew’s harps). However, even the
sound of a familiar instrument (generally perceived as a single unit—a ‘sound
object’) can end up dissociated if we listen in a particular way. For the piano, the
evolution of the sound over time can be an aid to the perception of this inner
richness. At the emission of sound, all the components are present and the timbre is
complex and difficult to analyse; then, little by little, as the sound decays we hear
more clearly, and each in turn, the different zones of harmonic resonance—certain of
which die away first, while others resonate longer.
Temperament, Micro-intervals
It is well known that the pitches contained within a harmonic spectrum (as, for that
matter, in the majority of inharmonic spectra) are mostly not part of our tempered
scale universe. Therefore, working within the interior of a harmonic spectrum, as the
Mongolians do, entails the use of micro-intervals.
The frequencies observed inside of a spectrum do not correspond to any system
that divides the octave into regular intervals. However, since frequencies expressed by
the speed of their periodic vibrations (hertz) are inconvenient for the composer or
instrumentalist to use and difficult to notate on a score, I will continue to represent
these frequencies through (more or less precise) approximations using tempered
divisions of the octave. Figure 4 shows an example that compares three different
approximations of the same aggregate.
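The approximation itself is mechanical: snap each frequency to the nearest step of an equal division of the octave, and note the residual error in cents. A sketch, with the 24-division (quarter-tone) grid as the default; the example frequency (the 7th harmonic of a 55 Hz A) is my own choice, not taken from Figure 4:

```python
import math

def approximate(freq_hz, divisions_per_octave=24, a4=440.0):
    """Snap a frequency to the nearest step of an equal division of the
    octave (24 steps = quarter-tones) and return (snapped_hz, cents_error)."""
    steps = divisions_per_octave * math.log2(freq_hz / a4)
    snapped = a4 * 2 ** (round(steps) / divisions_per_octave)
    cents_error = 1200 * math.log2(freq_hz / snapped)
    return snapped, cents_error

# The 7th harmonic of A1 (7 * 55 = 385 Hz) lies about 31 cents below the
# tempered G4; the quarter-tone grid lands within about 19 cents of it:
hz, err = approximate(385.0)
```

Coarser grids (12 divisions, semitones) or finer ones (48, eighth-tones) come out of the same function by changing one parameter, which makes it easy to compare the three levels of approximation the figure illustrates.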
Quattro Pezzi per orchestra (ciascuno su una nota) (1959): in this set of four pieces for
orchestra, each piece is truly based on only one note. This goes beyond monody; it
represents a sort of extreme minimalism. In its way, the Mongolian music mentioned
earlier was also based on a single note.
In this context, where the parameter of ‘pitch’ is effectively abolished, music must
find other variables with which to express itself: these other variables are what Scelsi
called ‘the depth of sound’. This metaphorical expression designates the extensive use
of all of the internal parameters of sound: the spectrum, the variations of the
spectrum, the dynamics (the way in which the sound is dynamically developed over
time), the use of different types of sustain (like vibratos and tremolos of varying
speed) or even the timbral changes that one can create on the same note (e.g. by
playing it on different strings of a string instrument)—all of this is expressed very
precisely in the scores.
Scelsi wrote many works for solo instruments, which gave him the possibility of
deploying, in a clearly audible manner, his whole panoply of techniques for the
internal animation of sounds. In the orchestral works, the addition and mixture of
these sounds, with their own internal animation, further expand the sonic richness of
the unison—a unison ‘composed’ from the inside. More than an orchestration in the
traditional sense, Scelsi creates a sort of instrumental synthesis (to use Gérard Grisey’s
name for this technique). Moreover, this unison is usually thickened—enlarged into a
band of frequencies that surround the principal sound. Scelsi used quarter-tones in a
systematic manner, but very differently from the first explorers of micro-intervals
(composers like Alois Hába, 1893–1973; Ivan Wyschnegradsky, 1893–1979; and
Julián Carrillo, 1875–1965). Even though he often said that ‘quarter-tones are real
notes’ (to emphasize this fact, the symbols of quarter-tones are circled in his
manuscripts), Scelsi conceived his micro-intervals more as enlargements of the
unison than as a means of creating new scales.
Anahit, for violin solo and 18 instruments (1965), is in my opinion one of the most
successful and beautiful of Scelsi’s works. I will not go into a detailed analysis of the
piece, which would not be especially meaningful for this music. Instead, I would like
to pull certain generative principles from it and to give some indications concerning
the global form. One of the frequent characteristics of Scelsi’s music is the use of
smooth time, that is to say a form of musical time that is rarely marked by distinct
events. Since the music is centred around one or sometimes two principal pitches, all
traditional development and all systems of traditional variation become impossible.
The melody is limited to long, slow slides of pitch, sometimes punctuated by brief
more well-defined fragments. The formal progression is often very simple and
unidirectional.
Anahit is Scelsi’s only concert piece for solo instrument and orchestra. Its structure
in three parts could appear classical: the first part links the violin and the orchestra;
the second part corresponds to a violin cadenza; and in the third part the orchestra
returns. Inside each of the parts an alternation appears between relatively calm
passages and ‘climaxes’ where Scelsi used his entire range of techniques for
have more or less inharmonic spectra. This means that the mathematical relationships
between the components of their sounds (the ‘partials’) do not correspond to simple
integer ratios. We have previously seen examples of notes conforming to the
harmonic series: spectra where the frequency of each component partial is an integer
multiple of the fundamental frequency.8 The structure of any harmonic spectrum
follows this very simple rule. On the other hand, an inharmonic sound possesses
components that do not obey this rule. There is no single precise way of defining how
partials of inharmonic sounds relate to each other because, in contrast to harmonic
sounds, these potential relations are infinite. Nevertheless, there are structural models
of inharmonic sounds that are of special interest to us because they have been selected
by musicians through a slow historical process, a sort of ‘Darwinian’ evolution over
the course of centuries. Bell sounds, for example, have fascinated composers for ages:
Hector Berlioz in the Symphonie Fantastique (1830), Modest Mussorgsky in Boris
Godunov (1868–1870), Claude Debussy, Maurice Ravel, Olivier Messiaen, etc. Figure
6 shows a schematic representation of the spectrum of a bell.
The fundamental is an F# and the harmonics of that fundamental are also present.
A slightly sharp A-natural (upward arrow) is interspersed in the harmonic series—
creating the non-harmonic sound of this bell. The frequency of the A in this example
is equal to the fundamental multiplied by 12/5.
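This additive description of the bell is easy to model: a harmonic series on the fundamental, plus the inharmonic partial at 12/5 of it. The sketch below assumes a fundamental of 92.5 Hz (approximately F#2), and the number of harmonics kept is arbitrary:

```python
# Schematic bell spectrum in the spirit of Figure 6: a harmonic series plus
# extra inharmonic partials. The 92.5 Hz fundamental (roughly F#2) is assumed.
def bell_spectrum(fundamental_hz, n_harmonics=8, inharmonic_ratios=(12 / 5,)):
    """Return the harmonic partials 1..n plus the listed inharmonic ratios."""
    partials = [k * fundamental_hz for k in range(1, n_harmonics + 1)]
    partials += [r * fundamental_hz for r in inharmonic_ratios]
    return sorted(partials)

spectrum = bell_spectrum(92.5)
# The 12/5 partial (222 Hz) falls between harmonics 2 and 3: the slightly
# sharp A-natural that gives this bell its non-harmonic colour.
```

Because 12/5 is not an integer, the extra partial can never coincide with a harmonic of the fundamental, and it is precisely this misfit that the ear reads as ‘bell-like’.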
This is the spectrum of a real bell, like the ones that ring in church steeples. Its
spectrum is different from that of an orchestral bell (tubular bell). With a sound like
this, though, we must be yet more precise: this is the spectrum of a European bell.
The spectrum of a Japanese bell—those enormous bells that one sees suspended at the
entrance of temples, and which the visitors strike with the help of a suspended
additional frequency.
For practical reasons, tubular bells are used instead of traditional bells in the
orchestra. Unfortunately, the sound of tubular bells is somewhat
different from the sound of real bells (see Figure 7).
This tubular bell’s sounding pitch—the note that would be written on the score
and which should, in this example, be a C5—is not really present. There are, of course,
harmonics of this C: the C an octave higher (C6), the G6 (3rd harmonic), the C7 (4th
harmonic), and finally the 7th harmonic (a low Bb). The note C5, which should be
heard, is obtained by subtraction. It is created as a differential sound between the
different harmonics (through the perceptual phenomenon of ‘virtual fundamentals’).
Certain inharmonic partials are very clear: a D# (or Eb if one prefers) and a D-three-
quarter-tone-sharp (or slightly lowered E). These two partials create an internal
beating that enriches the spectrum of the tubular bell. Additionally, their relationship
to the (virtual) fundamental forms an interval close to a minor 3rd, and evokes the
sound of a bell. Finally, the very low sound is simply an attack transient that is weak
and resonates only briefly—transients of this kind are common in orchestral
percussion sounds. With this tubular bell, though the composer writes a C, listeners
will, in fact, hear all sorts of things—a D-three-quarter-tone, a C an octave higher
than the written note, etc. If you double the tubular bell with another instrument,
why not double it at the higher octave or even with the minor-major third—the
D¾#? Obviously, with real tubular bells, things are not quite so simple, the spectra
detected by the computer in the bell’s spectrum. However, certain partials are
hardly audible. The low sounds probably represent a sort of attack transient—like
the one we saw in the tubular bell spectrum. By using ‘Terhardt’s algorithm’,9 we
will reduce this analysis to only the sounds that are perceptually important
(Figure 8b).
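Terhardt’s actual algorithm weighs the audibility of each partial against masking by its neighbours, and is considerably more elaborate than anything shown here. Purely to illustrate the idea of a perceptual reduction, one might prune partials that are masked by a much louder close neighbour; every threshold below is invented for the example:

```python
# Crude stand-in for a perceptual reduction (NOT Terhardt's real algorithm):
# drop any partial that sits more than margin_db below the loudest partial
# within window_hz of it. All numeric thresholds are illustrative.
def prune_masked(partials, margin_db=12.0, window_hz=60.0):
    """partials: list of (freq_hz, level_db) pairs; return the kept pairs."""
    kept = []
    for f, a in partials:
        neighbours = [b for g, b in partials if abs(g - f) <= window_hz]
        if a >= max(neighbours) - margin_db:
            kept.append((f, a))
    return kept

# A quiet partial crowded next to a loud one is removed; an isolated
# partial survives even at a modest level:
reduced = prune_masked([(100.0, -10.0), (130.0, -30.0), (400.0, -20.0)])
```

The point of any such reduction is the same as in the text: to keep only the components that actually contribute to what is heard, before using the analysis compositionally.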
mentioned earlier). This one analysis (after various operations including filtering,
transposition, modification of intensity envelopes, etc.) will allow the creation of the
entire palette of synthetic sounds used in the piece.
The bell’s spectrum also serves as a formal model. Various pitches, selected from
the analysis, articulate the sections of the piece: each of the eight sections uses one of
these pitches (plus the virtual F) as harmonic pivot (Figure 10).
The length of each section is inversely proportional to its harmonic rank in the bell
spectrum (the rank being the ratio of the partial’s frequency to the fundamental
frequency). More precisely, the length of each section (in seconds) is equal to 200
divided by the ratio of the pivot pitch’s frequency to the frequency of the
fundamental (C3). This gives the following list of durations: 100, 33, 75, 37, 50, 30, 84
and 200.
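The rule is simple enough to verify directly. The pivot ratios below are back-computed from the published durations (they are not given in the text), purely to show the relationship duration = 200 / ratio:

```python
# Section lengths in Harvey's scheme: inversely proportional to the pivot
# partial's frequency ratio over the fundamental (C3), scaled by 200.
def section_durations(pivot_ratios, constant=200.0):
    """Duration in seconds of each section, rounded to the nearest second."""
    return [round(constant / r) for r in pivot_ratios]

# Ratios back-derived from the published durations, for illustration only:
ratios = [2.0, 200 / 33, 8 / 3, 200 / 37, 4.0, 20 / 3, 200 / 84, 1.0]
print(section_durations(ratios))  # [100, 33, 75, 37, 50, 30, 84, 200]
```

Higher pivot partials therefore get shorter sections, so the large-scale form mirrors the bell spectrum itself: a long ‘fundamental’ section and progressively compressed ‘upper-partial’ sections.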
A child’s voice is also heard in this piece. The boy soprano sings the Latin words
engraved on the bell (mortuos plango, etc.). Harvey introduces a relation between the
pivot pitches and the colours of the vowels heard in each section. When the pivot
pitch is high, vowels that have high formants like ‘ee’ (as in free) or ‘ae’ (as in play)
are used most often and when the pivot pitch is low, like at the end, one will often
hears ‘oo’ (as in you) and ‘o’ (as in hope). Finally rhythmic pulsations—analogous to
internal beating of the bell—animate each section. The speed of these pulsations is
also proportional to the frequencies of the pivot pitches, according to the following
relationship (results in pulses per second, Hz):
functionality close to tonal music, in spite of the fact that the music is by no means
‘tonal’.
These calculations are obviously very simple, at least in relation to the resultant
pitches (the corresponding calculation for the intensity of each component is much
more complicated). On the other hand, it is a bit more complex from the musical
point of view, since the calculations are realized in hertz and must be transformed
from frequencies into musical pitches (approximating them to the closest usable
musical note). If working with quarter-tones, it is necessary to look for the quarter-
tone closest to the calculated frequencies. Figure 11 shows the first orchestral
aggregate of Gondwana.
The carrier is a G, the modulator is a G#. When the index is equal to 1, one obtains
two resultant sounds: D¼#5 and F#3; when it is equal to two, one obtains G¼#5 and
an F# too low to be heard (which will be suppressed), etc. The aggregates in Figure 12
are constructed with other carriers—A, B, D, F#, successively, while the modulator
stays fixed on G#. This gives us the series of aggregates shown in Figure 12b.
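The arithmetic behind these aggregates is just sums and differences, carrier ± index × modulator, discarding any result too low to be heard. In the sketch below the carrier and modulator frequencies (G3 = 196 Hz, G#2 = 103.8 Hz) are my guesses at the registers involved; Murail’s actual octaves may differ:

```python
# Frequency-modulation aggregate as described for Gondwana: the sum and
# difference tones carrier ± i * modulator for indices 1..max_index.
# Carrier/modulator frequencies here are illustrative, not Murail's values.
def fm_aggregate(carrier_hz, modulator_hz, max_index, floor_hz=20.0):
    """Return the audible resultant frequencies, sorted, rounded to 0.1 Hz."""
    pitches = []
    for i in range(1, max_index + 1):
        for f in (carrier_hz + i * modulator_hz, carrier_hz - i * modulator_hz):
            if f >= floor_hz:  # suppress sounds too low to be heard
                pitches.append(round(f, 1))
    return sorted(pitches)

aggregate = fm_aggregate(196.0, 103.8, 4)
```

In a final step (not shown) each frequency would be approximated to the nearest quarter-tone for the orchestral notation, as described earlier.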
This progression is organized in order of increasing harmonicity. In effect, a direct
correspondence exists between the more or less consonant or dissonant character of
the interval between the carrier and the modulator and the more or less harmonic or
inharmonic result of the modulation. Thus, the first aggregate, based on the
dissonant interval G#–G, is very inharmonic. Then, as the intervals formed by the
carrier and modulator become increasingly consonant, the orchestral aggregates
progress towards harmonicity. The last aggregate of this section—towards which the
entire progression is oriented—does not in fact correspond to a frequency
modulation spectrum, but to an incomplete double harmonic spectrum, based on
the last two sounds of the modulator-carrier pair (G#–F#), each transposed one
octave lower (see Figure 13).
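A double harmonic spectrum of this kind can be modelled as the union of two harmonic series. The fundamentals below (G#1 at about 51.9 Hz and F#1 at about 46.2 Hz) are assumed for illustration, and no partials are filtered out here, whereas Murail’s spectrum is described as incomplete:

```python
# Union of two harmonic series: a simple model of a "double harmonic
# spectrum". The two fundamentals are assumed values (roughly G#1 and F#1);
# which partials to keep or omit remains a compositional choice.
def double_harmonic_spectrum(f1_hz, f2_hz, n_harmonics=10):
    """Return the merged, sorted partials of two harmonic series."""
    partials = {round(k * f, 1) for f in (f1_hz, f2_hz)
                for k in range(1, n_harmonics + 1)}
    return sorted(partials)

merged = double_harmonic_spectrum(51.9, 46.2, 4)
```

Because every component obeys a harmonic rule over one of the two fundamentals, such an aggregate fuses far more readily than an FM spectrum of similar density, which is what lets it serve as the harmonic goal of the progression.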
All of these aggregates seem quite complex to the eye, but to the ear they are less
complex than one might imagine. In effect, whether they come from the results of a
frequency modulation or a harmonic series, they share the ability to create a certain
degree of fusion among their components. This fusion is due to the very precise
The technique of frequency modulation, which was used to build block structures
(large harmony–timbre aggregates) in this first section of Gondwana, is also used
to create various other forms and contours in other passages of the piece. For
example, in section F, pitches created through frequency modulation will produce
sets of harmonic-melodic structures, sorts of ‘fan-shaped’ contours. A central
frequency, C¼#4, a remnant of the preceding process, becomes the carrier. The
modulator, very small at first, increases progressively as the index of modulation
increases. Instead of sounding all together, the pairs of resultant sounds (each
‘pair’ consists of an additional sound and a differential sound) enter one after the
other. This creates the effect of ‘fanning’ around a central frequency, like waves
breaking on the shore. This effect is similar to the one produced by progressively
raising the intensity of the modulator while synthesizing frequency modulated
sounds in an electronic music studio. The first waves present a very small
frequency interval (owing to the small modulator and low index), then a process
begins to manifest itself. This process grows and spreads until the contours amply
fill out the full tessitura of the orchestra. These wave-like contours are played by
the oboes, English horns and bassoons: the idea was to highlight these contours—
hence the choice of instruments with very rich timbres that stand out from the
resonance, played by the brass and strings, like the effect of a piano’s sustain
pedal applied to the orchestra.
Figure 15 shows the first four and last two ‘waves’ of frequency modulation in
section F. There are very narrow intervals at the beginning—almost like glissandi
around the carrier—and large sweeps at the end. In the final orchestration, the
approximation was often made to the nearest semitone because the passages had to
be played so rapidly. In the last two waves, the sounds that are too low have been
eliminated. You will notice that the lower line of these last two waves starts off
descending, like in the other waves, but then rises up again. This is called a foldover
effect: for high values of the modulation index, the resultant differential (c − i*m)
becomes negative (because i*m > c). A ‘negative’ frequency obviously cannot really
exist—at least not in the universe we know. Therefore, we can simply ignore the
‘minus’ sign (in reality the ‘negative’ sign of the frequency is manifested as an
inversion of phase, which does not concern us here). So the differential frequencies
start to increase again once i*m becomes greater than c and end up interspersed
within the additional sounds. This phenomenon considerably enriches the harmonic
or timbral texture, and is often sought after in synthesis by frequency modulation.
The aggregates in the first section of Gondwana contain a very strong foldover effect.
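The foldover itself can be shown in a few lines: once i*m exceeds c, the difference tone reappears as the absolute value of c − i*m (the sign change being only a phase inversion). The frequencies below are illustrative:

```python
# Foldover of FM difference tones: for i*m > c the "negative" frequency
# folds back up as |c - i*m|. Carrier and modulator values are illustrative.
def difference_tones(carrier_hz, modulator_hz, max_index):
    """The difference-tone series for indices 1..max_index, with foldover."""
    return [abs(carrier_hz - i * modulator_hz) for i in range(1, max_index + 1)]

# With c = 300 Hz and m = 130 Hz the series descends, passes through its
# minimum, then rises again, interleaving with the summation tones:
print(difference_tones(300.0, 130.0, 4))  # [170.0, 40.0, 90.0, 220.0]
```

The rebound visible in the last two waves of Figure 15 is exactly this: the lower line descends, touches bottom, and climbs back up into the region of the additional sounds.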
Let’s continue our study of the concept of models—in particular, the notion of
instrumental timbre as a model—by examining another piece: Désintégrations. This
piece both allows us to study various processes and to begin speaking about the role of
the computer in musical composition.
We are now going to create aggregates by intuitively selecting certain pitches from
this spectrum (which was already reduced to its principal formants). For example, the
first aggregate contains the harmonics 7, 11, 13, 20, 29 and 36 (Figure 18).
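Selecting harmonics by rank is a one-line calculation once a fundamental is fixed. The fundamental used below (27.5 Hz, a low A) is an assumed value chosen for illustration, not the actual source spectrum of Désintégrations:

```python
# Building an aggregate by picking harmonic ranks from a spectrum.
# The 27.5 Hz fundamental (a low A) is assumed, purely for illustration.
def select_harmonics(fundamental_hz, ranks):
    """Return the frequencies of the chosen harmonic ranks."""
    return [rank * fundamental_hz for rank in ranks]

aggregate = select_harmonics(27.5, [7, 11, 13, 20, 29, 36])
# These frequencies can either feed additive synthesis directly or be
# approximated to quarter-tones for the instrumental parts.
```

The same list of frequencies thus serves double duty, which is what makes the term ‘aggregate’ more apt than ‘chord’: as synthesis data it fuses into a timbre, as notation it reads as a harmony.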
Here again, I prefer to speak of an aggregate rather than a chord, because these
combinations of sounds serve equally well in the synthesis of electronic sonorities as
they do in writing instrumental parts. Since the electronic synthesis adds together
very pure, quasi-sinusoidal sounds, the partials tend to fuse strongly. Thus, the
resultant aggregate does not really sound like a chord, but like a single perceptual
object, a timbre. On the other hand, the instrumental orchestration of this object
creates a sonority more like what is usually called a ‘harmony’, owing to the
individual richness of each of the instruments used (the presence of harmonics in the
instrumental sound, the complex envelope of the sound, the vibrato, etc.). The global
result is nevertheless a bit ambiguous, since the electronic sounds and instrumental
harmonies are heard simultaneously. Once again, the most accurate descriptor may
be the hybrid term ‘harmony-timbre’.
instant 0; it is followed by an event b that lasts 3.2 seconds and that begins at instant 3
seconds, then by an event a that occurs at the instant 3.5 seconds—the Latin letter
indicating that this event belongs to the other spectral series—etc.
To create the feeling of progressive slowing, I used curves, not straight lines. It
would have been simpler to make straight lines between the points of departure and
arrival; however, the resulting progression would have been linear. Whereas,
observation of instrumental reality shows that instrumentalists, when asked to play a
rallentando, will intuitively produce a logarithmic slowing of event durations—
not a linear progression. A linear progression (of the ‘chromatic durations’ variety)
would not create a ‘natural’ impression; rather, it would create a constrained effect,
which sounds awkward to the ear. Here, we are jumping ahead into a new subject:
algorithm and intuition. The way I’m using the word ‘intuition’ amounts to a list of
intentions: ‘My first object will not last for very long; my last object will last 14
seconds; the process will be organized as a progressive slowing which should last
between 1 and 2 minutes.’ ‘Intuition’ would also include observing how musicians
and listeners react to this series of events organized in time—this is a sort of
experimenting with the musical ‘intuitions’ of those who will be participants in the
musical act (the listener and the performer). The algorithm itself is simply a series of
operations—logical or arithmetical—which allow a result, based upon a set of input
data (parameters), to be calculated. In this specific case, the algorithm allows me to
create the optimal curve for this rallentando process and also to calculate the
intermediate steps of this process. To define an algorithm, one must create a model of
the phenomenon one seeks to recreate: in this case, the manner in which a musician
performs a rallentando. This model allows a curve to be calculated—a mathematical
function, whose starting parameters are the intuitive estimation of the durations at
the outset and the arrival of the process (and possibly also a timeframe, which will
help the process fit within the global form).
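Such a model can be as simple as durations that grow geometrically (that is, linearly in the logarithm) between an initial and a final value. A sketch, with illustrative start and end durations:

```python
# Rallentando model along the lines described: event durations interpolated
# exponentially (geometric growth) from a first to a last value, so that
# successive durations keep a constant ratio. Start/end values illustrative.
def rallentando_durations(first_s, last_s, n_events):
    """Return n_events durations growing geometrically from first_s to last_s."""
    ratio = (last_s / first_s) ** (1.0 / (n_events - 1))
    return [first_s * ratio ** k for k in range(n_events)]

durations = rallentando_durations(0.5, 14.0, 10)
# The running sum of these durations gives each event's onset time; the
# constant ratio between steps is what the ear accepts as "natural" slowing.
```

With the endpoints and the number of events as its only parameters, the same function can be rerun instantly while adjusting those intuitive estimates, which is exactly the kind of trial-and-error the text describes.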
I started thinking about time in terms of process, durations and functions before I
had access to computational techniques—techniques that greatly facilitate the
algorithmic calculations this way of working often requires. Even in Désintégrations—where the
computer was used as a sound synthesizer, a means of creating some of the formal
structures and carrying out certain spectral or temporal calculations—I turned to
empirical and graphic solutions. ‘Computer-aided composition’ programs did not yet
exist; thus, it was cumbersome to address these musical problems with environments
that were not very ‘user-friendly’. Moreover, my computer skills were still
rudimentary. This led to the use of graphs like the example we just saw. In that
specific case, I had to proceed by successive approximations, through ‘trial and error’,
modifying the initial parameters, etc. The constraints I had set myself were numerous
and sometimes contradictory: how to make two curves converge in a harmonious
manner, while still creating two convincing continuous rallentandi, and making all of
this occur in a set period of time. The computer would have been very helpful, if I
could have used it: computers can very rapidly calculate and simulate various
situations. It’s easy to start over and try, try again until a satisfactory result has been
found. The ‘algorithmic’ techniques of computer music need not necessarily be used
to create a predestined, automatically calculated, result. On the contrary, they can
allow the exploration of a larger field of possibilities; thereby heightening the freedom
of the composer—not limiting it.
Let’s return to the two rallentando curves: what makes them interesting is their
superposition. Instead of a simple rallentando, the alternation of points situated on each
curve creates an unexpected and unstable rhythmic progression; all the while conserving
the global impression of slowing down, since the durations (on average) are increasingly
long. The process is ‘directed’ (listeners perceive it as ‘going towards something’), but at
the same time this process still produces unpredictable rhythmic configurations. This is
a very simple example of the interplay of predictability and unpredictability: my feeling
is that this interplay is one of the central issues in musical composition. On the one
hand, a work needs to be part of a sufficiently predictable universe that the listener can
perceive continuity and coherence in the musical discourse; however, at the same time,
if the discourse is too predictable the work rapidly becomes uninteresting. Structural
predictability needs to be contradicted constantly by some type of unpredictability
within the discourse. However, it is also essential that this surprise, this unexpected
aspect, integrates logically and in a coherent fashion, a posteriori, over the course of the
form. The shock, the surprise, even the incongruous, should become explicable, should
reintegrate itself as a necessary element of the discourse (in hindsight). If this does not
happen, the unexpected becomes simply arbitrary and the effect of surprise will be
dulled on subsequent hearings. A totally unpredictable discourse does not hold a
listener’s attention any better than a totally predictable discourse. It is ironic that
extreme randomness yields the same sensation of total unpredictability for a listener as
does the total organization of the discourse—like the principles experimented with in
‘algorithmic’ music or in ‘integral serial’ music. It turns out that perpetual surprise is no
longer surprising, and unpredictability can become too predictable to be interesting.
The preceding example illustrates the way I conceive of temporal control. I do not
work with durations by combining small elements, pulsations or rhythmic
microstructures; on the contrary, I take a global point of view, conceiving the totality
Figure 21 The three last aggregates of section I. Note: The aggregates result from the
superposition of spectra built on A# and C#. The harmonics transposed down by one or
more octaves are boxed.
the strings move to ordinario playing. When the spectra of the aggregates have become still richer, it's the oboe's turn to enter. The oboe plays in the high register (C¼#6) at first because, while the low register of the oboe has a very rich spectrum, its high register (in the region of C6) has a much simpler one, strongly centred on the fundamental—resembling quite closely, in fact, the clarinet's or flute's spectrum.10
Once the two spectral series collide, creating really rich spectra, the other
instruments enter (bassoon, brass). Obviously, these instruments, with very rich
spectra, add their own harmonics to the theoretical aggregates and could confuse the
sonic result. However, at this point in the process the added richness only reinforces
the spectral complexity that has been attained. Even better, I can draw on the added
spectral richness. Let’s take the example of the aggregate in bar 34, the penultimate
aggregate of this harmonic process (Figure 23). The horns and the double reeds play
five of the aggregate’s central pitches. They are playing forte with accents so they add
their own harmonics powerfully. The tape part takes up the spectra of these five
instruments and progressively unfurls their additional harmonics, all the way up to
the 23rd partial. It is almost as if we were applying a gain filter tuned to higher and
higher frequencies in the instrumental sounds. This process makes clearly audible the
harmonics of harmonics.
At the end of this sonic spiral, the very high harmonics form a very brilliant
‘cluster’. The strings then take up certain pitches of the cluster—in regular sounds or
in harmonics—and a high cymbal joins the strings, with the hope that the frequency
band of the cymbal will be in the same region as that of the synthetic sounds.
Unfortunately, this is not always the case. The imprecise definitions of percussion
instruments are a recurring problem. In the score, when requesting a high or low
cymbal, a high or low tam-tam, it’s never clear just what kind of sound will be
produced. If one's only concern is a colouristic or emotional effect, this is not a big
problem. However, if one is looking for a more precise effect, like the one described
here (an effect of integration between instrumental and electronic sounds), the
problem becomes crucial. Just as a microphone is defined by its frequency response
curve, it would be useful for a cymbal to be delivered with its spectrogram and
defined by a frequency band, rather than the impossibly vague descriptions ‘high’,
‘medium’, ‘low’, etc.
Synchronization between the tape and the instruments is achieved through a ‘click track’: ‘clicks’ are placed on one
track of the multichannel tape and the conductor hears these clicks through
headphones. The clicks accurately reproduce the measures and the beats of the score.
This technique allows near-perfect synchronization, but it takes a lot of interpreta-
tional liberty away from the conductor: he absolutely cannot change the tempi.11
steps, and the result is used to generate successive frequency shifts of our original aggregate, the result shown in Figure 26 is obtained.
The overall process clearly moves from harmonicity to inharmonicity, but the
intermediate results are not necessarily what we wanted to obtain: the intervallic
configuration of these aggregates either can reinforce or contradict the global process
(note the splendid F major triad in this example). Of course, the unanticipated results
could be adjusted to break up a process that is too predictable. As calculated, this
harmonic succession seems somewhat incoherent: should the algorithm be changed?
It is hard to imagine a calculation that could resolve this type of question; only the
intuition and craft of the composer will ensure that good decisions are made. The
solution, in this case, was to calculate many more steps than needed (25) and to
choose from among those steps in order to create a succession that seemed to make
harmonic sense. The progression that results from this is slightly irregular and less
smoothly progressive than the previous sequence; however, in the end it works much
better (see Figure 27).
decorative role and it does not merely serve as a colouration of time as it passes (we
will come back to this later).
Now we arrive at section X of Désintégrations (Figure 33). The starting point for
this section is a low E played by the trombone. The tape takes up the trombone’s
harmonics, calling attention to them through successive entries. It then distorts the
trombone’s spectrum by progressively displacing the partials (in fact the fundamental
of the tape’s spectrum is the E an octave lower than the trombone’s—as if the
trombone were playing the 2nd harmonic). To illustrate what is happening, let’s
choose the 12th harmonic as a point of reference: in a harmonic spectrum, it should
be a B4. For the first step of the distortion process, this 12th harmonic is raised by
one quarter-tone to B¼#4. This operation is carried out eight successive times, so
that at the end of the process, the 12th harmonic has been raised from B4 to D#5 by
steps of a quarter-tone. Obviously, all of the other harmonics are recalculated as a
function of this reference displacement.
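One standard spectral-music model of this kind of distortion raises every partial by a power law, with the exponent chosen so that the reference partial lands exactly where we want it. Whether this is the precise formula used in Désintégrations is an assumption; the fundamental here is only an illustrative value:

```python
import math

def distorted_spectrum(f0, n_partials, ref_partial, shift_quarter_tones):
    """Harmonic spectrum distorted so that ref_partial is raised by the
    given number of quarter-tones; every partial k is recomputed as
    f0 * k**e, so the whole spectrum follows the reference displacement."""
    # target position of the reference partial, as a multiple of f0
    target = ref_partial * 2 ** (shift_quarter_tones / 24)
    e = math.log(target) / math.log(ref_partial)
    return [f0 * k ** e for k in range(1, n_partials + 1)]

E1 = 41.2  # approx. the E an octave below the trombone's low E (illustrative)
# the initial (harmonic) spectrum plus the eight successive distortions
steps = [distorted_spectrum(E1, 16, 12, q) for q in range(9)]
```

With `shift_quarter_tones = 0` the exponent is 1 and the spectrum is purely harmonic; each further step bends all the partials upward together.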
the 3rd harmonic rises by half-steps. This creates an upward frequency slide and an
effect of compression simultaneously. The first step in realizing this progression
consisted of calculating, in order, the six different distortion spectra—to which the
initial spectrum was also added. If I had stopped there, the result would have been
an extremely predictable process, like the one in section X. I sought this
predictability in section X because it provoked a high degree of tension that could
only be resolved through a ‘catastrophe’—a sort of explosion (the opening of
section XI). In the passage we are interested in here, I needed a more static effect
that would form a sort of ‘climax’ for the piece. It was, therefore, impossible to give
this process such a strong orientation. I needed to disrupt the progression. I did
this, first of all, through local permutations: instead of presenting the distortions in
increasing order (1, 2, 3, 4, 5, 6, 7), they are used in the order 1, 4, 5, 2, 6, 3, 7.
While this change modifies the local progression, it preserves the global orientation.
These local permutations introduce ‘accidents’—fractures—that make the listening
experience much more interesting and thwart an excessive sensation of
predictability. I have often used this technique, which produces one of the major articulations of musical discourse: a dialectic between predictability and
unpredictability. To avoid the effect of tension (which occurs quite clearly in
section X, as a result of the great enlargement of the range), the aggregates in
section VIII are alternately enlarged or reduced through the addition or
subtraction of partials. The tape and the instruments realize these aggregates
simultaneously. Each one has a different duration, as a function of its contents (its
degree of distortion). There is also a sort of ‘spatial vibrato’ in the tape part—a
rapid forward-backward spatial movement. The frequency of this spatial vibrato is
also a function of the harmonic contents of the aggregates.14
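The claim that the local permutation preserves the global orientation can be checked numerically — a crude but telling test is that a least-squares line fitted through the permuted sequence still has a positive slope:

```python
# the reordering of the seven distortion steps given in the text
order = [1, 4, 5, 2, 6, 3, 7]

# best-fit (least-squares) slope through the permuted sequence: if it is
# positive, the sequence still drifts upward overall despite the local
# 'accidents' introduced by the permutation
n = len(order)
xs = list(range(1, n + 1))
mean_x = sum(xs) / n
mean_y = sum(order) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, order))
         / sum((x - mean_x) ** 2 for x in xs))
```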
The last aggregate of the very short section VIII ‘collapses’ abruptly into a dense
storm of sounds, marking the start of section IX. This is a sort of ‘chaos’, it creates an
impression of disorder that was, nonetheless, carefully constructed. Beginning with
maximum instability, the textures gradually organize themselves. Progressively, they
sharpen their focus around a low E in the trombone and thus arrive at the opening of
section X, which we have already discussed. This ‘downpour’ of sounds is the result of
virtual ring modulations between the low sounds played by the strings—which
progressively stabilize around the trombone’s E. The modulations were calculated
with the strings’ spectra in mind: each harmonic of the first sound interacts with each
harmonic of the second. In other words, if sound A possesses five significant
harmonics (A, 2A, 3A, 4A and 5A) and sound B has three harmonics (B, 2B and 3B),
the resultant modulations will be A + B, A – B, A + 2B, A – 2B, A + 3B, A – 3B, then
2A + B, 2A – B, 2A + 2B, 2A – 2B, etc. This produces a huge mass of resultant notes
(Figure 34). All the combinations between the pairs of low sounds are exploited as
showers of notes and not as synchronized spectra—as were the harmonics of the
instrumental sounds in section III, which created the clouds of high percussive bell
sounds. The storm is organized according to global gestures (descending lines)
modified by controlled randomness algorithms (which the computer took care of).
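The combination calculus just described is mechanical enough to sketch directly. The frequencies below are illustrative, not taken from the score; the cut-off for low difference tones follows the note to Figure 34:

```python
def ring_modulation(f_a, n_a, f_b, n_b, lowest=50.0):
    """All sum and difference tones between the first n_a harmonics of
    f_a and the first n_b harmonics of f_b — a virtual ring modulation
    of two harmonic sounds. Difference tones below `lowest` Hz are
    discarded, as in the figure's note."""
    tones = set()
    for i in range(1, n_a + 1):
        for j in range(1, n_b + 1):
            tones.add(i * f_a + j * f_b)       # sum tone, e.g. 2A + 3B
            diff = abs(i * f_a - j * f_b)
            if diff >= lowest:                 # drop tones that are too low
                tones.add(diff)
    return sorted(tones)

# sound A with five harmonics, sound B with three, as in the text's example
shower = ring_modulation(100.0, 5, 130.0, 3)
```

Even these modest inputs yield a couple of dozen resultant frequencies, which is exactly why the texture reads as a "huge mass" of notes.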
Figure 34 Example of ring modulation, section IX. Note: Modulation between sound
‘A’, with six harmonics, and sound ‘B’, with five harmonics. The resultant sounds are
classified by harmonic level. The differential tones that were too low have been
eliminated.
we now call ‘algorithmic music’ (it must be said that this tendency has not really left
us very many masterpieces). This approach often turned out to be naïve and led to a
reduction in the complexity of the musical act, which was in effect a contradiction of
the initial postulates.
In hindsight, the principal critique of ‘algorithmic music’ is that musical
phenomena are not as easily reduced to a series of numbers (numerical data that
the computer can manipulate) as some have thought. Therefore, the goal of totally
controlling the form and content of a piece of music with computer algorithms is a
mirage. There is no automatic relationship between an algorithm and the perception
of the musical (or at least, the sonic) phenomenon generated by that algorithm.
Computer music research in the 1960s and 1970s moved on to concentrate more on
sound synthesis, a trend that was facilitated by the increasing power of computers.
However, this new focus on synthesis often led institutions and researchers to forget
the contributions computers could make to the work of composition proper. For
example, when I began working at IRCAM in 1981, I found a variety of synthesis
programs there, but not one program capable of assisting composers in their daily
work—not even the kind of elementary little programs that could perform small but
tedious tasks, like converting frequencies into musical notes and vice versa. During
that time, I decided that I had to develop some rudimentary programming skills,
which allowed me to write small personal programs for spectral calculations,
modulations and distortions, exploitation of analytical data, duration calculations,
etc.
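An "elementary little program" of the kind mentioned here — converting frequencies into note names and back — fits in a few lines. The MIDI-style note numbering is just a convenient labelling, not anything prescribed in the text:

```python
import math

NOTE_NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']

def freq_to_note(freq, a4=440.0):
    """Nearest equal-tempered note name for a frequency (A4 = 440 Hz)."""
    semitones = round(12 * math.log2(freq / a4)) + 69   # MIDI note number
    return f"{NOTE_NAMES[semitones % 12]}{semitones // 12 - 1}"

def note_to_freq(midi_number, a4=440.0):
    """Frequency in Hz of an equal-tempered MIDI note number."""
    return a4 * 2 ** ((midi_number - 69) / 12)
```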
The computer can help us express musical images. I see the act of composition as
a sort of mental projection: I imagine more or less complex musical situations in
which the details are not yet defined, then I try to realize them. To do this, one
must analyse and decompose the global nature of these musical situations. The
musical ideas must be reduced into components that are much simpler than the
original idea. Without adequate conceptual tools to realize this simplification and
reconstruction of the original musical image, the final result runs the risk of being
very far removed from the original conception. It is at this level that computers can
be useful. They allow us to keep the connecting thread between the original idea
and the final realization intact. They do this in two ways: first, the computer
accelerates the processes of decomposing and then recomposing the sonic image;
and, second, the computer can propose more refined solutions than those that we
might have intuitively chosen. This is, of course, due to the computer’s capacity for
performing complex calculations; however, it is also the result of a computer’s
ability rapidly to propose a multitude of different solutions—between which the
composer can choose. When working intuitively (with pencil and paper), by contrast, fewer possibilities can be imagined at one time, which encourages the composer to accept the first solution found—or to be content with a merely approximate realization.
The role we are defining for computer-aided composition is thus, in the end,
somewhat modest. We are not asking the computer to invent the global shape of a
piece, or to determine its large-scale form; we don’t even really expect it to create any
of the material. The computer’s role will be situated somewhere between these two
levels, as a mediator, or perhaps an intermediary. This is the perspective with which I
have created a certain number of computer tools for myself over the years. These
programs responded to precise compositional needs, and not to theoretical
considerations. My first programs worked on small personal computers; then I
collaborated on the completion of the program Patchwork at IRCAM.15 Patchwork
offers the advantage of being an environment where composers can easily create their
own algorithms, produce representations of the obtained results in musical notation,
and play these results via a MIDI interface.16
The ideas behind this sort of computer-aided composition are very different from
those traditionally associated with ‘algorithmic music’. Algorithmic music’s ideas
were most likely derived from the movement’s heritage in serial writing:
permutations, combinatorial operations, etc. A mechanistic or ‘algorithmic’
approach in that sphere actually pre-dates the development of computers. Let’s take
some examples from Messiaen (in whose music one would probably not, at first
fact was new) and complex—the manual calculations to realize them were, if not
complex, at least long and tedious. Once it became easy to create frequency
modulation spectra, either by programming them on synthesizers or by calculating
their contents with a computer, they lost some of their earlier magic: they became
well-known sonorities, and the great simplicity of the procedure that generates them
was revealed.
All the same, this development allowed me to concentrate on higher-level work—
on the musical discourse itself. Computers free me from all sorts of ‘accounting’
issues and allow me to focus my creative effort on what is really important. What
might previously have seemed like the ultimate goal of the work is no longer any
more than a point of departure. This ease with which the computer generates
material can give composers much more freedom to imagine, to let their intuitive
ideas fully ripen into the imagined musical realization. Paradoxically, algorithms can
liberate our intuitions.
the fermatas, and can repeat certain fragments, with the goal of letting the resonances
bloom or evaporate.
Since processes, global transformations of texture from one state to another,
underlie Territoires, the pianist must perform the very delicate task of creating these
progressive changes. It is not sufficient for the pianist to concentrate on any single
instant. The pianist must maintain the progressive evolution of a musical passage in
his memory: understanding the nature of the transformation and the objective
towards which it is aimed, in order to be able to guide these processes—which are
sometimes quite long (they can last four to five pages)—in a way that will clearly
recreate them for the listener. Examples of these processes are very gradual
accelerations or decelerations. The pianist must carefully control the slowing or
acceleration, so as not to risk arriving at the goal tempo prematurely, which would
create an undesired moment of tempo stasis. The same kind of planning ahead is, of
course, equally important for controlling dynamics. The complexity of the piano
writing grows greater when the piece arrives at junctures with superposed processes:
for example, one process is often abating while the next one is beginning to establish
itself. Another type of junction is created when musical material has transformed in
such a way that it becomes unrecognizable; then, from this resulting material, this
sort of residue, a new process begins, and so on. Processes overlap incessantly in this
piece, which makes it difficult to divide it into clear sections. In the score, rehearsal
letters mostly serve as reference points for the performer or for the analyst; however,
they do not necessarily correspond to marked caesuras for the listener.
Echoes
The first example17 that we will examine (page 7 of the score; Figure 35), uses echoes
as its model; however, this echo is a little unusual because it is combined with a
technique of harmonic resonance. The (rather simple) point of departure consists of
two intertwined melodies.
A bit later in this process, when the general dynamic level augments slightly, the
lower melody needs to be played slightly less loudly than the upper melody: this
allows the two melodic streams to be distinguished from each other. At the
beginning, the melodic fragments use very few notes and are confined to a restricted
range. Progressively, this range enlarges, the number of notes increases, and the
contours become more complicated. Let’s imagine building a melody with neumes. A
neume is a very simple, very clearly shaped contour. Gregorian neumes consist of
contours using two, three or four notes. However, one can invent slightly more
elaborate neumes, which can be used as the elementary units of melodic fragments.
These melodic fragments will become increasingly complex if we place additional
neumes as substitutes for some of the notes within the neumes already used. Today,
we would describe the resulting melodies as ‘fractal’.
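The neume-substitution idea can be sketched directly. Pitches here are abstract scale steps and the particular neume is invented for illustration; the point is only the recursive mechanism:

```python
def elaborate(melody, neume, positions):
    """Replace the notes at the given indices with the neume transposed
    so that it starts on the replaced note; repeated application makes
    the line more and more convoluted ('fractal', as the text puts it)."""
    out = []
    for i, pitch in enumerate(melody):
        if i in positions:
            out.extend(pitch + step for step in neume)
        else:
            out.append(pitch)
    return out

neume = [0, 2, 1]                     # a simple three-note contour
line = [0, 3, 2]
line = elaborate(line, neume, {1})    # -> [0, 3, 5, 4, 2]
line = elaborate(line, neume, {0, 3})
```

Each pass widens the range and complicates the contour while every local shape remains a recognizable neume.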
The two melodic streams created this way are then reflected in echoes. The model
for this process was more the electronic echo chamber than the natural phenomenon
of echo. Moreover, the composing of this sort of process allows some liberties to be
taken. Rhythmic liberties: instead of being regular, the repetitions undergo
progressive deceleration. Modification of timbre: in natural echo and analogue
electronic echoes (like the ones in use at the time this piece was composed), the
repetitions are filtered, causing the upper harmonics to disappear progressively. In
this case, however, I use my compositional liberty to produce the inverse effect: more
and more harmonics appear over the course of the repetitions. To avoid leaving the
audible domain (and the keyboard), the highest of these harmonics are transposed
down one or more octaves; thus an echo—through this process—can sometimes
appear in a lower register than the original note (Figure 36).
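The echo principle — repetitions that slow down, gain harmonics, and fold excess harmonics down by octaves — can be sketched as follows. This is a free illustration of the principle, not a reconstruction of the score; the ceiling frequency approximates the piano's top C:

```python
def echo_series(fundamental, n_echoes, base_gap, slow_factor, top=4186.0):
    """Each echo arrives later than the last (progressive deceleration)
    and carries one more harmonic; harmonics above `top` Hz are folded
    down by octaves, so an echo can land below the original note."""
    events, t, gap = [], 0.0, base_gap
    for e in range(1, n_echoes + 1):
        freqs = []
        for k in range(1, e + 1):
            f = fundamental * k
            while f > top:
                f /= 2              # transpose down one or more octaves
            freqs.append(f)
        events.append((round(t, 3), freqs))
        t += gap
        gap *= slow_factor          # repetitions slow instead of staying regular
    return events

ev = echo_series(1000.0, 6, 0.25, 1.3)
```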
To implement this principle, I built a sort of grid where melodies appeared with
their echoes—according to the system of rhythmic slowing. Each echo has more and
more harmonics, transposed if necessary. The mass of pitches I ended up with,
obviously, was too large to be playable on the piano. Therefore, I intuitively selected
the elements that seemed most interesting to me, and that created musical structures
that were playable.
Towards the end of this passage, a polarization arises for quite a while around the
note C; it then dies away very gradually, creating confusion between the echoes and
the melodies from which they originate (page 11, 2nd system). The resulting mixture
of melodies and echoes transforms, ‘congeals’, into a sort of rhythmic swaying (page
12). The idea behind this whole section can be seen as a progressive proliferation of
pitches generated through the accumulation of echoes, leading to a point where the
original structures become unrecognizable. After only a few pages, the music seems a
bit anarchic, a sort of ‘organized chaos’. Inside this chaotic system appear rhythmic
polarizations and resonant frequencies, such as the C mentioned above (these louder
resonant modes in the midst of saturated sonic spaces are a real acoustic
phenomenon that is easily perceived in concert halls, for example). The music
finishes by contracting back on itself, around the poles of frequential and temporal
attraction—a bit like a black hole, where matter folds back on itself. At the end of this
process of proliferation then coagulation, the music settles on semi-repetitive
formulas, with the left hand and right hand moving independently. This type of
procedure can be found again and again throughout the whole piece: there is a
constant oscillation between semi-regular pulsations and rhythmic configurations
that appear very ‘chaotic’.
The letter ‘R’, used as a dynamic, signifies ‘do not play louder than the resonance’;
this allows the resonance of the note to be sustained, without hearing the note struck.
The G then starts to crescendo and progressively emerges.
A similar phenomenon is produced on page 17, where successively C#4, G3, then
D5 emerge softly from the resonance of a low ostinato and then congeal in a repeated
chord. Before arriving at letter E, the ad libitum repetition of the chord G–C#–D
allows the sonority to ‘deflate’—arriving at ppp. Therefore, letter E does not so much
mark a new section as it does a point of inflection (the moment where the curve
changes direction, from increasing to decreasing or the inverse). The bass sounds
(vestiges of page 16) are held over and then disappear progressively. The effect is as if
some contrabasses of the orchestra were performing a gradual diminuendo to silence.
Another example that makes use of the piano’s natural resonance occurs at the end
of the piece, where the three sounds F1, D#4 and C#7 are repeated for quite a while.
The harmonics of F1 are progressively amplified—affecting, among others, the 7th
harmonic (a slightly lowered D#4). This creates a beating between the overtone of
F—the lowered D#—and the equal-tempered D# played directly by the pianist. This
beating causes the D# to start vibrating in a very special way, which colours the entire
end of the piece.
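The beating described here can be estimated numerically: the 7th harmonic lies about a third of a semitone below the tempered note, so the two frequencies beat at a few cycles per second. Standard equal-tempered frequencies are used; the MIDI numbering is only a convenient labelling:

```python
def equal_tempered(midi, a4=440.0):
    """Frequency of an equal-tempered MIDI note (A4 = 69 = 440 Hz)."""
    return a4 * 2 ** ((midi - 69) / 12)

F1 = equal_tempered(29)      # F1, approx. 43.65 Hz
seventh = 7 * F1             # its 7th harmonic: a slightly flat D#4/Eb4
ds4 = equal_tempered(63)     # the equal-tempered D#4 the pianist plays
beat_rate = ds4 - seventh    # beats per second between the two
```

A beat rate around five or six per second is slow enough to be heard as the special "vibration" of the D# that colours the ending.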
Ring Modulation
Let’s return to section E. It starts on the chord G–C#–D. These three pitches are used
as sound generators for a ring modulation. The intervals contained within this chord
have certain specific characteristics. The interval C#–D is ‘dissonant’, in the
traditional sense, but it is softened by the G: the perfect fifth G–D has a consonant
harmonic nature, while the interval G–C# is somewhere between dissonance and
consonance.18
We saw above how to calculate the ring modulation of two sounds, along with
their harmonics. Here, I created imaginary modulations between all three sounds. If
we designate their frequencies with the letters a, b and c, we will calculate the
interactions between a and b, between b and c, between a and c, sometimes between
a, b and c, and between the harmonics of these sounds (up to the 5th harmonic). The
obtained result constitutes a vast table of frequencies in which we can trace a kind of
path, by first concentrating on the simplest combinations (between a, b and c), then
by introducing the second harmonics, that is 2a, 2b, 2c, then the third 3a, 3b, 3c, etc.
By exploring more and more harmonics and their combinations, we move away, little
by little, from the initial anchoring to G, C#, D—and this introduces considerable
changes in the musical flow (see Figure 39).
The chord written in small notes on the 4th beat of Figure 40 contains three
‘additional’ sounds (plus some harmonic and inharmonic partials). The dynamic
marking 4R indicates that the pianist must play slightly louder than the current level
of resonances. The lower chord on the 8th beat helps make the ‘differential’ sounds
audible.
At the end of the section, the generator chord progressively disappears. The whole
reservoir of possible notes has already been used and now a ‘filtering’ effect appears:
the lowest pitches are eliminated. The cut-off frequency of the ‘filter’ slowly rises,
until the sonic texture is reduced to a high trill C7–Db7.
Another example of virtual ring modulation occurs at the end of the piece. At letter
G (page 30), several different musics are superimposed. The first element, low
resonances, a reminder of the music that preceded it (a sort of ‘stormy’ music, made
with percussive gestures and trills in the low register of the piano), will be heard until
the end of the piece. However, it will grow gradually simpler as it condenses onto a
single frequency (F1). The second element, a sequence of sounds in the middle
register centred around C#4 (this C# is also inherited from the previous section),
smoothly changes its polarity: D# is substituted for the C# as a pole of attraction and
ends up attracting all of the nearby sounds to itself. The third element: a progression
of ascending movements that are progressively drawn towards C#7. These three
sounds (F1, D#4, C#7) then start to interact, in the same way I described above (see
Figure 41). However, the final result is quite different, because these three pitches are, in fact, part of the same harmonic spectrum—or at least as close to it as is possible with equal-tempered notes. The D#4 is very close to the 7th harmonic of F1 (we saw before that this creates beating with the exact—real—harmonic of F); the C#7 is the 7th harmonic of D#4, or if you prefer, the 49th harmonic of F1.19 The resultant sounds of
a ring modulation whose inputs are part of the same harmonic spectrum will
themselves be part of this harmonic spectrum. The pitches obtained in this section
are, therefore, close to the harmonic spectrum of F. The modulation enriches the
global timbre but does not produce the ‘anarchic’ effect of proliferation there was in
section E.
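The closure property invoked here — resultants of inputs drawn from one harmonic spectrum stay inside that spectrum — is easy to verify, since sums and differences of integer multiples of F are again integer multiples of F. The fundamental below is an illustrative value:

```python
F = 43.65                          # approx. F1
inputs = [F * 1, F * 7, F * 49]    # F1, ~D#4 (7th harmonic), ~C#7 (49th)

products = set()
for a in inputs:
    for b in inputs:
        if a != b:
            products.add(a + b)        # sum tone
            if a > b:
                products.add(a - b)    # difference tone

# every resultant is an integer multiple of F: it stays inside F's spectrum
multiples = sorted(round(p / F) for p in products)
```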
Musical Atoms
The organization of musical discourse, traditionally, has used notes as the point of
departure. These notes are assembled either horizontally into melodic lines or
vertically into chords; melodic lines and chords are then superposed to create
polyphony or an accompanied melody. This traditional conception (still very much
present in academic teaching) is, in fact, very limited. Music can be conceived in
categories that are far vaster; moreover, this new sort of conception is not in
conflict with the traditional approach, but rather incorporates it. Let’s return to the
notion of a ‘note’: notes are normally considered the smallest element of musical
discourse, the musical ‘atom’. In the etymological sense, ‘atom’ means ‘indivisible
element’—an object that one cannot divide into smaller elements. Moreover, the
very notion of a note is actually quite ambiguous: the term is simultaneously used
to refer to a sonic event (a ‘musical sound’) and a symbolic object (the ‘note’ that
appears in the score).
However, the perceptual atom is only rarely the musical note. Perception is
interested in much larger objects, in structured ensembles of sounds (e.g. a melodic
sequence of notes). Additionally, we cannot say that the musical note (seen as a
sound) is indivisible; just as, since Niels Bohr, the atom is no longer truly an ‘atom’, since it can be broken down into smaller particles. If the atom can be compared to a
miniature solar system, similarly, a musical sound is a complex world into which we
can enter and within which we can explore.
We saw that spectral analysis allowed us to dissociate complex sounds into their
elementary components—with different frequencies, amplitudes and phases. Each
sound has a specific dynamic evolution along with attack and extinction transients;
and, in fact, each of the sound’s components has its own, independent dynamic
evolution. This huge internal richness is what makes certain sounds particularly
interesting to human perception. Thus, we arrive at a two-part pronouncement. First,
the musical note (seen as a sound) can, in fact, be broken down into very much
smaller elements. Second, more often than not, the note is not in and of itself an
object of perception: it is usually only one element within a much larger perceptual
group. Therefore, a note is just one level within a hierarchy of musical (perceptual)
structures.
Musical Objects
Many themes in music from the ‘classical’ period are built on very simple structures
like scales or arpeggios. That a theme includes, for example, the sequence C, Eb, G is
not really important. Even the fact that this sequence could help establish the key of C
minor is not essential. What is really important is for the listener to be able to
recognize this ‘arpeggio’ object itself: once learned, the sequence C, Eb, G will become
available for transformation later in the piece (e.g. through transpositions and
modulations to G, B, D or even Bb, E, G, Db, etc.). The harmonic colours and the
intervals will change, but all of these objects share a strong common identity. For
perception, what matters is the similarity of dynamic movement in ascending
arpeggios (Figure 42).
In computational language, we would say that each of these ‘arpeggio’ objects is an
‘instantiation’ of the same class, ‘arpeggio’. Each individual—each object—can still
be unique, through the interplay of parameters defined for the given class. This
similarity of structure can work to our advantage when employing a computer-
assisted composition program such as Patchwork.20
This notion of object is quite unlike the traditional notion of thematic
development; it is closer to the leitmotif idea, though it is different from that as
well. Musical objects as I’m defining them are extremely supple; they can be modified,
even to the point of progressively changing their identity (by subjecting them to
processes of transformation). The original form of the object, after successive
metamorphoses, can be forgotten—this is in complete contrast to the Wagnerian
leitmotif, whose role is of course to be recognized. Nevertheless, the idea of a class of
objects, from which other objects are derived, is the same in both cases. Because of its
role as a beacon for the listener, the leitmotif most often does not participate in the
development of forms and textures: it remains isolated in the midst of the discourse.
This is not exactly the kind of function I’m trying to endow objects with. Debussy
might provide a better illustration. While his music is not dominated by the idea of
thematic development, you never lose your footing when listening, perception is
never disoriented, and you always find points where your memory can anchor itself.
Debussy uses cells, motions and contours that allow for the identification of
similarities between objects. This makes it very difficult to analyse his music with
classical techniques: something else is going on.
In computer science terminology an object contains both data and the means
(‘methods’) for the exploitation of the data. The data for an arpeggio-object are a
harmonic field and some parameters. The method employed is the ‘arpeggio’
method, which consists of separating out certain sounds from the harmonic field, as a
function of certain parameters: speed of the arpeggio, range, size of steps, number of
steps, direction (ascending or descending), etc.
From the object class ‘arpeggio’, which we have just defined, we can derive
subclasses, another notion commonly used in both computer science and music
(whether consciously or unconsciously). Thus, one subclass of an arpeggio could be a
broken arpeggio: instead of a unidirectional motion, there will be a zigzag path. An
ordinary arpeggio and a broken arpeggio have different contours, yet they are clearly
related. The data and the methods of exploitation can be varied infinitely; however,
there will always be some sort of (more or less loose) linking relationship, and these
links will at least be visible from one step to the next, though after a certain number
of operations it may very well become quite difficult to recognize the original object.
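These ideas map naturally onto present-day object-oriented code. The following sketch is purely illustrative (Patchwork itself is Lisp-based, and all class and parameter names here are mine, not Murail's): an ‘arpeggio’ class holding its data (a harmonic field plus parameters) and a method that realizes it, with a ‘broken arpeggio’ subclass that changes only the contour.

```python
# Illustrative sketch (not Murail's actual tools): an 'arpeggio' class whose
# data are a harmonic field plus parameters, and whose method extracts notes.

class Arpeggio:
    """Data: a harmonic field (MIDI pitches) and playback parameters."""
    def __init__(self, harmonic_field, speed=8.0, count=4, ascending=True):
        self.field = sorted(harmonic_field)   # available pitches, low to high
        self.speed = speed                    # notes per second
        self.count = count                    # number of steps
        self.ascending = ascending            # direction of motion

    def contour(self):
        """Order the selected pitches: unidirectional motion."""
        notes = self.field[:self.count]
        return notes if self.ascending else list(reversed(notes))

    def realize(self):
        """Method: turn the data into (onset_time, pitch) events."""
        return [(i / self.speed, p) for i, p in enumerate(self.contour())]

class BrokenArpeggio(Arpeggio):
    """Subclass: same data, but a zigzag path instead of a straight line."""
    def contour(self):
        notes = self.field[:self.count]
        zigzag = []
        lo, hi = 0, len(notes) - 1
        while lo <= hi:                       # alternate low and high pitches
            zigzag.append(notes[lo]); lo += 1
            if lo <= hi:
                zigzag.append(notes[hi]); hi -= 1
        return zigzag

field = [48, 55, 60, 64, 67, 70]                 # a C-minor-flavoured field
print(Arpeggio(field, count=4).contour())        # [48, 55, 60, 64]
print(BrokenArpeggio(field, count=4).contour())  # [48, 64, 55, 60]
```

Each call with different data or parameters yields a new ‘instantiation’ of the class; the two contours above are clearly related yet distinct, like the ordinary and broken arpeggios in the text.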
With these ideas in mind, we might take a fresh look at the music of the past.
Instead of holding on only to traditional criteria (thematic development, formal
models, tonal progressions. . .), we could explore structural and statistical
phenomena, as well as everything else that concerns the actual perception we have
of a piece, rather than focusing exclusively on its theoretical conception. To the idea
of a musical object, we could add other notions, such as texture. Rather than speaking
of counterpoint, polyphony, accompanied melody, etc., we could simply categorize
all of these as different types of texture. For example, seen this way, four-voice
counterpoint, which for a long time seemed to be the most perfect and advanced
form, is but one particular, limited texture—a specific configuration of textural
organization amongst an infinity of others. Though this perhaps pushes the point a
bit far, we could say that four-part counterpoint is simply a subgroup of much vaster
structures, such as Ligeti’s micro-polyphony . . .
Another perspective is that of the Norwegian composer Lasse Thoresen, who
developed a theory of textures, layers and strata in music. According to Thoresen,
within musical textures, certain layers are more visible (audible) than others.
However, the importance given to the various layers varies for each listener. For
example, classically trained musicians generally have the impression that popular
rock music sounds ‘impoverished’ (without depth). Our perception of the
foreground, the most apparent layer, is what ties in with our musical education
and thus it is often what we attend to: classically trained musicians seek harmonic
progressions, melodic development, etc.—all things that will not be found in popular
music. For rock musicians, by contrast, the most important layers are the rhythmic
and timbral layers—harmony and melody are mere ornaments in the background.
Everything is changed if we view things from this angle.
In one way or another, this type of analysis totally ‘short circuits’ traditional
notions of thematic development and formal models. If we now add in the idea of
process—transformation from one texture to another or generation of objects whose
characteristics vary progressively—we obtain some absolutely fascinating results.21 A
complex musical image—composed of textures and objects—comes to life, and then,
by way of transformations affecting its components, evolves towards another quite
different image (into which the various processes at work will progressively transform
it). Numerous recent compositions have employed this type of organization:
processes and metamorphoses alter the musical objects, generating intermediate
situations with new, even unheard of characters—while also conferring a tension
Allégories (1990)
Allégories is written for six instruments: flute, clarinet, violin, cello, horn and
percussion. It also requires a real-time electronic performance apparatus consisting of
a Macintosh computer, a MIDI keyboard (that does not, itself, make any sound, but
sends MIDI signals), and a Yamaha TX-816 synthesizer. The TX-816 includes eight
modules (each of which has the power of a DX-7 synthesizer), which can produce a
total of 8 times 16 polyphonic voices. These 128 voices allow me to create a sort of
real-time additive synthesis. Since the electronic textures in the piece are too complex
to be played directly by one keyboard player, the computer controls the synthesis
modules using the commands sent by the MIDI keyboard as cues. The computer uses
the program MAX (the Macintosh version of which was still under development at
IRCAM when this piece was composed).
At the time I composed Désintégrations (1983) for orchestra, this type of system
did not exist, and real-time realization was still very difficult. This is why composers
continued to rely on pre-recorded tapes to play back their electronic sounds.
However, these tapes created a major problem: synchronizing the tape and the
instrumental ensemble. In Désintégrations the conductor is forced to use an earpiece
through which he hears ‘clicks’ corresponding to the beats in the score. The tape has
four tracks, one of which is reserved for these ‘clicks’—which faithfully follow the
changes of tempo and meter.22 Obviously, the ‘click track’ technique imprisons the
conductor: any rubato whatsoever becomes impossible. This is a difficult constraint
for the conductor, but also for the composer, who can no longer count on the
suppleness of interpretation to repair potential holes in the writing. In a sense, the
interpretation is fully planned in advance and fixed—at least as far as durations are
concerned. In certain cases, this can be a good thing, because potential
misinterpretations are avoided; but sometimes a good interpretation can transfigure
a piece and reveal within it aspects that the composer himself had not imagined, and
this potential is eliminated by the ‘click-track’. This is why real-time electronics are
desirable, at least in terms of allowing a much more supple synchronization with
instruments and conductors.
The electronic techniques used in Allégories are relatively modest; yet the piece still
attempts to replicate the idea behind Désintégrations, where the electronic sounds
enrich and complete the instrumental discourse. However, there is one major
difference: in Allégories, the electronic sounds follow the conductor, and not the
other way around. The electronic part is essentially decomposed into small events
(objects or textural elements), which are triggered at the right moment by an
instrumentalist playing on a MIDI keyboard. The notes played by the
instrumentalist have nothing to do with the sounds one hears; they serve simply as
cues that trigger the stored electronic events.
Additive Synthesis
We saw earlier that all musical sounds are divisible into elementary sonic
components. Inversely, a sound can be reconstructed from these elementary
components. The reason that additive synthesis is so attractive to me resides in the
ease with which the composer can control (‘compose’) each detail of the sound.
Almost the entire tape of Désintégrations was created in this way. Certain of the
electronic sounds evoke percussion, piano, trombone or cello; however, in reality,
they are totally artificial sounds obtained through analysing instrumental spectra.
These spectra are then manipulated, re-interpreted and deformed by the computer
before being used as the basis for synthesizing these completely artificial sounds. With
this technique, sounds that evoke instruments can be ‘re-composed’ just as easily as
hybrid sounds (sonic ‘monsters’).
In a certain way, this mode of synthesis is very primitive—and, in any case, it is
very laborious. Its roots date back to Stockhausen’s first experiments, in which he
sought to construct sounds from sinusoidal generators. The technology available at
that time was certainly awkward: the generators were large boxes that had to be tuned
by hand, then recorded and mixed over and over again (since each generator was
monophonic). This all became much easier with computers. Nevertheless, creating
sounds with additive synthesis remains complex and difficult. For example, in
Désintégrations to create an interesting sound it was often necessary to keep track of
10–30 components per sound, with 10–15 separate parameters for each component:
pitch, dynamic, duration, time of attack, dynamic envelopes, spatialization envelope,
vibrato—with its different parameters (envelope, frequency, amplitude), spatializa-
tion, etc. There were often several hundred parameters for a single sound.
Programming these parameters manually was, of course, impossible. Therefore, I
needed to write a program that could calculate all of the necessary parameters as a
function of global musical data. For example, I needed to be able to specify to the
computer that an oboe spectrum would be used, that the global duration would be x
seconds, that the attacks would not be simultaneous (but rather staggered with an
acceleration effect), that the vibrato would have a certain frequency (speed) for the
lowest component and another for the highest component, etc. The program then
performed all of the necessary intermediate calculations, carried out any interpola-
tions needed, and supplied the list of parameters required for synthesis. Clearly this
work remained rather cumbersome, even with computer assistance; however, even
now additive synthesis still seems the appropriate procedure if you want to control
the finest details of the sound.
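The kind of helper program described above can be sketched as follows. This is a purely illustrative reconstruction, not the original software: the function names, the power-curve used for the staggered attacks, and the linear interpolation of vibrato rates between lowest and highest components are all my assumptions.

```python
# A sketch of the helper program described above: global musical data in,
# one list of per-component synthesis parameters out.

def interpolate(lo, hi, t):
    """Linear interpolation between two global values, t in [0, 1]."""
    return lo + (hi - lo) * t

def expand(partials, total_dur, stagger=0.5, accel=2.0,
           vib_lo=4.0, vib_hi=7.0):
    """partials: list of (freq_hz, rel_amp), ordered low to high.
    Returns one parameter dict per component: entrances staggered with an
    acceleration effect, vibrato rate interpolated from lowest to highest."""
    n = len(partials)
    rows = []
    for i, (freq, amp) in enumerate(partials):
        t = i / (n - 1) if n > 1 else 0.0
        attack = stagger * (t ** accel)   # accel > 1 bunches later entrances
        rows.append({
            'freq': freq,
            'amp': amp,
            'attack': attack,             # attacks are not simultaneous
            'dur': total_dur - attack,
            'vib_rate': interpolate(vib_lo, vib_hi, t),
        })
    return rows

spectrum = [(440.0 * k, 1.0 / k) for k in range(1, 5)]  # toy harmonic spectrum
params = expand(spectrum, total_dur=6.0)
print(params[0]['vib_rate'], params[-1]['vib_rate'])    # 4.0 7.0
```

With a few dozen components and a dozen parameters each, a table like this easily reaches the "several hundred parameters per sound" mentioned above, which is precisely why such intermediate calculations had to be automated.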
The Yamaha DX and TX synthesizers function on the principle of frequency
modulation, which allows the construction of rich sounds with relatively few
parameters. Nevertheless, the detailed make-up of these sounds is often beyond the
programmer’s control. In Allégories, I actually use the potential of frequency
modulation synthesis very little: only for some sounds, which are played at the
beginning of the piece. All of the other electronic sonorities in the piece are created
through additive synthesis: the synthesizer emits only sinusoidal tones, whose
amplitude envelopes (percussive sounds, very soft attacks, shorter or longer
resonances) and aspect (various vibratos or phase differences) are varied.
The pitches in these clouds come from the upper portion of a distorted harmonic
spectrum. The precise sequence shown in Figure 48 does not occur in the score; it is
one among thousands of possible combinations, only a few of which were actually
used in the piece.
The synthesized sounds are sometimes in a closer relationship with the
instrumental sounds; in several sections of the score (e.g. sections C, L, M and
N), they create echoes or pre-echoes of instrumental sounds. At other times, they
add synthesized formants to the notes played by the instruments (e.g. the end of
section N). Often, the attacks of the partials are desynchronized so as to produce a
sort of sweep through the spectrum. All of these synthetic sounds are based on
spectral analyses of the instruments that they complement. However, they are never
used to replace an acoustic instrument; rather they enrich or diffract the
instrument’s sound.

Figure 46 Percussive aggregates, section H (bars 2 and 12, respectively). Note: The
chords are represented in the form of arpeggios to facilitate their reading.
Figure 47 (a) Section H, bars 1–13. (b) Section H, bars 1–13 (continued). (c) Section
H, bars 1–13 (continued).
object that I will describe are certainly present in the music, even if they do not result
from a deliberate pre-compositional plan.
The initial object is simple, almost banal, but choosing it was not so simple. I
needed a very special, malleable object: one that was susceptible to metamorphosis,
but also one that was sufficiently distinctive that it could be easily recognized—yet
not so distinctive that it could not undergo extensive transformations. It is helpful if
such an object is simple and striking, but it is not necessary—on the contrary—that it
be complex or even very interesting. A perfect example is the initial cell of
Beethoven’s Fifth Symphony: a not very sophisticated melodic fragment. However,
this simple idea allows for many subsequent transformations. Without wanting to
inflate the analogy or compare my piece to Beethoven’s, this is a bit like what happens
here.
Figure 49a shows a schematic representation of the initial object. It consists of what
Messiaen calls a ‘rocket group’: superimposed rapid ascending lines in several
instruments, which reach a small accent, prolonged by a trilled resonance. Over
the course of the piece, a certain number of ‘subclasses’ of this group are created,
which in turn are used to form new ‘subclasses’. For example, at the very beginning,
the object is preceded by an anacrusis—a horn call (see Figure 45). This ‘subclass’
returns again in section G (Figure 49b). Later in section A, the trilled resonance
(actually transformed into tremolos) dissolves into clouds of sounds—the ones we
spoke of just a couple of pages ago (Figure 49c, Figure 50a and Figure 50b).

Figure 49 (a) Schematic representation of the initial object. (b) Object preceded by an
anacrusis – a horn call. (c) The trilled resonances dissolve into semi-random clouds of
sounds.

Figure 50 (a) Section A, bars 37–42. (b) Section A, bars 37–42 (continued).
Or, on other occasions, that resonance shatters into a melodic entanglement of
intertwined spirals (Figure 51). Another frequently used subclass is a ‘rocket group’
that reaches a resonant chord (a sort of amplification of the little initial accent; Figure
51b).
These derived forms are transformed in turn; allowing the creation of the table
shown in Figure 52. With this diagram, it is easy to follow the successive
metamorphoses of the object. For example, the simplification to ‘rocket group’ +
percussive chord (a), then the simplification of the ‘rocket group’ to groups of
grace notes as an anacrusis to the chord (a, b, o). At letter c, only the chord itself
remains, sometimes followed by a small ornamental group. The percussive attack
Figure 52 Various transformations of the initial object. Note: The small letters
correspond to the sections of the piece in which one can hear these various forms.
If we look again at the global evolution of the piece, we can see that an interplay of
relationships is created. They can be schematized as shown in Figure 53.
Once again, this schema corresponds to the final state of the piece, and not to a
completely pre-established plan. My initial plan, for example, contained five parts;
however, in the end only four remained. What is now section L, which is made up
of many ‘clouds’ of sounds, was initially supposed to occur just after section C.
However, it seemed to me that section L was too elaborate for that particular
moment—it would have been too close to the beginning of the piece. It is hard to
explain these types of decisions in a purely rational way. Perhaps I needed to hear
less distorted forms of the initial object at this early stage of the piece. On the other
beginning to the sections—the letters are mere reference points for analysis or
rehearsals. At other moments, the transition from one section to the next provokes a
rupture in the discourse (symbolized by a double diagonal line). Please note that the
smooth transitional processes occur at the beginning and end of the piece: the most
disjointed part is part III.
As I said earlier, the harmonic processes support the formal processes. In the same
way that the three sections in part I are smoothly connected gesturally, there is a
single (smooth) harmonic progression that unifies them as well. This harmonic
process is built of a series of distortions of an aggregate drawn from a harmonic series
(this aggregate can be found at the beginning of section C). The piece opens with very
distorted spectra (a strongly inharmonic starting point that nonetheless is related to
the harmonic goal), then the spectra grow progressively less and less distorted, in a
zigzag evolution that avoids too much predictability, until the tension has been
released and the ‘defective’ harmonic spectrum that opens section C (and was the basis
for all the distortions) is heard.
The harmonic object towards which the process is directed is a fragment of a
harmonic spectrum (containing partials 3, 5, 7, 9, 11, 13, 15, 18, 20 and 29).
Figure 54 shows this aggregate and its first two distortions. The reference partials
used to calculate the distortions are harmonics 3 and 29. For the first distortion,
the third harmonic is raised by 4.5 Hz, while the 29th harmonic is lowered by 62
Hz: the rest of the spectrum is modified as a function of those reference notes.
Thus a compressed spectrum is created: the low partials are raised and the higher
ones are lowered. The second distortion (3rd harmonic raised by 0.8 Hz, 29th
harmonic lowered by 90 Hz) generates another spectral compression with a
different colour. These ‘first two’ distortions are in fact the last two chords in the
progression, since the process converges on the harmonic spectrum of C (null
distortion) (Figure 55).
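The distortion calculation can be illustrated with a small sketch. The two reference shifts (+4.5 Hz on the 3rd partial, −62 Hz on the 29th) come from the text above; the interpolation law between the references (linear in harmonic rank) and the value taken for the fundamental C are assumptions made here purely for illustration.

```python
# Sketch of the distortion technique described above: two reference partials
# are shifted by fixed amounts (in Hz) and the rest of the spectrum follows.
# Assumptions: linear interpolation in harmonic rank; fundamental C = 65.4 Hz.

RANKS = [3, 5, 7, 9, 11, 13, 15, 18, 20, 29]   # partials of the aggregate

def distort(f0, ranks, shift_lo, shift_hi, ref_lo=3, ref_hi=29):
    """Return distorted frequencies: ref_lo shifted by shift_lo Hz, ref_hi by
    shift_hi Hz, intermediate partials shifted in proportion to their rank."""
    out = []
    for r in ranks:
        t = (r - ref_lo) / (ref_hi - ref_lo)
        out.append(f0 * r + shift_lo + (shift_hi - shift_lo) * t)
    return out

f0 = 65.4
harmonic   = [f0 * r for r in RANKS]            # null distortion (section C)
first_dist = distort(f0, RANKS, +4.5, -62.0)    # low partials raised,
second     = distort(f0, RANKS, +0.8, -90.0)    # high partials lowered

# Raising the low reference and lowering the high one compresses the spectrum.
assert first_dist[0] > harmonic[0] and first_dist[-1] < harmonic[-1]
print(round(harmonic[0], 1), round(first_dist[0], 1))   # 196.2 200.7
```

Varying the two reference shifts from chord to chord (the zigzag curves of Figure 55) then yields a whole family of compressed spectra converging on the undistorted harmonic aggregate.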
This convergence does not happen in the linear way you see on the graph. I wanted
dynamic harmonies that are continually changing. They needed to be oriented
towards a specific goal, but without creating the effect of an inexorable slide (which
would surely have resulted from a purely linear evolution of the distortion
coefficients). While we are certainly moving towards a goal, the trajectory is
capricious. To reduce the sensation of predictability a bit more, I vary slightly the
number and quality of spectral components in each aggregate.

Figure 55 Evolution of the distortions from B to C. The curves indicate the amounts of
distortion that affect the two reference partials.

Figure 56 shows the final harmonic progression, which extends from the beginning of section B to the
beginning of section C (with indications for the harmonic ranks used and the
reference deviations). And Figures 57a, b, c and d show the corresponding portion of
the final score.
These timbre-harmony aggregates are often quite interesting in and of
themselves. Nevertheless, it is, yet again, the relationships between the elements
that matter most. The entire goal is to organize the progression in a satisfying
manner. There is no hard and fast rule for this; it is a complex question, especially
with these types of rich, microtonal aggregates. However, in spite of the novelty of
the harmonies, the problems that must be solved are eternal: renewal or repetition
of the aggregates, presence or absence of ‘common tones’, attention to the motion
of the outer-most ‘voices’ (which are generally more salient), interplay of registers,
etc.
In certain cases, we need to hear a quick turnover of pitches (or at least have the
illusion of constantly hearing new pitches). This is what happens in this section of
Allégories, where the harmonic rhythm is rapid. Here, any impression of pitch stasis
would lead to an effect of redundancy or of ‘pleonasm’ that would be unpleasant,
because it would contradict the formal direction of the passage. However, when we
arrive at the final aggregate (at letter C)—which is by nature harmonic—we find
ourselves in a situation of harmonic stability—making pitch repetitions or even some
redundancies welcome.
I believe that the kinds of problems we have discussed arise in every period and in
all types of music. They are rarely highlighted and explained by traditional analysis,
which tends to look for the generative techniques of a musical style, rather than
studying the phenomenological reality of musical works. By studying this
phenomenological reality, one can say—as Messiaen liked to affirm—that ‘the music
of Mozart is not tonal, but rather chromatic’. One could also say that very many
‘serial’ works are seductive because they are, in fact, modally organized (emphasized
notes, frozen harmonic fields. . .). With regard to pieces that are called ‘spectral’, they
are undoubtedly more valuable for their original formal organization and the novel
ways they shape time than for their harmony–timbre aggregates (which, though often
strikingly different, have no intrinsic value except insofar as they express the form
and manipulate our perception of time).
Notes
[1] The absence of a precise and agreed-upon definition of a musical sound is sufficient to make
the interpretation of musical language directly modelled on grammatical-linguistic schemata
impossible.
[2] We can mention, for example, a Japanese bamboo flute called the shakuhachi, which is able
to produce a variety of ‘Aeolian’ sounds (that is to say mixtures of breath and sound). For
this reason it has become quite fashionable among young composers, who are not necessarily
Japanese.
[3] Though, in classical music theory, timbre is considered little more than an inexplicable
residue: ‘that which allows for the differentiation of sounds with the same pitch and
intensity.’
[4] Intonation exists in languages devoid of pitch, but it only serves to specify intention, or
expression (interrogation, exclamation), while in tonal languages, pitch is itself a
discriminating feature with its own impact on meaning.
[5] One of the Russian republics, situated to the North of Mongolia, whose ethnicity and culture
is similar to the Mongols.
[6] In this analysis, we formulated the hypothesis that the piano is a ‘harmonic’ instrument (i.e.
one whose spectrum would correspond precisely to a harmonic series). The sound of the
piano is, in reality, a bit inharmonic and presents a slight harmonic ‘distortion’. This kind of
harmonic distortion is a very interesting phenomenon about which we will speak more later.
[7] ‘Out-of-tune’ is used here to mean an involuntary and awkward result, one that does not
make sense in the stream of musical discourse. While one can certainly seek effects of
intervallic awkwardness with an expressive or colouristic goal, as long as the context is
coherent the sensation produced is not that the music is out of tune.
[8] In other words, if one has a fundamental of 100 Hz, the third harmonic will be 300 Hz (3 ×
100), the fifth harmonic will be 500 Hz (5 × 100), etc. The relationship between harmonics
4 and 3 will thus be 4/3, and so on.
[9] Terhardt’s algorithm attributes a ‘perceptual weight’ to each of the partials of the sound. This
‘perceptual weight’ depends upon the amplitude of the partial, but also on possible masking
phenomena and the frequency response curve of the ear. If the weight of a given partial is
zero or very weak, it can probably be ignored.
[10] The spectra of the upper register of the flute, oboe and clarinet are all very similar. Their
timbre remains recognizable because of how they are played and because of the differences in
how they sustain the sound. Vibrato, breath effects, emission noises, etc. produce secondary
effects allowing the instruments to be identified. However, within a rich orchestration, these
instruments can easily substitute for one another without changing the global sonority.
[11] In my more recent mixed instrument and electronic pieces—written after this conference—I
have used techniques allowing the computer playback of the synthetic sounds to be
synchronized with the conductor’s beat.
[12] A frequency modulation or ring modulation spectrum can actually be fully harmonic if the
carrier and modulator or the sounds to be modulated are in a mathematically simple
relationship: in other words, if they are part of the same harmonic spectrum. In the graphic
representation above, a linear spectrum will be harmonic if the line that represents it
intersects the x axis at a whole number value (i.e. the value of ‘i’, the index of modulation).
[13] The Yamaha DX7 was the first commercial synthesizer to use the technique of frequency
modulation.
[14] See ‘Target Practice’ (in this issue), Example 1.
[15] This conference included a description of the Patchwork program for computer-assisted
composition, some basic notions of how MIDI represents notes, and some examples of
simple musical algorithms. At that time, all of this was relatively new for composers. Now,
however, these concepts are better known and documented. Therefore, it did not seem
necessary to transcribe those passages.
[16] Since the time of these conferences, a newer program OpenMusic has largely replaced
Patchwork. Both programs are based on a similar paradigm, but the newer realization has
greater possibilities. OpenMusic is now widely used by composers.
[17] During the conference, Dominique My performed these examples on the piano; she also
performed the work in concert.
[18] The notions of harmonicity and roughness ought to take into account the interactions
between all possible combinations of pitches in an aggregate. In this case, it is simple, but
when the harmonic or spectral aggregates contain numerous, non-tempered components,
the problem becomes extremely complicated.
[19] Because 7 × 7 = 49. In fact, owing to approximation errors, C# would correspond more
closely to the 51st harmonic (or 50th or 52nd, all of which are quite close to each other and
all of which would have to be approximated to C# when approximating to the nearest
semitone).
[20] And even more so with its successor, OpenMusic.
[21] Striking examples of textural transformation can be found in Gérard Grisey’s Modulations.
At one point in the piece, a complex texture (close to Ligeti-style micro-polyphony)
progressively simplifies, becoming a sort of counterpoint, which in turn congeals into a
sequence of chords.
[22] Tape can now be replaced by digitized sound-files, but the problem of synchronization
remains.