
To cite this article: Tristan Murail (2005). Villeneuve-lès-Avignon Conferences, Centre Acanthes, 9–11 and 13 July 1992. Contemporary Music Review, 24(2/3), 187–267. DOI: 10.1080/07494460500154889
Contemporary Music Review
Vol. 24, No. 2/3, April/June 2005, pp. 187 – 267

Villeneuve-lès-Avignon Conferences,
Centre Acanthes, 9–11 and 13 July
1992
Tristan Murail (translated by Aaron Berkowitz & Joshua Fineberg)

The following conference text was created from a transcription made by Dominic
Garant and revised by Pierre Michel. I would like to cordially thank both of them for
having taken on this onerous and thankless job. I thought it necessary, nevertheless, to
rewrite these texts rather substantially. The conferences were essentially improvisatory,
based loosely on a pre-established plan (I do not like to read conference texts: it
reminds me of a professor of civil law who—in what he called a course—read the
‘lecture notes’ that one could buy in advance at the book store across from the
university). The oral style seemed to me annoying to read; in addition, these
conferences were accompanied by numerous sonic and visual examples, without the
help of which they would have certainly become incomprehensible. Their subjects (and
the order in which they are discussed) were determined in relation to the concert
programme at the Centre Acanthes, where Désintégrations, Territoires de l’Oubli and
Allégories were featured.

I have endeavoured to compile these texts in such a way as to make them clearer and
easier to read, while still attempting to stay as close as possible to speech-like writing,
without stylistic pretence. I chose not to retain the division into four days, since it did
not correspond to a significant formal division; however, I did conserve the order of the
subjects discussed, even though it may seem a bit arbitrary outside of the context of the
Centre Acanthes. Finally, over the course of this rewriting, I tried to stay as faithful as
possible to the ideas expressed at that time—even if today I might formulate certain
things rather differently.
T.M., Monroe, New York, May 2003

The Musical Sound


Let’s begin at the most elementary level, that of the musical sound (which is the
foundation of the entire musical edifice). First, however, we must ask ourselves ‘what
is a musical sound?’ The realm of musical sounds has broadened so much over the
last few decades that it has become difficult to give a precise answer. Most generally,
we might say that a musical sound is any sound considered as such by composers and
listeners. It is with this definition that composers have tried to integrate, more or less
successfully, all sorts of sounds (many of which were previously considered ‘non-
musical’) into the musical discourse. Here, I’m thinking of the sounds found in
musique concrète or in the works of John Cage. However, if all sounds can potentially
be ‘musical’, how can one not get lost? Actually, it is quite easy to tell from the flow of
the music whether a sound ought to be considered ‘musical’ or not. During a concert
of classical music there is little doubt that the sound of your neighbour coughing is
not part of the musical discourse. Alternatively, within a piece written for ‘coughing
voice’ and ‘creaking window’ the sound of a cough will certainly be considered a
musical sound while the sound of a violinist impolitely warming up in the wings
during the performance might just as easily lose its usual designation as ‘musical
sound’—since it is not integrated into the discourse.1
The instrumental sound can nevertheless serve as a paradigm for a broader
category of musical sounds. The reason for this is relatively simple: instrumental
sounds have attained their current forms through our attempts to modify and
‘improve’ them over centuries. We have, by now, reached the point where these
sounds are often judged more or less perfect—at least, for their intended usages. We
can thus embrace the hypothesis that instrumental sounds, in their contemporary
form, are closely related to the very foundations of our culture.
It would be interesting to analyse why instrumental sounds suit us so well. Perhaps
from this analysis we could derive a model for organizing music more generally? This
hypothesis, though certainly a bit bold, allowed nonetheless for the realization of a
certain number of pieces during the 1970s. I am thinking in particular of the Espaces
acoustiques cycle by Gérard Grisey. Of course, this idea is far from sufficient to
account for the totality of the work’s musical organization, but we can consider it as
one of the points of departure for the composition’s formal construction.

Timbre
Let us now examine the phenomenon of timbre in occidental music. In observing the
historical evolution of this music, it is easy to see that timbre takes on an increasingly
important role in musical discourse. In the music of the 16th and 17th centuries,
timbre was not really taken into account and was often not explicitly notated. Many
pieces could be played equally well on the oboe as on the violin, with accompaniment
provided by either a harpsichord or a lute; pieces were played with the available
means, without attaching much importance to the specific sonic character of the
resultant sounds. Later, timbres started to be more precisely indicated: the
Brandenburg Concerti, for example, are specifically written for certain types of
timbres. The melodic lines themselves begin to take on specific characteristics
depending on the instruments. The use of idiomatic language for the instruments is
beginning. Progressively, the concept of orchestration starts to emerge in the late 18th
and early 19th centuries. Little by little, orchestral timbre is refined either by
‘synthesis’ (adding instruments: one of the fundamental principles of ‘classical
orchestration’), or through increasing precision in defining specific, often
unconventional instrumental techniques. This latter approach has become especially
significant in the 20th century, in particular on string instruments, where the sonority
can easily be modulated (ponticello, tasto, col legno, etc.). At present, the possibilities
of instruments have been explored to the extreme, permitting us, at least in principle,
to define and notate instrumental timbre with great precision, while the technical and
virtuosic possibilities of instrumental performance continually expand. This,
however, does not necessarily signify that classical instruments, in their current
state, respond to all our needs and expectations.
Timbre, thus, seems to be taking on a greater and greater importance in musical
discourse. Additionally, and in contrast to our Western tradition, one finds music in
other parts of the world based on timbre rather than on pitch layout. I am thinking of
certain ancient music of the Far East, China, Japan. . . One sometimes finds
instrumental techniques in these musics which are strangely reminiscent of our
‘contemporary’ techniques. These techniques have the goal of producing successive
sound effects, which often seek to evoke natural phenomena.2 In this music, the
discourse rests on sequences of timbral effects, or rather sound objects, rather than on
sequences of pitches (in the traditional sense).
The importance of timbre3 could be explained in a variety of other ways.
Timbre is one of the sonic categories most easily analyzed by perception, owing to
the simple reason that spoken language is essentially a timbral phenomenon.
There are, of course, also pitch phenomena in spoken language (e.g. Far Eastern
languages, or certain African languages, which are comprised of ‘tones’4); there is
often a linguistic role that falls upon the tonic accent (the intensity), a role that
carries varying importance depending on the language—essential for
comprehension in some cases, but only accessory, or even almost non-existent, in others (as
in the case of French). Sometimes the length of vowels (the rhythm) also serves to
convey meaning. Thus, the only universal characteristic of human languages is the
use of timbre: vowels can be assimilated as pure harmonic vibration (spectrum),
whereas the consonants act as attack and extinction transients. Moreover, the
richness in vowels of certain languages seems to compensate for the non-use of
pitches and rhythms, and vice-versa. Since our infancy, we have been habituated
to perceiving and distinguishing timbres much more finely than pitches.
Additionally, the majority of non-musician listeners are capable of distinguishing
one instrumental timbre from another and even naming them, although they
could not identify pitches and rhythms. In the popular music of our time—rock,
pop, etc.—the essence is placed in the timbre, in the mixing and in the utilization
of electronic processing and sonorities. On the other hand, the message of the
pitches, melodic or rhythmic, if it exists, is often extremely simple. Finally, the
contemporary attraction to the phenomenon of timbre is greatly facilitated by the
technical means at our disposal. New technologies allow us, in effect, to infinitely
expand the possibilities offered by the layouts and arrangements of timbre—to
build a combinatory system based on timbre, which was previously almost
unimaginable.

Discourse and Musical Language


Is it possible, then, given what we’ve learned from the study of timbre, to construct a
coherent discourse and musical language based upon that phenomenon? An
instrumental sound, any one, seems to us to be a unique perceptual object. A cellist
plays a ‘beautiful’ sound, with nice vibrato, and the listener represents it mentally as a
beautiful cello sound with vibrato. Nonetheless, if you listen to a sound in a certain
way, if you focus your ear so as to dissect the contents, you can distinguish different
harmonics of this sound quite well and thereby understand that it is made up of a
group of components—all of which have their own lives. We are accustomed to
considering this group of components as a single object, and calling it the ‘sound’,
but it is equally possible to dissociate them: allowing unitary timbre to burst into
multi-dimensional harmony. This concept serves as the foundation for certain
fascinating vocal techniques. In Mongolia and in the Tuvan Republic5, the technique
of diphonic singing allows the dissociation of the voice into two perceptible entities:
the fundamental and its harmonics. While the fundamental frequency stays fixed, the
singer’s voice (by strongly accenting one or another harmonic, like an exaggerated
vowel) creates a succession of formant peaks that in turn create a sort of melody. In
contrast to a traditional melody, which consists of a succession of complete multi-
dimensional sound objects (the succession of ‘notes’ emitted by a classical singer),
here the melody situates itself in the very midst of a single sonic object that is
modulated over time. One can consider these diphonic (khöömi) songs of Mongolia
and of Tuva as the first known examples of ‘spectral composition’.
In Figure 1, the horizontal axis represents time (in seconds), the vertical axis
represents frequency (in hertz). The intensities of the component harmonics are
represented by marks that are more or less thick and dark. The numbers 1–10
correspond to the ranks of the harmonics. One sees that harmonics 1–5 are stable
(they make the fundamental perceptible), whereas harmonics 6–9 evolve markedly in
intensity. It is this succession of intensity peaks that creates the perceptible melodic
contour, notated below the sonogram in traditional pitch notation.
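The mechanism just described can be sketched in a few lines of Python. The fragment below uses entirely hypothetical data: the 110 Hz drone and the frame intensities are invented for illustration, not measured from the sonogram of Figure 1. It shows how a melody emerges from a fixed harmonic series when one harmonic at a time is reinforced.

```python
# Hypothetical illustration of the khöömi mechanism of Figure 1: a fixed
# drone whose melody is carried by whichever harmonic the singer boosts.
FUNDAMENTAL = 110.0  # Hz (an assumed value, not taken from the article)

# Invented relative intensities of harmonics 6-9 at four successive
# moments; the singer reinforces one harmonic at a time, like an
# exaggerated vowel.
frames = [
    {6: 0.9, 7: 0.2, 8: 0.1, 9: 0.1},
    {6: 0.2, 7: 0.9, 8: 0.2, 9: 0.1},
    {6: 0.1, 7: 0.3, 8: 0.9, 9: 0.2},
    {6: 0.1, 7: 0.2, 8: 0.3, 9: 0.9},
]

def formant_melody(frames, fundamental):
    """Return the frequency of the loudest harmonic in each frame:
    the succession of intensity peaks heard as a melodic contour."""
    melody = []
    for frame in frames:
        rank = max(frame, key=frame.get)   # rank of the most intense harmonic
        melody.append(rank * fundamental)  # its frequency in Hz
    return melody

print(formant_melody(frames, FUNDAMENTAL))  # [660.0, 770.0, 880.0, 990.0]
```

The fundamental never moves; only the distribution of energy among its harmonics does, which is exactly why the result reads as a melody inside a single sound.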
Let’s now examine a mundane piano sound. The analysis shown in Figure 2
corresponds to a brief instant of sound, just after the attack, of the note C1 played on
a modern piano (C1 is the lowest C on a standard piano). We are not interested here
in how the sound changes over time, as we were in the preceding example, but only in
its vertical (harmonic) structure. In this sound, the analysis program detected 118
harmonics, which is a rather large number. After eliminating the least important
components (i.e. those with close to zero intensity), 91 remain. Most of the low
instruments of the orchestra possess an enormous number of harmonics; however,
the piano remains an unusual case. Zones where the intensities of the components are
relatively louder than surrounding components are called ‘formants’. In the case of

Figure 1 Sonogram of a fragment of Mongolian diphonic singing.

the piano’s sound, we find formantic zones around the harmonics 27, 28, 29 and 30,
for example, or again around harmonics 35, 36, 37 and 38, which is extremely high in
the spectral scale.
In Figure 2, the numbers on the left in each column indicate the harmonic rank,
the numbers on the right give the intensity of each harmonic. The harmonics with the
most amplitude—which create the formants—are in bold (analysis carried out at
IRCAM in the 1980s).
Note that while the fundamental should normally be given the rank of number 1,
there is no component in this analysis with that rank. In fact, the 1st harmonic—the
fundamental—is totally absent. This means that the note C1, which we write in the
score, is in fact not heard at all. No frequency in the analysis of the piano’s C1
corresponds to the note C1. Therefore, at least in certain situations, what we think we
are hearing can be an illusion. In the case of the piano note, this illusion is called a
‘virtual fundamental’: we have the impression of hearing a fundamental sound when
we hear the entire ensemble of harmonics of a fundamental even if that fundamental
is itself absent. But, in reality, if you hear the sound C1 on the piano without bias, it
does not really resemble a C very much, nor does it resemble any other precise note.
It is actually a very complex sound that is barely harmonic and which does not really
fit the definition of a traditional instrumental sound. When this note is played at the
same time as a C major triad in the middle register, it sounds just like a real C—the
fundamental of the chord—because its normal harmonic contents are reinforced by
the chord of C major (and, inversely, the resonance of this chord will be magnified by
the harmonics of C1). On the other hand, if you play this very low C at the same time

Figure 2 Analysis of a piano’s low C.


as a complex non-tonal chord, the pitch of the low sound will become difficult to
determine.6
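The virtual fundamental can be illustrated arithmetically: the ear supplies the largest frequency of which all of the heard partials are whole-number multiples. A minimal Python sketch follows, using a C1 rounded to 33 Hz for convenience (the true value is near 32.7 Hz, and a real piano's partials, as Figure 2 shows, are slightly inharmonic, so this is a simplified model rather than the piano's actual behaviour).

```python
from functools import reduce
from math import gcd

def virtual_fundamental(partials_hz):
    """Largest common divisor of the partials: the pitch the ear
    reconstructs even when no energy exists at that frequency."""
    return reduce(gcd, partials_hz)

# Harmonics 2-6 of a C1 rounded to 33 Hz; the fundamental itself is
# deliberately left out of the list, as in the piano analysis.
partials = [66, 99, 132, 165, 198]
print(virtual_fundamental(partials))  # 33 -- although 33 Hz itself is absent
```

Feed the same function a set of partials that share no common divisor (a complex non-tonal chord, in the terms above) and no single residue pitch emerges, which parallels the difficulty of pitching the low C against such a chord.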
Ravel used this property of very low sounds in several of his piano pieces. In the
example given in Figure 3, taken from Une barque sur l’océan, the first A in the left
hand replaces what should have been a low G# that does not exist in this register on
normal pianos. In the context, one has the illusion of having heard a G# and not an
A. . .
The piano sound and diphonic chant examples prove that if we listen to timbres
with great attention, in an effort to deconstruct all of the conditioning of our hearing,
it is possible to distinguish various components from the interior of the sonic
spectrum. It is, of course, very obvious in the case of diphonic singing as well as for
certain similar sounds (such as those in the family of Jew’s harps). However, even the
sound of a familiar instrument (generally perceived as a single unit—a ‘sound
object’) can end up dissociated if we listen in a particular way. For the piano, the
evolution of the sound over time can be an aid to the perception of this inner
richness. At the emission of sound, all the components are present and the timbre is
complex and difficult to analyse; then, little by little, as the sound decays we hear
more clearly, and each in turn, the different zones of harmonic resonance—certain of
which die away first, while others resonate longer.

Temperament, Micro-intervals
It is well known that the pitches contained within a harmonic spectrum (as, for that
matter, in the majority of inharmonic spectra) are mostly not part of our tempered
scale universe. Therefore, working within the interior of a harmonic spectrum, as the
Mongolians do, entails the use of micro-intervals.
The frequencies observed inside of a spectrum do not correspond to any system
that divides the octave into regular intervals. However, since frequencies expressed by
the speed of their periodic vibrations (hertz) are inconvenient for the composer or
instrumentalist to use and difficult to notate on a score, I will continue to represent
these frequencies through (more or less precise) approximations using tempered
divisions of the octave. Figure 4 shows an example that compares three different
approximations of the same aggregate.

Figure 3 Maurice Ravel: Une barque sur l’océan.



Figure 4 Different approximations of the same aggregate.



Figure 5 Steps in the harmonic progression from the opening of Anahit.

The chosen aggregate is composed of harmonics 3, 5, 7, 9 and 11 of the
fundamental G1 (the lowest G on the piano). These harmonics’ frequencies in hertz
are: 147, 245, 343, 441 and 539. If we round these frequencies to the closest half-step,
they correspond to the notes D, B, F, A and C#. Despite the major seventh (which is
softened by the presence of consonant intervals like the perfect fifth), the resultant
chord sounds rather ‘consonant’ and ‘classical’. This is a chord that can sometimes be
found in the works of composers from the impressionist period. In reality, though,
the approximation to the half-step is somewhat crude. If we round to the nearest
quarter-tone rather than the nearest half-step, the F (7th harmonic) becomes an E¼#,
and the 11th harmonic (C#) becomes C¼#. We can refine this aggregate even further
by approximating to the nearest eighth-tone. This will cause the B to be modified as
well, becoming a B lowered an eighth-tone (downward arrow); while the F, which
had been lowered by a quarter-tone, will now only be lowered by an eighth-tone; the
C stays C¼#. We could, in principle, continue this process towards ever-finer
approximations, but experience and acoustic theory show that, in practice, the
approximation to the eighth-tone is sufficiently precise.
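The successive approximations can be sketched numerically. The following Python fragment (an illustration of the rounding procedure, not the tool used at the time) snaps a frequency to the nearest step of a grid dividing the semitone into 1, 2 or 4 parts, i.e. half-steps, quarter-tones and eighth-tones, and applies it to the 343 Hz 7th harmonic of the aggregate.

```python
from math import log2

def approximate(freq_hz, divisions_per_semitone, ref_hz=440.0):
    """Round a frequency to the nearest step of a grid dividing the
    semitone into 1 (half-steps), 2 (quarter-tones) or 4 (eighth-tones)."""
    step = 1 / divisions_per_semitone        # grid step, in semitones
    semitones = 12 * log2(freq_hz / ref_hz)  # distance from the reference
    rounded = round(semitones / step) * step
    return ref_hz * 2 ** (rounded / 12)

# The 7th harmonic of G1 (343 Hz) under the three approximations:
for div in (1, 2, 4):
    print(div, round(approximate(343, div), 1))
# 1 349.2  (F: the half-step rounding)
# 2 339.3  (F lowered a quarter-tone)
# 4 344.2  (F lowered an eighth-tone -- closest to 343 Hz)
```

Each refinement lands closer to the actual harmonic, which is the numerical counterpart of the increasing fusion described below.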
When listening to these different approximations of the original aggregate, one
notices that the more precise the approximation the less beating occurs, and the
more the notes melt into one another (creating a fused sonic image). With the
approximation to the nearest half-step, we clearly perceive a chord made up of
five notes; then, by refining the approximation, we arrive at the perception of a
single timbre with five embedded components (like our natural perception of
individual complex sounds). When we do these operations in the inverse order, it
seems that there is increasing tension. This chord, which seemed relatively gentle
at the beginning, becomes almost ‘dissonant’ when contrasted with the more
precisely approximated versions. Thus micro-intervals do not necessarily introduce
a sensation of ‘out-of-tune-ness’ in a musical discourse; on the contrary, they can
create a greater sense of ‘in-tune-ness’. They can create greater consonance, or an
enhanced effect of fusion between the notes. Micro-intervals also allow for the
attainment of sonic aggregates that are much more interesting, much richer and
very much more varied than combinations of the 12 tempered pitches. This use of
microtones is very different from the approach of composers whose music is
based on dividing the octave into an arbitrary number of intervals, sometimes 24
(quarter-tones), but also sometimes more exotic divisions based on theories that
are more or less eccentric. The result of these arbitrary divisions often is not very
convincing from the harmonic point of view: they create an ‘out-of-tune’
impression, which is rather unpleasant.7 On the other hand, I believe that, with
my way of writing non-tempered music, an average listener—who was not told
that there were microtones—would hardly notice their presence. Of course, if
these micro-intervals were not there, the music’s colour would be totally changed.
One would lose both richness and suppleness. The harmony would probably
become much more ‘hard’ and undesired dissonances would appear. This is,
unfortunately, what happens all too often when pieces are poorly rehearsed, or
when the musicians or conductor have little experience with microtonal music: it
is the absence of micro-intervals, required by the score but not executed, that
creates an ‘out-of-tune impression’ in frequency-based music!
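The beating invoked above can be quantified with a rough model: two simultaneous pure frequencies beat at a rate equal to their difference in hertz, so cruder tempered approximations of a spectral component beat faster against it. The sketch below uses the 343 Hz 7th harmonic of the aggregate; the approximation frequencies are computed from the equal-tempered grid with A4 = 440 Hz (an assumption of this illustration, since the article gives only note names).

```python
# Rough model of beating: the rate in beats per second equals the
# difference in hertz between two simultaneous frequencies.
def beat_rate(f1_hz, f2_hz):
    """Beats per second between two simultaneous pure frequencies."""
    return abs(f1_hz - f2_hz)

exact = 343.0  # 7th harmonic of G1, from the aggregate discussed earlier
# Equal-tempered approximations (Hz, computed from A4 = 440 Hz):
approximations = {
    'half-step (F)': 349.2,
    'quarter-tone': 339.3,
    'eighth-tone': 344.2,
}
for name, freq in approximations.items():
    print(f'{name}: {beat_rate(freq, exact):.1f} beats per second')
```

The eighth-tone version beats so slowly that the two frequencies fuse, while the half-step version produces a clearly audible roughness; this is one physical basis for the shift from perceived harmony to perceived timbre.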
In the previous example, by increasing the precision of approximation, we
moved progressively from the perception of harmony to the perception of timbre.
Harmony and timbre can thus delineate a continuous domain. Between the poles
formed by these two notions, there is a whole space that is particularly interesting
because of its very ambiguity. In other words, an entire portion of musical
discourse can be situated between harmony and timbre. The notion of harmony-
timbre is not completely new. It was alluded to by Edgard Varèse and put into
practice by Olivier Messiaen. These two composers sought to build complex
harmony-timbre sonorities, based partially on the phenomena of natural sonic
resonances. In the midst of complex orchestral aggregates in these composers’
music, there are often effects of fusion; however, the use of the tempered scale
limits the scope of these effects.
The use of micro-intervals obviously poses some practical problems. In chamber
and solo music, performers are usually able to find more or less satisfactory solutions.
Certain things are, of course, impossible (e.g. quarter-tone alterations of the lowest
notes of the oboe and of the notes in certain regions of the clarinet). In principle, one
can play any and all possible microtones on string instruments; however, the
composer must still account for the performer’s ear—and the tempo.
In my works, I generally limit myself to the quarter-tone. In certain very specific
cases (and only in music for soloists or small ensembles), I ask for smaller intervals.
In certain specific situations, it is, in fact, possible to perform these smaller
intervals accurately: for example, in Treize couleurs du soleil couchant, the flutist must
play a slightly lowered E (about an eighth of a tone) at a certain moment. When
the exact pitch desired is performed, an effect of ‘fusion’ is created, the slightly
lowered E integrates itself perfectly into the harmony, and this effect is very easy for
the performer (and the listeners) to hear. The musician knows, thanks to the
context, that the note he plays is thus perfectly in tune. Of course, it is useless to
try to obtain a similar result with a large orchestra, especially given the current
rehearsal conditions. Therefore, I use other stratagems in orchestral settings. For
example, I sometimes ask one part of the orchestra to tune itself a quarter-tone
lower, thus making the use of complicated micro-intervallic fingerings unnecessary.
On the other hand, this forces me, in certain cases, to realize my melodic lines with
a technique almost like hocketing. Thus, the use of this special tuning induces its
own constraints on writing music. Perhaps things will change and, one day or
another, we will have quarter-tone keys on all of the instruments of the orchestra
(there are already quarter-tone flutes), but for now this is only a hope. We are in a
similar situation to Johann Sebastian Bach when he began writing chromatic music
with modulations for all instruments. Certain instruments (e.g. the trumpet) did
not then have a system for playing the desired notes, so the musicians needed to
use various substitutions. I imagine that the performances were often out of tune,
or at least approximate, and that the musicians of the 18th century must have
protested, as sometimes occurs at present. Maybe the performances J.S. Bach heard
of his music were often ‘out of tune’, in the same way that we often hear our music
played ‘out-of-tune’ now! But let’s not be too pessimistic. There are currently many
ensembles that excel at the performance of microtones and many instrumentalists
who know perfectly well how to perform them. In any case, I hope that you will
have understood that, for me, the quarter-tone is not an absolute—a goal in
itself—but the somewhat approximate means of realizing what one could call a
‘frequential harmony’—a harmony liberated from the constraints of scales and
other grids habitually applied to the continuum of frequencies. In this view, the
goal is to re-create an approximation of diverse acoustic phenomena, and a
microtone, even inaccurately performed, is still closer to the target frequency being
approximated than a ‘more accurate’ performance of a (cruder) semitone
approximation would be.

Giacinto Scelsi: Anahit


It is impossible not to evoke Giacinto Scelsi (1905–1988) when talking about music
based on timbre and on micro-intervals. In his own way, Scelsi too explored the
interior of sound. It is well known that, after a first career as an atonal or
dodecaphonic composer, Scelsi destroyed practically all of his previous work and
started over from scratch. From that point on, Scelsi concentrated all of his attention
on musical sounds—even on a single sound alone. This meditation on sound is
equivalent to an intuitive exploration—mystical maybe—of the interior of those
sounds.
Scelsi used techniques for technologically aided composition, as we would now say,
which were relatively avant-garde for the period; he made simulations with electronic
instruments (e.g. the Ondioline, an instrument created in the 1950s that was a
polyphonic equivalent of the ondes Martenot) and recorded these experiments on
tape. These simulations allowed him to explore the inflections of micro-intervals, the
diverse types of vibrato, etc. One of Scelsi’s first really striking pieces is entitled
Quattro Pezzi per orchestra (ciascuno su una nota) (1959): in this set of four pieces for
orchestra, each piece is truly based on only one note. This goes beyond monody; it
represents a sort of extreme minimalism. In its way, the Mongolian music mentioned
earlier was also based on a single note.
In this context, where the parameter of ‘pitch’ is effectively abolished, music must
find other variables with which to express itself: these other variables are what Scelsi
called ‘the depth of sound’. This metaphorical expression designates the extensive use
of all of the internal parameters of sound: the spectrum, the variations of the
spectrum, the dynamics (the way in which the sound is dynamically developed over
time), the use of different types of sustain (like vibratos and tremolos of varying
speed) or even the timbral changes that one can create on the same note (e.g. by
playing it on different strings of a string instrument)—all of this is expressed very
precisely in the scores.
Scelsi wrote many works for solo instruments, which gave him the possibility of
deploying, in a clearly audible manner, his whole panoply of techniques for the
internal animation of sounds. In the orchestral works, the addition and mixture of
these sounds, with their own internal animation, further expand the sonic richness of
the unison—a unison ‘composed’ from the inside. More than an orchestration in the
traditional sense, Scelsi creates a sort of instrumental synthesis (to use Gérard Grisey’s
name for this technique). Moreover, this unison is usually thickened—enlarged into a
band of frequencies that surround the principal sound. Scelsi used quarter-tones in a
systematic manner, but very differently from the first explorers of micro-intervals
(composers like Alois Hába, 1893–1973; Ivan Wyschnegradsky, 1893–1979; and
Julián Carrillo, 1875–1965). Even though he often said that ‘quarter-tones are real
notes’ (to emphasize this fact, the symbols of quarter-tones are circled in his
manuscripts), Scelsi conceived his micro-intervals more as enlargements of the
unison than as a means of creating new scales.
Anahit, for violin solo and 18 instruments (1965), is in my opinion one of the most
successful and beautiful of Scelsi’s works. I will not go into a detailed analysis of the
piece, which would not really be of great meaning for this music. Instead, I would like
to pull certain generative principles from it and to give some indications concerning
the global form. One of the frequent characteristics of Scelsi’s music is the use of
smooth time, that is to say a form of musical time that is rarely marked by distinct events.
198 T. Murail (trans. by A. Berkowitz & J. Fineberg)
Since the music is centred around one or sometimes two principal pitches, all
traditional development and all systems of traditional variation become impossible.
The melody is limited to long, slow slides of pitch, sometimes punctuated by brief
more well-defined fragments. The formal progression is often very simple and
unidirectional.
Anahit is Scelsi’s only concert piece for solo instrument and orchestra. Its structure
in three parts could appear classical: the first part links the violin and the orchestra;
the second part corresponds to a violin cadenza; and in the third part the orchestra
returns. Inside each of the parts an alternation appears between relatively calm
passages and ‘climaxes’ where Scelsi used his entire range of techniques for
manipulating sustained sounds: trills, tremolos of varying amplitudes and speeds,
addition of trills and tremolos on different instruments. At certain moments, one
hears some very surprising amalgams that almost evoke the human voice—the result
of combining of all of these internal sonic movements and using certain instrumental
registers. The choice of instrumentation is especially interesting: three flutes
including alto flute, an English horn, a clarinet, a bass clarinet, a saxophone, two
horns, a trumpet, two trombones, two violas, two cellos, two basses, and the solo
violin. Note the absence of bassoon and oboe, as well as a predominance of warm and
velvety timbres. The absence of bassoon and oboe can be explained by the desire to
achieve an ‘instrumental synthesis’, which requires the fusion of all the instruments.
Double-reed instruments have a tendency to emerge from orchestral complexes more
than other instruments at the same dynamic, which makes them particularly apt for
playing solo lines. However, in music that seeks the effect of fusion above all, their use
becomes trickier, or even impossible.
Nonetheless, Anahit does use the English horn, as well as brass with mutes and sul
ponticello strings: all sounds that are in some way more sharply coloured than the
oboe. However, the context here is that of an orchestral group, not a solo instrument.
Scelsi is more interested in extreme situations than in moderate ones. Very often,
timbre travels between two poles, the very delicate timbre (flutes, the sul tasto of the
strings, etc.) on one side, and a highly coloured timbre that is sometimes at the
threshold of losing pitch and becoming coloured noise (strings sul ponticello, English
horn, stopped horns, etc.) on the other. These oppositions of timbre also appear
within the solo violin part itself. Most of the time, this part is written on four staves:
one staff for each string of the violin! Scelsi often asks the musician to play the same
note on different strings, either successively or simultaneously; on the violin this
produces many different timbres because of the differences in string tension and
thickness. To facilitate the playing of the same note on the different strings, the
composer is obliged to ask for a modified tuning of the violin, a scordatura (G, G one
octave above, B, D). The violin part is written almost entirely in double-stops and
very often in sweeps across three or four strings. As I mentioned before, the solo
violin very often plays ‘thickened’ unisons—that is to say double- or triple-stops
forming micro-tonal clusters of notes including the quarter-tone above or below the
main pitch, or sometimes sounds with a very large vibrato.
Contemporary Music Review 199
The pitches are organized according to a very simple progression. In contrast to the
Quattro Pezzi, the pitches in Anahit are not based on a single note. Rather, Scelsi uses a
pivot-note: a central note generally played by the violin, which gradually changes.
This central note begins on D5. Over the course of the first section, there is a
continuous ascending motion from this D5 up a third to F#5. During the violin
cadenza, the ascent is prolonged, from F#5 to Ab5. In the third section, we return to
D, but an octave higher (D6). The same ascending motion again appears, this time
rising slightly farther to G6. These simple unidirectional ascents are the ‘melodic’
contents of the piece.
This central and sliding unison line is surrounded by other sounds. These other
sounds do not, properly speaking, play a harmonic role—since there is no melodic
sequence to harmonize. Their role could more accurately be compared to the
phenomenon of diffraction, in which a ray of light (the central sound) penetrates a
prism and explodes into various luminous frequencies. Thus the D5 heard at the
beginning of the solo part is diffracted into harmonics and subharmonics, or, better,
the D5 could be considered as a harmonic of a virtual fundamental (that one will hear
or not). Through this process of diffraction, a chord progressively establishes itself: G,
Bb or B¼b (the sound oscillates between the two) and D. However, this chord is by no
means a banal perfect triad: the D—the third harmonic of a low G—is reflected in the
sounds G and B–B¼b. Later, surreptitiously, this D slides towards Eb5. In turn, this
Eb5 generates its own diffractions and an Ab appears in the bass (see Figure 5).
What we have here is not really a harmonic progression in the classical sense: a
series of ‘parallel fifths’ (G–D, then Ab–Eb). Everything changes through surreptitious
sliding, so that between two harmonic diffractions there is a period of instability from
which the new configuration is born, without there being an audible moment of
arrival. Study of the piece reveals that the pivot note can often be considered as the
3rd or the 6th harmonic of a virtual fundamental, but other times it is the 5th or even
the 7th harmonic. When the orchestra re-enters after the violin’s cadenza (the start of
the third section) a very intense effect is produced. The pseudo-perfect triads of the
orchestra, still slightly muddied by micro-intervals, have a very particular timbre. In
thickening the texture through the addition of microtonal colourings, the composer
does not create dissonances. Rather, the mix of harmonics he creates is similar to a
sort of filtering. The global sonority created is a bit nebulous—fuzzy, like the music of
an old film—where the upper harmonics have been lost through poor conservation
and where all that remains is a slightly vague and faraway sound universe. This
phenomenon gives Scelsi’s music its somewhat nostalgic sound.

The Sound as Formal Model: Inharmonic Sounds


Earlier, I brought up the idea of using the structure of instrumental sounds as a
model, from which new timbral arrangements can be extrapolated and upon which
new formal elements can be built—sometimes, even, the entire architecture of a piece
can come from these models. I would like to show two examples of this approach: an
electroacoustic work by Jonathan Harvey, Mortuos plango, vivos voco, and one of my
own pieces for orchestra, Gondwana. It just so happens that these two works both use
the sound of a bell as their model. Bell sounds belong in the ‘inharmonic’ class of
sounds. Let’s take a moment to discuss this class of instrumental sounds that do not
obey the usual model of the harmonic series.
Two large classes of spectra can be distinguished: harmonic spectra and
inharmonic spectra. The majority of orchestral instruments—wind and strings, for
example—produce basically harmonic spectra. These spectra are sometimes mixed
with a bit of noise from the bow or the breath: this is especially noticeable for the
strings and the flute. On the other hand, most percussion instruments and the piano
have more or less inharmonic spectra. This means that the mathematical relationships
between the components of their sounds (the ‘partials’) do not correspond to simple
integer ratios. We have previously seen examples of notes conforming to the
harmonic series: spectra where the frequency of each component partial is an integer
multiple of the fundamental frequency.8 The structure of any harmonic spectrum
follows this very simple rule. On the other hand, an inharmonic sound possesses
components that do not obey this rule. There is no single precise way of defining how
partials of inharmonic sounds relate to each other because, in contrast to harmonic
sounds, these potential relations are infinite. Nevertheless, there are structural models
of inharmonic sounds that are of special interest to us because they have been selected
by musicians through a slow historical process, a sort of ‘Darwinian’ evolution over
the course of centuries. Bell sounds, for example, have fascinated composers for ages:
Hector Berlioz in the Symphonie Fantastique (1830), Modest Mussorgsky in Boris
Godunov (1868–1870), Claude Debussy, Maurice Ravel, Olivier Messiaen, etc. Figure
6 shows a schematic representation of the spectrum of a bell.
The fundamental is an F# and the harmonics of that fundamental are also present.
A slightly sharp A-natural (upward arrow) is interspersed in the harmonic series—
creating the non-harmonic sound of this bell. The frequency of the A in this example
is equal to the fundamental multiplied by 12/5.
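These relations are easy to check numerically. The sketch below is purely illustrative: it assumes an F#3 fundamental of 185 Hz (the figure gives no octave), builds the harmonic partials as integer multiples, and measures the 12/5 partial in cents. It lands about 16 cents above an equal-tempered A, one octave and a minor third above the fundamental: the ‘slightly sharp A-natural’ of Figure 6.

```python
import math

def harmonic_partials(f0, n):
    """Harmonic spectrum: partial k has frequency k * f0 (integer multiples)."""
    return [k * f0 for k in range(1, n + 1)]

def cents(ratio):
    """Size of a frequency ratio in cents (1200 cents per octave)."""
    return 1200 * math.log2(ratio)

F_SHARP_3 = 185.0               # assumed fundamental: F#3 in equal temperament
partials = harmonic_partials(F_SHARP_3, 8)

# The inharmonic partial of the bell: the fundamental multiplied by 12/5.
bell_partial = F_SHARP_3 * 12 / 5

# Distance of that partial above the fundamental, in cents:
interval = cents(bell_partial / F_SHARP_3)      # ~1515.6 cents

# An octave plus a minor third (F#3 -> A4) is exactly 1500 cents in equal
# temperament, so the bell partial sits ~16 cents sharp of A4: the
# "slightly sharp A-natural" marked with an upward arrow in Figure 6.
deviation_from_A = interval - 1500
print(round(deviation_from_A, 1))   # -> 15.6
```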
This is the spectrum of a real bell, like the ones that ring in church steeples. Its
spectrum is different from that of an orchestral bell (tubular bell). With a sound like
this, though, we must be yet more precise: this is the spectrum of a European bell.
The spectrum of a Japanese bell—those enormous bells that one sees suspended at the
entrance of temples, and which the visitors strike with the help of a suspended

Figure 6 Schematic spectrum of a bell.


beam—would be completely different. The principal characteristic of occidental bells
is the superposed presence of a major and a minor 3rd: a minor 3rd is interposed
within the spectrum that is otherwise relatively regular (harmonic) and based on a
fundamental, called the drone (‘bourdon’). This characteristic sound is consciously
sought after by bell-makers and the choice is certainly not the result of pure chance.
The minor 3rd represents an interesting complication within a harmonic spectrum. It
adds sufficient inharmonicity to render the spectrum richer, more interesting, but not
so much inharmonicity that the sound becomes too complex or too muddied. Very
often in metallic percussion sounds there is this type of harmonic structure, with a
harmonic spectrum modified and made more complex by a strategically chosen
additional frequency.
For practical reasons, tubular bells are used instead of traditional bells in the
orchestra. Unfortunately, though, the sound of tubular bells is nevertheless somewhat
different than the sound of real bells (see Figure 7).
This tubular bell’s sounding pitch—the note that would be written on the score
and which should, in this example be a C5—is not really present. There are, of course,
harmonics of this C: the C an octave higher (C6), the G6 (3rd harmonic), the C7 (4th
harmonic), and finally the 7th harmonic (a slightly lowered Bb). The note C5, which should be
heard, is obtained by subtraction. It is created as a differential sound between the
different harmonics (through the perceptual phenomenon of ‘virtual fundamentals’).
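This ‘subtraction’ can be illustrated with idealized numbers (an assumption; real tubular-bell partials are never this clean): taking C6, G6 and C7 as exact harmonics 2, 3 and 4 of a 523.25 Hz C5, every difference between adjacent partials reproduces the absent fundamental.

```python
C5 = 523.25                       # the written, but physically absent, pitch in Hz
# Partials actually present in the tubular bell, taken here as exact
# harmonics of C5 (an idealization): C6 (2nd), G6 (3rd), C7 (4th).
present = [2 * C5, 3 * C5, 4 * C5]

# Differences between adjacent partials: each equals the missing fundamental,
# which the ear reconstructs as a "virtual fundamental".
differences = [hi - lo for lo, hi in zip(present, present[1:])]
print(differences)    # -> [523.25, 523.25]
```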
Certain inharmonic partials are very clear: a D# (or Eb if one prefers) and a D-three-
quarter-tone-sharp (or slightly lowered E). These two partials create an internal
beating that enriches the spectrum of the tubular bell. Additionally, their relationship
to the (virtual) fundamental forms an interval close to a minor 3rd, and evokes the
sound of a bell. Finally, the very low sound is simply an attack transient that is weak
and resonates only briefly—transients of this kind are common in orchestral
percussion sounds. With this tubular bell, though the composer writes a C, listeners
will, in fact, hear all sorts of things—a D-three-quarter-tone, a C an octave higher
than the written note, etc. If you double the tubular bell with another instrument,
why not double it at the higher octave or even with the minor-major third—the
D-three-quarter-tone-sharp?

Figure 7 Schematic spectrum of a tubular bell (on C5).

Obviously, with real tubular bells, things are not quite so simple; the spectra are not so neat, and it is probable that each set of tubular bells will sound a bit
different. In general, our percussion sounds are rather poorly defined. This is in
strong contrast to the art of percussion developed in a number of non-European
musical cultures. The variability of our percussion makes it difficult to use in a
controlled fashion within orchestral mixtures and all too often relegates its role to
that of sound effects or rhythmic punctuations. As Claude Debussy said, ‘Our
percussion is an art for the uncivilized (un art de sauvage).’
Figure 8a shows another prototype of metallic percussion: a little Japanese bell,
one of the small bells in the form of a bowl that are used in the temples—the
reason they are often called ‘temple bells’. The figure shows the components
detected by the computer in the bell’s spectrum. However, certain partials are
hardly audible. The low sounds probably represent a sort of attack transient—like
the one we saw in the tubular bell spectrum. By using ‘Terhardt’s algorithm’,9 we
will reduce this analysis to only the sounds that are perceptually important
(Figure 8b).

Figure 8 (a) Schematic spectrum of a small Japanese bell. (Transcription of an analysis carried out at IRCAM with the program IANA.) (b) Spectrum of Japanese bell after reduction using Terhardt’s algorithm.
Thus simplified, the spectrum reveals a collection of sounds that comprise a
harmonic spectrum that is a bit warped, a bit distorted: the two Cs are slightly raised,
there is a B-quarter-tone (a false octave of the Cs), a G# raised slightly; and, strangely,
an F¼# is also part of this spectrum. In fact, one very often finds this inharmonic
partial, a formant lying a slightly enlarged 4th (a 4th plus a quarter-tone, or
approximately an augmented 4th) above the fundamental (C to F¼#, in this case), in
instruments from this group of small metallic percussion: small Japanese or Tibetan
bells, crotales, etc.
How, then, can I write for these instruments? Either I must consider the note played
by the instrument as a pure symbol—I wrote a C and too bad if you heard something
else—or I must conceive of this instrumental sound as a specific sonic complex,
distinct from its notation, and try to use it as such. In the latter case, if I want to
integrate these sounds into a musical discourse, I must keep in mind the harmonic
relationships emanating from the instrument’s spectrum. The final result will certainly
be richer and more interesting than if I were simply to use the metallic percussion’s
timbre as a sound effect—an object placed within a context where its only relationship
to the discourse is metaphorical. When dealing with electronics, composers must often
confront sounds that are at least as complex as these bells and the same issue arises. To
integrate electronic sounds into the musical discourse, the composer must know their
precise make-up. The same is true for multiphonic sounds of wind instruments, which
generally possess many non-harmonic components. If one uses any of these sounds
simply for their colour, most of the time nothing more than a simple anecdotal effect
can be obtained. On the other hand, an effort to integrate them into a musical
discourse in a way that takes them as they are—complex sonic objects—and attempts
to compile a ‘grammar of complex sound objects’ can foster their true integration into
a musical discourse. Thus, these sounds will no longer appear as colouristic effects, but
as indispensable events within the totality of the musical discourse.

Jonathan Harvey: Mortuos plango, vivos voco


Jonathan Harvey used a bell as the main formal model for his work Mortuos plango,
vivos voco for eight-track tape (1980)—this work has since become a classic in the
genre. The piece is entirely based on the sound of a bell from Winchester Cathedral in
England (Figure 9).

Figure 9 The bell from Mortuos Plango.


To give a broad overview, this spectrum corresponds well to the theoretical model
of European church bell sounds presented above (Figure 6). It presents a more or less
regular harmonic series based on the fundamental C coloured by an inharmonic
partial (here a D# or Eb), which forms a minor 3rd with the fundamental. However,
this spectrum also contains sounds that are not part of the harmonic series: for
example, the 6th sound, an E¼#, which is a little too high, and which beats against
the 7th sound, an F, which is foreign to the harmonic series on C. According to the
composer, an additional, virtual sound (F3), which does not exist in the computer’s
analysis, is also heard—this is probably perceived as the resultant sound of a group of
high partials (through the perceptual phenomenon of ‘virtual fundamentals’
mentioned earlier). This one analysis (after various operations including filtering,
transposition, modification of intensity envelopes, etc.) will allow the creation of the
entire palette of synthetic sounds used in the piece.
The bell’s spectrum also serves as a formal model. Various pitches, selected from
the analysis, articulate the sections of the piece: each of the eight sections uses one of
these pitches (plus the virtual F) as harmonic pivot (Figure 10).
The length of each section is inversely proportional to its harmonic rank in the bell
spectrum (the rank being the ratio of the partial’s frequency to the fundamental
frequency). More precisely, the length of each section (in seconds) is equal to 200
divided by the ratio of the pivot pitch’s frequency to the frequency of the
fundamental (C3). This gives the following list of durations: 100, 33, 75, 37, 50, 30, 84
and 200.
A child’s voice is also heard in this piece. The boy soprano sings the Latin words
engraved on the bell (mortuos plango, etc.). Harvey introduces a relation between the
pivot pitches and the colours of the vowels heard in each section. When the pivot
pitch is high, vowels that have high formants like ‘ee’ (as in free) or ‘ae’ (as in play)
are used most often and when the pivot pitch is low, like at the end, one will often
hears ‘oo’ (as in you) and ‘o’ (as in hope). Finally rhythmic pulsations—analogous to
internal beating of the bell—animate each section. The speed of these pulsations is
also proportional to the frequencies of the pivot pitches, according to the following
relationship (results in pulses per second, Hz):

Pulsation speed (Hz) = 0.5 × (Frequency of the pivot pitch / Frequency of the fundamental (C3))

Figure 10 Pivot pitches.


The pulsations for the various sections, calculated in this way, are (Hz) 1, 3, 1.33,
2.67, 2, 3.31, 1.19 and 0.5 respectively.
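Both rules can be verified in a few lines. The harmonic ranks used below are reconstructed from the published duration and pulsation lists, not taken from an analysis of the piece itself, so treat them as illustrative values.

```python
# Harmonic ranks of the eight pivot pitches relative to the C3 fundamental
# (frequency ratios). These values are reconstructed from the duration and
# pulsation lists quoted in the text -- an assumption, not Harvey's data.
ranks = [2, 6, 8/3, 16/3, 4, 6.62, 2.38, 1]

# Section length (seconds): 200 divided by the rank (fractional part dropped).
durations = [int(200 / r) for r in ranks]

# Pulsation speed (Hz): the rank multiplied by 0.5.
pulsations = [round(r * 0.5, 2) for r in ranks]

print(durations)   # -> [100, 33, 75, 37, 50, 30, 84, 200]
print(pulsations)  # -> [1.0, 3.0, 1.33, 2.67, 2.0, 3.31, 1.19, 0.5]
```

Both printed lists match the values given in the text, which suggests the two formulas share a single underlying proportion: duration × pulsation = 100 for every section.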
All of this could appear like a somewhat arbitrary theoretical game. However, the
composer’s use of these basic relationships allows him to create a succession of well-
characterized musical instants in which the rhythmic associations (pulsation, pitches,
timbre) function in a clear and ‘natural’ way. It creates a sort of rigorous formal plan,
within a well-defined framework (the universe of the bell), possessing well-known
archetypal correspondences: high/agitated/clear in opposition to low/slow-moving/
sombre. Moreover, the spectral pitches of the bell and the chosen pivot pitches confer
a quite distinctive colour to the music—evoking a certain modality and a harmonic
functionality close to tonal music, in spite of the fact that the music is by no means
‘tonal’.

Tristan Murail: Gondwana, for Orchestra (1980)


Like Mortuos plango, Gondwana explores the domain of inharmonic sounds.
Coincidentally, the two pieces date from the same year, but there is no overt
relationship between them, aside from the use of models of the bells and the
construction of relationships between sonic phenomena.
The bells of Gondwana are imaginary—in contrast to those of Mortuos plango. For
the beginning of the piece, I wanted to make large bell sonorities heard via the
orchestra. Not having a model at my disposal, and not looking, by any means, to
create a pure imitation of a sonic object, I thought of a mathematical technique used
in computer music to produce reasonably convincing bell-like sonorities called
‘frequency modulation’. This technique, developed by John Chowning and
popularized by Yamaha’s DX and TX series synthesizers, relies on the utilization of
two sound generators linked together in a particular manner, one called the ‘carrier’
and the other the ‘modulator’. The frequencies of the carrier and modulator combine
to produce a certain number of resultant sounds according to the formula:
f = c ± i × m, where f is the resultant frequency, c is the carrier, m is the modulator, and
i is the index of modulation (i.e. the intensity of the effect). For example, if the carrier
equals 100 Hz, the modulator is 20 Hz, and the index of modulation varies by integer
steps, from 0 to 2, we will hear the following series of sounds:

index = 0 : c ± 0m = c, so 100 Hz
index = 1 : c ± 1m = 120 Hz (100 + 20) and 80 Hz (100 − 20)
index = 2 : c ± 2m = 140 Hz (100 + 20 × 2) and 60 Hz (100 − 20 × 2)
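This series of resultant sounds can be generated mechanically from the rule f = c ± i·m; the sketch below simply enumerates the sidebands and discards non-positive frequencies.

```python
def fm_sidebands(carrier, modulator, max_index):
    """Resultant frequencies of frequency modulation: f = c ± i*m,
    for integer indices i = 0..max_index. Non-positive results are dropped."""
    freqs = set()
    for i in range(max_index + 1):
        for f in (carrier + i * modulator, carrier - i * modulator):
            if f > 0:
                freqs.add(f)
    return sorted(freqs)

# The example from the text: carrier 100 Hz, modulator 20 Hz, indices 0..2.
print(fm_sidebands(100, 20, 2))   # -> [60, 80, 100, 120, 140]
```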

These calculations are obviously very simple, at least in relation to the resultant
pitches (the corresponding calculation for the intensity of each component is much
more complicated). On the other hand, it is a bit more complex from the musical
point of view, since the calculations are realized in hertz and must be transformed
from frequencies into musical pitches (approximating them to the closest usable
musical note). If working with quarter-tones, it is necessary to look for the quarter-tone closest to the calculated frequencies. Figure 11 shows the first orchestral aggregate of Gondwana.

Figure 11 First aggregate of Gondwana.
The carrier is a G, the modulator is a G#. When the index is equal to 1, one obtains
two resultant sounds: D¼#5 and F#3; when it is equal to two, one obtains G¼#5 and
an F# too low to be heard (which will be suppressed), etc. The aggregates in Figure 12
are constructed with other carriers—A, B, D, F#, successively, while the modulator
stays fixed on G#. This gives us the series of aggregates shown in Figure 12b.
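As an illustrative reconstruction (the octaves of the carrier and modulator are my assumption, since the text names only the note classes): taking G4 ≈ 392 Hz as carrier and G#3 ≈ 207.65 Hz as modulator, rounding the resultant frequencies to the nearest quarter-tone yields D¼#5 and F#3 for index 1; for index 2 the sum lands on G¼#5 while the difference tone folds to roughly 23 Hz, an F# too low to be heard.

```python
import math

A4 = 440.0

def nearest_quarter_tone(freq):
    """Round a frequency to the nearest step of a 24-notes-per-octave
    quarter-tone grid anchored on A4 = 440 Hz."""
    steps = round(24 * math.log2(freq / A4))
    return A4 * 2 ** (steps / 24)

# Assumed octaves for the first aggregate (the text gives only note names):
carrier = 392.00      # G4
modulator = 207.65    # G#3

for i in (1, 2):
    summation = carrier + i * modulator
    difference = abs(carrier - i * modulator)   # negative sidebands fold back
    print(i, round(nearest_quarter_tone(summation), 1),
          round(nearest_quarter_tone(difference), 1))
# i = 1 -> ~604.5 Hz (D quarter-sharp 5) and 185.0 Hz (F#3)
# i = 2 -> ~807.0 Hz (G quarter-sharp 5) and ~23.1 Hz (an F# below audibility)
```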
This progression is organized in order of increasing harmonicity. In effect, a direct
correspondence exists between the more or less consonant or dissonant character of
the interval between the carrier and the modulator and the more or less harmonic or
inharmonic result of the modulation. Thus, the first aggregate, based on the
dissonant interval G#–G, is very inharmonic. Then, as the intervals formed by the
carrier and modulator become increasingly consonant, the orchestral aggregates
progress towards harmonicity. The last aggregate of this section—towards which the
entire progression is oriented—does not in fact correspond to a frequency
modulation spectrum, but to an incomplete double harmonic spectrum, based on
the last two sounds of the modulator-carrier pair (G#–F#), each transposed one
octave lower (see Figure 13).
All of these aggregates seem quite complex to the eye, but to the ear they are less
complex than one might imagine. In effect, whether they come from the results of a
frequency modulation or a harmonic series, they share the ability to create a certain
degree of fusion among their components. This fusion is due to the very precise
frequency relationships that these techniques generate, as well as the interplay of intensities and timbres.

Figure 12 (a) Progression of the carriers. (b) Progression of frequency modulation aggregates.
Figure 13 Double harmonic spectrum from the end of the A section.

The amplitudes of the sounds resulting from a frequency modulation created by the computer are highly variable and are a function of the index of modulation. They can be precisely calculated with Bessel functions, but this is rather complicated. In composing Gondwana, I did not attempt to model frequency modulation with that
level of fidelity. I simply considered that, roughly speaking, the highest indices would
give the weakest intensities. To this general principle, the role of global dynamic
profiles must be added. It is well known that percussive envelopes enhance the effect
of fusion among the components of inharmonic spectra. The dynamic profile of a bell
sound is characterized not only by its percussive attack, but also by the different
evolutions of each of its partials. The high components disappear first, one after the
other, leaving in the end only a single, simple sound: the ‘drone’. This
model inspired the dynamic profiles for the initial sonic complexes of Gondwana. The
attacks of the brass and woodwinds are reinforced by the percussion, and the high
components of the chords (the sounds with the highest modulation index values)
extinguish rapidly, leaving a G# resonating longer. This is the modulator, which
assumes the role of the drone in the piece.
We saw that a process leading to increasing harmonicity marked the harmonies of
the first section of Gondwana. A process of transformation also affects the dynamic
envelopes themselves. The point of departure is, of course, the profile of the bell,
which is associated with an inharmonic spectrum. At the point of arrival, the
orchestral aggregate has a profile similar to the dynamic envelope of a brass sound
and is associated with a semi-harmonic spectrum. The envelope of the bell has a
brutal attack, followed by an exponential extinction; as for the brass envelope, it has a
less abrupt attack transient, during which the harmonics enter progressively from
lowest to highest, forming the slightly delayed peak of intensity and timbre, which is
characteristic of a ‘brassy’ attack. This attack is followed by a phase of sustain that is
more or less stable. Figure 14 shows how the transformation from one profile to the
other operates—with a few of the intermediary forms whose attacks are being
progressively softened while a sustain phase, which was absent in the initial bell
envelope, gradually begins to appear.
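One minimal way to sketch this transformation (my illustration, not the procedure used in the piece): model the bell as an instantaneous attack with exponential decay, the brass as a ramped attack followed by sustain, and crossfade between the two profiles.

```python
import math

def bell_envelope(t, decay=3.0):
    """Percussive profile: immediate attack, exponential extinction."""
    return math.exp(-decay * t)

def brass_envelope(t, attack=0.15):
    """Softer profile: progressive attack transient, then stable sustain."""
    return min(t / attack, 1.0)

def morph_envelope(t, x):
    """Crossfade between the two profiles; x = 0 is the bell, x = 1 the brass.
    Intermediate x values soften the attack and let a sustain phase appear."""
    return (1 - x) * bell_envelope(t) + x * brass_envelope(t)

# At the very start of the sound, the bell is already at full level,
# while the brass is still rising:
print(round(morph_envelope(0.0, 0.0), 2))   # -> 1.0  (bell: instant attack)
print(round(morph_envelope(0.0, 1.0), 2))   # -> 0.0  (brass: attack begins at zero)
print(round(morph_envelope(1.0, 1.0), 2))   # -> 1.0  (brass: stable sustain)
```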

Orchestral Realization of the First Section


Figure 14 Evolution of dynamic profiles from a ‘bell-type’ envelope to a ‘brass-type’ envelope.

I have often used the term ‘aggregate’, rather than ‘chord’; additionally, I have often spoken of ‘fusion’. In effect, what I sought to create here were large synthetic sounds built from very specific orchestral combinations: a sort of ‘harmony-timbre’, realized with ‘instrumental synthesis’ techniques. It is clear that the choice of frequency relationships and of intensities is crucial. However, the orchestration
itself should also be carefully realized. Theoretically, to construct a spectrum—
whether harmonic or inharmonic—you add together pure (sinusoidal) tones: their
sum creates the perception of timbre. Of course, sinusoidal sounds are only
available with electronic techniques; so when instrumental sounds are used instead,
one also adds the spectral components of each of the instruments to the theoretical
(model) spectrum. The obtained aggregate will thus be much more complex than
any theoretical model. Obviously, the complexity of this final aggregate depends on
the chosen instrumental timbres. For example, if I had orchestrated my aggregates
with bassoons, oboes, brass with straight mutes, strings, etc., I would have added so
many additional harmonics that the final result would have been overly complex and
muddied. The harmonic structure of the spectra calculated by frequency
modulation risked being drowned out by the multitude of extra partials emitted
by the orchestra. Therefore, I used instruments with spectra that were somewhat
less rich, whenever possible.
The heart of each chord, which corresponds to the sounds with the lowest indices
of modulation and thus the strongest amplitudes, is played by the brass. The sound of
the brass, without mutes, is somewhat concentrated on the first harmonics, and thus
stays rather clear. The sounds corresponding to higher index values are higher in
pitch but also softer and so are logically played by the woodwinds. The strings are not
used in these chords at all, since their spectra are too rich and slightly noisy; using
them would risk blurring the effect of orchestral re-synthesis. The oboes are used, but
generally play in the high register, where their spectrum is simpler. Some percussion
(tubular bells, vibraphone) gives sharpness to the attack transients of the bell-like
chords: as the progression of aggregates changes from the ‘bell’ model to the ‘brass’
model, with its softer attack, the percussion sounds will become desynchronized with
the attack and finally disappear totally. Finally, to create the ‘drone’ of the bell, I
needed a sound as pure as possible: I finally chose the tuba because it possesses a
timbre that is very centred on the fundamental in this register (G#3). (This is the
reason that orchestration treatises will often describe the tuba as ‘voluminous’, or
claim that its timbre is ‘large’. Contrastingly, the oboe or the violin, for example, are
often described as ‘sharp’ or ‘intense’, signifying the fact that their harmonics are
quite widely dispersed over their entire spectrum.)
Figure 15 Frequency modulation based ‘wave-like’ contours—section F of Gondwana.

The technique of frequency modulation, that was used to build block structures
(large harmony–timbre aggregates) in this first section of Gondwana, is also used
to create various other forms and contours in other passages of the piece. For
example, in section F, pitches created through frequency modulation will produce
sets of harmonic-melodic structures, sorts of ‘fan-shaped’ contours. A central
frequency, C¼#4, a remnant of the preceding process, becomes the carrier. The
modulator, very small at first, increases progressively as the index of modulation
increases. Instead of sounding all together, the pairs of resultant sounds (each
‘pair’ consists of an additional sound and a differential sound) enter one after the
other. This creates this effect of ‘fanning’ around a central frequency, like waves
breaking on the shore. This effect is similar to the one produced by progressively
raising the intensity of the modulator while synthesizing frequency modulated
sounds in an electronic music studio.

Contemporary Music Review 211

The first waves present a very small frequency interval (owing to the small modulator and low index), then a process
begins to manifest itself. This process grows and spreads until the contours amply
fill out the full tessitura of the orchestra. These wave-like contours are played by
the oboes, English horns and bassoons: the idea was to highlight these contours—
hence the choice of instruments with very rich timbres that stand out from the
resonance, played by the brass and strings, like the effect of a piano’s sustain
pedal applied to the orchestra.
Figure 15 shows the first four and last two ‘waves’ of frequency modulation in
section F. There are very narrow intervals at the beginning—almost like glissandi
around the carrier—and large sweeps at the end. In the final orchestration, the
approximation was often made to the nearest semitone because the passages had to
be played so rapidly. In the last two waves, the sounds that are too low have been
eliminated. You will notice that the lower line of these last two waves starts off
descending, like in the other waves, but then rises up again. This is called a foldover
effect: for high values of the modulation index, the resultant differential (c–i*m) becomes negative (because i*m > c). A 'negative' frequency obviously cannot really
exist—at least not in the universe we know. Therefore, we can simply ignore the
‘minus’ sign (in reality the ‘negative’ sign of the frequency is manifested as an
inversion of phase, which does not concern us here). So the differential frequencies
start to increase again once i*m becomes greater than c and end up interspersed
within the additional sounds. This phenomenon considerably enriches the harmonic
or timbral texture, and is often sought after in synthesis by frequency modulation.
The aggregates in the first section of Gondwana contain a very strong foldover effect.
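The foldover arithmetic can be sketched in a few lines of code. This is an illustrative reconstruction rather than Murail's working method, and the carrier, modulator and index values are invented for the example:

```python
def fm_partials(carrier, modulator, max_index):
    """Sum and difference frequencies c + i*m and c - i*m for indices
    0..max_index. Differences that go negative fold over: the minus sign
    (a phase inversion) is dropped and the absolute value is kept."""
    partials = set()
    for i in range(max_index + 1):
        partials.add(carrier + i * modulator)
        partials.add(abs(carrier - i * modulator))  # foldover once i*m > c
    return sorted(partials)

# Hypothetical values: carrier 440 Hz, modulator 170 Hz, index up to 5.
spectrum = fm_partials(440.0, 170.0, 5)
# The folded differentials (70, 240 and 410 Hz) end up interspersed among
# the lower sums, enriching the texture as described above.
```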
Let’s continue our study of the concept of models—in particular, the notion of
instrumental timbre as a model—by examining another piece: Désintégrations. This
piece both allows us to study various processes and to begin speaking about the role of
the computer in musical composition.

Désintégrations (1982–1983) for Ensemble and Tape


Section I
The techniques used in this piece are mostly quite clear. Its compositional elements
are easily perceived and isolated—in contrast to my more recent pieces, where the
structures are more interwoven and thus more difficult to analyse. This is why I often
use Désintégrations to present some of my ideas and techniques.
One of the fundamental ideas of Désintégrations was that an excellent fusion
between instruments and the electronic sounds could be achieved. How could these
two sound worlds—so contrasting in their superficial appearances—be made to
communicate? The solution was to use the same procedures to generate both the
instrumental harmonies and the synthetic timbres. The instrumental timbres used as
models, or at least as points of departure, inform the generation, within a single
framework, of both harmonic structures and electronic spectra.
The first section of the piece uses a piano sound (C1), whose analysis we have
already seen. This time, let’s look at just the first 50 harmonics (Figure 16). As before,
the numbers on the left in each column indicate the harmonic ranks, and the
numbers on the right give the relative intensity of each harmonic. The loudest
harmonic arbitrarily receives the value 1. These numbers represent relative linear
intensities, not decibels (dB). The groups of partials comprising the formants
(loudest zones of resonance) are in bold and enclosed in rectangles. These groups are
not the loudest in an absolute sense, but they represent peaks of intensity relative to
neighbouring partials.

Figure 16 Formants of C1 on the piano.

For example, the partials in the group 35, 36, 37, 38 have
amplitudes equal to 0.168, 0.1121, 0.1963, 0.1002 respectively, which are much louder
than the preceding group, 31–34 (intensities between 0.007 and 0.0819), and louder
than the following group, 39–41 (intensities between 0.0132 and 0.0435). This
ensemble of formant groups defines a very characteristic spectral structure—that is
why it interested me.
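The idea of a formant as a relative peak can be made concrete with a small sketch. The amplitude list below is a toy example, not the piano data of Figure 16, and the windowed comparison is only one plausible way to formalise 'louder than the neighbouring partials':

```python
def formant_peaks(amplitudes, window=3):
    """Return the harmonic ranks that are local intensity maxima relative
    to their neighbours within `window` positions on either side. A crude
    stand-in for the formant groups marked in Figure 16."""
    peaks = []
    for i in range(len(amplitudes)):
        left = amplitudes[max(0, i - window):i]
        right = amplitudes[i + 1:i + 1 + window]
        if all(amplitudes[i] > a for a in left + right):
            peaks.append(i + 1)  # harmonic ranks count from 1
    return peaks

# Toy amplitude list; ranks 3 and 7 stand out against their neighbours.
ranks = formant_peaks([0.2, 0.5, 1.0, 0.4, 0.1, 0.3, 0.6, 0.2])
```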
Let’s translate this list of numbers into musical notation; we’ll keep only the
formants defined above and approximate to the nearest quarter-tone. For this
example, the fundamental has been transposed to A#0, one of the spectra actually
used in this piece (Figure 17).
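The quarter-tone approximation itself is mechanical: round the pitch to the nearest twenty-fourth of an octave. A minimal sketch; the reference A4 = 440 Hz and the sample frequency are assumptions for illustration, not values from the text:

```python
import math

def nearest_quarter_tone(freq, a4=440.0):
    """Snap a frequency to the nearest quarter-tone (24 equal steps per
    octave). Returns the approximated frequency and the signed number of
    quarter-tone steps from A4."""
    steps = round(24 * math.log2(freq / a4))
    return a4 * 2 ** (steps / 24), steps

# The 7th harmonic of a hypothetical 29.14 Hz fundamental, about 204 Hz,
# snaps to roughly 201.7 Hz, 27 quarter-tone steps below A4.
approx, steps = nearest_quarter_tone(29.14 * 7)
```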

We are now going to create aggregates by intuitively selecting certain pitches from
this spectrum (which was already reduced to its principal formants). For example, the
first aggregate contains the harmonics 7, 11, 13, 20, 29 and 36 (Figure 18).
Here again, I prefer to speak of an aggregate rather than a chord, because these
combinations of sounds serve equally well in the synthesis of electronic sonorities as
they do in writing instrumental parts. Since the electronic synthesis adds together
very pure, quasi-sinusoidal sounds, the partials tend to fuse strongly. Thus, the
resultant aggregate does not really sound like a chord, but like a single perceptual
object, a timbre. On the other hand, the instrumental orchestration of this object
creates a sonority more like what is usually called a ‘harmony’, owing to the
individual richness of each of the instruments used (the presence of harmonics in the
instrumental sound, the complex envelope of the sound, the vibrato, etc.). The global
result is nevertheless a bit ambiguous, since the electronic sounds and instrumental
harmonies are heard simultaneously. Once again, the most accurate descriptor may
be the hybrid term ‘harmony-timbre’.

Figure 17 Spectrum transposed to A#0.

Figure 18 First aggregate.



Figure 19 Succession of aggregates on the fundamentals A# and C#.

Section A of Désintégrations presents a succession of aggregates developed from this model. In fact, two series of aggregates alternate, according to a process I will
describe later. The first series is based on the fundamental A#0 and the other is based
on C#2. While this alternation certainly introduces some variety in the aggregates
used, it also clarifies the relationship between the harmonic series of A# and C#. As
with the bell spectrum we saw earlier, the highlighted relation is that of a minor 3rd,
or more precisely a minor 10th.
These two series of chords are organized according to two temporal curves: two
superimposed rallentandi. The addition of these two curves produces a global
slowing, containing local irregularities. The two series of aggregates alternate more or
less irregularly, while progressively being enriched through the addition of lower
partials and growing closer together in time—until a collision occurs. From the
moment of collision onwards, the aggregates based on A#0 and those based on C#2
occur simultaneously—creating a sonority somewhat similar to a bell.
To organize these rallentandi, I used a graphic representation of the process,
reproduced in Figure 20. Time is on the abscissa and the durations are on the
ordinate axis. By duration, I mean the interval of time between two aggregates of the
same series. The durations are measured in seconds—not in rhythmic values. The
upper curve corresponds to the series built on the fundamental A# and the aggregates
are indicated by Latin letters; the Greek letters and the lower curve correspond to the
series built on the fundamental C#. The series built on C# begins with shorter
durations than the A# series, then progressively the temporal intervals between the
sounds of both series get longer. At letter k the two curves join—the aggregates on
both A# and C# sound simultaneously and the duration of the event k is 14 seconds
for both aggregates. The only part of this process not shown on the graph is the very first event, an aggregate built on A# which occurs 7 seconds before the first C#-based aggregate (letter α)—i.e. 7 seconds before time 0 of the graph. The rest of the graph should be read in the following manner: the event α lasts 3 seconds and occurs at the instant 0; it is followed by an event β that lasts 3.2 seconds and that begins at instant 3 seconds, then by an event a that occurs at the instant 3.5 seconds—the Latin letter indicating that this event belongs to the other spectral series—etc.

Figure 20 Rallentando curves—Désintégrations I.
To create the feeling of progressive slowing, I used curves, not straight lines. It
would have been simpler to make straight lines between the points of departure and
arrival; however, the resulting progression would have been linear, whereas observation of instrumental reality shows that instrumentalists, when asked to play a rallentando, intuitively perform a logarithmic slowing of event durations—not a linear progression. A linear progression (of the 'chromatic durations' variety)
would not create a ‘natural’ impression; rather, it would create a constrained effect,
which sounds awkward to the ear. Here, we are jumping ahead into a new subject:
algorithm and intuition. The way I’m using the word ‘intuition’ amounts to a list of
intentions: ‘My first object will not last for very long; my last object will last 14
seconds; the process will be organized as a progressive slowing which should last
between 1 and 2 minutes.’ ‘Intuition’ would also include observing how musicians
and listeners react to this series of events organized in time—this is a sort of
experimenting with the musical ‘intuitions’ of those who will be participants in the
musical act (the listener and the performer). The algorithm itself is simply a series of
operations—logical or arithmetical—which allow a result, based upon a set of input
data (parameters), to be calculated. In this specific case, the algorithm allows me to
create the optimal curve for this rallentando process and also to calculate the
intermediate steps of this process. To define an algorithm, one must create a model of
the phenomenon one seeks to recreate: in this case, the manner in which a musician
performs a rallentando. This model allows a curve to be calculated—a mathematical
function, whose starting parameters are the intuitive estimation of the durations at
the outset and the arrival of the process (and possibly also a timeframe, which will
help the process fit within the global form).
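A toy version of such an algorithm can interpolate durations geometrically (a constant ratio between successive durations) rather than linearly, which gives the 'logarithmic' feel of a performed rallentando. The endpoint values here are assumptions for illustration, not the figures used in Désintégrations:

```python
def rallentando(first, last, n):
    """n durations slowing from `first` to `last` seconds, with a constant
    ratio between successive durations (geometric interpolation) instead
    of a constant increment (linear interpolation)."""
    ratio = (last / first) ** (1.0 / (n - 1))
    return [first * ratio ** k for k in range(n)]

# Ten events slowing from 1 s to 14 s; each duration is about 1.34 times
# the previous one, rather than growing by a fixed amount.
durations = rallentando(1.0, 14.0, 10)
```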
I started thinking about time in terms of process, durations and functions before I
had access to computational techniques—techniques that greatly facilitate the algorithmic calculations such thinking often requires. Even in Désintégrations—where the
computer was used as a sound synthesizer, a means of creating some of the formal
structures and carrying out certain spectral or temporal calculations—I turned to
empirical and graphic solutions. ‘Computer-aided composition’ programs did not yet
exist; thus, it was cumbersome to address these musical problems with environments
that were not very ‘user-friendly’. Moreover, my computer skills were still
rudimentary. This led to the use of graphs like the example we just saw. In that
specific case, I had to proceed by successive approximations, through ‘trial and error’,
modifying the initial parameters, etc. The constraints I had set myself were numerous
and sometimes contradictory: how to make two curves converge in a harmonious
manner, while still creating two convincing continuous rallentandi, and making all of
this occur in a set period of time. The computer would have been very helpful, if I
could have used it: computers can very rapidly calculate and simulate various
situations. It’s easy to start over and try, try again until a satisfactory result has been
found. The ‘algorithmic’ techniques of computer music need not necessarily be used
to create a predestined, automatically calculated, result. On the contrary, they can
allow the exploration of a larger field of possibilities; thereby heightening the freedom
of the composer—not limiting it.
Let’s return to the two rallentando curves: what makes them interesting is their
superposition. Instead of a simple rallentando, the alternation of points situated on each
curve creates an unexpected and unstable rhythmic progression; all the while conserving
the global impression of slowing down, since the durations (on average) are increasingly
long. The process is ‘directed’ (listeners perceive it as ‘going towards something’), but at
the same time this process still produces unpredictable rhythmic configurations. This is
a very simple example of the interplay of predictability and unpredictability: my feeling
is that this interplay is one of the central issues in musical composition. On the one
hand, a work needs to be part of a sufficiently predictable universe that the listener can
perceive continuity and coherence in the musical discourse; however, at the same time,
if the discourse is too predictable the work rapidly becomes uninteresting. Structural
predictability needs to be contradicted constantly by some type of unpredictability
within the discourse. However, it is also essential that this surprise, this unexpected
aspect, integrates logically and in a coherent fashion, a posteriori, over the course of the
form. The shock, the surprise, even the incongruous, should become explicable, should
reintegrate itself as a necessary element of the discourse (in hindsight). If this does not
happen, the unexpected becomes simply arbitrary and the effect of surprise will be
dulled on subsequent hearings. A totally unpredictable discourse does not hold a
listener’s attention any better than a totally predictable discourse. It is ironic that
extreme randomness yields the same sensation of total unpredictability for a listener as
does the total organization of the discourse—like the principles experimented with in
‘algorithmic’ music or in ‘integral serial’ music. It turns out that perpetual surprise is no
longer surprising, and unpredictability can become too predictable to be interesting.
The preceding example illustrates the way I conceive of temporal control. I do not
work with durations by combining small elements, pulsations or rhythmic
microstructures; on the contrary, I take a global point of view, conceiving the totality
of a temporal segment and, through successive attempts, trying to determine the
details of how the durations must evolve. I proceed in basically this same way for all of
the dimensions of the musical discourse. The first section of Désintégrations, in fact,
unites many separate processes involving the durations, the harmony and the timbre.
These processes are in a strict relationship with one another. The harmonic and
timbral processes evolve simply from harmonicity at the start of the piece to
inharmonicity at the end of section I. As we saw earlier, the aggregates at the start of
this process are fragments of a harmonic series. Their lack of lower components,
however, makes them a little less stable, a little more ‘suspended’ than complete
harmonic spectra would have been. Over the course of the process, the lower portion
of the spectra is more fully explored. Once the rhythmic collision between the two
rallentandi occurs, the two aggregate series continue in superposition. There are two
simultaneous fundamentals, yielding a resultant aggregate that is not really harmonic
anymore. Moreover, for the last three of these aggregates, some harmonics are
progressively transposed one octave lower—reinforcing the impression of inharmo-
nicity. (One way to measure the harmonicity of an aggregate is to consider its ‘virtual
fundamental’. The lower this ‘virtual fundamental’ is, the more inharmonic the
aggregate. Moving a harmonic one octave lower often amounts to pushing the virtual
fundamental one octave lower, thus rendering the aggregate more inharmonic.) (Figure 21).
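The 'virtual fundamental' idea can be made concrete with a crude sketch: treat it as the greatest common divisor of the component frequencies. Integer frequencies keep the example exact; real spectra would need an approximate GCD with a tolerance. The 100 Hz aggregate below is invented for illustration:

```python
from functools import reduce
from math import gcd

def virtual_fundamental(freqs):
    """Greatest common divisor of integer frequencies: the highest
    frequency of which every component is an exact harmonic."""
    return reduce(gcd, freqs)

# Harmonics 3, 5 and 7 of a 100 Hz fundamental:
before = virtual_fundamental([300, 500, 700])   # 100 Hz
# Transpose the 300 Hz component down an octave:
after = virtual_fundamental([150, 500, 700])    # 50 Hz
# The virtual fundamental drops an octave: the aggregate is more inharmonic.
```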
This procedure of harmonic transformation was widely employed by Gérard Grisey
(see the first section of Partiels for ensemble, 1975). In the last section of
Désintégrations, it’s the entire spectrum that slides down, octave after octave, until
reaching, for the final sound, a virtual fundamental G–3 with a spectrum possessing
only one out of every 10 harmonics (5, 15, 25, 35, etc.). The sonic effect produced is
strangely similar to the sound of a tam-tam (Figure 22).
The orchestration reinforces the effect of a ‘drift towards inharmonicity’ present in
section I. At the start of the section, I use timbres that respect the harmonicity of the
aggregates as much as possible. In other words, I use relatively transparent
instrumental timbres: flutes and clarinets. Progressively, the other instruments appear
in a very precise order. First to enter is a muted horn, whose timbre is very filtered and
poor in harmonics. Next, the string instruments enter. They begin by playing
harmonics or sul tasto (another way to filter the spectrum). Then, a few measures later,
the strings move to ordinario playing.

Figure 21 The three last aggregates of section I. Note: The aggregates result from the superposition of spectra built on A# and C#. The harmonics transposed down by one or more octaves are boxed.

When the spectra of the aggregates have become still richer, it's the oboe's turn to enter. The oboe plays in the high register (C¼#6) at first, because, while the low register of the oboe has a very rich spectrum, its high
register (in the region of C6) has a much simpler spectrum, very centred on the
fundamental—resembling quite a bit, in fact, the clarinet’s or flute’s spectrum.10
Once the two spectral series collide, creating really rich spectra, the other
instruments enter (bassoon, brass). Obviously, these instruments, with very rich
spectra, add their own harmonics to the theoretical aggregates and could confuse the
sonic result.

Figure 22 Final sound of Désintégrations (frequency components contained within the electronic sound).

However, at this point in the process the added richness only reinforces
the spectral complexity that has been attained. Even better, I can draw on the added
spectral richness. Let’s take the example of the aggregate in bar 34, the penultimate
aggregate of this harmonic process (Figure 23). The horns and the double reeds play
five of the aggregate’s central pitches. They are playing forte with accents so they add
their own harmonics powerfully. The tape part takes up the spectra of these five
instruments and progressively unfurls their additional harmonics, all the way up to
the 23rd partial. It is almost as if we were applying a gain filter tuned to higher and
higher frequencies in the instrumental sounds. This process makes clearly audible the
harmonics of harmonics.
At the end of this sonic spiral, the very high harmonics form a very brilliant
‘cluster’. The strings then take up certain pitches of the cluster—in regular sounds or
in harmonics—and a high cymbal joins the strings, with the hope that the frequency
band of the cymbal will be in the same region as that of the synthetic sounds.
Unfortunately, this is not always the case. The imprecise definitions of percussion
instruments are a recurring problem. In the score, when requesting a high or low
cymbal, a high or low tam-tam, it’s never clear just what kind of sound will be
produced. If your only concern is a colouristic or emotional effect, this is not a big
problem. However, if one is looking for a more precise effect, like the one described
here (an effect of integration between instrumental and electronic sounds), the
problem becomes crucial. Just as a microphone is defined by its frequency response
curve, it would be useful for a cymbal to be delivered with its spectrogram and
defined by a frequency band, rather than the impossibly vague descriptions ‘high’,
‘medium’, ‘low’, etc.

Role of the Tape


The tape and the instruments carry out different types of dialogues. In the preceding
example, the tape took up and developed the pitches originating from the
instrumental spectra—enriching and modifying the instrumental sounds.

Figure 23 Transformations of the aggregate, bar 34.

Most often,
I am looking for an effect of intimate complementarity, of fusion, sometimes even of
ambiguity between the electronic and the acoustic sounds. The percussive aggregates
at the end of section I sound like powerful bells: within these sounds, it is not possible
to distinguish the contributions of the instruments from those of the electronics. In
section II, the tape amplifies the instrumental ensemble, making it sound almost like
an orchestra. In particular, this is due to a multitude of superposed trills with variable
speeds, each trill built on an instrumental partial. This idea of applying a sort of virtual
processing to the instruments is also found in section III, where the English horn solo
is doubled by changing imaginary formants of its own spectrum—creating the effect of
virtual filtering. The electronics can also fill in instrumental gaps. At the beginning of
section III, the piano and percussion play clouds of very high percussive sounds, while
the tape completes these clouds with the non-tempered pitches that those instruments
cannot play. At other moments, the tape simply clarifies the instrumental discourse,
particularly with regard to rhythm. In section IV, the instruments play a very rapid
series of low chords; normally, in this register, the attacks cannot be very clear, and the
rhythms—a succession of different rallentandi—would hardly be audible. However, to
make this effect clear, the tape adds attack transients to the instrumental sounds. The
tape can also perform other utilitarian roles, like helping obtain precise micro-intervals. This can be done by giving a reference to the instrumentalists, when the
electronics give the instrumental pitches, or by creating fusion between the
instruments and the tape when they double each other’s notes—if the deviation is
not too large, even when their intonation is not exactly the same, the resultant
complex of pitches will be essentially correct.
Perfect synchrony between the electronic sounds and the instrumental ensemble is
indispensable if the tape is to play these different roles. When Désintégrations was
premiered in 1983, there was no technology that could easily play back synthetic sounds from a computer in real time. We therefore had to store the sounds
on a tape, which runs without interruption from one end of the piece to the other.
This poses the problem of how to achieve synchronization. Coordination between the
tape and the instruments is achieved through a ‘click track’: ‘clicks’ are placed on one
track of the multichannel tape and the conductor hears these clicks through
headphones. The clicks accurately reproduce the measures and the beats of the score.
This technique allows near-perfect synchronization, but it takes a lot of interpretational liberty away from the conductor: he absolutely cannot change the tempi.11

Frequency Shifting in Section III


We saw the use of inharmonic spectra derived from the analysis of bell sounds. There
are many other types of inharmonic spectra, originating from the analyses of acoustic
sounds or from studio techniques for sound processing. One of the oldest studio
techniques is ring modulation. Stockhausen, for example, used this technique in
Mixtur and then again in Mantra. The orchestra in Mixtur and the two pianos in
Mantra are transformed by ring modulators. While Stockhausen is clearly aware of
the effects of harmonicity and inharmonicity caused by diverse intervals between
carrier frequencies of the ring modulators and the notes played on the piano, he does
not precisely calculate the resultant pitches. More importantly, he does not take these
into account in his (otherwise quite elaborate) system of pitches—the theoretical
pitches, thus, contradict the sounds used in the piece.
Let’s briefly review the principle of ring modulation: two sound sources enter a
modulator—let’s call their respective frequencies ‘a’ and ‘b’. The resultant sound is
the addition and subtraction of those frequencies: a + b and a – b. If ‘a’ and ‘b’ are
pure frequencies, these formulas would be sufficient to describe fully the resultant
sonority. In reality, though, ring-modulators usually have an instrumental source for
‘a’ and an electronic, sinusoidal sound for ‘b’ (as is the case in the Stockhausen works
mentioned above). In this configuration, the first input to the modulator is
connected to a more or less complex spectrum captured by a microphone, and all of
this sound’s components are modulated by the sinusoidal sound ‘b’ in the second
input. If the instrument has three significant harmonics, the resultant will contain the
following frequencies: a + b, 2a + b, 3a + b, and a – b, 2a – b, 3a – b. As this example
shows, it is easy to calculate the resultant sounds of a ring modulation as long as the
modulated sounds are not too complex. The paradigm of ring modulation allows the
creation of new types of harmonic relationships and can serve as a model—this time
technological—for the creation of new spectra. The exploration of this model can
take place in the realm of mixed music, or within purely instrumental music. In
Partiels, Gérard Grisey calculates the virtual ring modulations of two instrumental
lines played by flutes or clarinets, and creates secondary lines from the results, which
are orchestrated in the strings—these lines form a strange sort of counterpoint with
the principal lines. Désintégrations uses the model of ring modulation in several
places (e.g. section II, section IX). Both the instruments and the tape play the
calculated notes; however, in these cases, it is actually a simulation of modulation, not
real-time or pre-recorded electronic processing. A variation on ring modulation is
frequency shifting. With this technique, a frequency is added to or subtracted from a
complex of sounds. This produces a linear transposition in terms of frequencies and
thus creates a non-linear transposition in terms of intervals.
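Both operations reduce to simple arithmetic on lists of frequencies. A minimal sketch, with an invented three-harmonic source (none of these values come from the pieces discussed):

```python
def ring_mod(partials, b):
    """Ring modulation of a complex source by a sine at b Hz: every
    partial a yields a + b and |a - b|; the original partials vanish."""
    result = []
    for a in partials:
        result += [a + b, abs(a - b)]
    return sorted(result)

def freq_shift(partials, shift):
    """Frequency shifting adds the same number of hertz to each partial:
    linear in frequency, hence non-linear in musical intervals."""
    return [a + shift for a in partials]

source = [200, 400, 600]          # three harmonics of a 200 Hz sound
rm = ring_mod(source, 130)        # [70, 270, 330, 470, 530, 730]
fs = freq_shift(source, 130)      # [330, 530, 730]
# In `fs` the 200 Hz spacing survives, but the octave 200:400 has become
# the non-octave ratio 330:530; the spectrum is now inharmonic.
```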

Figure 24 Example of frequency shifting.


Figure 24 is taken from the end of Les Courants de l’Espace for ondes Martenot with
electronic processing and orchestra. A ring modulator modifies the sound of the
ondes; the orchestra plays sonic complexes using the pitches that result from the
modulation—or, in this example, from frequency shifting.
In the above example, an aggregate F#–C (a bit lowered)–E–C (a bit lowered)
slides upwards 208 Hz, resulting in a completely different chord: G–Bb–C¼#–F¼#.
The distance in hertz between the pitches stays the same, but the intervals are all changed. As with ring modulation, there have long been electronic devices (frequency shifters) that can perform this effect in real time: causing instrumental sounds to
be transposed in the frequency domain. Since this effect is applied equally to the
fundamentals of the sounds and to their harmonics, the harmonics themselves
become distorted and the instrumental spectra become inharmonic. A piano sound
treated this way takes on a ‘gamelan’-like sonority, somewhat like the sound of a
prepared piano (and for much the same reason: the preparation of the piano often
makes its spectrum more inharmonic). On the other hand, a very rich instrumental
sound (e.g. a chord in the strings) produces a resultant which is very ‘noisy’ and
difficult to control.
In section III of Désintégrations, frequency shifting is applied to a five-note
aggregate, a fragment of the harmonic series (harmonics 3, 5, 7, 9, and 11 of F1;
Figure 25).
This aggregate is the consequence, the consolidation, of the resonances produced
by the clouds of high percussion that open the section. The five sounds are exchanged
between the woodwinds and muted brass, like a distant carillon that has been slowed
down. Once the initial texture is established, the aggregate progressively drifts in
frequency towards the low register, becoming gradually more inharmonic. Since the
shifting of frequencies is downward towards the low register, the intervals enlarge
progressively; when viewed in terms of notes, lower notes descend by greater intervals
than higher notes which have been shifted by the same frequency. The tinkling of
small bells from the beginning of the section continues and undergoes the effects of
the frequency shift. The ‘carillon’ speeds up until it reaches the point where an
English horn highlights certain pitches—forming short melodic phrases. The level of
agitation increases, accompanying ever-stronger frequency shifts. The melodic
phrases accelerate and accumulate more and more elements until a sort of 'catastrophe' occurs.

Figure 25 Fragment of harmonic spectrum, beginning of section III.

In other words, there is such an upheaval of the texture that the
music shifts into a completely new territory.
The frequency shifting does not occur with a simple glissando, but through a series
of discrete steps. Thus we are dealing with a new process: from a point of departure
(the five-note aggregate), the frequencies shift by a certain quantity of hertz, over a
given period of time, through a certain number of steps. The amount of shifting is
determined by the arrival point: more precisely, by the lowest note in the final
aggregate (here it is a C#2). The frequency shift required to accomplish this is 61.5 Hz
(the distance in hertz between C3 and C#2). Temporal constraints and duration
curves determine the number of steps: 11. When 61.5 Hz is divided into 11 equal
steps, and the result is used to generate successive frequency shifts of our original
aggregate, the result shown in Figure 26 is obtained.
Downloaded by [Karlstads Universitetsbibliotek] at 07:35 09 January 2012
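The arithmetic of the process can be sketched in a few lines of Python. The five aggregate frequencies below are hypothetical placeholders; only the C3 lowest note and the 61.5 Hz total shift come from the text:

```python
import math

def freq_to_midi(f):
    """Frequency in Hz to a (fractional) MIDI note number; 69 = A4 = 440 Hz."""
    return 69 + 12 * math.log2(f / 440.0)

# Hypothetical five-note aggregate (only the lowest note, C3 ~ 130.8 Hz,
# and the total shift of 61.5 Hz are given in the text).
aggregate = [130.81, 196.00, 311.13, 466.16, 587.33]

total_shift = 61.5          # Hz: the distance from C3 down to C#2
steps = 11
per_step = total_shift / steps

# The successive aggregates of the process (step 0 = original).
sequence = [[f - per_step * k for f in aggregate] for k in range(steps + 1)]

# Same shift in hertz, larger interval in the low register:
lowest_drop = freq_to_midi(aggregate[0]) - freq_to_midi(sequence[-1][0])
highest_drop = freq_to_midi(aggregate[-1]) - freq_to_midi(sequence[-1][-1])
```

The same 61.5 Hz costs the lowest note roughly eleven semitones but the highest barely two: frequency shifting enlarges intervals as it descends, exactly as the text describes.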
The overall process clearly moves from harmonicity to inharmonicity, but the
intermediate results are not necessarily what we wanted to obtain: the intervallic
configuration of these aggregates either can reinforce or contradict the global process
(note the splendid F major triad in this example). Of course, the unanticipated results
could be adjusted to break up a process that is too predictable. As calculated, this
harmonic succession seems somewhat incoherent: should the algorithm be changed?
It is hard to imagine a calculation that could resolve this type of question; only the
intuition and craft of the composer will ensure that good decisions are made. The
solution, in this case, was to calculate many more steps than needed (25) and to
choose from among those steps in order to create a succession that seemed to make
harmonic sense. The progression that results from this is slightly irregular and less
smoothly progressive than the previous sequence; however, in the end it works much
better (see Figure 27).

Figure 26 Frequency shifting in 11 equal steps.

Figure 27 Frequency shift, final solution.


Contemporary Music Review 225
The global shift towards inharmonicity is certainly still there; however, the local
progressions now seem equally satisfying and no longer contradict the general
process. Moreover, a slightly unpredictable quality has been introduced to the
progression. The whole problem is to reconcile these two aspects of the musical
discourse: the directionality of the process and the functionality of the harmony.
Harmony must be an essential element of a musical discourse and must have an
intimate relationship to the form. In numerous musical aesthetics, where harmony
no longer performs a functional role, this notion has almost completely disappeared.
But in my music, I have always tried to give harmony a real functional role, and I
believe this role is very powerful. For me, harmony is not reduced to a purely
decorative role and it does not merely serve as a colouration of time as it passes (we
will come back to this later).

Sections VIII, IX and X


Let’s look at one last passage from the piece: sections VIII, IX and X. This passage
contains a new type of inharmonic spectrum. To simplify the explanation, I would
like to discuss some theoretical ideas about different types of spectra. In order to do
this, I will represent them as mathematical functions.
The harmonic series can be represented by a simple linear equation: p=f*r, where
the frequency of the partial (p) is equal to the frequency value of the fundamental (f)
multiplied by the partial’s harmonic rank (r being a positive integer). This function is
displayed in Figure 28 as a graph: the harmonic rank is on the abscissa and the
frequency is on the ordinate axis. The black points indicate the positions of the
partials.
This graph corresponds to any linear equation in the form y=ax, and allows us to
represent a harmonic spectrum by a line. Now let’s examine frequency modulation;
we saw earlier that it is represented by the equation: f=c+m*i. We can also write this
equation as f=m*i+c, if we state that i can have both negative and positive values.
This shows that we are, in fact, dealing with another linear equation, but one in the
form y=ax+b (‘b’ corresponds to ‘c’, the carrier; ‘a’ corresponds to ‘m’, the
modulator; and i, the index of modulation, corresponds to x and serves as the
equation’s variable). The graph of this equation is also a line. The only difference
between this line and the one representing the harmonic series is that this line does
not necessarily pass through the origin (0,0) of the graph (Figure 29).
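Both families of spectra can be generated directly from their linear equations. This sketch uses assumed values for the fundamental, the carrier (c) and the modulator (m):

```python
def harmonic_spectrum(fund, n_partials):
    """p = f * r: a line through the origin."""
    return [fund * r for r in range(1, n_partials + 1)]

def fm_spectrum(carrier, modulator, max_index):
    """f = m*i + c with i = -max_index ... +max_index: a line offset by c.
    Negative frequencies fold back as positive components."""
    raw = [modulator * i + carrier for i in range(-max_index, max_index + 1)]
    return sorted(abs(f) for f in raw if f != 0)

harm = harmonic_spectrum(110.0, 8)      # assumed fundamental: 110 Hz (A2)
fm = fm_spectrum(440.0, 310.0, 5)       # assumed carrier/modulator pair
```

In both cases the underlying spacing is a constant number of hertz (the fundamental, or the modulator); only the folding of negative FM components disturbs the regularity, which is the kinship between the two families that the text points out.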
The representation of spectra created through frequency shifting or ring
modulation would be very similar to that of frequency modulation. It is possible
to draw the conclusion from these graphs that harmonic spectra and the inharmonic
spectra produced by frequency modulation or ring modulation possess a certain
kinship, which comes from the regular spacing of the partials in terms of
frequencies.12 Moreover, the whole family of frequency modulation spectra has a
certain familial character, which is easily perceived. At the beginning, when one starts
to explore the possibilities offered by a DX713, for example, or to synthesize frequency
modulation sounds on a computer, it seems that an amazing variety of timbres can be
obtained; however, rather rapidly, the impression of always hearing the same type of
sounds starts to set in and soon it feels like you have exhausted the entire repertoire
of sounds the technique can produce. This is also part of the reason why computer
music from a certain era always seems to sound similar: frequency modulation was
relatively easy to do with the programs in use at the time.

Figure 28 Graph of the harmonic spectrum.

Figure 29 Graph of a frequency modulation spectrum.
Given these limitations, let’s look for some new models to enrich our repertoire of
inharmonic sounds. Their spectra will need to present a more unequal spacing
between partials if we do not want to end up with the same set of problems. Metallic
percussion has just this type of spectrum: a series of regularly spaced partials
disrupted by the presence of a few inharmonic partials. Another, less expected
candidate is the piano. The sound of the piano is, in fact, slightly inharmonic. We can
use the 16th harmonic as a point of reference. If the sound of the piano were perfectly
harmonic, the 16th harmonic would be exactly four octaves above the fundamental.
However, in reality, it sounds approximately one half-step higher. This phenomenon,
of course, affects all of the piano’s partials and does so proportionally to their rank—
the higher the partials, the stronger the deviation. This explains why the high register
of pianos is tuned higher than it ought to be and the low register is tuned lower than
it ought to be. If you want ‘just’ octaves (octaves that do not beat) in relation to the
piano’s spectrum, you must enlarge them slightly.
Since the 16th harmonic is too weak, it is not represented here (Figure 30). The
17th harmonic, in theory, should be close to a C#; in reality, though, it is closer to a
D. The greater the rank of a partial, the farther it is from the theoretical position it
would have if the piano were perfectly harmonic.
I often use the following equation to model this phenomenon, which I refer to as
‘harmonic distortion’: p=f*r^d, that is, the frequency of the partial (p) equals the
fundamental frequency (f) multiplied by the harmonic rank (r) raised to the power of
d (distortion).

Figure 30 Harmonic distortion of the piano spectrum.
If d is equal to 1, we end up with a linear equation of the form y=ax; thus, we
obtain a harmonic series. If d is greater than 1, the harmonic series is stretched (like
the piano); if d is less than one, the harmonic series is compressed.
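A minimal sketch of the distortion formula. The C2 fundamental is an assumption; the exponent is derived from the text's observation that the piano's 16th partial sounds roughly a half-step (100 cents) sharp:

```python
import math

def distorted_spectrum(fund, n_partials, d):
    """p = f * r**d: d = 1 harmonic, d > 1 stretched (piano-like), d < 1 compressed."""
    return [fund * r ** d for r in range(1, n_partials + 1)]

# Choose d so that the 16th partial lands one half-step above its harmonic
# position: 16**d = 16 * 2**(1/12)  =>  d = 1 + (1/12)/4
d_piano = 1 + (1 / 12) / 4

fund = 65.41                                  # C2, an assumed fundamental
spec = distorted_spectrum(fund, 16, d_piano)
cents_sharp = 1200 * math.log2(spec[15] / (fund * 16))  # deviation of partial 16
```

With this exponent the deviation grows with the rank, as in the text: partial 4 is only 50 cents sharp, partial 16 a full semitone.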
Figure 31 shows a representation of the harmonic distortion of that same piano
spectrum in the form of a graph. On the abscissa we have the ranks of the partials,
and on the ordinate the frequencies. The non-distorted harmonic series forms a
straight line (darker points); the lighter points—which represent the spectrum with
harmonic distortion—form a curve that progressively departs from the straight line.
Figure 32 gives two examples of imaginary distortions, applied to a spectrum of
only odd harmonics. The coefficient of distortion is expressed as a percentage in this
example: 3% corresponds to d=1.03, –7% corresponds to d=0.93. This observation
about the piano spectrum and the formalization derived from it will allow us to
generalize a new process. We can play with the idea of harmonic distortion: perhaps
by exaggerating this phenomenon, we can generate a completely new family of
spectra—and thus of timbres and harmonies.

Figure 31 Graph of the distortion of the piano.



Figure 32 Distortion spectra.

Now we arrive at section X of Désintégrations (Figure 33). The starting point for
this section is a low E played by the trombone. The tape takes up the trombone’s
harmonics, calling attention to them through successive entries. It then distorts the
trombone’s spectrum by progressively displacing the partials (in fact the fundamental
of the tape’s spectrum is the E an octave lower than the trombone’s—as if the
trombone were playing the 2nd harmonic). To illustrate what is happening, let’s
choose the 12th harmonic as a point of reference: in a harmonic spectrum, it should
be a B4. For the first step of the distortion process, this 12th harmonic is raised by
one quarter-tone to B¼#4. This operation is carried out eight successive times, so
that at the end of the process, the 12th harmonic has been raised from B4 to D#5 by
steps of a quarter-tone. Obviously, all of the other harmonics are recalculated as a
function of this reference displacement.
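One way to realize such a process, assuming the tape fundamental is E1 (about 41.2 Hz, an octave below the trombone's low E): solve for the exponent d that places the 12th partial at each quarter-tone target, then recompute the whole spectrum from that exponent:

```python
import math

FUND = 41.2   # E1 in Hz (assumed): the trombone's E is its 2nd harmonic

def d_for_reference(fund, rank, target_freq):
    """Solve fund * rank**d == target_freq for the distortion exponent d."""
    return math.log(target_freq / fund) / math.log(rank)

b4 = FUND * 12                      # 12th partial, ~B4, in the undistorted spectrum
spectra = []
for step in range(9):               # the reference partial rises 8 quarter-tones
    target = b4 * 2 ** (step / 24)  # a quarter-tone is 1/24 of an octave
    d = d_for_reference(FUND, 12, target)
    # all other partials are recalculated from the same exponent
    spectra.append([FUND * r ** d for r in range(1, 17)])
```

The first spectrum (d = 1) is harmonic; by the last step d has grown to about 1.09, so the higher partials have drifted much farther than the lower ones, just as the reference-partial method implies.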

Figure 33 Progressive distortion of the trombone spectrum, section X.


The final spectrum is obviously quite different from the original spectrum; it has
also lost all of its ‘trombone’ colour. Yet, the transformation is very progressive: it
occurs as a series of cross-fading spectral slides. Over the course of the section, the
tension gradually increases, as a result of the ever-greater inharmonicity of the
increasingly distorted spectra (and also through the simpler effects of register and
voice-leading).
Section VIII also depends on the use of distortion spectra. However, here, there
is no acoustic model: the spectra are determined simply through calculation. There
are two reference points, this time: the 3rd harmonic and the 21st harmonic. The
21st harmonic rises by steps of a quarter-tone until it has risen from F¼# to G¾#;
the 3rd harmonic rises by half-steps. This creates an upward frequency slide and an
effect of compression simultaneously. The first step in realizing this progression
consisted of calculating, in order, the six different distortion spectra—to which the
initial spectrum was also added. If I had stopped there, the result would have been
an extremely predictable process, like the one in section X. I sought this
predictability in section X because it provoked a high degree of tension that could
only be resolved through a ‘catastrophe’—a sort of explosion (the opening of
section XI). In the passage we are interested in here, I needed a more static effect
that would form a sort of ‘climax’ for the piece. It was, therefore, impossible to give
this process such a strong orientation. I needed to disrupt the progression. I did
this, first of all, through local permutations: instead of presenting the distortions in
increasing order (1, 2, 3, 4, 5, 6, 7), they are used in the order 1, 4, 5, 2, 6, 3, 7.
While this change modifies the local progression, it preserves the global orientation.
These local permutations introduce ‘accidents’—fractures—that make the listening
experience much more interesting and thwart an excessive sensation of
predictability. I have often used this technique, which produces one of the major
articulations of musical discourse: a dialectic between predictability and
unpredictability. To avoid the effect of tension (which occurs quite clearly in
section X, as a result of the great enlargement of the range), the aggregates in
section VIII are alternately enlarged or reduced through the addition or
subtraction of partials. The tape and the instruments realize these aggregates
simultaneously. Each one has a different duration, as a function of its contents (its
degree of distortion). There is also a sort of ‘spatial vibrato’ in the tape part—a
rapid forward-backward spatial movement. The frequency of this spatial vibrato is
also a function of the harmonic contents of the aggregates.14
The last aggregate of the very short section VIII ‘collapses’ brutally into a dense
storm of sounds, marking the start of section IX. This is a sort of ‘chaos’: it creates an
impression of disorder that was, nonetheless, carefully constructed. Beginning with
maximum instability, the textures gradually organize themselves. Progressively, they
sharpen their focus around a low E in the trombone and thus arrive at the opening of
section X, which we have already discussed. This ‘downpour’ of sounds is the result of
virtual ring modulations between the low sounds played by the strings—which
progressively stabilize around the trombone’s E. The modulations were calculated
with the strings’ spectra in mind: each harmonic of the first sound interacts with each
harmonic of the second. In other words, if sound A possesses five significant
harmonics (A, 2A, 3A, 4A and 5A) and sound B has three harmonics (B, 2B and 3B),
the resultant modulations will be A + B, A – B, A + 2B, A – 2B, A + 3B, A – 3B, then
2A + B, 2A – B, 2A + 2B, 2A – 2B, etc. This produces a huge mass of resultant notes
(Figure 34). All the combinations between the pairs of low sounds are exploited as
showers of notes and not as synchronized spectra—as were the harmonics of the
instrumental sounds in section III, which created the clouds of high percussive bell
sounds. The storm is organized according to global gestures (descending lines)
modified by controlled randomness algorithms (which the computer took care of).
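The combination rule can be sketched directly. The two fundamental frequencies here are assumptions, and the low cut-off stands in for the piece's elimination of overly low differential tones:

```python
def virtual_ring_mod(freq_a, n_a, freq_b, n_b, low_cut=20.0):
    """All sum and difference tones between the harmonics of two sounds:
    i*A + j*B and |i*A - j*B| for every pair of harmonic ranks (i, j).
    Differential tones below `low_cut` Hz are discarded."""
    results = set()
    for i in range(1, n_a + 1):
        for j in range(1, n_b + 1):
            results.add(i * freq_a + j * freq_b)
            diff = abs(i * freq_a - j * freq_b)
            if diff >= low_cut:
                results.add(diff)
    return sorted(results)

# Sound A with five harmonics, sound B with three (frequencies are assumed):
notes = virtual_ring_mod(82.4, 5, 110.0, 3)
```

Even with only 5 × 3 harmonic pairs, up to thirty resultant tones appear, which gives a sense of the 'huge mass of resultant notes' the text describes.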
The section is organized according to a multidimensional process that affects the
pitches (a strongly inharmonic situation at the beginning, progressively concentrating
on the spectrum of E), the densities (progressive decrease in density), the registers
(long descent towards the low E), and the durations (which lengthen). While this is a
very directional process, it is tempered by a dose of local-level unpredictability.
This concludes our exploration of Désintégrations. However, here are some brief
notes about the harmonic/timbral structure of the sections that were not analysed.

. Section II: Ring modulations of rich sounds (cf. section IX)—diffraction of
spectra into superposed, asynchronous trills.
. Section IV: Frequency shifting using curves, not lines (as in section III).
. Section V: Successive shifting of frequencies within a harmonic spectrum (the
only real harmonic spectrum of the piece) to obtain beating, then increasing
roughness.
. Section VI: Succession of frequency modulations.
. Section VII: Rapid permutation of seven harmonic distortion spectra. The
instrumental parts are approximated to the semitone because of the fast tempo.
. Section XI: Transposition from one octave to another of a harmonic spectrum;
each time the spectrum returns, the fundamental is pushed an octave lower: a
similar colour, but increasingly inharmonic. The final spectrum is based on the
theoretical fundamental G–3 and only utilizes one harmonic in 10. The timbral
effect is similar to the sound of a tam-tam.

Role of Computer-Aided Composition


The examples that we just looked at call for various types of calculations or
algorithms; sometimes these are very simple and sometimes they are more elaborate.
Obviously, a computer can help simplify tasks that are repetitive or calculations that
are difficult to do by hand (like exponential or logarithmic calculations). The idea of
composers getting help from a computer is not recent. The calculation of musical
structures by computer has been contemplated since the 1950s. The first ‘work’
calculated by a computer dates from 1956 (Suite Illiac, Lejaren Hiller, University of
Illinois). This work was followed by various composers’ development of what

Figure 34 Example of ring modulation, section IX. Note: Modulation between sound
‘A’, with six harmonics, and sound ‘B’, with five harmonics. The resultant sounds are
classified by harmonic level. The differential tones that were too low have been
eliminated.

we now call ‘algorithmic music’ (it must be said that this tendency has not really left
us very many masterpieces). This approach often turned out to be naïve and led to a
reduction in the complexity of the musical act, which was in effect a contradiction of
the initial postulates.
In hindsight, the principal critique of ‘algorithmic music’ is that musical
phenomena are not as easily reduced to a series of numbers (numerical data that
the computer can manipulate) as some have thought. Therefore, the goal of totally
controlling the form and content of a piece of music with computer algorithms is a
mirage. There is no automatic relationship between an algorithm and the perception
of the musical (or at least, the sonic) phenomenon generated by that algorithm.
Computer music research in the 1960s and 1970s moved on to concentrate more on
sound synthesis, a trend that was facilitated by the increasing power of computers.
However, this new focus on synthesis often led institutions and researchers to forget
the contributions computers could make to the work of composition proper. For
example, when I began working at IRCAM in 1981, I found a variety of synthesis
programs there, but not one program capable of assisting composers in their daily
work—not even the kind of elementary little programs that could perform small but
tedious tasks, like converting frequencies into musical notes and vice versa. During
that time, I decided that I had to develop some rudimentary programming skills,
which allowed me to write small personal programs for spectral calculations,
modulations and distortions, exploitation of analytical data, duration calculations,
etc.
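Such an 'elementary little program' is easy to sketch. The A4 = 440 Hz reference and the note-naming convention are assumptions:

```python
import math

NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']

def freq_to_note(freq, a4=440.0):
    """Nearest equal-tempered note (scientific pitch notation) and its
    deviation in cents from that note."""
    midi = 69 + 12 * math.log2(freq / a4)
    nearest = round(midi)
    cents = 100.0 * (midi - nearest)
    return f"{NAMES[nearest % 12]}{nearest // 12 - 1}", cents

def note_to_freq(midi_number, a4=440.0):
    """Inverse conversion: MIDI note number to frequency in Hz."""
    return a4 * 2 ** ((midi_number - 69) / 12)

name, cents = freq_to_note(261.63)   # middle C
```

The cents deviation is what a spectral composer actually needs: it says how far a computed partial falls from the nearest tempered (or quarter-tone) approximation.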
The computer can help us express musical images. I see the act of composition as
a sort of mental projection: I imagine more or less complex musical situations in
which the details are not yet defined, then I try to realize them. To do this, one
must analyse and decompose the global nature of these musical situations. The
musical ideas must be reduced into components that are much simpler than the
original idea. Without adequate conceptual tools to realize this simplification and
reconstruction of the original musical image, the final result runs the risk of being
very far removed from the original conception. It is at this level that computers can
be useful. They allow us to keep the connecting thread between the original idea
and the final realization intact. They do this in two ways: first, the computer
accelerates the processes of decomposing and then recomposing the sonic image;
and, second, the computer can propose more refined solutions than those that we
might have intuitively chosen. This is, of course, due to the computer’s capacity for
performing complex calculations; however, it is also the result of a computer’s
ability rapidly to propose a multitude of different solutions—between which the
composer can choose. When working intuitively (with pencil and paper), by contrast,
fewer possibilities can be imagined at one time, which encourages the composer to
accept the first solution that is found—or to be content with an only approximate
realization.
The role we are defining for computer-aided composition is thus, in the end,
somewhat modest. We are not asking the computer to invent the global shape of a
piece, or to determine its large-scale form; we don’t even really expect it to create any
of the material. The computer’s role will be situated somewhere between these two
levels, as a mediator, or perhaps an intermediary. This is the perspective with which I
have created a certain number of computer tools for myself over the years. These
programs responded to precise compositional needs, and not to theoretical
considerations. My first programs worked on small personal computers; then I
collaborated on the completion of the program Patchwork15 at IRCAM. Patchwork
offers the advantage of being an environment where composers can easily create their
own algorithms, produce representations of the obtained results in musical notation,
and play these results via a MIDI interface.16
The ideas behind this sort of computer-aided composition are very different from
those traditionally associated with ‘algorithmic music’. Algorithmic music’s ideas
were most likely derived from the movement’s heritage in serial writing:
permutations, combinatorial operations, etc. A mechanistic or ‘algorithmic’
approach in that sphere actually pre-dates the development of computers. Let’s take
some examples from Messiaen (in whose music one would probably not, at first
glance, expect to find a ‘scientific’ approach). We know that, at a certain point in
time, Messiaen was interested in serial techniques as practised by the ‘Darmstadt
school’. In addition to his piece Mode de valeurs et d’intensités, he developed his own,
rather particular, permutation systems. One of these systems involved establishing a
series of numbers that could be indexed—either a series of durations or of notes. With
a traditional series, the number of permutations increases exponentially as the
number of elements increases. Messiaen’s idea was to find a system that would create
a more limited number of permutations, following the model of his modes of limited
transposition (scales of pitches whose successive transposition ends up reproducing
the original scale). In this system he numbered the elements of a cell and used that
cell itself to determine the order of elements in its next permutation. For example, if
we take the series 5, 4, 1, 3, 2 to create the first permutation, we will take the 5th
element of our original cell, followed by the 4th element, the 1st, etc. The first
permutation will thus be 2, 3, 5, 1, 4. This operation can then be repeated until the
initial series returns. The number of permutations with this system, instead of
exploding, will be exactly equal to the number of elements in the series. Other
composers were fascinated with magic squares, Pascal’s triangle, and the inevitable
Fibonacci series or golden mean. Of course, it is very easy to
implement any of these techniques with a simple computer program and to use them
to derive a musical ‘translation’ of the numbers (this is especially easy in an
environment like Patchwork).
All of these models are enticing and some of them are very conceptually elegant,
but do they really guarantee any musical pertinence whatsoever? In certain cases,
combinatorial permutations can be an effective tool for use on details—like the local
permutations of a process. Or when the combinatorial operations take place within a
well-defined group of elements—where all the relationships can potentially make
sense—in this case, the permutations may have some value. For example, in a
reservoir of pitches belonging to a coherent spectrum—since all of the spectral
components maintain, by definition, a special relationship—permutation games can
have a certain interest or at least coherence. However, there is no general a priori
reason that one permutation—or any other mathematical or arithmetic manipula-
tion—should necessarily yield pertinent results. In music, everything depends on
relationships, context, resemblance, proximity between events and, of course, the
more or less long-term memorability of events. The conscious creation of these
meaningful and memorable relationships is what creates a sensible musical discourse;
while successions created through automatic procedures may appear rigorous on
paper, their perceptual reality is often completely aleatoric.
Another critique that can be made of this mechanistic approach is that objects
are often considered from a linear point of view. This creates perceptual
absurdities, especially in the realm of durations. Let’s examine the poorly named
‘chromatic series of rhythms’. These sequences of rhythmic values actually form
arithmetic progressions. By contrast, the ‘chromatic’ pitch scale is built upon a
geometric progression of frequencies: note(n+1) = note(n) * 2^(1/12). In a geometric
progression, the ratio between two successive elements is constant; in an arithmetic
progression, this ratio increases or decreases constantly. Take, for example, this
typical ‘chromatic’ series of durations: 32nd-note, 16th-note, dotted 16th-note . . .
half-note, half-note + one 32nd-note, etc. The relationship (temporal ratio) of
16th-note to 32nd-note is 2:1; the relationship of dotted 16th-note to 16th-note is
3:2, etc. There is no real problem at this point: the ratios are different, but this
difference between the ratios and the difference between the durations are, at least
in principle, audible. But when we arrive at half-note + one 32nd-note to half-note,
the relationship becomes 17:16, which is very close to 1. In this case, the difference
between durations is no longer perceptible unless there is a clear pulse. If there is a
regular audible pulsation, these rhythms will cause a progression of delays relative
to that pulse; and these offsets are easily audible because they again fall within the
domain of perceptible differences (delay of one 32nd-note versus delay of two
32nd-notes, etc.).
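The collapse of the ratios is easy to verify with exact rational arithmetic:

```python
from fractions import Fraction

# The 'chromatic' series of durations: 1, 2, 3, ... 17 32nd-notes
# (an arithmetic progression, up to half-note + one 32nd-note).
values = list(range(1, 18))
ratios = [Fraction(b, a) for a, b in zip(values, values[1:])]

# The first ratios (2/1, 3/2, 4/3 ...) are clearly audible; the last,
# 17/16, is very close to 1 and imperceptible without a pulse.
first_ratio, last_ratio = ratios[0], ratios[-1]
```

The successive ratios shrink monotonically toward 1, which is exactly why the late steps of such a series stop being perceived as different durations.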
To organize a truly coherent scale of durations, the focus must be on relationships:
i.e. one would have to create a progression of relationships between the elements—
and not a progression of absolute durations of the elements. The common error of
organizing durations in a linear fashion is caused by traditional notation’s masking of
the true nature of musical materials. On scores, composers write C, C#, D, etc., which
seems to be a linear progression (a half-step is added each time); but the note names
mask the true nature of pitch, which (as we saw earlier) is a geometric series of
frequencies. Similarly, we write a scale of dynamics ppp, pp . . . ff, fff; however, this
simple progression also hides the fact that intensities too follow a logarithmic scale
(the physical strength of sound must increase tenfold to double its perceived
intensity). Creating a non-linear series of durations requires the use of curves; this is
more difficult than simply aligning or permuting rhythmic symbols and for that
reason we might feel justified in using the computer to realize these series.
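One possible construction of such a scale, assuming we borrow the constant 2^(1/12) ratio of the chromatic pitch scale for the durations as well:

```python
# A duration scale with a constant ratio between successive values, mirroring
# the geometric chromatic pitch scale; the 2**(1/12) ratio is an assumed choice.
base = 0.125                                   # shortest duration, in seconds
ratio = 2 ** (1 / 12)
durations = [base * ratio ** n for n in range(13)]
# Twelve steps double the duration, just as twelve semitones double a frequency.
```

Each step is now the same perceptual 'distance' from its neighbour, whereas an arithmetic series crowds its upper values together.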
On the other hand, computers can also realize systems of permutations and
combinations extremely easily. The large tables of permutations, inversions,
retrogrades, retrograde inversions, transpositions, etc. that generations of composers
have sweated over can be completed in mere fractions of a second by a computer
program. This might even prompt us to wonder whether, if we had had computers
earlier, we would not have renounced all of these ideas—which seem so simple and
(in the end) stripped of all their attractiveness, once they are reduced to mere
algorithms?
These observations are troubling because, if the computer can help with our
calculations, it can also reveal to us that what we are doing is truly simple and that it
may not be worth doing in the first place. I have had that experience personally:
certain techniques that seemed complex, and which I had judged interesting precisely
because of their complexity, turned out, after 5 or 10 years of practical experience and
after implementing them on the computer, to be ridiculously simple. Above, I
showed the complex frequency modulation chords from the opening of Gondwana: at
the time I was composing this piece, this technique seemed very new (I believe it in
fact was new) and complex—the manual calculations to realize them were, if not
complex, at least long and tedious. Once it became easy to create frequency
modulation spectra, either by programming them on synthesizers or by calculating
their contents with a computer, they lost some of their earlier magic: they became
well-known sonorities, and the great simplicity of the procedure that generates them
was revealed.
All the same, this development allowed me to concentrate on higher-level work—
on the musical discourse itself. Computers free me from all sorts of ‘accounting’
issues and allow me to focus my creative effort on what is really important. What
might previously have seemed like the ultimate goal of the work is no longer any
more than a point of departure. This ease with which the computer generates
material can give composers much more freedom to imagine, to let their intuitive
ideas fully ripen into the imagined musical realization. Paradoxically, algorithms can
liberate our intuitions.

Territoires de l’oubli (1978)


Let’s go back a few years earlier to Territoires de l’oubli, a long piece for solo piano. At
the time of its composition, I was not yet using most of the techniques we’ve
discussed—frequency calculations, rhythmic calculations and computer-based
techniques. Nevertheless, in this piece there are already some tentative approaches
to these techniques. Of course, there is one big problem in writing this way for the
piano: equal temperament, which forces us to accept a cruder approximation of
spectral frequencies.
A second difficulty was my style of writing which, at the time I was composing
Territoires de l’oubli, functioned mostly through sonic masses, subtle movements,
imperceptible progressions, evolutions of timbres, interlocking textures, etc. All of
these are rather easy to create with an orchestra, or even with smaller chamber
ensembles. However, the percussive, non-sustained sound of the piano made the
construction of these types of structures difficult. Strong constraints, nevertheless,
can force you to discover creative solutions; I tried, therefore, to make use of these
constraints. One of my responses to this problem was to make full use of the piano’s
sustain pedal: it is completely depressed from the beginning to the end of the piece.
Thus, the notes that are struck will always resonate until their natural extinction. In
contrast to the currently widespread attitude in contemporary piano literature, my
original idea was not to treat the piano as a percussion instrument, but to treat it as
an instrument of resonance. Obviously, you cannot avoid hearing the attacks, the
percussion of the hammers, but the main focus here is the progressive transformation
of the global resonance of the piano. Above all else, the piece was written to create
and modify these resonances, and not to create percussive or rhythmic effects. As a
result, the writing is somewhat supple. Since the resonance of the piano is not entirely
predictable—it depends on many factors, including the room in which it is played—a
certain rhythmic flexibility has been built in: the performer can interpret the length of
Downloaded by [Karlstads Universitetsbibliotek] at 07:35 09 January 2012

the fermatas, and can repeat certain fragments, with the goal of letting the resonances
bloom or evaporate.
Since processes, global transformations of texture from one state to another,
underlie Territoires, the pianist must perform the very delicate task of creating these
progressive changes. It is not sufficient for the pianist to concentrate on any single
instant. The pianist must maintain the progressive evolution of a musical passage in
his memory: understanding the nature of the transformation and the objective
towards which it is aimed, in order to be able to guide these processes—which are
sometimes quite long (they can last four to five pages)—in a way that will clearly
recreate them for the listener. Examples of these processes are very gradual
accelerations or decelerations. The pianist must carefully control the slowing or
acceleration, so as not to risk arriving at the goal tempo prematurely, which would
create an undesired moment of tempo stasis. The same kind of planning ahead is, of
course, equally important for controlling dynamics. The complexity of the piano
writing grows greater when the piece arrives at junctures with superposed processes:
for example, one process is often abating while the next one is beginning to establish
itself. Another type of junction is created when musical material has transformed in
such a way that it becomes unrecognizable; then, from this resulting material, this
sort of residue, a new process begins, and so on. Processes overlap incessantly in this
piece, which makes it difficult to divide it into clear sections. In the score, rehearsal
letters mostly serve as reference points for the performer or for the analyst; however,
they do not necessarily correspond to marked caesuras for the listener.

Echoes
The first example17 that we will examine (page 7 of the score; Figure 35), uses echoes
as its model; however, this echo is a little unusual because it is combined with a
technique of harmonic resonance. The (rather simple) point of departure consists of
two intertwined melodies.
A bit later in this process, when the general dynamic level augments slightly, the
lower melody needs to be played slightly less loudly than the upper melody: this
allows the two melodic streams to be distinguished from each other. At the
beginning, the melodic fragments use very few notes and are confined to a restricted
Contemporary Music Review 239

Figure 35 Territoires de l’oubli, page 7.

range. Progressively, this range enlarges, the number of notes increases, and the
contours become more complicated. Let’s imagine building a melody with neumes. A
neume is a very simple, very clearly shaped contour. Gregorian neumes consist of
contours using two, three or four notes. However, one can invent slightly more
elaborate neumes, which can be used as the elementary units of melodic fragments.
These melodic fragments will become increasingly complex if we place additional
neumes as substitutes for some of the notes within the neumes already used. Today,
we would describe the resulting melodies as ‘fractal’.
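The substitution idea can be made concrete with a short sketch. This is an invented illustration, not a reconstruction of the piece: a neume is represented as a list of semitone offsets, and each pass replaces every note with the whole neume transposed to that note, yielding a self-similar ('fractal') melody.

```python
# A speculative sketch of neume substitution. The neume shape below is
# invented for illustration; pitches are MIDI note numbers.

NEUME = [0, 2, -1]  # a three-note contour: start, up a tone, down a semitone

def elaborate(melody, neume, depth):
    """Each pass replaces every note with the neume transposed to it."""
    for _ in range(depth):
        melody = [note + step for note in melody for step in neume]
    return melody

seed = [60]  # middle C
print(elaborate(seed, NEUME, 1))  # [60, 62, 59]
print(elaborate(seed, NEUME, 2))  # [60, 62, 59, 62, 64, 61, 59, 61, 58]
```

Each level of substitution multiplies the number of notes by the length of the neume, which matches the progressive enlargement of range and complication of contour described above.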
The two melodic streams created this way are then reflected in echoes. The model
for this process was more the electronic echo chamber than the natural phenomenon
of echo. Moreover, the composing of this sort of process allows some liberties to be
taken. Rhythmic liberties: instead of being regular, the repetitions undergo
progressive deceleration. Modification of timbre: in natural echo and analogue
electronic echoes (like the ones in use at the time this piece was composed), the
repetitions are filtered, causing the upper harmonics to disappear progressively. In
this case, however, I use my compositional liberty to produce the inverse effect: more
and more harmonics appear over the course of the repetitions. To avoid leaving the
audible domain (and the keyboard), the highest of these harmonics are transposed
down one or more octaves; thus an echo—through this process—can sometimes
appear in a lower register than the original note (Figure 36).
To implement this principle, I built a sort of grid where melodies appeared with
their echoes—according to the system of rhythmic slowing. Each echo has more and
more harmonics, transposed if necessary. The mass of pitches I ended up with,
obviously, was too large to be playable on the piano. Therefore, I intuitively selected
the elements that seemed most interesting to me, and that created musical structures
that were playable.
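As an illustration only (the delay curve, harmonic counts and note values are invented, not taken from the score), the echo grid might be sketched like this: each repetition arrives later than the last, carries one more harmonic, and harmonics that exceed the keyboard's range are folded down by octaves.

```python
# A hedged sketch of the echo grid: progressive deceleration plus
# accumulating harmonics, folded down into the piano's range.

A4 = 440.0
PIANO_TOP = 4186.0  # C8, the highest piano key

def fold_down(freq, ceiling=PIANO_TOP):
    """Transpose a frequency down by octaves until it fits under the ceiling."""
    while freq > ceiling:
        freq /= 2.0
    return freq

def echo_grid(fundamental, n_echoes=5, base_delay=0.5, slow_factor=1.3):
    """Return (onset_time, [frequencies]) pairs for successive echoes.

    Echo k carries k+2 partials of the fundamental; delays lengthen
    geometrically to model the written-out deceleration."""
    events, t = [], 0.0
    for k in range(n_echoes):
        t += base_delay * slow_factor ** k
        partials = [fold_down(fundamental * (h + 1)) for h in range(k + 2)]
        events.append((round(t, 3), partials))
    return events

for onset, freqs in echo_grid(110.0):  # echoes of A2
    print(onset, [round(f, 1) for f in freqs])
```

The octave folding is what can make an echo appear in a lower register than the original note, as in Figure 36.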

Figure 36 Territoires de l’oubli, page 9.



Towards the end of this passage, a polarization arises for quite a while around the
note C; it then dies away very gradually creating confusion between the echoes and
the melodies from which they originate (page 11, 2nd system). The resulting mixture
of melodies and echoes transforms, ‘congeals’, into a sort of rhythmic swaying (page
12). The idea behind this whole section can be seen as a progressive proliferation of
pitches generated through the accumulation of echoes, leading to a point where the
original structures become unrecognizable. After only a few pages, the music seems a
bit anarchic, a sort of ‘organized chaos’. Inside this chaotic system appear rhythmic
polarizations and resonant frequencies, such as the C mentioned above (these louder
resonant modes in the midst of saturated sonic spaces are a real acoustic
phenomenon that is easily perceived in concert halls, for example). The music
finishes by contracting back on itself, around the poles of frequential and temporal
attraction—a bit like a black hole, where matter folds back on itself. At the end of this
process of proliferation then coagulation, the music settles on semi-repetitive
formulas, with the left hand and right hand moving independently. This type of
procedure can be found again and again throughout the whole piece: there is a
constant oscillation between semi-regular pulsations and rhythmic configurations
that appear very ‘chaotic’.

The Natural Resonance of the Piano


At letter B, page 4, we find another one of these moments of semi-regular pulsation—
created by a repetitive formula in the extremely low register of the piano (Figure 37).
When approximated to the semitone, the spectra of three of the low sounds that
make up this formula share a common spectral component, a G3 (5th harmonic of
Eb1, 6th harmonic of C1, 7th harmonic of A0). Due to the repetitions of these pitches,
the G3 emerges naturally, without actually being played. If the piano is resonant
enough, this phenomenon will start to emerge on the top of page 5 (Figure 38).
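The near-coincidence of these three partials is easy to verify numerically; the sketch below assumes standard A4 = 440 Hz tuning and MIDI note numbering (Eb1 = 27, C1 = 24, A0 = 21, G3 = 55).

```python
import math

def midi_to_hz(m):
    return 440.0 * 2 ** ((m - 69) / 12)

def nearest_semitone(freq):
    """Round a frequency to the nearest equal-tempered MIDI note."""
    return round(69 + 12 * math.log2(freq / 440.0))

# 5th harmonic of Eb1, 6th of C1, 7th of A0 all land on G3 (MIDI 55)
for name, midi, partial in [("Eb1", 27, 5), ("C1", 24, 6), ("A0", 21, 7)]:
    f = midi_to_hz(midi) * partial
    print(name, partial, round(f, 1), nearest_semitone(f))
```

The three partials fall at roughly 194.5, 196.2 and 192.5 Hz: all within a semitone of G3 (196.0 Hz), which is why the repeated bass formula makes the G3 emerge without being played.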
At the end of the first system on page 5, the G is actually played, but the performer
must make sure that the played note emerges from the resonance of the G harmonic.

Figure 37 Page 4, letter B.

Figure 38 Page 5; the appearance of G3.

The letter ‘R’, used as a dynamic, signifies ‘do not play louder than the resonance’;
this allows the resonance of the note to be sustained, without hearing the note struck.
The G then starts to crescendo and progressively emerges.
A similar phenomenon is produced on page 17, where successively C#4, G3, then
D5 emerge softly from the resonance of a low ostinato and then congeal in a repeated
chord. Before arriving at letter E, the ad libitum repetition of the chord G–C#–D
allows the sonority to ‘deflate’—arriving at ppp. Therefore, letter E does not so much
mark a new section as it does a point of inflection (the moment where the curve
changes direction, from increasing to decreasing or the inverse). The bass sounds

Figure 39 Section E: generator sounds, ‘additional’ sounds, and ‘differential’ sounds.

Figure 40 Page 19, 1st system.

Figure 41 Page 34.



Figure 42 Mozart: Sonata in C minor. Note: Transformations of the arpeggio object. For the last transformation (outlined by a box), the harmonic field changes during the execution of the object.

(vestiges of page 16) are held over and then disappear progressively. The effect is as if
some contrabasses of the orchestra were performing a gradual diminuendo to silence.
Another example that makes use of the piano’s natural resonance occurs at the end
of the piece, where the three sounds F1, D#4 and C#7 are repeated for quite a while.
The harmonics of F1 are progressively amplified—affecting, among others, the 7th
harmonic (a slightly lowered D#4). This creates a beating between the overtone of
F—the lowered D#—and the equal-tempered D# played directly by the pianist. This
beating causes the D# to start vibrating in a very special way, which colours the entire
end of the piece.
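The beating described here can be quantified under the same A4 = 440 Hz assumption: the 7th harmonic of F1 lies about 31 cents below the tempered D#4, so the two frequencies beat several times per second.

```python
import math

def midi_to_hz(m):
    return 440.0 * 2 ** ((m - 69) / 12)

f1 = midi_to_hz(29)          # F1
seventh = 7 * f1             # the (non-tempered) 7th harmonic of F1
ds4 = midi_to_hz(63)         # the equal-tempered D#4 as played

beat_rate = abs(ds4 - seventh)            # beats per second
cents_flat = 1200 * math.log2(ds4 / seventh)
print(round(seventh, 2), round(ds4, 2), round(beat_rate, 2), round(cents_flat, 1))
```

With these values, the harmonic sits near 305.6 Hz against the played 311.1 Hz, giving a beating of about 5.5 Hz: slow enough to be heard as the 'very special' vibration colouring the end of the piece.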

Ring Modulation
Let’s return to section E. It starts on the chord G–C#–D. These three pitches are used
as sound generators for a ring modulation. The intervals contained within this chord
have certain specific characteristics. The interval C#–D is ‘dissonant’, in the
traditional sense, but it is softened by the G: the perfect fifth G–D has a consonant
harmonic nature, while the interval G–C# is somewhere between dissonance and
consonance.18
We saw above how to calculate the ring modulation of two sounds, along with
their harmonics. Here, I created imaginary modulations between all three sounds. If
we designate their frequencies with the letters a, b and c, we will calculate the
interactions between a and b, between b and c, between a and c, sometimes between
a, b and c, and between the harmonics of these sounds (up to the 5th harmonic). The
obtained result constitutes a vast table of frequencies in which we can trace a kind of
path, by first concentrating on the simplest combinations (between a, b and c), then
by introducing the second harmonics, that is 2a, 2b, 2c, then the third 3a, 3b, 3c, etc.
By exploring more and more harmonics and their combinations, we move away, little
by little, from the initial anchoring to G, C#, D—and this introduces considerable
changes in the musical flow (see Figure 39).
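A rough sketch of such a table of combinations follows (pairwise sums and differences only, up to the 5th harmonic; the three-way combinations mentioned above, and the path actually traced through the table, are compositional choices not modelled here).

```python
from itertools import combinations

def midi_to_hz(m):
    return 440.0 * 2 ** ((m - 69) / 12)

# the section-E generator chord G3–C#4–D4 (MIDI 55, 61, 62)
a, b, c = (midi_to_hz(m) for m in (55, 61, 62))

def ring_mod_table(freqs, max_harmonic=5):
    """Sum and difference tones between every pair of harmonics."""
    partials = [h * f for f in freqs for h in range(1, max_harmonic + 1)]
    table = set()
    for x, y in combinations(partials, 2):
        table.add(round(x + y, 1))
        table.add(round(abs(x - y), 1))
    return sorted(table)

table = ring_mod_table([a, b, c])
print(len(table), table[:6])  # a large reservoir of resultant frequencies
```

Even this restricted version yields well over a hundred distinct frequencies, which illustrates why a path had to be traced through the table rather than using it wholesale.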
The chord written in small notes on the 4th beat of Figure 40 contains three
‘additional’ sounds (plus some harmonic and inharmonic partials). The dynamic
marking 4R indicates that the pianist must play slightly louder than the current level
of resonances. The lower chord on the 8th beat helps make the ‘differential’ sounds
audible.
At the end of the section, the generator chord progressively disappears. The whole
reservoir of possible notes has already been used and now a ‘filtering’ effect appears:
the lowest pitches are eliminated. The cut-off frequency of the ‘filter’ slowly rises,
until the sonic texture is reduced to a high trill C7–Db7.
Another example of virtual ring modulation occurs at the end of the piece. At letter
G (page 30), several different musics are superimposed. The first element, low
resonances, a reminder of the music that preceded it (a sort of ‘stormy’ music, made
with percussive gestures and trills in the low register of the piano), will be heard until
the end of the piece. However, it will grow gradually simpler as it condenses onto a
single frequency (F1). The second element, a sequence of sounds in the middle
register centred around C#4 (this C# is also inherited from the previous section),
smoothly changes its polarity: D# is substituted for the C# as a pole of attraction and
ends up attracting all of the nearby sounds to itself. The third element: a progression
of ascending movements that are progressively drawn towards C#7. These three
sounds (F1, D#4, C#7) then start to interact, in the same way I described above (see
Figure 41). However, the final result is quite different, because these three pitches are,
in fact, part of the same harmonic spectrum—or at as close as is possible with equal-
tempered notes. The D#4 is very close to the 7th harmonic of F1 (we saw before that
this creates beating with the exact—real—harmonic of F); the C#7 is the 7th
harmonic of D#4, or if you prefer, the 49th harmonic of F1.19 The resultant sounds of
a ring modulation whose inputs are part of the same harmonic spectrum will
themselves be part of this harmonic spectrum. The pitches obtained in this section
are, therefore, close to the harmonic spectrum of F. The modulation enriches the
global timbre but does not produce the ‘anarchic’ effect of proliferation there was in
section E.
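The claim is easy to check arithmetically: if the inputs are the h-th, k-th, . . . harmonics of a common fundamental, every sum and difference frequency is again an integer multiple of that fundamental. A minimal sketch, using an approximate value for F1:

```python
f0 = 43.65  # F1, approximately (A4 = 440 Hz tuning)
inputs = [1 * f0, 7 * f0, 49 * f0]  # F1, ~D#4, ~C#7

results = []
for i, x in enumerate(inputs):
    for y in inputs[i + 1:]:
        results += [x + y, abs(x - y)]

# every resultant is (h ± k)·f0, i.e. still a harmonic of F1
harmonic_numbers = sorted(round(r / f0) for r in results)
print(harmonic_numbers)  # [6, 8, 42, 48, 50, 56]
```

All resultants land on harmonics of F, which is why the modulation here enriches the global timbre instead of producing the proliferating, 'anarchic' reservoir of section E.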

Conclusion (provisional): the piano is, in principle, a ‘tempered’ instrument; but,
as we have seen, its resonances are not tempered. It is, therefore, possible to make the
piano sound very different: by playing with its resonances, it is possible to make the
listener almost forget that the sounds that he is hearing are all equally tempered. That
was one of the goals of Territoires de l’oubli, but this type of piano writing can also be
found in several of my later piano pieces. In these works, the note (in this case,
meaning the piano’s attack) has very little importance. The point of departure is
something else, and the sounds that listeners perceive are also something else:
textures, objects, complex aggregates. . . Let’s explore these different notions a bit
further.

Musical Atoms
The organization of musical discourse, traditionally, has used notes as the point of
departure. These notes are assembled either horizontally into melodic lines or
vertically into chords; melodic lines and chords are then superposed to create
polyphony or an accompanied melody. This traditional conception (still very much
present in academic teaching) is, in fact, very limited. Music can be conceived in
categories that are far vaster; moreover, this new sort of conception is not in
conflict with the traditional approach, but rather incorporates it. Let’s return to the
notion of a ‘note’: notes are normally considered the smallest element of musical
discourse, the musical ‘atom’. In the etymological sense, ‘atom’ means ‘indivisible
element’—an object that one cannot divide into smaller elements. Moreover, the
very notion of a note is actually quite ambiguous: the term is simultaneously used
to refer to a sonic event (a ‘musical sound’) and a symbolic object (the ‘note’ that
appears in the score).
However, the perceptual atom is only rarely the musical note. Perception is
interested in much larger objects, in structured ensembles of sounds (e.g. a melodic
sequence of notes). Additionally, we cannot say that the musical note (seen as a
sound) is indivisible; just as, since Niels Bohr, the atom is also no longer the atom,
since it can be broken down into smaller particles. If the atom can be compared to a
miniature solar system, similarly, a musical sound is a complex world into which we
can enter and within which we can explore.
We saw that spectral analysis allowed us to dissociate complex sounds into their
elementary components—with different frequencies, amplitudes and phases. Each
sound has a specific dynamic evolution along with attack and extinction transients;
and, in fact, each of the sound’s components has its own, independent dynamic
evolution. This huge internal richness is what makes certain sounds particularly
interesting to human perception. Thus, we arrive at a two-part pronouncement. First,
the musical note (seen as a sound) can, in fact, be broken down into very much
smaller elements. Second, more often than not, the note is not in and of itself an
object of perception: it is usually only one element within a much larger perceptual
group. Therefore, a note is just one level within a hierarchy of musical (perceptual)
structures.

Musical Objects
Many themes in music from the ‘classical’ period are built on very simple structures
like scales or arpeggios. That a theme includes, for example, the sequence C, Eb, G is
not really important. Even the fact that this sequence could help establish the key of C
minor is not essential. What is really important is for the listener to be able to
recognize this ‘arpeggio’ object itself: once learned, the sequence C, Eb, G will become
available for transformation later in the piece (e.g. through transpositions and
modulations to G, B, D or even Bb, E, G, Db, etc.). The harmonic colours and the
intervals will change, but all of these objects share a strong common identity. For
perception, what matters is the similarity of dynamic movement in ascending
arpeggios (Figure 42).
In computational language, we would say that each of these ‘arpeggio’ objects is an
‘instantiation’ of the same class, ‘arpeggio’. Each individual—each object—can still
be unique, through the interplay of parameters defined for the given class. This
similarity of structure can work to our advantage when employing a computer-
assisted composition program such as Patchwork.20
This notion of object is quite unlike the traditional notion of thematic
development; it is closer to the leitmotif idea, though it is different from that as
well. Musical objects as I’m defining them are extremely supple; they can be modified,
even to the point of progressively changing their identity (by subjecting them to
processes of transformation). The original form of the object, after successive
metamorphoses, can be forgotten—this is in complete contrast to the Wagnerian
leitmotif, whose role is of course to be recognized. Nevertheless, the idea of a class of
objects, from which other objects are derived, is the same in both cases. Because of its
role as a beacon for the listener, the leitmotif most often does not participate in the
development of forms and textures: it remains isolated in the midst of the discourse.
This is not exactly the kind of function I’m trying to endow objects with. Debussy
might provide a better illustration. While his music is not dominated by the idea of
thematic development, you never lose your footing when listening, perception is
never disoriented, and you always find points where your memory can anchor itself.
Debussy uses cells, motions and contours that allow for the identification of
similarities between objects. This makes it very difficult to analyse his music with
classical techniques: something else is going on.
In computer science terminology an object contains both data and the means
(‘methods’) for the exploitation of the data. The data for an arpeggio-object are a
harmonic field and some parameters. The method employed is the ‘arpeggio’
method, which consists of separating out certain sounds from the harmonic field, as a
function of certain parameters: speed of the arpeggio, range, size of steps, number of
steps, direction (ascending or descending), etc.
From the object class ‘arpeggio’, which we have just defined, we can derive
subclasses, another notion commonly used in both computer science and music
(whether consciously or unconsciously). Thus, one subclass of an arpeggio could be a
broken arpeggio: instead of a unidirectional motion, there will be a zigzag path. An
ordinary arpeggio and a broken arpeggio have different contours, yet they are clearly
related. The data and the methods of exploitation can be varied infinitely; however,
there will always be some sort of (more or less loose) relational link, and these
links will at least be visible from one step to the next, though after a certain number
of operations, it may very well become quite difficult to recognize the original object.
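The class/subclass idea can be sketched in object-oriented code. The names and parameters below are invented for illustration; they are not Patchwork's API. The data are a harmonic field plus parameters; the 'arpeggio' method extracts a path through the field, and the subclass overrides it to produce the zigzag of a broken arpeggio.

```python
# A minimal sketch of the 'arpeggio' object class and a subclass.

class Arpeggio:
    def __init__(self, harmonic_field, length=6, ascending=True):
        self.field = sorted(harmonic_field)   # data: the harmonic field
        self.length = length
        self.ascending = ascending

    def realize(self):
        """The 'arpeggio' method: a unidirectional path through the field."""
        notes = self.field[:self.length]
        return notes if self.ascending else notes[::-1]

class BrokenArpeggio(Arpeggio):
    def realize(self):
        """Subclass method: a zigzag path instead of a straight one."""
        notes = super().realize()
        zigzag = []
        for i, n in enumerate(notes):
            zigzag.append(n)
            if i % 2 == 1 and i + 1 < len(notes):
                zigzag.append(notes[i - 1])   # step back before continuing
        return zigzag

field = [48, 51, 55, 60, 63, 67]  # a C minor arpeggio field (MIDI numbers)
print(Arpeggio(field, length=4).realize())        # [48, 51, 55, 60]
print(BrokenArpeggio(field, length=4).realize())  # [48, 51, 48, 55, 60]
```

Each call with different data or parameters is an 'instantiation' of the class; varying them step by step is one way to model the gradual loss of recognizability described above.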
With these ideas in mind, we might take a fresh look at the music of the past.
Instead of holding on only to traditional criteria (thematic development, formal
models, tonal progressions. . .), we could explore structural and statistical
phenomena, as well as everything else that concerns the actual perception we have
of a piece, rather than focusing exclusively on its theoretical conception. To the idea
of a musical object, we could add other notions, such as texture. Rather than speaking
of counterpoint, polyphony, accompanied melody, etc., we could simply categorize
all of these as different types of texture. For example, seen this way, four-voice
counterpoint, which for a long time seemed to be the most perfect and advanced
form, is but one particular, limited texture—a specific configuration of textural
organization amongst an infinity of others. Though this perhaps pushes the point a
bit far, we could say that four-part counterpoint is simply a subgroup of much vaster
structures, such as Ligeti’s micro-polyphony . . .
Another perspective is that of the Norwegian composer Lasse Thoresen, who
developed a theory of textures, layers and strata in music. According to Thoresen,
within musical textures, certain layers are more visible (audible) than others.
However, the importance given to the various layers varies for each listener. For
example, classically trained musicians generally have the impression that popular
rock music sounds ‘impoverished’ (without depth). Our perception of the
foreground, the most apparent layer, is what ties in with our musical education
and thus it is often what we attend to: classically trained musicians seek harmonic
progressions, melodic development, etc.—all things that will not be found in popular
music. For rock musicians, by contrast, the most important layers are the rhythmic
and timbral layers—harmony and melody are mere ornaments in the background.
Everything is changed if we view things from this angle.
In one way or another, this type of analysis totally ‘short circuits’ traditional
notions of thematic development and formal models. If we now add in the idea of
process—transformation from one texture to another or generation of objects whose
characteristics vary progressively—we obtain some absolutely fascinating results.21 A
complex musical image—composed of textures and objects—comes to life, and then,
by way of transformations affecting its components, evolves towards another quite
different image (into which the various processes at work will progressively transform
it). Numerous recent compositions have employed this type of organization:
processes and metamorphoses alter the musical objects, generating intermediate
situations with new, even unheard of characters—while also conferring a tension
(and a powerful sense of directionality) to the musical discourse through the
instability created by these transformations.

Allégories (1990)
Allégories is written for six instruments: flute, clarinet, violin, cello, horn and
percussion. It also requires a real-time electronic performance apparatus consisting of
a Macintosh computer, a MIDI keyboard (that does not, itself, make any sound, but
sends MIDI signals), and a Yamaha TX-816 synthesizer. The TX-816 includes eight
modules (each of which has the power of a DX-7 synthesizer), which can produce a
total of 8 times 16 polyphonic voices. These 128 voices allow me to create a sort of
real-time additive synthesis. Since the electronic textures in the piece are too complex
to be played directly by one keyboard player, the computer controls the synthesis
modules using the commands sent by the MIDI keyboard as cues. The computer uses
the program MAX (the Macintosh version of which was still under development at
IRCAM when this piece was composed).
At the time I composed Désintégrations (1983) for orchestra, this type of system
did not exist, and real-time realization was still very difficult. This is why composers
continued to rely on pre-recorded tapes to play back their electronic sounds.
However, these tapes created a major problem: synchronizing the tape and the
instrumental ensemble. In Désintégrations the conductor is forced to use an earpiece
through which he hears ‘clicks’ corresponding to the beats in the score. The tape has
four tracks, one of which is reserved for these ‘clicks’—which faithfully follow the
changes of tempo and meter.22 Obviously, the ‘click track’ technique imprisons the
conductor: any rubato whatsoever becomes impossible. This is a difficult constraint
for the conductor, but also for the composer, who can no longer count on the
suppleness of interpretation to repair potential holes in the writing. In a sense, the
interpretation is fully planned in advance and fixed—at least as far as durations are
concerned. In certain cases, this can be a good thing, because potential
misinterpretations are avoided; but sometimes a good interpretation can transfigure
a piece and reveal within it aspects that the composer himself had not imagined, and
this potential is eliminated by the ‘click-track’. This is why real-time electronics are
desirable, at least in terms of allowing a much more supple synchronization with
instruments and conductors.
The electronic techniques used in Allégories are relatively modest; yet it still
attempts to replicate the idea behind Désintégrations, where the electronic sounds
enrich and complete the instrumental discourse. However, there is one major
difference: in Allégories, the electronic sounds follow the conductor, and not the
other way around. The electronic part is essentially decomposed into small events
(objects or textural elements), which are triggered at the right moment by an
instrumentalist playing on a MIDI keyboard. The notes played by the
instrumentalist have nothing to do with the sounds one hears, they are simply
codes interpreted by the computer—each one corresponding to musical events,
which are sometimes already stored in the program and sometimes generated on-
the-fly, during the performance.
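The triggering scheme might be sketched as a simple dispatch table. Event names and key assignments below are invented, and the actual piece uses MAX rather than Python; the point is only that each incoming key is a code, not a pitch.

```python
# A hedged sketch of MIDI keys as event codes, not pitches.

def stored_event():
    return "play a pre-built electronic aggregate"

def generated_event():
    return "generate a texture on the fly during performance"

EVENT_MAP = {
    36: stored_event,     # one key -> an event stored in the program
    37: generated_event,  # another key -> an event computed in real time
}

def on_midi_note(note_number):
    """Dispatch a key-press to its musical event; unmapped keys do nothing."""
    handler = EVENT_MAP.get(note_number)
    return handler() if handler else None

print(on_midi_note(36))
```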

Additive Synthesis
We saw earlier that all musical sounds are divisible into elementary sonic
components. Inversely, a sound can be reconstructed from these elementary
components. The reason that additive synthesis is so attractive to me resides in the
ease with which the composer can control (‘compose’) each detail of the sound.
Almost the entire tape of Désintégrations was created in this way. Certain of the
electronic sounds evoke percussion, piano, trombone or cello; however, in reality,
they are totally artificial sounds obtained through analysing instrumental spectra.
These spectra are then manipulated, re-interpreted and deformed by the computer
before being used as the basis for synthesizing these completely artificial sounds. With
this technique, sounds that evoke instruments can be ‘re-composed’ just as easily as
hybrid sounds (sonic ‘monsters’).
In a certain way, this mode of synthesis is very primitive—and, in any case, it is
very laborious. Its roots date back to Stockhausen’s first experiments, in which he
sought to construct sounds from sinusoidal generators. The technology available at
that time was certainly awkward: the generators were large boxes that had to be tuned
by hand, then recorded and mixed over and over again (since each generator was
monophonic). This all became much easier with computers. Nevertheless, creating
sounds with additive synthesis remains complex and difficult. For example, in
Désintégrations to create an interesting sound it was often necessary to keep track of
10–30 components per sound, with 10–15 separate parameters for each component:
pitch, dynamic, duration, time of attack, dynamic envelopes, spatialization envelope,
vibrato—with its different parameters (envelope, frequency, amplitude), spatializa-
tion, etc. There were often several hundred parameters for a single sound.
Programming these parameters manually was, of course, impossible. Therefore, I
needed to write a program that could calculate all of the necessary parameters as a
function of global musical data. For example, I needed to be able to specify to the
computer that an oboe spectrum would be used, that the global duration would be x
seconds, that the attacks would not be simultaneous (but rather staggered with an
acceleration effect), that the vibrato would have a certain frequency (speed) for the
lowest component and another for the highest component, etc. The program then
performed all of the necessary intermediate calculations, carried out any interpola-
tions needed, and supplied the list of parameters required for synthesis. Clearly this
work remained rather cumbersome, even with computer assistance; however, even
now additive synthesis still seems the appropriate procedure if you want to control
the finest details of the sound.
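A bare-bones sketch of the principle follows. It is far simpler than the synthesis described above: one frequency, amplitude and dynamic envelope per partial, rather than the 10–15 parameters per component mentioned for Désintégrations; the bell-like spectrum is invented.

```python
import math

# Additive synthesis in miniature: each partial is a sine wave with its
# own independent envelope; the partials are then summed.

SR = 44100  # sample rate in Hz

def partial(freq, amp, dur, attack=0.01):
    """One sinusoidal component with a linear attack and exponential decay."""
    n = int(dur * SR)
    out = []
    for i in range(n):
        t = i / SR
        env = min(t / attack, 1.0) * math.exp(-3.0 * t / dur)
        out.append(amp * env * math.sin(2 * math.pi * freq * t))
    return out

def additive(components, dur):
    """Sum independently enveloped partials into one waveform."""
    voices = [partial(f, a, dur) for f, a in components]
    return [sum(s) for s in zip(*voices)]

# slightly inharmonic partials with decreasing amplitudes
spectrum = [(220.0, 1.0), (446.0, 0.6), (685.0, 0.4), (940.0, 0.25)]
wave = additive(spectrum, dur=0.5)
print(len(wave), round(max(wave), 3))
```

The attraction, as stated above, is that every detail of the result is composed: change any component's frequency, amplitude or envelope and the timbre changes accordingly.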
The Yamaha DX and TX synthesizers function on the principle of frequency
modulation, which allows the construction of rich sounds with relatively few
parameters. Nevertheless, the detailed make-up of these sounds is often beyond the
programmer’s control. In Allégories, I actually use the potential of frequency
modulation synthesis very little: only for some sounds, which are played at the
beginning of the piece. All of the other electronic sonorities in the piece are created
through additive synthesis: the synthesizer emits only sinusoidal tones, whose
amplitude envelopes (percussive sounds, very soft attacks, shorter or longer
resonances) and aspect (various vibratos or phase differences) are varied.

Some Examples of How Electronic Sounds Are Used


The electronic sound that opens the piece is an exception to this rule: it is a very
complex sound resembling coloured noise, produced through frequency
modulation. Nevertheless, as long as we know the carriers used and the ratio of
modulation (modulator/carrier), we can analyse its components.
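The analysis alluded to here rests on the fact that simple frequency modulation produces components at the carrier plus and minus integer multiples of the modulator. A sketch with invented values (not the actual carriers of Allégories):

```python
# Components of a simple FM spectrum: |carrier ± n·modulator|.

def fm_components(carrier, ratio, n_sidebands=4):
    """List the frequencies of an FM spectrum from carrier and mod/carrier ratio."""
    mod = carrier * ratio
    freqs = {carrier}
    for n in range(1, n_sidebands + 1):
        freqs.add(abs(carrier + n * mod))
        freqs.add(abs(carrier - n * mod))
    return sorted(freqs)

print(fm_components(400.0, 1.4))  # an inharmonic ratio gives a noisy, bell-like set
```

Non-integer modulator/carrier ratios yield inharmonic component sets, which is what pushes the result towards the 'coloured noise' family described next.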
The synthesizer plays four superimposed spectra (see Figure 43). The resultant
sound is in the ‘noise’ family and slightly resembles a tam-tam: tam-tam resonance
is also a sort of coloured noise—a complex agglomeration of frequencies that are
very close to one another. These ‘coloured noise’ sounds differ from ‘white noise’
in that particular colours (frequency bands) and registers (low, high) are audible
within the ‘noise’. As we saw above, the problem that tam-tams—and in general, all
percussion instruments—pose is that there is no way to know exactly how the
instruments that will be used in a given performance will sound. In other words, in
concert situations, the colour of the tam-tam (or other percussion instruments) is
almost never exactly what the composer had in mind. If a musical effect, like
mixing the sounds into an instrumental aggregate (remember the cymbal in section
I of Désintégrations), depends on a precise colour, this can be a real problem. For
this opening to Allégories, my solution was to mix a real tam-tam (with the natural
life of its rich resonance) with synthetic frequencies that precisely supply the
required harmonic/timbral colour. Thus, one possible role for electronics is to
specify or enrich the frequency-content of acoustic sounds. Furthermore, certain
components of frequency modulation sounds were used in writing the instrumental
parts—which enhances the fusion between the synthesizer and the instruments
(Figures 44 and 45).
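The derivation of FM component frequencies described above (carrier ± multiples of the modulator, with negative results folding back) can be sketched in a few lines. This is a generic illustration, not a reconstruction of the actual spectra of Figure 43; the carrier and modulator values are hypothetical.

```python
def fm_components(carrier, modulator, max_index=5):
    """Component frequencies of a simple frequency-modulation spectrum:
    carrier +/- i * modulator, for i = 0..max_index.
    Negative results fold back to positive frequencies."""
    freqs = set()
    for i in range(max_index + 1):
        for f in (carrier + i * modulator, carrier - i * modulator):
            freqs.add(abs(f))  # fold-back of negative frequencies
    return sorted(f for f in freqs if f > 0)

# Hypothetical carrier/modulator pair (not values from Allégories):
print(fm_components(440.0, 170.0, max_index=3))
```

Since the components are fully determined by the carrier and the modulation ratio, such a list is exactly what makes the analysis mentioned above possible.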
Contemporary Music Review 251
Downloaded by [Karlstads Universitetsbibliotek] at 07:35 09 January 2012

Figure 43 Frequency modulation, beginning of Allégories.


252 T. Murail (trans. by A. Berkowitz & J. Fineberg)
Numerous electronic sounds in Allégories are similar to the resonances of metallic
percussion instruments. This type of very dense, complex aggregate enhances fusion;
i.e. it reinforces the perception of an aggregate as a timbre more than a harmony.
Nevertheless, the percept of such rich and complex spectra remains ambiguous
(Figures 46 and 47a, b, c).
Clouds of high sounds are another type of sound frequently heard in the piece (e.g.
in sections A and O)—or clouds of low sounds in section I. These clouds are
composed of notes selected semi-randomly from a spectral reservoir. The
synchronization of the instruments to these clouds is, thus, only approximate—a
synchronized beginning is all that matters in this context (Figure 48).


The pitches in these clouds come from the upper portion of a distorted harmonic
spectrum. The precise sequence shown in Figure 48 does not occur in the score; it is
one among thousands of possible combinations, only a few of which were actually
used in the piece.
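As a rough illustration of how such a cloud might be generated, here is a sketch that draws notes semi-randomly from a spectral reservoir. The fundamental, partial range and cloud length are hypothetical, and the reservoir here is an undistorted harmonic series for simplicity (the piece uses the upper portion of a distorted spectrum).

```python
import random

def harmonic_reservoir(fundamental, lo_partial, hi_partial):
    """Upper portion of a harmonic spectrum, used as a pitch reservoir.
    (Illustrative only: Allégories draws on a *distorted* spectrum.)"""
    return [fundamental * k for k in range(lo_partial, hi_partial + 1)]

def cloud(reservoir, length, rng=random):
    """Draw a semi-random sequence of frequencies from the reservoir.
    Each realization is one of thousands of possible combinations;
    only the synchronized beginning of the cloud matters."""
    return [rng.choice(reservoir) for _ in range(length)]

# Partials 16-32 of a hypothetical 50 Hz fundamental:
pool = harmonic_reservoir(50.0, 16, 32)
print(cloud(pool, 10))
```

Re-running the last line produces a different combination each time, which mirrors the fact that the precise sequence of Figure 48 need not occur in the score.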
The synthesized sounds are sometimes in a closer relationship with the
instrumental sounds; in several sections of the score (e.g. sections C, L, M and
N), they create echoes or pre-echoes of instrumental sounds. At other times, they
add synthesized formants to the notes played by the instruments (e.g. the end of
section N). Often, the attacks of the partials are desynchronized so as to produce a sort of sweep through the spectrum.

Figure 44 Electronic frequency components doubling the instruments. Note: The sound of the synthesizer is shown with an approximation to the nearest eighth of a tone. The instruments are approximated to the nearest quarter-tone. The slight difference in frequency between the instruments and the electronics does not diminish the fusion (since it is smaller than the interval of the critical band).

Figure 45 Allégories: beginning of section A.

Figure 46 Percussive aggregates, section H (bars 2 and 12, respectively). Note: The chords are represented in the form of arpeggios to facilitate their reading.

All of these synthetic sounds are based on
spectral analyses of the instruments that they complement. However, they are never
used to replace an acoustic instrument; rather they enrich or diffract the
instrument’s sound.
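The claim in the note to Figure 44, that a frequency deviation smaller than the critical band does not diminish fusion, can be checked numerically. The critical-band model used below is the Glasberg–Moore ERB approximation; that choice is my assumption, since the text does not specify which measure of the critical band is meant.

```python
def erb(frequency_hz):
    """Equivalent rectangular bandwidth (Glasberg & Moore, 1990):
    a standard approximation of the ear's critical bandwidth, in Hz."""
    return 24.7 * (4.37 * frequency_hz / 1000.0 + 1.0)

def step_hz(frequency_hz, octave_fraction):
    """Size in Hz of a microtonal step above a given frequency
    (an eighth of a tone is 1/48 of an octave)."""
    return frequency_hz * (2.0 ** octave_fraction - 1.0)

# An eighth-tone mistuning stays far below the critical band
# throughout the instrumental register:
for f in (110.0, 440.0, 1760.0):
    print(f, round(step_hz(f, 1.0 / 48.0), 2), round(erb(f), 1))
```

At 440 Hz, for instance, an eighth-tone is on the order of 6 Hz while the ERB is over 70 Hz, which is consistent with the fusion observed between synthesizer and instruments.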

Musical Construction of Allégories


Let’s return to the idea of musical objects. The analysis I’m presenting now is an a
posteriori look at the piece. I do not pretend to have composed the piece in this way—
in any case, not consciously. However, the successive transformations of the initial
object that I will describe are certainly present in the music, even if they do not result from a deliberate pre-compositional plan.

Figure 47 (a) Section H, bars 1–13. (b) Section H, bars 1–13 (continued). (c) Section H, bars 1–13 (continued).

Figure 48 Semi-random clouds of high sounds.
The initial object is simple, almost banal, but choosing it was not so simple. I
needed a very special, malleable object: one that was susceptible to metamorphosis,
but also one that was sufficiently distinctive that it could be easily recognized—yet
not so distinctive that it could not undergo extensive transformations. It is helpful if
such an object is simple and striking, but it is not necessary—on the contrary—that it
be complex or even very interesting. A perfect example is the initial cell of
Beethoven’s Fifth Symphony: a not very sophisticated melodic fragment. However,
this simple idea allows for many subsequent transformations. Without wanting to
inflate the analogy or compare my piece to Beethoven’s, this is a bit like what happens
here.
Figure 49a shows a schematic representation of the initial object. It consists of what Messiaen calls a ‘rocket group’: rapid ascending lines in several superimposed instruments, which reach a small accent, prolonged by a trilled resonance. Over
the course of the piece, a certain number of ‘subclasses’ of this group are created,
which in turn are used to form new ‘subclasses’. For example, at the very beginning,
the object is preceded by an anacrusis—a horn call (see Figure 45). This ‘subclass’ returns again in section G (Figure 49b). Later in section A, the trilled resonance (actually transformed into tremolos) dissolves into clouds of sounds—the ones we spoke of just a couple of pages ago (Figure 49c, Figure 50a and Figure 50b).

Figure 49 (a) Schematic representation of the initial object. (b) Object preceded by an anacrusis – a horn call. (c) The trilled resonances dissolve into semi-random clouds of sounds.

Figure 50 (a) Section A, bars 37–42. (b) Section A, bars 37–42 (continued).
Or, on other occasions, that resonance shatters into a melodic entanglement of
intertwined spirals (Figure 51). Another frequently used subclass is a ‘rocket group’
that reaches a resonant chord (a sort of amplification of the little initial accent; Figure
51b).
These derived forms are transformed in turn, allowing the creation of the table shown in Figure 52. With this diagram, it is easy to follow the successive
metamorphoses of the object. For example, the simplification to ‘rocket group’ +
percussive chord (a), then the simplification of the ‘rocket group’ to groups of
grace notes as an anacrusis to the chord (a, b, o). At letter c, only the chord itself
remains, sometimes followed by a small ornamental group. The percussive attack
then progressively weakens, leaving objects with a soft attack (crescendo–decrescendo) and long resonance (c, h). Then the different components of the
chord desynchronize (h)—at this point, the ambitus of the objects has also become
very large.
The ‘cloud’ of sounds, which at the outset is only a short resonance of trills,
achieves autonomy at letter d, becoming a fully fledged musical structure. While
section d is very short, its contents are developed later at letter l (section d can thus be
considered as a sort of pre-echo of section l). Sometimes, the ‘cloud’ superimposes
itself upon the interlocking texture of h. This occurs in section m, which itself is pre-
figured by another pre-echo in section e. Similarly, the form ‘o’ (intertwined
descending spirals) comes from the final phase of an object found in a. At the centre
of the piece, there are some inverted forms. The structure of these objects was
reversed as in a mirror (in the previous schema this sort of derivation is indicated by
dotted arrows). However, the harmonic contents do not undergo this mirror-symmetrical inversion, which, in a spectral context, would not make sense—or would, at least, be very arbitrary.

Figure 51 (a) The resonance shatters into a melodic entanglement of intertwined spirals. (b) A ‘rocket group’ reaches a resonant chord.

Figure 52 Various transformations of the initial object. Note: The small letters correspond to the sections of the piece in which one can hear these various forms.
In fact, the harmonic contents change continuously. Allégories attempts to create
a formal discourse linked to functional development of the harmony. The harmonic
successions are integral to the form of the piece and not simply a ‘colouration of
time’. As such, the harmonies are quite different from Messiaen’s conception—in
which harmonies ‘colour’ the durations. For me, harmonic progressions are as important as the formal and dynamic structure of gestures and durations; poorly
chosen harmonies or durations can contradict and destroy the musical discourse
that one hoped to create. Herein lies one of my primary compositional concerns:
finding the harmonic progression that best represents the musical image that I have
in mind. This is by no means an obvious task—especially since it is not only the
intrinsic colour of the object that counts, but also its relation to the larger context.
Moreover, these harmonic successions are often realized by complex aggregates,
possessing a large number of finely adjusted components. Organizing the harmonic
evolution of such aggregates is not easy; there are no formulas or algorithms that
can juggle all the aspects, and, in the end, the best judges are still intuition and experimentation.
If we look again at the global evolution of the piece, we can see that an interplay of
relationships is created. They can be schematized as shown in Figure 53.
Once again, this schema corresponds to the final state of the piece, and not to a
completely pre-established plan. My initial plan, for example, contained five parts;
however, in the end only four remained. What is now section l, which is composed of many ‘clouds’ of sounds, was initially supposed to occur just after section c.
However, it seemed to me that section l was too elaborate for that particular
moment—it would have been too close to the beginning of the piece. It is hard to
explain these types of decisions in a purely rational way. Perhaps I needed to hear
less distorted forms of the initial object at this early stage of the piece. On the other hand, the structure of the ‘clouds’ did function well as a sort of parenthesis; therefore, I inserted an abridged version of this future section l, which became section d. In the same way, e is a summary of the future section m. It also seemed to me necessary to have a return to the initial situation before going on to explore more distorted and distant regions (section g, which evokes the beginning of section a). These distant correspondences between sections are symbolized on the diagram by dotted lines.

Figure 53 Global structure of Allégories. Note: The arrows indicate a progressive transition from one section to another. The double diagonal lines indicate a rupture.
The passage from one section to another can occur continuously, without rupture,
when one process provokes a progressive change of texture. These smooth transitions
are marked with an arrow. In these cases, there is no clearly perceptible end or
beginning to the sections—the letters are mere reference points for analysis or
rehearsals. At other moments, the transition from one section to the next provokes a
rupture in the discourse (symbolized by a double diagonal line). Please note that the
smooth transitional processes occur at the beginning and end of the piece: the most
disjointed part is part III.
As I said earlier, the harmonic processes support the formal processes. In the same
way that the three sections in part I are smoothly connected gesturally, there is a
single (smooth) harmonic progression that unifies them as well. This harmonic
process is built of a series of distortions of an aggregate drawn from a harmonic series
(this aggregate can be found at the beginning of section C). The piece opens with very
distorted spectra (a strongly inharmonic starting point that nonetheless is related to
the harmonic goal), then the spectra grow progressively less and less distorted, in a
zigzag evolution that avoids too much predictability, until the tension has been
released and the ‘defective’ harmonic spectrum that opens section C (and was the basis for all the distortions) is heard.
The harmonic object towards which the process is directed is a fragment of a
harmonic spectrum (containing partials 3, 5, 7, 9, 11, 13, 15, 18, 20 and 29).
Figure 54 shows this aggregate and its first two distortions. The reference partials
used to calculate the distortions are harmonics 3 and 29. For the first distortion,
the third harmonic is raised by 4.5 Hz, while the 29th harmonic is lowered by 62
Hz: the rest of the spectrum is modified as a function of those reference notes.
Thus a compressed spectrum is created: the low partials are raised and the higher
ones are lowered. The second distortion (3rd harmonic raised by 0.8 Hz, 29th
harmonic lowered by 90 Hz) generates another spectral compression with a
different colour. These ‘first two’ distortions are in fact the last two chords in the
progression, since the process converges on the harmonic spectrum of C (null
distortion) (Figure 55).
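The distortion procedure just described can be sketched as follows. The reference deviations (+4.5 Hz on partial 3, −62 Hz on partial 29) come from the text; the Bb fundamental of about 58.27 Hz and the linear-in-rank interpolation of the deviation are my assumptions for illustration, since the article does not give the actual interpolation law.

```python
def distort_spectrum(fundamental, ranks, dev_lo, dev_hi, ref_lo=3, ref_hi=29):
    """Distort a fragment of a harmonic spectrum.
    dev_lo / dev_hi are the deviations in Hz applied to the reference
    partials ref_lo and ref_hi; every other partial is shifted by a
    value interpolated linearly (by rank) between the two references.
    NOTE: the linear-in-rank law is an assumption for illustration,
    not Murail's actual distortion function."""
    out = []
    for r in ranks:
        t = (r - ref_lo) / (ref_hi - ref_lo)
        deviation = dev_lo + t * (dev_hi - dev_lo)
        out.append(fundamental * r + deviation)
    return out

# The fragment from the text (partials 3..29), on an assumed Bb1
# fundamental of 58.27 Hz; first distortion per the text:
ranks = [3, 5, 7, 9, 11, 13, 15, 18, 20, 29]
print(distort_spectrum(58.27, ranks, +4.5, -62.0))
```

With a positive deviation below and a negative one above, the result is indeed a compressed spectrum: low partials raised, high partials lowered, and zero deviation recovers the pure harmonic fragment.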
This convergence does not happen in the linear way you see on the graph. I wanted
dynamic harmonies that are continually changing. They needed to be oriented
towards a specific goal, but without creating the effect of an inexorable slide (which
would surely have resulted from a purely linear evolution of the distortion
coefficients). While we are certainly moving towards a goal, the trajectory is
capricious. To reduce the sensation of predictability a bit more, I vary slightly the
number and quality of spectral components in each aggregate. Figure 56 shows the final harmonic progression, which extends from the beginning of section B to the beginning of section C (with indications for the harmonic ranks used and the reference deviations). And Figures 57a, b, c and d show the corresponding portion of the final score.

Figure 54 Fragment of a harmonic spectrum on Bb and two distortions. Note: The values indicated under the partials 3 and 29 are the deviations in hertz that affect them.

Figure 55 Evolution of the distortions from B to C. The curves indicate the amounts of distortion that affect the two reference partials.

Figure 56 Harmonic progression from B to C.
These timbre-harmony aggregates are often quite interesting in and of
themselves. Nevertheless, it is, yet again, the relationships between the elements
that matter most. The entire goal is to organize the progression in a satisfying
manner. There is no hard and fast rule for this; it is a complex question, especially
with these types of rich, microtonal aggregates. However, in spite of the novelty of
the harmonies, the problems that must be solved are eternal: renewal or repetition
of the aggregates, presence or absence of ‘common tones’, attention to the motion
of the outer-most ‘voices’ (which are generally more salient), interplay of registers,
etc.
In certain cases, we need to hear a quick turnover of pitches (or at least have the
illusion of constantly hearing new pitches). This is what happens in this section of
Allégories, where the harmonic rhythm is rapid. Here, any impression of pitch stasis
would lead to an effect of redundancy or ‘pleonasm’, which would be unpleasant—
because it would contradict the formal direction of the passage. However, when we
arrive at the final aggregate (at letter C)—which is by nature harmonic—we find
ourselves in a situation of harmonic stability—making pitch repetitions or even some
redundancies welcome.
I believe that the kinds of problems we have discussed arise in every period and in
all types of music. They are rarely highlighted and explained by traditional analysis,
which tends to look for the generative techniques of a musical style, rather than
studying the phenomenological reality of musical works. By studying this
phenomenological reality, one can say—as Messiaen liked to affirm—that ‘the music
of Mozart is not tonal, but rather chromatic’. One could also say that very many
‘serial’ works are seductive because they are, in fact, modally organized (emphasized
notes, frozen harmonic fields. . .). With regard to pieces that are called ‘spectral’, they
are undoubtedly more valuable for their original formal organization and the novel
ways they shape time than for their harmony–timbre aggregates (which, though often strikingly different, have no intrinsic value except insofar as they express the form and manipulate our perception of time).

Figure 57 Allégories, section B.

Notes
[1] The absence of a precise and agreed-upon definition of a musical sound is sufficient to make
the interpretation of musical language directly modelled on grammatical-linguistic schemata
impossible.
[2] We can mention, for example, a Japanese bamboo flute called the shakuhachi, which is able
to produce a variety of ‘Aeolian’ sounds (that is to say mixtures of breath and sound). For
this reason it has become quite fashionable among young composers, who are not necessarily
Japanese.
[3] Though, in classical music theory, timbre is considered little more than an inexplicable
residue: ‘that which allows for the differentiation of sounds with the same pitch and
intensity.’
[4] Intonation exists in languages devoid of pitch, but it only serves to specify intention, or
expression (interrogation, exclamation), while in tonal languages, pitch is itself a
discriminating feature with its own impact on meaning.
[5] One of the Russian republics, situated to the North of Mongolia, whose ethnicity and culture
is similar to the Mongols.
[6] In this analysis, we formulated the hypothesis that the piano is a ‘harmonic’ instrument (i.e.
one whose spectrum would correspond precisely to a harmonic series). The sound of the
piano is, in reality, a bit inharmonic and presents a slight harmonic ‘distortion’. This kind of
harmonic distortion is a very interesting phenomenon about which we will speak more later.
[7] ‘Out-of-tune’ is used here to mean an involuntary and awkward result, one that does not
make sense in the stream of musical discourse. While one can certainly seek effects of
intervallic awkwardness with an expressive or colouristic goal, as long as the context is
coherent the sensation produced is not that the music is out of tune.
[8] In other words, if one has a fundamental of 100 Hz, the third harmonic will be 300 Hz (3 × 100), the fifth harmonic will be 500 Hz (5 × 100), etc. The relationship between harmonics 4 and 3 will thus be 4/3, and so on.
[9] Terhardt’s algorithm attributes a ‘perceptual weight’ to each of the partials of the sound. This
‘perceptual weight’ depends upon the amplitude of the partial, but also on possible masking
phenomena and the frequency response curve of the ear. If the weight of a given partial is
zero or very weak, it can probably be ignored.
[10] The spectra of the upper register of the flute, oboe and clarinet are all very similar. Their
timbre remains recognizable because of how they are played and because of the differences in
how they sustain the sound. Vibrato, breath effects, emission noises, etc. produce secondary
effects allowing the instruments to be identified. However, within a rich orchestration, these
instruments can easily substitute for one another without changing the global sonority.
[11] In my more recent mixed instrument and electronic pieces—written after this conference—I
have used techniques allowing the computer playback of the synthetic sounds to be
synchronized with the conductor’s beat.
[12] A frequency modulation or ring modulation spectrum can actually be fully harmonic if the
carrier and modulator or the sounds to be modulated are in a mathematically simple
relationship: in other words, if they are part of the same harmonic spectrum. In the graphic
representation above, a linear spectrum will be harmonic if the line that represents it
intersects the x axis at a whole number value (i.e. the value of ‘i’, the index of modulation).
[13] The Yamaha DX7 was the first commercial synthesizer to use the technique of frequency
modulation.
[14] See ‘Target Practice’ (in this issue), Example 1.
[15] This conference included a description of the Patchwork program for computer-assisted
composition, some basic notions of how MIDI represents notes, and some examples of
simple musical algorithms. At that time, all of this was relatively new for composers. Now,
however, these concepts are better known and documented. Therefore, it did not seem
necessary to transcribe those passages.
[16] Since the time of these conferences, a newer program OpenMusic has largely replaced
Patchwork. Both programs are based on a similar paradigm, but the newer realization has
greater possibilities. OpenMusic is now widely used by composers.
[17] During the conference, Dominique My performed these examples on the piano; she also
performed the work in concert.
[18] The notions of harmonicity and roughness ought to take into account the interactions
between all possible combinations of pitches in an aggregate. In this case, it is simple, but
when the harmonic or spectral aggregates contain numerous, non-tempered components,
the problem becomes extremely complicated.
[19] Because 7 × 7 = 49. In fact, owing to approximation errors, C# would correspond more
closely to the 51st harmonic (or 50th or 52nd, all of which are quite close to each other and
all of which would have to be approximated to C# when approximating to the nearest
semitone).
[20] And even more so with its successor, OpenMusic.
[21] Striking examples of textural transformation can be found in Gérard Grisey’s Modulations.
At one point in the piece, a complex texture (close to Ligeti-style micro-polyphony)
progressively simplifies, becoming a sort of counterpoint, which in turn congeals into a
sequence of chords.
[22] Tape can now be replaced by digitized sound-files, but the problem of synchronization
remains.
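The arithmetic in note [19] can be verified directly: the 7th harmonic lies about 33.7 equal-tempered semitones above its fundamental, so rounding it to 34 semitones and stacking the interval twice gives 68 semitones, a frequency ratio of roughly 50.8, which is closer to the 51st harmonic than to the exact 49th. A quick check (generic arithmetic; nothing here is specific to the score):

```python
import math

def semitones(ratio):
    """Size of a frequency ratio in equal-tempered semitones."""
    return 12.0 * math.log2(ratio)

exact = semitones(7)             # 7th harmonic: about 33.69 semitones
stacked = 2 * round(exact)       # two semitone-rounded 7th harmonics: 68
ratio = 2.0 ** (stacked / 12.0)  # back to a frequency ratio: about 50.8
print(round(exact, 2), stacked, round(ratio, 1))
```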
