Visual Music
A Short History and Aesthetics
Michael Filimowicz, PhD
Optophonia
Copyright 2022
Vancouver, BC, Canada
optophonia.com
The Poetics of Synaesthesia
iTunes Visualizer, music-reactive generative visuals
Poetics in this investigation refers broadly to ‘principles of making’ and, by extension,
general design principles. The origins of poetics lie in Aristotle’s text, Poetics, which
analyzed literary-theatrical narratives. Today the scope of poetics is much wider and can
cover any media- and meaning-making terrain where patterns of making can be
formally analyzed.
For most of its long history, the term poetics subsumed attempts to reveal
the inner logic of a work of art in an examination of its formal and
constituent features while inevitably raising problems of intention,
meaning, and interpretation.
With the advent of new technologies and an increasing differentiation of
media, the medium of print has lost some of its status while other
technologies vie for acceptance alongside it. Accordingly, in critical
discourse, new media studies have gained ascendancy over poetics.
Poetics, broadly understood, takes as its subject matter a hermeneutic
process productive of meaning and responsive to communication, even
where this process is intentionally made difficult for artistic purposes, a
view that has been hotly contested as a result of the emergence of new
technologies. (source)
Visual music predates the contemporary concept of the music video by at least half a
millennium, depending on how one qualifies it. Most historical commentary on visual
music finds an origin point in the early color organs, the earliest known
technological precursors to today’s iTunes Visualizer. Many commentators, however, do
not go so far back, and relate visual music to early 20th Century trends in Modernism.
In today’s maker culture, there is often a lack of deep historical sense and context. At the
start of the first video embed below, an Arduino-tinkerer claims that the color organ
originated in the 1970s (!) when there was a genre of music-light interactive consumer
novelties that were also called color organs. Here is what Wikipedia has to say about
color organs:
The dream of creating a visual music comparable to auditory music
found its fulfillment in animated abstract films by artists such as Oskar
Fischinger, Len Lye and Norman McLaren; but long before them, many
people built instruments, usually called “color organs,” that would
display modulated colored light in some kind of fluid fashion comparable
to music.
— William Moritz
The term color organ refers to a tradition of mechanical devices built to
represent sound and accompany music in a visual medium. The earliest
created color organs were manual instruments based on the harpsichord
design. By the 1900s they were electromechanical. In the early 20th
century, a silent color organ tradition (Lumia) developed. In the 1960s and
’70s, the term “color organ” became popularly associated with electronic
devices that responded to their music inputs with light shows. The term
“light organ” is increasingly being used for these devices, allowing “color
organ” to reassume its original meaning.
In 1590, Gregorio Comanini described an invention by the Mannerist
painter Arcimboldo of a system for creating color-music, based on
apparent luminosity (light-dark contrast) instead of hue.
In 1725, French Jesuit monk Louis Bertrand Castel proposed the idea of
Clavecin pour les yeux (Ocular Harpsichord). In the 1740s, German
composer Telemann went to France to see it, composed some pieces for it
and wrote a book about it. It had 60 small colored glass panes, each with a
curtain that opened when a key was struck. In about 1742, Castel proposed
the clavecin oculaire (a light organ) as an instrument to produce both
sound and the ‘proper’ light colors.
In 1743, Johann Gottlob Krüger, a professor at the University of Halle,
proposed his own version of the ocular harpsichord.
In 1816, Sir David Brewster proposed the Kaleidoscope as a form of
visual-music that became immediately popular.
In 1877, US artist and inventor Bainbridge Bishop was granted a patent for his first
Color Organ. The instruments were lighted attachments designed for pipe
organs that could project colored lights onto a screen in synchronization
with musical performance. Bishop built three of the instruments; each was
destroyed in a fire, including one in the home of P. T. Barnum.
In 1893, British painter Alexander Wallace Rimington invented the Clavier
à lumières. Rimington’s Colour Organ attracted much attention, including
that of Richard Wagner and Sir George Grove. It has been incorrectly
claimed that his device formed the basis of the moving lights that
accompanied the New York City premiere of Alexander Scriabin’s
synaesthetic symphony Prometheus: The Poem of Fire in 1915. The
instrument that accompanied that premiere was lighting engineer Preston
S. Millar’s chromola, which was similar to Rimington’s instrument.
In a 1916 art manifesto, the Italian Futurists Arnaldo Ginna and Bruno
Corra described their experiments with “color organ” projection in 1909.
They also painted nine abstract films, now lost.
In 1916, the Russian Futurist Painter Wladimir Baranoff-Rossiné
premiered the Optophonic Piano in Kristiania (Oslo, Norway) and later at
the Bolshoi Theatre (Moscow, Russia) in 1925.
Castel’s Ocular Organ. A caricature of Louis-Bertrand Castel’s “ocular organ” by Charles-Germain
de Saint-Aubin. Source
In 1918, American concert pianist Mary Hallock-Greenewalt created an
instrument she called the Sarabet. Also an inventor, she patented nine
inventions related to her instrument, including the rheostat.
In 1921, Arthur C. Vinageras proposed the Chromopiano, an instrument
resembling and played like a grand piano, but designed to project “chords”
composed from colored lights.
In the 1920s, Danish-born Thomas Wilfred created the Clavilux, a color
organ, ultimately patenting seven versions. By 1930, he had produced 16
“Home Clavilux” units. Glass disks bearing art were sold with these
“Clavilux Juniors.” Wilfred coined the word lumia to describe the art.
Significantly, Wilfred’s instruments were designed to project colored
imagery, not just fields of colored light as with earlier instruments.
In 1925, Hungarian composer Alexander Laszlo wrote a text called
Color-Light-Music; Laszlo toured Europe with a color organ.
In Hamburg, Germany from the late 1920s–early 1930s, several color
organs were demonstrated at a series of Colour-Sound Congresses
(German: Kongreß für Farbe-Ton-Forschung). Ludwig Hirschfeld-Mack
performed his Farbenlichtspiel colour organ at these congresses and at
several other festivals and events in Germany. He had developed this color
organ at the Bauhaus school in Weimar, with Kurt Schwerdtfeger.
The 1939 London Daily Mail Ideal Home Exhibition featured a “72-way
Light Console and Compton Organ for Colour Music”, as well as a 70-foot,
230 kW “Kaleidakon” tower.
From 1935–77, Charles Dockum built a series of Mobilcolor Projectors, his
versions of silent color organs.
In the late 1940s, Oskar Fischinger created the Lumigraph that produced
imagery by pressing objects/hands into a rubberized screen that would
protrude into colored light. The imagery of this device was manually
generated, and was performed with various accompanying music. It
required two people to operate: one to make changes to colors, the other to
manipulate the screen. Fischinger performed the Lumigraph in Los
Angeles and San Francisco in the late 1940s through early 1950s. The
Lumigraph was licensed by the producers of the 1964 sci-fi film, The Time
Travelers. The Lumigraph does not have a keyboard, and does not
generate music.
In 2000, Jack Ox and David Britton created “The Virtual Color Organ.”
The 21st Century Virtual Reality Color Organ is a computational system for
translating musical compositions into visual performance. It uses
supercomputing power to produce 3D visual images and sound from
Musical Instrument Digital Interface (MIDI) files and can play a variety of
compositions. Performances take place in interactive, immersive, virtual
reality environments such as the Cave Automatic Virtual Environment
(CAVE), VisionDome, or Immersadesk. Because it’s a 3D immersive world,
the Color Organ is also a place — that is, a performance space.
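To make this kind of translation concrete, here is a minimal, hypothetical sketch in Python (not drawn from Ox and Britton's actual system) of the sort of mapping such an instrument performs: note events of the kind carried in a MIDI file (onset, pitch, velocity) are converted into color and spatial parameters for a visual renderer. The hard-coded event list and the pitch-to-hue, velocity-to-brightness and pitch-to-height mappings are illustrative assumptions only.

```python
import colorsys

# Hypothetical note events: (onset in seconds, MIDI pitch 0-127, velocity 0-127).
# A real system would read these from a MIDI file rather than a hard-coded list.
events = [(0.0, 60, 100), (0.5, 64, 80), (1.0, 67, 110), (1.5, 72, 60)]

def note_to_visual(pitch, velocity):
    """Map one note event to simple visual parameters (assumed mapping).

    Pitch class is spread around the hue circle, velocity drives brightness,
    and absolute pitch sets vertical position in the frame.
    """
    hue = (pitch % 12) / 12.0        # pitch class -> hue (0..1)
    value = velocity / 127.0         # louder -> brighter
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, value)
    height = pitch / 127.0           # higher pitch -> higher in the frame
    return {"rgb": (round(r, 2), round(g, 2), round(b, 2)), "height": round(height, 2)}

for onset, pitch, velocity in events:
    print(onset, note_to_visual(pitch, velocity))
```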
Visual element from Wladimir Baranoff-Rossiné’s Optophonic Piano. Image Source
The Optophonic Piano projected revolving patterns onto a wall or ceiling
by directing a bright light through a series of revolving painted glass disks,
filters, mirrors and lenses.
There are various kinds of luminous filters: plain coloured ones; optical
elements such as prisms, lenses or mirrors; filters including graphic
elements; and, finally, filters with coloured shapes and defined outlines.
Add to this the possibility of modifying the position of the projector, the
screen frame, the symmetry or asymmetry of the compositions and their
movements, as well as their intensity. You will then be able to reconstitute
this optical piano that will interpret an infinite number of musical
compositions. The key word here is interpret, because, for the time being,
the aim is not to determine a unique rendering of an existing musical
composition for which the author did not foresee any light being
superimposed. In music, as in any other art, one has to take into account
elements such as the talent and sensitivity of the musician in order to fully
understand the composer’s thoughts. (source)
3D animation rendering the internal operations & effects of the Optophonic Piano on YouTube.
Contrary to the Young Maker at the start of the video below, the Color Organ did NOT originate in
the 1970s. Image Source
Analog color organ tutorial on YouTube.
There aren’t any videos of the early color organs, but in the 20th Century many
artists were inspired by the concept, and with new electronic and audiovisual
technologies, what is sometimes called sound-image or music-visual ‘synaesthesia’ was
often pursued in a range of creative works in different media and performance contexts.
Below is an excerpt from a performance of Scriabin’s Prometheus: Poem of Fire
painstakingly recreated at Yale University in 2010.
Poem of Fire on YouTube.
Scriabin suffered from the natural condition of synesthesia, which made
him associate musical notes and keys with colors. For example, the pitch
“D” represented bright yellow, while “A” looked like dark green, and “D
flat” felt like deep purple. In addition, in his late works traditional tonality
is replaced by a set of unique harmonic spaces that inhabit a world of
polyrhythmic uncertainty. In his quest to transfigure the world, Scriabin
thought it necessary to confront the forces of evil. His Ninth Sonata is
aptly nicknamed “The Black Mass,” and Scriabin tellingly regarded the
performance of this work as “practicing sorcery.” Whatever the case may
be, it is certainly a work of great musical concentration and extreme
emotional intensity. (source)
Many experimental filmmakers made use of the optical soundtrack read by analog
projectors as a site of manipulation, reversing the usual transcription process: instead of
converting recorded audio into an optical track, they drew sounds directly onto the film
to create the soundtrack.
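As a rough illustration of the principle these filmmakers exploited, the following Python sketch (a hypothetical illustration, not a reconstruction of any particular artist's technique) treats each column of a 'drawn' film strip as one audio sample: the amount of light a column would pass to the projector's photocell becomes the amplitude of that sample, which is essentially how a variable-area optical soundtrack is read. The 440 Hz test pattern and the strip dimensions are arbitrary assumptions.

```python
import numpy as np

# A minimal, hypothetical illustration of how a drawn optical soundtrack works:
# each column of the film strip passes some amount of light to the photocell,
# and that amount becomes one audio sample (the variable-area principle).

sample_rate = 8000
duration = 0.5
n_cols = int(sample_rate * duration)
strip_height = 100

# "Draw" a strip whose transparent band widens and narrows periodically,
# as if stripes of varying thickness had been inked onto the film by hand.
strip = np.zeros((strip_height, n_cols))
t = np.arange(n_cols) / sample_rate
band_width = (0.5 + 0.5 * np.sin(2 * np.pi * 440 * t)) * strip_height  # 440 Hz pattern
for col, width in enumerate(band_width):
    strip[: int(width), col] = 1.0  # transparent (light-passing) part of the column

# The photocell integrates the light in each column -> one sample per column.
samples = strip.sum(axis=0)
samples = 2.0 * (samples / strip_height) - 1.0  # normalize to -1..1

print(samples[:10])  # first few reconstructed audio samples
```

Drawing different shapes onto the strip, as McLaren did with pen and ink, changes the waveform directly, and with it the pitch and timbre of the resulting sound.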
Optical Sound on YouTube.
McLaren’s Dots.
Visual music approaches are particularly popular in creative practices where
electroacoustic composition and animation intersect, and the synaesthetic explorations
that we saw above in the realm of analog film continue today in the integration of the
output of 3D modeling and animation software with sound synthesis.
Diego Garro’s Patah on Vimeo.
Here is a recent example of new trends emerging in computational visual
music (to use that term in a very broad sense, since ‘visual music’ can encompass quite a
range of creative practices), related to an increasing interest in data and algorithms.
Michele Zaccagnini is developing a process he calls Deep Mapping, which is
an approach that allows the composer to store and render musical data
into visuals by “catching” the data at its source, at a compositional stage.
The advantages of this approach are: accuracy and discreteness in the
representation of musical features; computational efficiency; and, more
abstractly, the stimulation of a practice of audiovisual composition that
encourages composers to envision their multimedia output from the early
stages of their work. The drawbacks are: prerecorded sounds cannot be
deep-mapped, and deep mapping presupposes an algorithmic
compositional approach. (source)
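To illustrate the general idea described in the quotation (this is a generic sketch of the concept, not Zaccagnini's actual Deep Mapping pipeline), the following Python example generates a toy algorithmic composition and routes the same note data, caught at the compositional stage rather than recovered from an audio recording, to both a stand-in audio layer and a stand-in visual layer. The pitch set, the random walk and the mappings are illustrative assumptions.

```python
import random

# A generic sketch of the idea of "catching" musical data at its source:
# because the piece is generated algorithmically, the exact note data can be
# routed simultaneously to a sound-rendering layer and a visual-rendering layer.

random.seed(1)

def compose(n_notes=8):
    """A toy algorithmic composition: a random walk over a pentatonic scale."""
    scale = [60, 62, 65, 67, 69]  # hypothetical pitch set
    idx, events = 2, []
    for step in range(n_notes):
        idx = max(0, min(len(scale) - 1, idx + random.choice([-1, 0, 1])))
        events.append({"onset": step * 0.25, "pitch": scale[idx], "amp": random.uniform(0.4, 1.0)})
    return events

def render_audio(events):
    # Stand-in for a synthesis layer: here we just list frequencies in Hz.
    return [440.0 * 2 ** ((e["pitch"] - 69) / 12) for e in events]

def render_visuals(events):
    # Stand-in for a graphics layer: the same events mapped to position and size.
    return [{"x": e["onset"], "y": e["pitch"] / 127.0, "size": e["amp"]} for e in events]

score = compose()
print(render_audio(score))
print(render_visuals(score))
```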
Deep Map #1
Visual Music in Art & Film
Visual Music is a concept also sometimes employed in the purely visual arts, where
some painters have found inspiration in the concept of music to describe their
abstractions.
Paul Klee, New Harmony, 1936
Wassily Kandinsky, Improvisation (Dreamy), 1913
In Visual Music of the moving image variety (e.g. film and video, rather than music as
the “referent” or metaphor for abstractions in painting, such as those of Kandinsky or
Klee), the image track is often in a fantasia mode vis-à-vis the soundtrack. Whereas in
sound design the script, footage and first edits usually precede the production of sound
(though there are exceptions, as with Ben Burtt’s collection of sounds for Star Wars,
where he often recorded interesting sounds before knowing what to do with them), in
visual music there is typically a pre-existing work of music, which the moving image
takes as an inspiration or motivation for free-form and abstract play.
Visual Music as a form can range from abstraction (e.g. a play on geometric shapes, light
or color), to specific references to technologies of mediation (e.g. bright flashes of
overexposed film, video footage modulated by rhythms in a techno beat, or even the
generative imagery produced by iTunes Visualizer), to highly stylized and very abstract
characterizations of personae with degrees of recognizable action and even plot (for
instance, the struggle of an orange triangle to escape from thick lines and grids of black,
which metaphorically morph into the bars of a prison or cage, as below with Synchromy
No 4 Escape).
Mary Ellen Bute’s Synchromy №4 Escape
Fan video, Black Eyed Peas “I Gotta Feeling” as motion-visual geometry.
Powercord vs Philter Phreak
Photo-collage and montage editing have also been a feature of Visual Music, and highly
stylized music video production (especially for electronica genres) can often blur the
usual stylistic boundaries between what one might typically refer to as a ‘music video’
versus a work of Visual Music.
One Dot Zero vol 10
The ident is based around the idea of the roots of computer technology in
the pre-digital world — a world of music boxes, Jacquard looms, punch
cards and relay switches. Music box mechanisms were the precursors to
punch cards as ways of communicating binary information; the Jacquard
loom used punch cards to essentially program the loom to create complex
textile patterns (looked at now, they resemble 8-bit computer drawings).
The first real computer circuit was created using telephone relay switches.
Our contemporary digital world is linked to a pre-electric era of automated
crafts and musical automata. This has a resonance with the tenth
anniversary of onedotzero — both in the way that it references history but
also it is mirrored in a lot of the work that’s being produced now.
Computers have become almost invisible, powerful tools which are being
used to facilitate craft. (source)
In its ‘classic’ mode, visual music is often linked to the tradition of seeking an
experience of synaesthesia as a spiritually heightened fusion of the senses. Its
antecedents are in Wagner’s Gesamtkunstwerk (indeed, Richard Wagner’s “Evening
Star” is the music of Mary Ellen Bute’s Synchromy №2) or total artwork (a fusion of
all the arts and, consequently, of the senses), but it also has roots in the Symbolist and
Spiritualist movements of the 19th century — all of this has its origins in Schopenhauer’s
philosophy of music, in which music was framed as the highest art due to its ability to
directly represent the Will (for Schopenhauer, the Will as a category included
magnetism, love, rage, and electricity — in other words, forces in general, whether
internal to one’s self and unconscious, or external in the workings of the world).
As an outgrowth of the Romantic and Symbolist movements, music was
elevated to a status of supremacy over all the other forms of creative
expression. The other arts, notably poetry and painting, were said to aspire
to the “condition of music.” Artists came to believe that painting should be
analogous to music.
Proponents of musical analogy based their aesthetic theories on an
abstraction of the idea of music, rather than on a clear understanding of
musicology. For them music represented a non-narrative, non-discursive
mode of expression. They reasoned that music, in its direct appeal to
emotions and senses, transcended language. Just as music was a universal
form of expression, so should the visual arts attain universality by evoking
sensual pleasure or an emotional response in the viewer.
Advocates of musical analogy and color music also depended upon the
related notion of synaesthesia; that is, they believed in the subjective
interaction of all sensory perceptions. This common acceptance of
synaesthesia resulted from two divergent philosophical positions.
According to the more romantically inclined artists and writers, the
interchangeability of the senses was evidence of mystical correspondence
to a higher reality. On the other hand, some artists joined forces with
scientific researchers to study synaesthesia as a phenomenon of human
perception. (Source)
Many works in the filmic synaesthetic tradition can be read as a counter-modernist
inclination, a vestige of romantic impulse in the development of 20th century
mediation.
Sound Design
The term Sound Design came into vogue in the 1970s to describe a new role in the
creation of the sound film, analogous to a “director” or “cinematographer” of the
soundtrack. In the early ’70s Dolby noise reduction, which had already established itself
widely in music production and distribution during the ’60s, expanded its application to
the area of film sound. The specific properties of Dolby — increased dynamic range,
improved spatialization, better frequency response, and reduction of the noise floor —
combined with Dolby’s strategy of providing relatively affordable licensing to theater
owners so that its noise reduction technology could be widely adopted, provided for the
first time a universal standard in cinematic sound reproduction, allowing the sound mix
heard in the theaters to be closer to that heard on the mix stage than at any time
previously. (source)
The aesthetic possibilities opened up by this technological change were exploited by
filmmakers, particularly those based in San Francisco’s “Hollywood North.” For
instance, the two artist-technicians usually credited with being the first “sound
designers” (the first to receive this designation), Ben Burtt and Walter Murch, were each
given unprecedented periods of time to explore sound as a dimension of film. Burtt was
given more than a year to build a sound effects library for George Lucas’s Star Wars film
soundtrack, famously (in widely circulated images) “wandering the desert” with field
recording gear, tapping on phone wires and recording sounds that would eventually
support such futuristic technologies as X-Wing Fighters and light sabers. Murch spent a
year mixing and remixing Francis Ford Coppola’s Apocalypse Now, trying out multiple
edits and approaches to what is likewise (as in the case of Star Wars) regarded as a
paradigm shift in film mixing.
In entertainment industry contexts, sound’s role is often described in the relevant
literature as a form of “subordination” to the film’s imagery. In other words, the work of
the soundtrack is to reinforce, through its specific effects (heightened emotion, spatial
depth, representing sound sources, clarity of speech, rhythmic pacing and the like), the
narratological and often “realist” motivations of the image track (and, reciprocally, to
reserve “weird sound” — often simply taken from the realm of avant-garde music — for
dream sequences, aliens, monsters and the like) (source). Such an approach is typical of
more narrative, mainstream or commercial projects.
In contrast, experimental approaches to sound design tend to assert a relative autonomy
for the soundtrack in relation to the moving image. But at the same time there is often
an “associational intent” at work, in that there is an attempt to create poetic or
connotational relationships between sound and image.
Experimental Music
In her paper “Experimental Music Semiotics,” Morag Josephine Grant elaborates an
intriguing Peircean semiotic approach to understanding the distinction between
experimental music and other forms of avant-garde, classical or “new” music. In her
analysis, experimental music has a heightened interest in the indexical relationship to
sound, whereas other forms may be better described as having a stronger affinity for the
symbolic.
Definitive for the icon is similarity with the object referred to, definitive
for the index is contiguity with the object referred to, definitive for the
symbol is its dependence on a standard rule of interpretation. (source)
It is in this notion of “a standard rule of interpretation” that one can find a slew of
correspondences to non-experimental (including avant-garde) music practices. For
instance, an interval can be read as referring at once to a moment in the score, to a minor
third, and to all the rules of harmony, with their allowances and strictures about what one is to do
with a minor third (or not do with it). In the recapitulation of a theme, the work itself
can be understood as the interpretant which contextualizes its significance. One can
expand this to other fields of music as well. Jazz, for instance, can be understood as a
metaphor or symbol for communication (call and response, dialogue). To draw her
distinction sharply, she offers the striking example of the sound of a telephone:
So why don’t telephones ring in music? They ring all over the place in
literature. They appear in pictures more often than they do in music. They
are the hinge of countless film plots.
The case of experimental music is immediately different because it can
deal with the telephone as a telephone.
Grant cites Winfried Nöth, noting that “the index makes no assertions regarding its
object, but merely shows us the object or draws our attention to it,” and that the index is “of fact,
of reality, and of experience in time and space.”
Experimental music does not draw a distinction between the realms of “music” and that
of “sound and noise.” It forces our attention to the causal dimensions of sonic
experience, the productions of sonorous bodies, rather than to the systemic
embeddedness of a sound in a formal logic or system (such as a score, or the rules of
harmony). Indeed, to further bolster her argument that non-experimental (but still
scored) music has more of a symbolic character, we need only note the aspects of
rhetoric that accompany such compositions: themes, argument, development,
recapitulation, verse, refrain, and the like.
Grant does not assert, however, that non-experimental music is only symbolic, or that
experimental music is only indexical. Indeed, her essay devotes much time to exploring
devil’s-advocate, borderline, and seemingly contradictory examples that test her schema. She
notes that “signification generally involves complex hybrids of these categories (icon,
index, symbol).” But as a general description of what may make some music
“experimental” and other music not, it does have a subtle cogency. For instance, John Cage’s
silent work 4’33” can be understood in relation to Peirce’s example of the indexical
weather vane signifying even the absence of wind.
Even if there is no wind at a particular moment, the weather vane still
fulfills its purpose, confirming that there is no wind… It is specifically
created to draw our attention to something by contiguous relationship
with it. Even if there is never wind again, a weather vane will not stop
being a weather vane…
The silence evoked by Cage in this work is analogous to the absence of wind — it is still
significant (hence a work of experimental music) even though it does not sound.
What is interesting to note about visualized sound in a Peircean semiotic context is that
the sound which results from such processes has its “origin” in an indexical (directly
causal) relationship. The nature of the index-causality differs from case to case: pen marks or
scratches on film emulsion in McLaren’s work, or a digital drawing tablet in the case of
Xenakis’ UPIC system. However, once we as listener-viewers are experiencing the work,
the synaesthetic play of visual-sonic percepts takes on an iconic dimension as well, as we
start to see that sounds drawn high up in the visual frame also sound high-pitched, or that thick
bands produce clusters while thin bands produce purer tones, or as we notice the
way in which rhythms in sound and image reinforce each other. At times
one feels that one is really ‘seeing the sound’ but simultaneously sounds and images
have their own autonomy — in fact, in an iconic sense one is only seeing certain aspects
of the sound — both the visual and sonic imagination have their own resonances which
can’t be entirely merged.
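The mapping just described can be made concrete with a small sketch. The following Python example (a hypothetical illustration, not Xenakis's UPIC software) renders a single drawn band to audio: vertical position sets the center frequency, and the band's thickness sets how many detuned partials are summed, so that thin bands yield relatively pure tones while thick bands yield clusters. The frequency range, partial counts and detuning amounts are assumptions.

```python
import numpy as np

# A minimal sketch of the mapping described above: vertical position -> pitch,
# band thickness -> a cluster of detuned partials (thick = cluster, thin = purer tone).

sample_rate = 16000
duration = 1.0
t = np.arange(int(sample_rate * duration)) / sample_rate

def render_band(height, thickness):
    """Render one drawn band to a mono waveform.

    height:    0..1, bottom to top of the frame -> 100 Hz to 2000 Hz (assumed range)
    thickness: 0..1, visual thickness -> number and spread of detuned partials
    """
    center = 100.0 * (2000.0 / 100.0) ** height     # exponential pitch axis
    n_partials = 1 + int(thickness * 8)             # thicker band -> more partials
    spread = thickness * 0.1                        # detune partials up to +/-10%
    detunes = np.linspace(-spread, spread, n_partials)
    wave = sum(np.sin(2 * np.pi * center * (1 + d) * t) for d in detunes)
    return wave / n_partials

thin_high = render_band(height=0.9, thickness=0.05)   # high, nearly pure tone
thick_low = render_band(height=0.2, thickness=0.8)    # low, dense cluster
print(thin_high[:5], thick_low[:5])
```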
New Visualizations in Music
In the context of visual music, we would be remiss if we didn’t touch on new practices of
visualizing musical form that have gone far beyond traditional music notation using
bass and treble clefs, note and rest values, bars, measures, time signatures, and all of the
other inscription apparatus of classical music common practice. As musical form
exploded in its sheer variety of approaches in the 20th Century, so too did ways of
representing the new kinds of sounds and their composition. The images below are
visualizations of musical compositions, and are taken from Sylvia Smith’s article Visual
Music.
Notation examples reproduced from Sylvia Smith’s Visual Music.
Optophonia
Optophonia is a visual music NFT platform based on Wladimir Baranoff-Rossiné’s
Optophonic Piano. Optophonia reimagines this pioneering early visual music device as a
vehicle for artistic collaboration and collecting in the NFT era. The original Optophonic
Piano (circa 1916) was a Russian-Futurist invention for the production of music
accompanied by live visual processing. It had many performances including a
presentation at the Bolshoi Theater in 1925. It has since inspired many visual music
works over the course of the last century.
Piano Con Moto — piano, video and electronics (2007)
“Piano Con Moto,” for piano, video, computer music, and three computers
(live visual and audio processing), revisits a topic that captivated a
generation of artists during the 1910s and 1920s: the possibilities for
synthesizing two divergent media into one artistic expression to create a
new art form. Alexander Scriabin and Wladimir Baranoff-Rossiné used a
type of piano capable of navigating between audio and visual realms:
Scriabin imagining a “Tastiera per Luce”, a color piano, for the
performance of his “Promethée”; Baranoff-Rossiné's “piano
optophonique” projected light through painted and rotating glass plates,
the colors and rhythms of which closely complemented the music.
Arising from this context, a pianist, video artist, composer, and a video
programmer collaborate to create a work involving a temporal-visual
element, bringing a fused expression of two components. This
five-movement work creates a fused expression of musical and visual
components. (source)
Optophonia is built on Unreal Engine’s leading-edge real-time rendering technology,
allowing users to collect, display and even create new optophonic works as NFT media.
The original Optophonic Piano consisted of painted rotating cylinders and moving light
filters for projecting complex visual palimpsests, the mechanics of which were triggered
by the device’s piano-like keys while musicians played nearby instruments. The
Optophonia app will allow users to import their own visual art NFTs and play back the
media generatively or interactively for a complex and never-the-same sensory
experience, while also affording the possibility of minting these user-created works as
new NFTs.
Acknowledgement: some elements of this text have been excerpted from a previous
article in the Parsons Journal for Information Mapping.
About the Author
Dr. Michael Filimowicz (artist name Myk Eff) is Senior Lecturer in the School of
Interactive Arts and Technology (SIAT) at Simon Fraser University in Vancouver,
British Columbia, Canada. He has a background in computer mediated
communications, audiovisual production, new media art and creative writing. His
research develops new multimodal display technologies and content, exploring novel
form factors across different application contexts including gaming, immersive
exhibitions and simulations. He has exhibited in many new media shows such as
SIGGRAPH Art Gallery, Re-New, ARTECH, Archetime, Intermedia and IDEAS, and
currently streams his digital images on the Loupe Art platform. His work has also been
featured in journals (e.g. Leonardo), many monographs (e.g. Infinite Instances and
Spotlight) and a textbook (Reframing Photography, Routledge).
https://filimowi.cz
https://twitter.com/myk_eff