You've Got Music (Part 7 - Audio Technology) - 1

The document discusses the role and responsibilities of sound engineers in live performances, emphasizing their importance in balancing audio for the audience. It explains the fundamentals of sound, including how sound waves are created and transmitted, and details the complexities of audio engineering, including recording, mixing, and digital sound storage. Additionally, it covers sound compression methods and formats, highlighting the differences between uncompressed and compressed audio files.


Chapter 25

Sound Fundamentals
 Introduction
The guitar player can wail, and the drummer can crash away on the skins. But if the band
you're listening to live is sounding really on, there's a good chance an unseen member of the
band is having a good night, too.

That person is the sound engineer, and he's responsible for getting every subtle tweak of voice and
instrument into your ears in a balanced way.

All instruments and voices aren't created equal. For instance, some instruments are louder than others
while the acoustic signature of some instruments can get lost. A live sound engineer's job is to wrestle
with these factors and coax the correct overall sound out of any situation. The fact is, the band can
have a great night but the audience may never know it if the live audio engineer isn't doing their job
properly. In many ways, the live sound engineer is as important to a live band performance as any
member on stage.

Good live and concert sound engineering requires more than plugging in some
amplifiers and turning a few volume knobs. It demands knowledge of acoustics and electronics
combined with the collaborative skill of an artist to work with a band or producer to give them the
sound they want. Every venue is different -- from the cozy bar to a medium-sized concert hall to an
outdoor arena -- and each brings its own challenges to audio engineering. But it's the live sound
engineer's job to tame acoustics and bring the musicians' efforts home to the audience.

An audio engineer, also called audio technician, audio technologist, recording engineer, sound
engineer, sound operator, or sound technician, is a specialist in a skilled trade that deals with the use
of machinery and equipment for the recording, mixing and reproduction of sounds. The field draws on
many artistic and vocational areas, including electronics, acoustics, psychoacoustics, and music. An
audio technician is proficient with different types of recording media, such as analog tape and digital
multitrack recorders and workstations, and needs solid computer skills. With the advent of the digital age, it
is becoming more and more important for the audio technician to be versed in software and hardware
integration, from synchronization to analog-to-digital transfers.

An audio engineer is someone with experience and training in the production and manipulation of
sound through mechanical or electronic means. As a professional title, this person is sometimes
designated as a sound engineer or recording engineer instead. A person with one of these titles is
commonly listed in the credits of many commercial music recordings (as well as in other productions
that include sound, such as movies).

Audio engineers are generally familiar with the design, installation, and/or operation of sound
recording, sound reinforcement, or sound broadcasting equipment, including large and small format
consoles. In the recording studio environment, the audio engineer records, edits, manipulates, mixes,
or masters sound by technical means in order to realize an artist's or record producer's creative vision.
While usually associated with music production, an audio engineer deals with sound for a wide range
of applications, including post-production for video and film, live sound reinforcement, advertising,
multimedia, and broadcasting. When referring to video games, an audio engineer may also be a
computer programmer.

In larger productions, an audio engineer is responsible for the technical aspects of a sound recording
or other audio production, and works together with a record producer or director, although the
engineer's role may also be integrated with that of the producer. In smaller productions and studios the
sound engineer and producer is often one and the same person.

In typical sound reinforcement applications, audio engineers often assume the role of producer,
making artistic and technical decisions, and sometimes even scheduling and budget decisions.

The field of audio engineering is so broad that no single book can dissect all its aspects.
However, in this section of this book, I will be exposing you to the basics of sound engineering and
music production techniques. We shall try to cover the core knowledge sound engineers and music
producers need to have to be proficient on the job.

 Sound and its Elements


Sounds are pressure waves of air. If there wasn't any air, we wouldn't be able to hear sounds. There's
no sound in space.

We hear sounds because our ears are sensitive to these pressure waves. Perhaps the easiest type of
sound wave to understand is a short, sudden event like a clap. When you clap your hands, the air that
was between your hands is pushed aside. This increases the air pressure in the space near your hands,
because more air molecules are temporarily compressed into less space. The high pressure pushes the
air molecules outwards in all directions at the speed of sound, which is about 340 meters per second.
When the pressure wave reaches your ear, it pushes on your eardrum slightly, causing you to hear the
clap.

Sound is transmitted mechanically through compression and expansion of air. Speakers work by
moving back and forth, manipulating the air around the speaker. Microphones record sound by means
of a diaphragm whose motions are converted into electric currents, with variations in current
corresponding to higher and lower compressions. The ear works in a similar way: the eardrum's
vibrations are converted into nerve impulses that travel to the brain. The sounds we hear, even a
steady pitch, are rapidly changing areas of pressure. A steady pressure would not produce any sound.

We express sound graphically as a wave form, with the frequency corresponding to the compression
and expansion of air and the amplitude representing the volume. A pure wave (a sine wave) produces
a very flat sound.

A440 - a sine wave with frequency equal to 440 Hz (sound wave pictured below)

The x direction (horizontal) represents a time change (the area pictured is less than 1/100th of a
second) and the y direction is amplitude. In this sound wave, areas above the blue line are higher
compression than standard air pressure, and areas below the blue line are areas of expansion, with
pressure below the standard air pressure (the gray lines are reference points for volume). In a standard
volume sound wave, these changes are minute - enough to be heard as sound but not enough to be
sensed as a change in pressure.

A440-A Higher Volume Sound Wave with the same frequency (Notice the Higher Amplitude of the
Sound Wave)

The waves above have a flat sound, unlike a piano note or a guitar string. This is because
instruments produce overtones, or sounds that have a harmonic relationship to the note being
played. Overtones occur at whole-number multiples of the original frequency. For example, an
A440 could have as its overtones an A880 (one octave above - twice the frequency), an E1320 (an
octave plus a fifth above - three times the original frequency; not quite an E, but very close), an A1760
(two octaves above - four times the original frequency), etc. These overtones have lower volumes than
the original note; a note is therefore identified by its loudest component, which is normally the
fundamental (the root note).

Waves can be added together by summing their amplitudes at each moment in time. For example,
an A440 and an A880 at equal volume would produce an irregular wave with a repeating interval.

An A440 Wave + An A880 Wave = A complex wave

Overtones can be added in this way, so that a note on a guitar appears as an even more
complex wave. A sound wave from a piece of music would be even more complex.

An A440 On Guitar

An Orchestra Playing (there are two tracks - stereo left and right - separated by a black line)
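
To make the point-by-point summation concrete, here is a minimal sketch in Python (the function and constant names are my own, for illustration only). It generates an A440 and an A880 and adds them sample by sample, exactly as described above:

    import math

    SAMPLE_RATE = 44100  # samples per second (the CD rate discussed below)

    def sine_wave(freq_hz, duration_s, amplitude=1.0):
        # Generate a sine wave as a list of amplitude samples.
        n = int(SAMPLE_RATE * duration_s)
        return [amplitude * math.sin(2 * math.pi * freq_hz * t / SAMPLE_RATE)
                for t in range(n)]

    # An A440 plus its first overtone (A880) at half the volume...
    a440 = sine_wave(440, 0.01)
    a880 = sine_wave(880, 0.01, amplitude=0.5)

    # ...summed at each moment in time to form a complex wave.
    complex_wave = [s1 + s2 for s1, s2 in zip(a440, a880)]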

How Computers Store/Play Sound

Computers store sound digitally, in 1s and 0s, rather than the actual sound. To show how computers
store sound, we will use the graphical representation of sound.

A sound file has two primary properties: its sampling frequency (in Hz) and its resolution (in bits). This
frequency is not related to the frequency of a note; it refers to the number of times per second the
computer checks the sound level. The most common frequencies are 22,050 Hz (1 Hz = 1 time per
second), 32,000 Hz, and 44,100 Hz (the CD sampling rate). At each check, the computer records the
sound level as a whole number (1, 2, 3, ...). Because computers store numbers in binary form (1's and
0's), the bit number refers to the number of binary digits used. For example, a 2-bit resolution would
have four levels: 00 (0), 01 (1), 10 (2), and 11 (3). A 4-bit resolution has 16 levels, and so on.

Example of sampling rate / converting to binary (voltage is used instead of volume - the two are proportional)
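
As a concrete illustration of these binary levels, here is a small Python sketch (a hypothetical helper, not from the text) that maps a sample value onto one of the 2^bits available levels:

    def quantize(sample, bits):
        # Map a sample in the range [-1.0, 1.0] to one of 2**bits integer levels.
        levels = 2 ** bits  # e.g. 4 levels for 2-bit, 65,536 for 16-bit
        # Scale [-1.0, 1.0] onto [0, levels - 1] and round to the nearest level.
        return round((sample + 1.0) / 2.0 * (levels - 1))

    print(quantize(0.0, 2))   # -> 2 (one of the four 2-bit levels 0..3)
    print(quantize(0.5, 16))  # -> 49151 (of the 65,536 16-bit levels)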

The number of levels for an X-bit resolution is equal to 2^X, so a higher bit depth means a higher quality
of sound. The most common format for sound is 16-bit (65,536 levels), with older files having 8-bit
resolution (256 levels). Frequency and bit depth determine the size of the file: each sample
contains the full number of bits (8 bits make a byte). Three seconds of a 44,100 Hz, 16-bit file is 16 (bits per
reading) x 44,100 (readings per second) x 3 (seconds) = 2,116,800 bits; dividing by 8 bits per byte gives
264,600 bytes (264 kilobytes).
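
The same arithmetic in a short Python sketch (the function name and optional channels parameter are my own additions):

    def file_size_bytes(sample_rate_hz, bit_depth, seconds, channels=1):
        # Uncompressed size: bits per sample x samples per second x duration.
        total_bits = bit_depth * sample_rate_hz * seconds * channels
        return total_bits // 8  # 8 bits per byte

    # The worked example from the text: 3 s of mono 44,100 Hz, 16-bit audio.
    print(file_size_bytes(44100, 16, 3))  # -> 264600 bytes (about 264 KB)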

To play back the sound, the computer plays each tone at the volume stored in the data file. Because
these tones go by so fast (44,100 of them a second), we cannot hear them individually - we only hear the
sound that was recorded into the computer. To represent this graphically, we could plot the volume
along a y-axis and the time on the x-axis. By drawing straight lines between the dots, we see the basic
"wave" the computer plays.

Recording Wave, Sample Taken by Computer and Resulting Wave



Consequences of Digital Recording/Playback

While digital music allows sound to be manipulated and copied very quickly (as opposed to analog
recordings, where a track had to be played in its entirety in real time to be copied or transferred with
an effect applied), the means of converting to digital opens the sound up to quality problems. For
example, if the sampling rate is the same as the frequency of a wave, all that would be sampled is a
constant level, which would produce no sound. While most sound files are sampled well above the range
of human hearing (a 44,100 Hz sampling rate captures frequencies up to about 22 kHz, while human
hearing tops out near 20 kHz), and most real-world problems are more complex because sound waves are
complex, the problems that loss of resolution can cause are still important, and appear more often with
compressed sound.
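
The "sampling at the wave's own frequency" problem can be demonstrated in a few lines of Python (a toy sketch with made-up helper names):

    import math

    def sample(freq_hz, rate_hz, n=4):
        # Sample a sine wave of freq_hz at rate_hz; return the first n samples.
        return [round(math.sin(2 * math.pi * freq_hz * i / rate_hz), 3)
                for i in range(n)]

    # Sampling a 1,000 Hz wave at 1,000 Hz lands on the same point of every
    # cycle, so all that is captured is a constant level - silence:
    print(sample(1000, 1000))   # -> values all effectively 0.0
    # Sampling the same wave at 44,100 Hz traces the wave properly:
    print(sample(1000, 44100))  # -> [0.0, 0.142, 0.281, 0.415]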

Sound Compression and Formats



Storing sound uncompressed (all the 1's and 0's are intact from the original recording of the song)
requires lots of space, with songs taking about 10 MB per minute. This makes most songs 30-70 MB
in size, too unwieldy to use on the internet or to store on a re-writeable medium (floppy disc, zip
disc). Therefore, most digital sound is compressed in some way to make it more manageable.

How Sound is Compressed

Different sound formats use different compression methods. While some formats use creative
compression methods (such as storing only the volume difference between samples), most
compression methods use two processes to decrease a file's size: clipping and re-sampling.

Clipping is the process of removing uncritical parts of a sound file. This could be the upper and lower
volume bounds, or the quietest and loudest levels of a sound file. Because sound is primarily a matter of
frequency, clipping the extremes of a song doesn't change the file drastically. Not too much is clipped;
a 16-bit file with 65,536 sound levels might lose 1% or 2% of its levels. Another way to clip
is to remove inaudible frequencies. The human ear can only hear from about 20 Hz to 20,000 Hz, while an
uncompressed digital sound file has a sampling rate of 44,100 Hz. The compressing program
uses an algorithm to eliminate the lowest frequencies (which we would perceive as vibration by
actually "feeling" a rumble) and the highest frequencies (which we would not hear at all). Frequency clipping
can also be a result of re-sampling (see below), but it is still used even when a file is not re-
sampled. Clipping has very little effect on the general form of the song, but has a great effect on its
"richness". Although we cannot hear the parts of a song that are clipped, what is clipped
is often integral to the sound of the song. A song with drastic volume clipping (anything more than
about 3 percent of the levels) will sound distorted, and any clipping makes a clip sound flat (see the
examples below).

Re-sampling is the process of sampling a digital file again at a lower rate. A 44,100 Hz file may be re-sampled at
22,050 Hz, halving the file size. The compressing program keeps every other (or every third or
fourth, depending on the original frequency and the re-sampling frequency) volume level. The
disadvantages of re-sampling are a little more obvious: the higher frequencies are clipped because the
lower rate cannot distinguish between their rapidly changing sound levels (this clipping tends to have a
harsher sound than standard clipping). Nonetheless, re-sampling is the most effective way to decrease a
file's size.
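
A naive version of this "keep every other sample" idea in Python (a sketch; real encoders also low-pass filter first to soften the harsher aliasing artifacts described above):

    def resample(samples, factor):
        # Decimation: keep every factor-th sample, shrinking the file.
        return samples[::factor]

    original = list(range(12))      # stands in for 44,100 Hz sample data
    halved = resample(original, 2)  # ~22,050 Hz: every other sample
    print(halved)                   # -> [0, 2, 4, 6, 8, 10], half the size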

Format Description/Samples

Wav, Aiff

.wav and .aiff files are the standard uncompressed formats for Windows and Macintosh,
respectively. They store sound in the way discussed in the last section, without any clipping and at a
high sampling rate (generally 44,100 Hz, 16-bit format). Because they are similar, only the .wav format
will be given samples.

A note on all sample files: to compare the effects of compression, all processed sound files will be
presented in three ways: as a sine wave (the standard A440), a guitar playing (a basic instrument with a
non-regular wave), and a group playing (a very complex sound wave).

Wav File - 44,100 Hz, 16-bit files.

A440 (3 sec, mono, 258 KB) Guitar (28s, mono, 2.35 MB) Band (Stereo, 20s, 3.37 MB)

MP3, WMA

MP3 has become the most highly used compression format recently, thanks to its ability to compress
files to very manageable sizes with very little loss in quality. MP3 compression involves both
clipping and re-sampling. The quality of .mp3 files is not indicated by frequency and bit depth; instead it is
given as a kbps (kilobits per second) rating, which represents the space used per second in that
file (the wavs above have an equivalent rating of 1,411 kbps). The most popular rate is 128
kbps, which reduces the sound to about 1 MB per minute with little loss in quality. It is very difficult
to tell the difference between the .wav file and .mp3 files at higher kbps ratings - listen to the cymbal
crash at the beginning of the band file and how it goes from a quick crash to a long "wash-
out" at lower rates. Also pay attention to the change in overall tone of the song. The encoder used for
these samples could not produce mono MP3 files, so all the clips here are stereo. WMA is Windows'
proprietary compressed sound format and uses an almost identical compression scheme to MP3.
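
The kbps rating translates directly into file size. A quick Python check (the helper name is mine; 1 kbps is treated as 1,000 bits per second):

    def mp3_size_bytes(kbps, seconds):
        # Size per second of audio is fixed by the bitrate, not the content.
        return kbps * 1000 * seconds // 8

    print(mp3_size_bytes(128, 60))   # -> 960000 bytes, about 1 MB per minute
    print(mp3_size_bytes(1411, 60))  # -> 10582500 bytes, the ~10 MB/min of wav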

MP3 file - 256 kbps, 44 kHz sampling rate

A440 (94 KB) Guitar (876 KB) Band (627 KB)

192 kbps, 44 kHz sampling rate

A440 (70.5 KB) Guitar (657 KB) Band (470 KB)

128 kbps, 44 kHz

A440 (47 KB) Guitar (438 KB) Band (314 KB)

96 kbps, 44 kHz

A440 (35.3 KB) Guitar (328 KB) Band (235 KB)

64 kbps, 22 kHz

A440 (23.5 KB) Guitar (219 KB) Band (156 KB)

32 kbps, 22 kHz

A440 (11.8 KB) Guitar (109 KB) Band (78.4 KB)

16 kbps, 16 kHz

A440 (5.96 KB) Guitar (54.8 KB) Band (39.2 KB)

RealMedia Files

RealMedia has become one of the standard sound formats because of its ability to be streamed, or
played without being fully downloaded (WMA files can also be streamed, but they still use the same
compression technique). Because of the difficulty of getting a streaming server and the unreliability of
streaming files (they often stop for a second and start again when there isn't enough bandwidth to keep
them going), the files here must be downloaded and then played. RealAudio files use a
standard compression, but differ from MP3 files in that they try to optimize quality as they
compress the sound, rather than the one-size-fits-all approach used by MP3s. Because RealAudio is
meant to be streamed, files are classified by the bandwidth they are intended to be played over, such as a
56K modem or a DSL line (the files below are listed in descending order of quality).

RealAudio File - Corporate LAN



A440 (30.4 KB) Guitar (237 KB) Band (248 KB)

Dual ISDN Line (same compression as Corporate LAN for mono files)

Band (163 KB)

Single ISDN

A440 (21.7 KB) Guitar (164 KB) Band (113 KB)

56K Modem

A440 (19 KB) Guitar (118 KB) Band (87.6 KB)

28K Modem

A440 (11.6 KB) Guitar (76.4 KB) Band (53 KB)

Other Formats (Quicktime, .au, Liquid Audio, etc.)

The samples presented here are only a selection of the sound formats available. There are many other
types of sound files, each with different uses. Quicktime is the proprietary streaming format for
Macintosh, but was not included because the compressor was not readily available. .au is an older,
8-bit file type used when .wav and .aiff were too big to be practical on older computers, and is
now an obsolete file type. Liquid Audio is also becoming a standard as a secure, streaming format,
but its compressor is not readily available. It is the system used for the Music Library
course reserve sound materials (Sunsite2.berkeley.edu/Music), available only on library
workstations. There are many other sound file types, many created for a specific program. The file
types presented here are the standards for listening to sound as a whole, as opposed to working with it
or listening to it in the background of a game or other multimedia file.

 Room Acoustics
The single most important and influential link in the audio reproduction chain is also the least
understood and most neglected - the listening room itself. Unfortunately, this is also the most
difficult or costly "component" to change. What follows is a brief overview of the
immensely complex and multi-faceted topic of room acoustics and listening room
design. Additionally, I hope to pass along a few helpful tips that will allow you to realize
maximum benefit from your present (or future) listening environment.

There are many factors that influence the "sonic signature" of a given space. To try and
illuminate them all would require an in-depth course on acoustics. The more conservative
goal here is to explore a few of the topics most germane to the audiophile's listening room
environment. Three that stand out as important considerations are: room size, rigidity and
mass, and reflectivity. Let us examine each of these in more detail.

Room Size:

In our discussions, room size will be broken down into two subclassifications: dimensions
(height, width and length) and cubic volume. From a practical standpoint, room volume will
be an important criterion in choosing loudspeakers and the amplifier necessary to drive them to
the desired sound pressure (loudness) level. Assuming that the listener wants to "fill the
room" with sound, a large environment will require both a larger loudspeaker and a more
powerful amplifier to do the job. Smaller spaces usually dictate smaller speakers.

The dimensions of the room (and their ratios) do much to influence the sound in a listening
room. The height, length and width will determine the resonant frequencies of the space and,
to a great degree, where the speakers and listener should be located (see our separate article
on speaker placement). The longest room dimension, the diagonal, will determine the ability
of the room to support low frequencies. Ideally we would like to have a diagonal dimension
equal to or greater than the wavelength of the lowest frequency we expect to generate within
the room. This ideal quickly becomes impractical for most of us when we realize the
gargantuan nature of low frequency sound waves in air. A 20 Hz wavelength is 56.6 feet in
length! Fortunately, we need only one-quarter of this dimension to achieve adequate bass
response.
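
This relationship (wavelength = speed of sound / frequency) is easy to check. A quick Python sketch, using an approximate speed of sound in air of 1,130 feet per second (the text's 56.6 ft figure uses a slightly different value):

    SPEED_OF_SOUND_FT_S = 1130  # feet per second, approximate

    def wavelength_ft(freq_hz):
        # Wavelength = speed of sound / frequency.
        return SPEED_OF_SOUND_FT_S / freq_hz

    print(wavelength_ft(20))      # -> 56.5 ft: the gargantuan 20 Hz wave
    print(wavelength_ft(20) / 4)  # -> ~14.1 ft diagonal for adequate bass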

Rigidity And Mass:

Rigidity and mass both play significant roles in determining how a given space will react to
sound within. They have a strong relation to the low frequencies - both qualitatively and
quantitatively speaking. Low frequencies can be tremendously powerful, capable of flexing
walls, ceilings and occasionally, floors. Flexure of this type is termed diaphragmatic action.
To illustrate this concept, think of the room as a small box. If the box is made of cardboard,
the walls vibrate easily. The same box made of concrete would exhibit little
movement. Diaphragmatic action dissipates low frequencies, robbing the bass of both impact
and extension. Therefore, the more rigid/massive the walls, floor and ceilings in our listening
rooms, the less diaphragmatic action and the tighter, more defined and powerful the bass.

An ideal room would have absolute rigidity and infinite mass. While such a "perfect" room is
theoretically impossible, the closer we can approximate the ideal, the better. The closest I
have come to reproducing the "ideal" room was in the design of a West Texas recording
studio done a number of years ago. The availability of native materials allowed the walls of
this structure to be constructed from solid rock to a thickness of 16". The bass reproduction in
the control room was absolutely the cleanest, tightest and most powerful I have yet
experienced. Although the studio monitors and electronics in use were inferior to many of
today’s better hi-end audio systems, listening to this system with master tapes was truly a
religious experience.

Our goal then, is to reduce the amount of diaphragmatic action in the listening room. We can
accomplish this task by increasing the mass and rigidity of all surfaces within the listening
environment. This can dramatically improve low frequency detail, solidity and overall
accuracy. In existing rooms using drywall construction, we can simply add an additional
layer of sheet-rock, making sure to tightly couple the new layer with screws and adhesive. In
new construction, we can look at using not only two layers of sheet-rock, but double-wall
techniques, more robust framing materials and thicker drywall material. At this stage these
changes are quite inexpensive.

Reflectivity:

In simple terms, reflectivity is the apparent "liveness" of a room. Professionals prefer the term reverb
time, or Rt-60. Rt-60 is defined as the amount of time (in seconds) it takes for a pulsed tone to decay to
a level 60 dB below its original intensity. A live room has a great deal of reflectivity, and hence a
long Rt-60. A dead room has little reflectivity and a short Rt-60.

Rt-60 measurements are most useful in determining the acoustic properties of larger spaces such as
churches, auditoria, etc. In smaller environs the Rt-60 measurements become so short as to be useless.
In these confined spaces, individual reflections from nearby surfaces dominate the sonic picture and
are the primary focus for the audiophile.

Reflections can be both desirable and detrimental. This depends on their frequency, level and the
amount of time it takes the reflections to reach our ears following the direct sounds produced by the
speakers. Our brain blends together all of the sounds reaching our ears within 5-30 ms of the original.
Reflections arriving approximately 30-50 ms or more after the original will be perceived as separate
sounds. This phenomenon is known as the Haas effect. It is these initial reflections that are most
important to the brain in determining the apparent size of the listening room. By manipulating the
ratio of direct vs. reflected sound, we can fool the brain into thinking we are listening in a larger room
than actually exists. The idea is to reinforce the direct output from the speaker with reflections of the
proper level, frequency and arrival time, while eliminating the detrimental ones. This can be
accomplished by proper positioning of the speaker and listener, and through implementation of
various acoustic correction products such as those made by Acoustic Sciences Corporation, RPG and
others.
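
To see how positioning controls arrival time, here is a small Python sketch (the names and example distances are mine) computing the delay of a single reflection from its extra path length:

    SPEED_OF_SOUND_M_S = 343.0  # metres per second at room temperature

    def reflection_delay_ms(direct_path_m, reflected_path_m):
        # Delay between the direct sound and a reflection, in milliseconds.
        return (reflected_path_m - direct_path_m) / SPEED_OF_SOUND_M_S * 1000

    # Listener 3 m from the speaker; a side-wall bounce travels 5 m in total.
    print(round(reflection_delay_ms(3.0, 5.0), 1))  # -> 5.8 ms, well inside
    # the 5-30 ms window that the brain blends with the direct sound.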

Comb filtering is another form of unwanted reflection. This condition is created when a speaker is
placed near a reflective surface (wall, floor, furniture, etc.). The result is image smear and/or frequency
response anomalies. The comb filter effect occurs when the direct sound and the reflected sound
arrive at the listener's ears out of phase, thus canceling each other. This problem can be avoided by
placing your speakers well away from reflective surfaces, or by treating nearby problem areas with
absorptive and/or diffusive materials.

A simple test can help us identify problematic reflections in our listening rooms. The hand clap test
is so named for obvious reasons. Simply sit in your normal listening position and clap your hands
once, listening carefully to how the sound is affected. Do you hear a slow, even decay, a single hard
reflection, or multiple closely spaced repeats? These fast repeats are known as flutter echoes and
are created when sound bounces back and forth between two reflective surfaces. Flutter echoes and
strong distinct echoes must be eliminated if optimum sound quality is to be expected. Again,
judicious use of acoustic correction materials can be of great help.

Our hand clap test described above will not, unfortunately, expose another common acoustical
anomaly - standing waves (acousticians sometimes use the term room modes for this
effect). Here we are describing a type of low frequency reflection caused by dimensional
relationships within the room. Low frequency standing waves can be predicted mathematically when
the dimensions of the room are known. Standing waves build up in the listening environment and
conspire to sabotage the low-end performance of our stereo systems. A low frequency standing wave
is likely to "bloat" the character of the bass, causing severe peaks at points throughout the range. The
only cost-effective method available for the treatment of standing waves is the use of ASC Tube
Traps. These units, placed in the corners (the points of maximum pressure), can dramatically improve
the quality of low frequency sound in a space plagued by standing waves.
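
Since standing waves can be predicted from the room's dimensions, here is a hedged Python sketch using the standard axial-mode formula f = n x c / (2 x L). This formula is standard acoustics rather than something given in the text, and a full prediction would also include tangential and oblique modes:

    SPEED_OF_SOUND_FT_S = 1130.0  # feet per second, approximate

    def axial_modes(dimension_ft, count=3):
        # A standing wave forms when a whole number of half-wavelengths
        # fits between two parallel surfaces: f = n * c / (2 * L).
        return [n * SPEED_OF_SOUND_FT_S / (2 * dimension_ft)
                for n in range(1, count + 1)]

    # The first axial modes along a 15 ft room dimension:
    print([round(f, 1) for f in axial_modes(15)])  # -> [37.7, 75.3, 113.0] Hz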

In our zeal to control every last reflection, a potential problem should be understood and avoided.
Over-damping of the midrange and high frequencies is a common problem resulting from the overuse
of highly absorptive materials (Sonex, Fiberglas insulation). Too much absorption here will skew the
proper tonal balance of music, causing the all-important midrange/high frequency region to be attenuated
and low frequencies to become too prominent. Many acoustic treatment products exist on the market
today. Each has merit and is designed for the treatment of specific problems. It is unwise, however, to
purchase any of them without a prior understanding of the particular room deficiencies you are
experiencing. Careful selection of the acoustic correction methods employed is important if optimum
results are expected.

Each of the currently available acoustic control materials represents an effective means of subduing or
eliminating a variety of room problems. However, choosing the right "tool" for the job is paramount
to success. Let's look at some of the more common problems encountered by the audiophile, and chart
a course for corrective action.

If your room is overly live (caused by midrange/high frequency reflections), it can exhibit a bright
character. If you don't have a real problem with hard, distinct echoes, try adding a few
ASC Tube Traps, some Sonex, or Owens Corning 500 series compressed Fiberglas board.

Another way of taming reflections is through the use of diffusion. Diffusers disperse hard reflections
in a random manner, eliminating reflections just as effectively as absorption. Proponents of diffusion
argue that the method is preferred as it returns energy back into the room in the form of ambience.
The RPG Diffusers are the most prolific form of this kind of device. Additionally, ASC Tube Traps
offer a balanced combination of absorption and diffusion.

If you have determined that your room suffers from flutter echoes, try damping one or both of the
reflective surfaces creating the problem. Remember that flutter echoes occur between two parallel
surfaces (like two facing walls), and that damping at least one of these reflective surfaces should control
or eliminate the problem. Small rooms are often rife with flutter echoes, which are a major cause of
imaging problems within these rooms.

Loudspeaker arrangements for critical listening

Before we examine specific room designs, let us first examine the optimum speaker layouts
for both stereo and 5.1 surround systems. The reason for doing this is that most modern room
designs for critical listening need to know where the speakers will be in order to be designed.
It is also pretty pointless having a wonderful room if the speakers are not in an optimum
arrangement.

The picture below shows the optimum layout for stereo speakers. They should form an
equilateral triangle with the center of the listening position. If the angle at the listener is greater than
the 60º this implies, the center phantom image becomes unstable - the so-called "hole in the middle" effect.
Clearly, having an angle of less than 60º results in a narrower stereo image.

5.1 surround systems are used in film and video presentations. Here the objective is to
provide both clear dialog and stereo music and sound effects, as well as a sense of ambience.
The typical speaker layout is shown below.

Here, in addition to the conventional stereo speakers there are some additional ones to
provide the additional requirements. These are as follows:

Center dialog speaker: The dialog is replayed via a central speaker because this has been found to
give better speech intelligibility than a stereo presentation. Interestingly, the fact that the speech is
not in stereo is not noticeable, because the visual cue dominates: we hear the sound coming
from the person speaking on the screen even if it is actually coming from a different direction.

Surround speakers: The ambient sounds, and sound effects, are diffused via rear mounted speakers.
However they are, in the main, not supposed to provide directional effects and so are often
deliberately designed, and fed signals, to minimize their correlation with each other and the front
speakers. The effect of this is to fool the hearing system into perceiving the sound as all around with
no specific direction.

Low-frequency effects: This is required because many of the sound effects used in film and video,
such as explosions and punches, have substantial low-frequency and subsonic content. Thus, a
specialized speaker is needed to reproduce these sounds properly. Note: this speaker was never
intended to reproduce music signals, notwithstanding their presence in many surround music
systems.

ECHO

In audio signal processing and acoustics, an echo (plural echoes) is a reflection of sound,
arriving at the listener some time after the direct sound. Typical examples are the echo
produced by the bottom of a well, by a building, or by the walls of an enclosed or
empty room. A true echo is a single reflection of the sound source. The time delay is the extra
distance travelled divided by the speed of sound.

Echo is quite different from reverberation. A reverberation is perceived when the reflected sound
wave reaches your ear in less than 0.1 second after the original sound wave. Since the original sound
wave is still held in memory, there is no time delay between the perception of the reflected sound
wave and the original sound wave. The two sound waves tend to combine as one very prolonged
sound wave.

Causes of Echo

Excessive room echo is caused primarily by two things - hard surfaces and high ceilings. The
worst materials are glass, marble and concrete block. Next to this is drywall. The more hard
surfaces you have, the worse the echo will be. For example, an entirely drywall room is much
worse than one with drywall walls and an acoustic tile ceiling.

The problem with a hard surface room is that there is nothing to stop the echo. The sound
simply bounces from the back wall, to the front, to the ceiling, to the floor, back to the walls,
floor, ceiling and so on. The decay time can be several seconds which absolutely destroys
intelligibility. The solution of course is to reduce the amount of hard surface area.

In what conditions do we hear an echo clearly?

The first is you need a surface that will reflect sound back to you. A concrete wall will do,
but it has to be facing you. It should be hard and relatively smooth, though it doesn't have to
be totally flat; it can curve.

The second is the surface has to be relatively large.

The third is the surface has to be far enough away - far enough for you to distinguish the echo
from the original sound. The delay between speaking and hearing the echo is the
distance to the wall and back divided by the speed of sound (the time it takes our voice to get
to the wall and "bounce back" to us); if the distance is less than about 16.2 meters, we won't
even notice the echo as a separate sound (see the sketch after this list). If the walls are too far
away, the echo will be "lost on the way". If the area is open, the echo will "get out" without
coming back to you. So perfect examples of clear echo environments are wells (sound comes
back from the bottom) and caves (not too large ones, though).

The fourth is that there should not be confusing echoes from other surfaces that arrive back at
you just before or after the echo. So if there are other large surfaces facing you, they should
all be the same distance away from you.

Fifth, there should be little reverberation (basically, echoes creating other echoes) to
confuse the primary echo. Together, these conditions will create a single, very distinct, loud
echo. Remove one or more of them and the echo will be weaker or more confused.
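
Here is the distance arithmetic from the third condition as a small Python sketch (using the chapter's 340 m/s figure; the function name is mine):

    SPEED_OF_SOUND_M_S = 340.0  # metres per second, as used in this chapter

    def echo_delay_s(distance_to_wall_m):
        # Round-trip delay: sound travels to the wall and back.
        return 2 * distance_to_wall_m / SPEED_OF_SOUND_M_S

    # At the ~16.2 m threshold mentioned above:
    print(round(echo_delay_s(16.2), 3))  # -> 0.095 s, right at the ~0.1 s limit
    # Closer than that, the reflection fuses with the original sound and is
    # heard as reverberation rather than a distinct echo:
    print(round(echo_delay_s(5.0), 3))   # -> 0.029 s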

For example, inside an empty parking garage, there are lots of large hard surfaces facing you
(the concrete walls) but there are lots of them, at various distances from you. So you hear lots
of echoes, but they are blurred and hard to distinguish.

Fixing Echo

One of the major problems we face in Church and other venues today is echo-o-o-o-o-o.
Although a highly reverberant room can make a choir or organ sound great, it is at the cost of
lost intelligibility. Sure the choir SOUNDS good, but can you make out the words?

Intelligibility refers to the ability for a person to understand the words being spoken or sung.
It has nothing to do with how loud the sound is. You may have no problem hearing the voice,
but if the intelligibility is poor, you can't understand the words.

To reduce echo it is best to "soften" some of the surfaces. A ceiling made with acoustic tile is
a big help, not only for reducing echo, but also for wiring access. Heavy drapes and wall
hangings are also good. A carpeted floor is a bit better than a tile or wood floor.

If you are planning an entirely drywall room there are a couple of things you should consider.
Use large wood beams up the walls and across the ceiling. Not only does this trap the sound,
reducing echo, but it also looks nicer. Also don't make the ceiling too high. A low ceiling
reduces the echo delay time which ultimately kills the echo faster than a high ceiling. If you
have a room with echo problems, one sure solution is to add acoustical wall treatment.

Sound System Design

By paying careful attention to the sound system design, we can improve intelligibility quite
significantly. The important factors include:

1) Install accurate speakers which provide controlled dispersion angles and even coverage.
The speakers must be selected and installed to provide accurate sound and project it evenly
throughout the seating area of the room, and not into the ceiling where more echo is produced.

2) Use the right microphones for each application.


By using microphones which are carefully selected for accurate sound reproduction and proper pick-
up and proximity characteristics, we can greatly improve intelligibility. There is nothing worse than
muddy-sounding microphones or speakers to destroy intelligibility.

3) Use a speaker delay system in large rooms.

In larger rooms with front main speakers and fill-in speakers part way back, or under a
balcony, we use a digital delay system to "time align" the speakers. Without this, even more
echo will be produced artificially.
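
The required delay is simply the extra distance divided by the speed of sound. A minimal Python sketch (the distance and names are illustrative, my own):

    SPEED_OF_SOUND_M_S = 343.0

    def fill_speaker_delay_ms(distance_behind_mains_m):
        # Delay the fill speaker so its output arrives in step with the mains'.
        return distance_behind_mains_m / SPEED_OF_SOUND_M_S * 1000

    # Under-balcony fill speakers mounted 20 m behind the main speakers:
    print(round(fill_speaker_delay_ms(20.0), 1))  # -> 58.3 ms of delay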

 Musical Instrument Digital Interface (MIDI)


It is a connectivity standard that musicians use to hook together musical instruments (such as
keyboards and synthesizers) and computer equipment. Using MIDI, a musician can easily create and
edit digital music tracks. The MIDI system records the notes played, the length of the notes, the
dynamics (volume alterations), the tempo, the instrument being played, and hundreds of other
parameters, called control changes.

Because MIDI records each note digitally, editing a track of MIDI music is much easier and more
accurate than editing a track of audio. The musician can change the notes, dynamics, tempo, and
even the instrument being played with the click of a button. Also, MIDI files store only compact
event data rather than audio, so they take up very little disk space. The only catch is that you need
MIDI-compatible hardware or software to record and play back MIDI files.
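
As a concrete illustration of the event data MIDI stores (instrument, note, velocity, duration), here is a minimal sketch using the third-party Python library mido - an assumption on my part; any MIDI library would do the same job:

    import mido  # pip install mido

    mid = mido.MidiFile()
    track = mido.MidiTrack()
    mid.tracks.append(track)

    # The kind of data MIDI records: which instrument, which note,
    # how hard it was struck, and how long it lasts.
    track.append(mido.Message('program_change', program=0, time=0))
    track.append(mido.Message('note_on', note=69, velocity=64, time=0))    # A440
    track.append(mido.Message('note_off', note=69, velocity=64, time=480)) # length

    mid.save('a440.mid')  # the resulting file is only a few dozen bytes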

MIDI Connectors

MIDI messages are sent in only one direction, so a second cable is necessary for two-way
communication. There is no error detection capability in MIDI, so the maximum cable length is set at
15 meters (50 feet) in order to limit interference. The cables terminate in a 180° five-pin DIN
connector. Standard applications use only three of the five conductors: a ground wire, and a
balanced pair of conductors that carry a +5 volt signal. Some proprietary applications, such as
phantom-powered footswitch controllers, use the spare pins for direct current (DC) power
transmission. Opto-isolators keep MIDI devices electrically separated from their connectors, which
prevents the occurrence of ground loops and protects equipment from voltage spikes.

USB and FireWire

All recent computers are equipped with USB and/or FireWire connectors, and these are now
the most common means of connecting MIDI devices to computers (using appropriate format
adapters). Adapters can be as simple as a short cable with USB on one end and MIDI DIN on the
other, or as complex as a 19 inch rack mountable CPU with dozens of MIDI and Audio In and Out
ports. The best part is that USB and FireWire are "plug-and-play" interfaces which means they
generally configure themselves. In most cases, all you need to do is plug in your USB or FireWire
MIDI interface and boot up some MIDI software and off you go.

Current USB technology generally supports communication between a host (PC) and a device, so it is
not possible to connect two USB devices to each other as it is with two MIDI DIN devices. (This may
change sometime in the future with new versions of USB). Since communications all go through the
PC, any two USB MIDI devices can use different schemes for packing up MIDI messages and sending
them over USB... the USB device's driver on the host knows how that device does it, and will convert
the MIDI messages from USB back to MIDI at the host. That way all USB MIDI devices can talk to
each other (through the host) without needing to follow one specification for how they send MIDI
data over USB.

Most FireWire MIDI devices also connect directly to a PC with a host device driver and so can talk to
other FireWire MIDI devices even if they use a proprietary method for formatting their MIDI data.
But FireWire supports "peer-to-peer" connections, so MMA has produced a specification for MIDI
over IEEE-1394 (FireWire), which is available for download from the MMA (and incorporated in
IEC-61883 part 5).

The MIDI input and output lines are separate from each other, and few devices pass the input data
to the output port. A third type of port, the "MIDI THRU" port, exists so that data can be forwarded
to another instrument in a "daisy chain" arrangement. Not all devices contain MIDI-THRU ports, and
devices that lack the ability to generate MIDI data, such as effects units and sound modules, may not
include MIDI-OUT ports.

Chapter 26

The Sound Reinforcement System


 Sound Reinforcement System Overview
A sound reinforcement system is the combination of microphones, signal processors, amplifiers,
and loudspeakers that makes live or pre-recorded sounds louder and may also distribute those
sounds to a larger or more distant audience. In some situations, a sound reinforcement system is
also used to enhance the sound of the sources on the stage, as opposed to simply amplifying the
sources unaltered. A sound reinforcement system may be very complex, including hundreds of
microphones, complex audio mixing and signal processing systems, tens of thousands of watts of
amplification, and multiple loudspeaker arrays, all overseen by a team of audio engineers and
technicians. On the other hand, a sound reinforcement system can be as simple as a small public
address (PA) system in a coffeehouse, consisting of a single microphone connected to a loudspeaker.
In both cases, these systems reinforce sound to make it louder or distribute it to a wider audience.

The main components of a typical sound reinforcement system include:

Sources: Microphones, DI (direct injection) boxes, playback devices (CD etc).

Mixing console(s)

Signal processing devices (equalizers, effects, crossovers, etc)

Amplifiers

Speakers

Other components include cables, multicores, and other connection accessories.

So, let's examine each component in detail.



Line-ins and Microphones


The Basics

Microphones are a type of transducer - a device which converts energy from one form to another.
Microphones convert acoustical energy (sound waves) into electrical energy (the audio signal).

Different types of microphone have different ways of converting energy but they all share one thing in
common: The diaphragm. This is a thin piece of material (such as paper, plastic or aluminum) which
vibrates when it is struck by sound waves. In a typical hand-held mic like the one below, the
diaphragm is located in the head of the microphone.

Location of Microphone Diaphragm

When the diaphragm vibrates, it causes other components in the microphone to vibrate. These
vibrations are converted into an electrical current which becomes the audio signal.

Note: At the other end of the audio chain, the loudspeaker is also a transducer - it converts the
electrical energy back into acoustical energy.

Types of Microphone

There are a number of different types of microphone in common use. The differences can be divided
into two areas:

(1) The type of conversion technology they use

This refers to the technical method the mic uses to convert sound into electricity. The most common
technologies are dynamic, condenser, ribbon and crystal. Each has advantages and disadvantages, and
each is generally more suited to certain types of application. The following pages will provide details.

(2) The type of application they are designed for

Some mics are designed for general use and can be used effectively in many different situations.
Others are very specialised and are only really useful for their intended purpose. Characteristics to
look for include directional properties, frequency response and impedance (more on these later).

Mic Level & Line Level

The electrical current generated by a microphone is very small. Referred to as mic level, this signal is
typically measured in millivolts. Before it can be used for anything serious the signal needs to be
amplified, usually to line level (typically 0.5 - 2 V). Being a stronger and more robust signal, line level
is the standard signal strength used by audio processing equipment and common domestic equipment
such as CD players, tape machines, VCRs, etc.

This amplification is achieved in one or more of the following ways:

Some microphones have tiny built-in amplifiers which boost the signal to a high mic level or line
level.

The mic can be fed through a small boosting amplifier, often called a line amp.

Sound mixers have small amplifiers in each channel. Attenuators can accommodate mics of varying
levels and adjust them all to an even line level.

The audio signal is fed to a power amplifier - a specialised amp which boosts the signal enough to be
fed to loudspeakers.
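
To put numbers on the mic-to-line boost described above, here is a small Python sketch (the example voltages are drawn from the ranges quoted earlier; the function name is mine):

    import math

    def gain_db(v_out, v_in):
        # Voltage gain in decibels: 20 * log10(Vout / Vin).
        return 20 * math.log10(v_out / v_in)

    # Boosting a 2 mV mic-level signal to a 1 V line-level signal:
    print(round(gain_db(1.0, 0.002)))  # -> 54 dB, a typical preamp gain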

Dynamic Microphones

Dynamic microphones are versatile and ideal for general-purpose use. They use a simple design with
few moving parts. They are relatively sturdy and resilient to rough handling. They are also better
suited to handling high volume levels, such as from certain musical instruments or amplifiers. They
have no internal amplifier and do not require batteries or external power.

How Dynamic Microphones Work

As you may recall from your school science, when a magnet is moved near a coil of wire an electrical
current is generated in the wire. Using this electromagnet principle, the dynamic microphone uses a
wire coil and magnet to create the audio signal.

The diaphragm is attached to the coil. When the diaphragm vibrates in response to incoming sound
waves, the coil moves backwards and forwards past the magnet. This creates a current in the coil
which is channeled from the microphone along wires. A common configuration is shown below.

Earlier we mentioned that loudspeakers perform the opposite function of microphones by
converting electrical energy into sound waves. This is demonstrated perfectly in the dynamic
microphone, which is basically a loudspeaker in reverse. When you see a cross-section of a speaker
you'll see the similarity with the diagram above. In fact, some intercom systems use the speaker as a
microphone. You can also demonstrate this effect by plugging a microphone into the headphone
output of your stereo, although we don't recommend it!

Technical Notes:

Dynamics do not usually have the same flat frequency response as condensers. Instead they tend to
have tailored frequency responses for particular applications.

Neodymium magnets are more powerful than conventional magnets, meaning that neodymium
microphones can be made smaller, with more linear frequency response and higher output level.

Condenser Microphones

Condenser means capacitor, an electronic component which stores energy in the form of an
electrostatic field. The term condenser is actually obsolete but has stuck as the name for this
type of microphone, which uses a capacitor to convert acoustical energy into electrical
energy.

Condenser microphones require power from a battery or external source. The resulting audio
signal is stronger than that from a dynamic. Condensers also tend to be more sensitive
and responsive than dynamics, making them well-suited to capturing subtle nuances in a
sound. They are not ideal for high-volume work, as their sensitivity makes them prone to
distortion.

How Condenser Microphones Work

A capacitor has two plates with a voltage between them. In the condenser mic, one of these
plates is made of very light material and acts as the diaphragm. The diaphragm vibrates when
struck by sound waves, changing the distance between the two plates and therefore changing
the capacitance. Specifically, when the plates are closer together, capacitance increases and a
charge current occurs. When the plates are further apart, capacitance decreases and a
discharge current occurs.

A voltage is required across the capacitor for this to work. This voltage is supplied either by a
battery in the mic or by external phantom power.

Cross-Section of a Typical Condenser Microphone

The Electret Condenser Microphone

The electret condenser mic uses a special type of capacitor which has a permanent voltage
built in during manufacture. This is somewhat like a permanent magnet, in that it doesn't
require any external power for operation. However, good electret condenser mics usually
include a pre-amplifier which does still require power.

Other than this difference, you can think of an electret condenser microphone as being the
same as a normal condenser.

Technical Notes:

Condenser microphones have a flatter frequency response than dynamics.

A condenser mic works in much the same way as an electrostatic tweeter (although obviously in
reverse).

Directional Properties

Every microphone has a property known as directionality. This describes the microphone's sensitivity
to sound from various directions. Some microphones pick up sound equally from all directions,
others pick up sound only from one direction or a particular combination of directions. The types of
directionality are divided into three main categories:

Omnidirectional
Picks up sound evenly from all directions (omni means "all" or "every").

Unidirectional
Picks up sound predominantly from one direction. This includes cardioid and hypercardioid
microphones (see below).

Bidirectional
Picks up sound from two opposite directions.

To help understand the directional properties of a particular microphone, user manuals and
promotional material often include a graphical representation of the microphone's directionality.
This graph is called a polar pattern. Some typical examples are shown below.

Omnidirectional

Captures sound equally from all directions.

Uses: Capturing ambient noise; Situations where sound is coming from many directions; Situations
where the mic position must remain fixed while the sound source is moving.

Notes:

Although omnidirectional mics are very useful in the right situation, picking up sound from every
direction is not usually what you need. Omni sound is very general and unfocused - if you are trying
to capture sound from a particular subject or area it is likely to be overwhelmed by other noise.

Cardioid

Cardioid means "heart-shaped", which is the type of pick-up pattern these mics
use. Sound is picked up mostly from the front, but to a lesser extent the sides as well.

Uses: Emphasising sound from the direction the mic is pointed whilst leaving some latitude for mic
movement and ambient noise.

Notes:

The cardioid is a very versatile microphone, ideal for general use. Handheld mics are usually cardioid.

There are many variations of the cardioid pattern (such as the hypercardioid below).

Hypercardioid

This is an exaggerated version of the cardioid pattern. It is very directional and
eliminates most sound from the sides and rear. Due to the long thin design of hypercardioids, they
are often referred to as shotgun microphones.

Uses: Isolating the sound from a subject or direction when there is a lot of ambient noise; Picking up
sound from a subject at a distance.

Notes:

By removing all the ambient noise, unidirectional sound can sometimes be a little unnatural. It may
help to add a discreet audio bed from another mic (i.e. constant background noise at a low level).

You need to be careful to keep the sound consistent. If the mic doesn't stay pointed at the subject
you will lose the audio.

Shotguns can have an area of increased sensitivity directly to the rear.

Bidirectional

Uses a figure-of-eight pattern and picks up sound equally from two opposite
directions.

Uses: As you can imagine, there aren't a lot of situations which require this polar pattern. One
possibility would be an interview with two people facing each other (with the mic between them).

Variable Directionality

Some microphones allow you to vary the directional characteristics by selecting omni, cardioid or
shotgun patterns.

This feature is sometimes found on video camera microphones, with the idea that you can adjust the
directionality to suit the angle of zoom, e.g. have a shotgun mic for long zooms. Some models can
even automatically follow the lens zoom angle so the directionality changes from cardioid to
shotgun as you zoom in.

Although this seems like a good idea (and can sometimes be handy), variable zoom microphones
don't perform particularly well and they often make a noise while zooming. Using different mics will
usually produce better results.

Microphone Impedance

When dealing with microphones, one consideration which is often misunderstood or overlooked is
the microphone's impedance rating. Perhaps this is because impedance isn't a "critical" factor; that
is, microphones will still continue to operate whether or not the best impedance rating is used.
However, in order to ensure the best quality and most reliable audio, attention should be paid to
getting this factor right.

If you want the short answer, here it is: Low impedance is better than high impedance.

If you're interested in understanding more, read on....

What is Impedance?

Impedance is an electronics term which measures the amount of opposition a device has to an AC
current (such as an audio signal). Technically speaking, it is the combined effect of capacitance,
inductance, and resistance on a signal. The letter Z is often used as shorthand for the word
impedance, e.g. Hi-Z or Low-Z.

Impedance is measured in ohms, shown with the Greek Omega symbol Ω. A microphone with the
specification 600Ω has an impedance of 600 ohms.

What is Microphone Impedance?

All microphones have a specification referring to their impedance. This spec may be written on the
mic itself (perhaps alongside the directional pattern), or you may need to consult the manual or
manufacturer's website.

You will often find that mics with a hard-wired cable and 1/4" plug are high impedance, and mics
with separate balanced audio cable and XLR connector are low impedance.

There are three general classifications for microphone impedance. Different manufacturers use
slightly different guidelines but the classifications are roughly:

Low Impedance (less than 600Ω)



Medium Impedance (600Ω - 10,000Ω)

High Impedance (greater than 10,000Ω)

Note that some microphones have the ability to select from different impedance ratings.
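
To make those rough bands concrete, here is a minimal Python sketch. The boundary values are the approximate ones quoted above and are an assumption for illustration only; real manufacturers draw the lines differently:

    def classify_impedance(ohms):
        """Classify a microphone impedance rating into the rough
        bands described above (boundaries vary by manufacturer)."""
        if ohms < 600:
            return "low impedance"
        elif ohms <= 10_000:
            return "medium impedance"
        return "high impedance"

    print(classify_impedance(200))     # low impedance (typical XLR mic)
    print(classify_impedance(50_000))  # high impedance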

Which Impedance to Choose?

High impedance microphones are usually quite cheap. Their main disadvantage is that they do not
perform well over long distance cables - after about 5 or 10 metres they begin producing poor
quality audio (in particular a loss of high frequencies). In any case these mics are not a good choice
for serious work. In fact, although not completely reliable, one of the clues to a microphone's overall
quality is the impedance rating.

Low impedance microphones are usually the preferred choice.

Matching Impedance with Other Equipment

Microphones aren't the only things with impedance. Other equipment, such as the input of a sound
mixer, also has an ohms rating. Again, you may need to consult the appropriate manual or website
to find these values. Be aware that what one system calls "low impedance" may not be the same as
your low impedance microphone - you really need to see the ohms value to know exactly what
you're dealing with.

A low impedance microphone should generally be connected to an input with the same or higher
impedance. If a microphone is connected to an input with lower impedance, there will be a loss of
signal strength.

In some cases you can use a line matching transformer, which will convert a signal to a different
impedance for matching to other components.
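
The effect of a mismatch can be estimated with a standard voltage-divider model, in which the microphone's output impedance and the input's impedance divide the signal voltage between them. The sketch below is an illustration of that model, not a description of any particular piece of equipment:

    import math

    def level_loss_db(source_z, load_z):
        """Approximate level loss (in dB) when a source with output
        impedance source_z drives an input of impedance load_z,
        modelled as a simple voltage divider."""
        return 20 * math.log10(load_z / (source_z + load_z))

    print(round(level_loss_db(200, 2_000), 2))  # ~ -0.83 dB: 200-ohm mic into a 2k input
    print(round(level_loss_db(200, 150), 2))    # ~ -7.36 dB: same mic into a 150-ohm input

This is why the rule of thumb is to connect a microphone to an input of the same or (preferably) higher impedance.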

Microphone Frequency Response

Frequency response refers to the way a microphone responds to different frequencies. It is a
characteristic of all microphones that some frequencies are exaggerated and others are attenuated
(reduced). For example, a frequency response which favours high frequencies means that the
resulting audio output will sound more trebly than the original sound.

Frequency Response Charts

A microphone's frequency response pattern is shown using a chart like the one below and referred
to as a frequency response curve. The x axis shows frequency in Hertz, the y axis shows response in
decibels. A higher value means that frequency will be exaggerated, a lower value means the
frequency is attenuated. In this example, frequencies around 5 kHz are boosted while frequencies
above 10kHz and below 100Hz are attenuated. This is a typical response curve for a vocal
microphone.
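
Such a curve can also be written down as a table of dB values. The figures below are invented for illustration, loosely following the vocal-mic shape just described; the point to notice is how each dB value translates to a linear voltage factor:

    response_curve = {50: -10, 100: -3, 1_000: 0, 5_000: 4, 10_000: 0, 15_000: -6}
    # frequency (Hz) -> response (dB); invented values, not a real microphone

    for freq, db in sorted(response_curve.items()):
        gain = 10 ** (db / 20)  # convert dB to a linear voltage factor
        print(f"{freq:>6} Hz: {db:+3} dB  (x{gain:.2f} voltage)")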

Which Response Curve is Best?

An ideal "flat" frequency response means that the microphone is equally sensitive to all frequencies.
In this case, no frequencies would be exaggerated or reduced (the chart above would show a flat
line), resulting in a more accurate representation of the original sound. We therefore say that a flat
frequency response produces the purest audio.

In the real world a perfectly flat response is not possible and even the best "flat response"
microphones have some deviation.

More importantly, it should be noted that a flat frequency response is not always the most desirable
option. In many cases a tailored frequency response is more useful. For example, a response pattern
designed to emphasise the frequencies in a human voice would be well suited to picking up speech
in an environment with lots of low-frequency background noise.

The main thing is to avoid response patterns which emphasise the wrong frequencies. For example,
a vocal mic is a poor choice for picking up the low frequencies of a bass drum.

Frequency Response Ranges

You will often see frequency response quoted as a range between two figures. This is a simple (or
perhaps "simplistic") way to see which frequencies a microphone is capable of capturing effectively.
For example, a microphone which is said to have a frequency response of 20 Hz to 20 kHz can
reproduce all frequencies within this range. Frequencies outside this range will be reproduced to a
much lesser extent or not at all.

This specification makes no mention of the response curve, or how successfully the various
frequencies will be reproduced. Like many specifications, it should be taken as a guide only.

Condenser vs Dynamic

Condenser microphones generally have flatter frequency responses than dynamic. All other things
being equal, this would usually mean that a condenser is more desirable if accurate sound is a prime
consideration.

 Mixing Consoles

Sound mixers (AKA sound desks, sound consoles or sound boards) are amongst the most common
type of equipment in the world of audio production. Every sound operator must know what a sound
mixer is and how to use it. The tutorials below cover the general layout and functions of sound
mixing devices.

A sound mixer is a device which takes two or more audio signals, mixes them together and
provides one or more output signals. The diagram on the right shows a simple mixer with six
inputs and two outputs.

As well as combining signals, mixers allow you to adjust levels, enhance sound with
equalization and effects, create monitor feeds, record various mixes, etc.
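
At its core, the mixing itself is nothing more than scaling and summing. Here is a minimal sketch in Python, purely conceptual; a real mixer works on continuous electrical signals or digital audio streams:

    def mix(inputs, gains):
        """Scale each input signal by its channel gain (1.0 = unity)
        and sum the results into a single output signal."""
        length = len(inputs[0])
        return [sum(gain * ch[i] for ch, gain in zip(inputs, gains))
                for i in range(length)]

    vocal  = [0.2, 0.4, -0.1]   # toy sample values
    guitar = [0.1, -0.3, 0.2]
    print(mix([vocal, guitar], gains=[1.0, 0.5]))
    # -> [0.25, 0.25, 0.0] (up to float rounding)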

Mixers come in a wide variety of sizes and designs, from small portable units to massive
studio consoles. The term mixer can refer to any type of sound mixer; the terms sound desk
and sound console refer to mixers which sit on a desk surface as in a studio setting.

Sound mixers can look very intimidating to the newbie because they have so many buttons
and other controls. However, once you understand how they work you realise that many of
these controls are duplicated and it's not as difficult as it first seems.

Applications

Some of the most common uses for sound mixers include:

Music studios and live performances: Combining different instruments into a stereo master mix and
additional monitoring mixes.

Television studios: Combining sound from microphones, tape machines and other sources.

Field shoots: Combining multiple microphones into 2 or 4 channels for easier recording.

Channels

Mixers are frequently described by the number of channels they have. For example, a "12-
channel mixer" has 12 input channels, i.e. you can plug in 12 separate input sources. You
might also see a specification such as "24x4x2" which means 24 input channels, 4 subgroup
channels and two output channels.

More channels means more flexibility, so more is generally better. See mixer channels for more
information.

Advanced Mixing

The diagram below shows how a mixer can provide additional outputs for monitoring,
recording, etc. Even this is just scratching the surface of what advanced mixers are capable
of.

Sound Mixer Channels

Each input source comes into the mixer through a channel. The more channels a mixer has,
the more sources it can accept. The following examples show some common ways to
describe a mixer's complement of channels:

12-channel: 12 input channels.

16x2: 16 input channels, 2 output channels.

24x4x2: 24 input channels, 4 subgroup channels and 2 output channels.

Input Channels

On most sound desks, input channels take up most of the space. All those rows of knobs are
channels. Exactly what controls each channel has depends on the mixer but most mixers
share common features. The list below details the controls available on a typical mixer
channel.

Input Gain / Attenuation: The level of the signal as it enters the channel. In most cases this
will be a pot (potentiometer) knob which adjusts the level. The idea is to adjust the levels of
all input sources (which will be different depending on the type of source) to an ideal level
for the mixer. There may also be a switch or pad which will increase or decrease the level by
a set amount (e.g. mic/line switch).

Phantom Power: Turns phantom power on or off for the channel.

Equalization: Most mixers have at least two EQ controls (high and low frequencies). Good
mixers have more advanced controls, in particular, parametric equalization.

Auxiliary Channels: Sometimes called aux channels for short, auxiliary channels are a way
to send a "copy" of the channel signal somewhere else. There are many reasons to do this,
most commonly to provide separate monitor feeds or to add effects (reverb etc).

Pan & Assignment: Each channel can be panned left or right on the master mix. Advanced
mixers also allow the channel to be "assigned" in various ways, e.g. sent directly to the main
mix or sent only to a particular subgroup.

Solo / Mute / PFL: These switches control how the channel is monitored. They do not affect
the actual output of the channel.

Channel On / Off: Turns the entire channel on or off.

Slider: The level of the channel signal as it leaves the channel and heads to the next stage
(subgroup or master mix).

Subgroup Channels

Larger sound desks usually have a set of subgroups, which provide a way to sub-mix groups
of channels before they are sent to the main output mix. For example, you might have 10
input channels for the drum mics which are assigned to 2 subgroup channels, which in turn
are assigned to the master mix. This way you only need to adjust the two subgroup sliders to
adjust the level of the entire drum kit.

Sound Mixers: Channel Inputs

The first point of each channel's pathway is the input socket, where the sound source plugs
into the mixer. It is important to note what type of input sockets are available — the most
common types are XLR, 6.5mm Jack and RCA. Input sockets are usually located either on
the rear panel of the mixer or on the top above each channel.

There are no hard-and-fast rules about what type of equipment uses each type of connector,
but here are some general guidelines:

XLR: Microphones and some audio devices. Usually balanced audio, but XLRs can also accommodate
unbalanced signals.

6.5mm Jack: Musical instruments such as electric guitars, as well as various audio devices. Mono
jacks are unbalanced; stereo jacks can be either unbalanced stereo or balanced mono.

RCA: Musical devices such as disc players, effects units, etc.

Input Levels

The level of an audio signal refers to the voltage level of the signal. Signals can be divided
into three categories: Mic-level (low), line-level (a bit higher) and loudspeaker-level (very
high). Microphones produce a mic-level signal, whereas most audio devices such as disc
players produce a line-level signal. Loudspeaker-level signals are produced by amplifiers and
are only appropriate for plugging into a speaker — never plug a loudspeaker-level signal into
anything else.

Sound mixers must be able to accommodate both mic-level and line-level signals. In some
cases there are two separate inputs for each channel and you select the appropriate one. It is
also common to include some sort of switch to select between inputs and/or signal levels.

Input Sockets and Controls

The example on the right shows the input connections on a typical mixer. This mixer has two
input sockets — an XLR for mic-level inputs and a 6.5mm jack for line-level inputs. It also
has a pad button which reduces the input level (gain) by 20dB. This is useful when you have
a line-level source that you want to plug into the mic input.

Some mixers also offer RCA inputs or digital audio inputs for each channel. Some mixers
provide different sockets for different channels, for example, XLR for the first 6 channels and
RCA for the remainder.

Input Gain

When a signal enters the mixer, one of the first controls is the input gain. This is a knob
which adjusts the signal level before it continues to the main parts of the channel. The input
gain is usually set once when the source is plugged in and left at the same level — any
volume adjustments are made by the channel fader rather than the gain control.

Set the gain control so that when the fader is at 0dB the signal is peaking around 0dB on the
VU meters.
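
Decibel arithmetic underlies both the gain knob and the pad. A short Python illustration of the generic dB maths (not tied to any particular console):

    import math

    def db_to_factor(db):
        """Convert a level change in dB to a linear voltage factor."""
        return 10 ** (db / 20)

    def factor_to_db(factor):
        return 20 * math.log10(factor)

    print(db_to_factor(-20))  # 0.1 -- a 20 dB pad scales the signal to a tenth
    # Gains in dB add, while the corresponding linear factors multiply:
    print(round(factor_to_db(db_to_factor(30) * db_to_factor(-20))))  # 10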

Other Controls and Considerations

Phasing: Some equipment and cables are wired with reversed phasing, that is, the wires in
the cable which carry the signal are arranged differently. When such a signal is combined with a
correctly wired signal from the same source, the two can cancel each other out. To fix this
problem, some mixers have a phase selector which inverts the phasing at the input stage.

Phantom Power: Some mixers have the option to provide a small voltage back up the input
cable to power a microphone or other device. See Phantom Power for more information.

Sound Mixers: Channel Equalization

Most mixers have some sort of equalization controls for each channel. Channel equalizers use
knobs (rather than sliders), and can be anything from simple tone controls to multiple
parametric controls.

The first example on the right is a simple 2-way equalizer, sometimes referred to as
bass/treble or low/high. The upper knob adjusts high frequencies (treble) and the lower knob
adjusts low frequencies (bass). This is a fairly coarse type of equalization, suitable for making
rough adjustments to the overall tone, but it is not much use for fine control.

This next example is a 4-way equalizer. The top and bottom knobs are simple high and low
frequency adjustments (HF and LF).

The middle controls consist of two pairs of knobs. These pairs are parametric equalizers —
each pair works together to adjust a frequency range chosen by the operator. The brown knob
selects the frequency range to adjust and the green knob makes the adjustment.

The top pair works in the high-mid frequency range (0.6 kHz to 10 kHz), the lower pair
works in the low-mid range (0.15 kHz to 2.4 kHz).

The "EQ" button below the controls turns the equalization on and off for this channel. This
lets you easily compare the treated and untreated sound.

It is common for mixers with parametric equalizers to combine each pair of knobs into a
single 2-stage knob with one on top of the other. This saves space which is always a bonus
for mixing consoles.

Notes About Channel Equalization

If the mixer provides good parametric equalization you will usually find that these controls
are more than adequate for equalizing individual sources. If the mixer is limited to very
simple equalization, you may want to use external equalizers. For example, you could add a
graphic equalizer to a channel using the insert feature.

In many situations you will use additional equalization outside the mixer. In live sound
situations, for example, you will probably have at least one stereo graphic equalizer on the
master output.

Sound Mixers: Auxiliary Channels

Most sound desks include one or more auxiliary channels (often referred to as aux channels
for short). This feature allows you to send a secondary feed of an input channel's audio signal
to another destination, independent of the channel's main output.

The example below shows a four-channel mixer, with the main signal paths shown in green.
Each input channel includes an auxiliary channel control knob — this adjusts the level of the
signal sent to the auxiliary output (shown in blue). The auxiliary output is the sum of the
signals sent from each channel. If a particular channel's auxiliary knob is turned right down,
that channel is not contributing to the auxiliary channel.

In the example above, the auxiliary output is sent to a monitoring system. This enables a
monitor feed which is different to the main output, which can be very useful. There are many
other applications for auxiliary channels, including:

1. Multiple separate monitor feeds.


2. Private communication, e.g. between the sound desk and the stage.
3. Incorporating effects.
4. Recording different mixes.

Mixers are not limited to a single auxiliary channel; in fact it is common to have four or
more. The following example has two auxiliary channels — "Aux 1" is used for a monitor
and "Aux 2" is used for an effects unit.

Note that the monitor channel (Aux 1) is "one way", i.e. the channel is sent away from the
mixer and doesn't come back. However the Aux 2 channel leaves the mixer via the aux send
output, goes through the effects unit, then comes back into the mixer via the aux return input.
It is then mixed into the master stereo bus.

Pre / Post Fader

The auxiliary output from each channel can be either pre-fader or post-fader.

A pre-fader output is independent of the channel fader, i.e. the auxiliary output stays the
same level whatever the fader is set to.

A post-fader output is dependent on the fader level. If you turn the fader down the auxiliary
output goes down as well.

Many mixers allow you to choose which method to use with a selector button. The example
pictured right shows a mixer channel with four auxiliary channels and two pre/post selectors.
Each selector applies to the two channels above it, so for example, the button in the middle
makes both Aux 1 and Aux 2 either pre-fader or post-fader.
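
The pre/post distinction is easy to express in code. A minimal sketch, treating the fader and aux knob as simple linear gains:

    def aux_send_level(signal, fader, aux, pre_fader):
        """Level of the signal arriving at the aux output.
        A pre-fader send ignores the channel fader; a post-fader
        send is scaled by it. All gains are linear (1.0 = unity)."""
        return signal * aux if pre_fader else signal * fader * aux

    # Pulling the fader down silences a post-fader send...
    print(aux_send_level(1.0, fader=0.0, aux=0.8, pre_fader=False))  # 0.0
    # ...but a pre-fader send (e.g. a monitor feed) is unaffected:
    print(aux_send_level(1.0, fader=0.0, aux=0.8, pre_fader=True))   # 0.8

This is why monitor feeds are usually pre-fader (the performers' mix survives front-of-house fader moves) while effects sends are usually post-fader.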

Sound Mixers: Channel Assigning & Panning

One of the last sets of controls on each channel, usually just before the fader, is the channel
assign and pan.

Pan

Almost all stereo mixers allow you to adjust the panning. This is a knob which goes from full
left to full right, and it determines where the channel signal appears on the master mix (or
across two subgroups if this is how the channel is assigned). If the knob is turned fully left,
the channel audio will only come through the left speaker in the final mix. Turn the knob
right to place the channel on the right side of the mix.
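
Internally the pan knob splits one signal into left and right gains. The sketch below uses an equal-power pan law, which is one common choice but by no means a universal standard; real consoles differ in the exact law they apply:

    import math

    def pan(signal, position):
        """Equal-power panning sketch. position runs from -1.0
        (full left) to +1.0 (full right)."""
        angle = (position + 1) * math.pi / 4   # maps to 0 .. pi/2
        return signal * math.cos(angle), signal * math.sin(angle)

    print(pan(1.0, -1.0))  # (1.0, 0.0)          -> left speaker only
    print(pan(1.0,  0.0))  # (~0.707, ~0.707)    -> centred, about -3 dB each side
    print(pan(1.0, +1.0))  # (~0.0, 1.0)         -> right speaker only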

Assign

This option may be absent on smaller mixers but is quite important on large consoles. The
assign buttons determine where the channel signal is sent.

In many situations the signal is simply sent to the main master output. In small mixers with
no assign controls this happens automatically.

However you may not want a channel to be fed directly into the main mix. The most common
alternative is to send the channel to a subgroup first. For example, you could send all the
drum microphones to their own dedicated subgroup which is then sent to the main mix. This
way, you can adjust the overall level of all the drums by adjusting the subgroup level.

In the example pictured, the options are:

Mix: The channel goes straight to the main stereo mix

1-2: The channel goes to subgroup 1 and/or 2. If the pan control is set fully left the channel goes only
to subgroup 1, if the pan is set fully right the channel goes only to subgroup 2. If the pan is centered
the channel goes to subgroups 1 and 2 equally.

3-4: The channel goes to subgroups 3 and/or 4, with the same conditions as above.

For stereo applications it is common to use subgroups in pairs to maintain stereo separation.
For example, it is preferable to use two subgroups for the drums so you can pan the toms and
cymbals from left to right.

You can assign the channel to any combination of the available options.

In some cases you may not want the channel to go to the main mix at all. For example, you
may have a channel set up for communicating with the stage via an aux channel. In this case
you don't assign the channel anywhere.

Sound Mixers: PFL

PFL means Pre-Fade Listen. Its function is to do exactly that — listen to the channel's audio
at a point before the fader takes effect. The PFL button is usually located just above the
channel fader. In the example on the right, it's the red button (the red LED lights when PFL is
engaged).

Note: PFL is often pronounced "piffel".

When you press the PFL button, the main monitor output will stop monitoring anything else
and the only audio will be the selected PFL channel(s). This does not affect the main output
mix — just the sound you hear on the monitor bus. Note that all selected PFL channels will
be monitored, so you can press as many PFL buttons as you like.

PFL also takes over the mixer's VU meters.

PFL is useful when setting the initial input gain of a channel, as it reflects the pre-fade level.

PFL vs Solo

PFL is similar to the solo button. There are two differences:

PFL is pre-fader, solo is post-fader (i.e. the fader affects the solo level).

PFL does not affect the master output but soloing a channel may do so (depending on the mixer).

Sound Mixers: Channel Faders

Each channel has its own fader (slider) to adjust the volume of the channel's signal before it is sent
to the next stage (subgroup or master mix).

A slider is a potentiometer, or variable resistor. This is a simple control which varies the amount of
resistance and therefore the signal level. If you are able to look inside your console you will see
exactly how simple a fader is.

As a rule it is desirable to run the fader around the 0dB mark for optimum sound quality, although
this will obviously vary a lot.

Remember that there are two ways to adjust a channel's level: The input gain and the output fader.
Make sure the input gain provides a strong signal level to the channel without clipping and leave it at
that level — use the fader for ongoing adjustments.

Sound Mixers: Subgroups

Subgroups are a way to "pre-mix" a number of channels on a sound console before sending them to
the master output mix. In the following diagram, channels 1 and 2 are assigned directly to the
master output bus. Channels 3, 4, 5 and 6 are assigned to subgroup 1, which in turn is assigned to the
master output.

Subgroups have many uses and advantages, the most obvious being that you can pre-mix (sub-mix)
groups of inputs.

For example, if you have six backing vocalists you can set up a good mix just for them, balancing
each voice to get a nice overall effect. If you then send all six channels to one subgroup, you can
adjust all backing vocals with a single subgroup slider while still maintaining the balance between
the individual voices.

Note that if your mixing console's subgroups are mono, you will need to use them in pairs to
maintain a stereo effect. For each pair, one subgroup is the left channel and the other is right. Each
channel can be panned across the two subgroups, while the subgroups are panned completely left
and right into the master output bus.

Sound Mixers: Outputs

The main output from most mixing devices is a stereo output, using two output sockets which
should be fairly obvious and easy to locate. The connectors are usually 3-pin XLRs on larger consoles,
but can also be 6.5mm TRS (jack) sockets or RCA sockets.

The level of the output signal is monitored on the mixer's VU meters. The ideal is for the level to
peak at around 0dB or just below. However you should note that the dB scale is relative and 0dB on
one mixer may not be the same as 0dB on another mixer or audio device. For this reason it is
important to understand how each device in the audio chain is referenced, otherwise you may find
that your output signal is unexpectedly high or low when it reaches the next point in the chain.

In professional circles, the nominal level of 0dB is considered to be +4 dBu. Consumer-level
equipment tends to use -10 dBV.

The best way to check the levels of different equipment is to use an audio test tone. Send a 0dB tone
from the desk and measure it at the next point in the chain.
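
The two reference levels can be compared numerically using their standard definitions (dBu is referenced to 0.775 V RMS, dBV to 1 V RMS). A quick Python check:

    import math

    def dbu_to_volts(dbu):
        return 0.775 * 10 ** (dbu / 20)   # dBu is referenced to 0.775 V RMS

    def dbv_to_volts(dbv):
        return 1.0 * 10 ** (dbv / 20)     # dBV is referenced to 1 V RMS

    pro = dbu_to_volts(4)        # ~1.228 V: professional nominal level (+4 dBu)
    consumer = dbv_to_volts(-10)  # ~0.316 V: consumer nominal level (-10 dBV)
    print(round(20 * math.log10(pro / consumer), 1))  # ~11.8 dB difference

Note that the gap is roughly 11.8 dB, not the 14 dB the raw figures might suggest, because dBu and dBV use different reference voltages.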

Many mixers include a number of additional outputs, for example:

Monitor Feed: A dedicated monitor feed which can be adjusted independently of the master output.

Headphones: The headphone output may be the same as the monitor feed, or you may be able to
select separate sources to listen to.

Auxiliary Sends: The output(s) of the mixer's auxiliary channels.

Subgroup Outputs: Some consoles have the option to output each subgroup independently.

Communication Channels: Some consoles have additional output channels available for
communicating with the stage, recording booths, etc.

 Effects and Signal Processors


Digital signal processors

Small PA systems for venues such as bars and clubs are now available with features that were
formerly only available on professional-level equipment, such as digital reverb effects,
graphic equalizers, and, in some models, feedback prevention circuits which electronically
sense and prevent feedback "howls" before they become a problem. Digital effects units may
offer multiple pre-set and variable reverb, echo and related effects. Digital loudspeaker
management systems offer sound engineers digital delay, limiting, crossover functions, EQ
filters, compression and other functions in a single rack-mountable unit. In previous decades,
sound engineers typically had to transport a substantial number of rack-mounted analog
devices to accomplish these tasks.

Equalizers

Equalizers exist in sound reinforcement systems in two forms: graphic and parametric. A
high-pass (low-cut) and/or low-pass (high-cut) filter may also be included. Parametric
equalizers are often built into each channel in mixing consoles and are also available as
separate units. Parametric equalizers first became popular in the 1970s and have remained the
program equalizer of choice for many engineers since then.

Graphic equalizers have faders (slide controls) which together resemble a frequency response
curve plotted on a graph. Sound reinforcement systems typically use graphic equalizers with
one-third octave frequency centers. These are typically used to equalize output signals going
to the main loudspeaker system or the monitors on stage.

High-pass (low-cut) and low-pass (high-cut) filters restrict a given channel's bandwidth
extremes. Cutting very low frequency energy (termed infrasonic; often called subsonic, which is a misnomer)
reduces the waste of amplifier power which does not produce sound and which moreover can
be hard on the speakers. A low-pass filter to cut ultrasonic energy is useful to prevent
interference from radio frequencies, lighting control, or digital circuitry creeping into the
power amplifiers. Such filters are often included with graphic and parametric equalizers to
give full control of the frequency range. If their response is steep enough, high-pass filters
and low-pass filters function as end-cut filters. A feedback suppressor is an automatically-
adjusted band-reject or notch filter which includes a microprocessor to detect the onset of
feedback and direct the filter to suppress the feedback by lowering the gain right at the
offending frequency.

Compressors

Compressors are designed to manage the dynamic range of an audio signal. A compressor
accomplishes this by reducing the gain of a signal that is above a defined level (threshold) by
a defined amount (ratio). Without this gain reduction, a signal that gets, say, 10% louder at the
input will be 10% louder at the output. With the gain reduced, a signal that gets 10% louder at
the input will be perhaps only 3% louder at the output. Most compressors are designed to allow
the operator to select a ratio within a range typically between 1:1 and 20:1, with some allowing
settings of up to ∞:1. A compressor with an infinite ratio is typically referred to as a limiter.
The speed at which the compressor adjusts the gain of the signal (called the attack) is typically
adjustable, as is the final output level of the device.
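
The threshold/ratio behaviour can be written down directly. Here is a minimal static sketch; it deliberately ignores attack and release timing, which real compressors add on top:

    def compress_db(level_db, threshold_db, ratio):
        """Static compressor curve: levels above the threshold are
        reduced by the ratio; levels below it pass unchanged."""
        if level_db <= threshold_db:
            return level_db
        return threshold_db + (level_db - threshold_db) / ratio

    # With a -20 dB threshold and a 4:1 ratio, a signal at -8 dB
    # (12 dB over the threshold) comes out only 3 dB over it:
    print(compress_db(-8, threshold_db=-20, ratio=4))             # -17.0
    # An infinite ratio behaves as a limiter:
    print(compress_db(-8, threshold_db=-20, ratio=float("inf")))  # -20.0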

Compressor applications vary widely, from objective system design criteria to subjective
applications determined by variances in program material and engineer preference. Some
system design criteria specify limiters for component protection and gain structure control.
Artistic signal manipulation is a subjective technique widely utilized by mix engineers to
improve clarity or to creatively alter the signal in relation to the program material. An
example of artistic compression is the typical heavy compression used on the various
components of a modern rock drum kit. The drums are processed to be perceived as sounding
more punchy and full.

Noise gates

A noise gate sets a threshold: if the signal is quieter than the threshold the gate stays closed
and nothing passes, and if it is louder the gate opens. A noise gate's function is in a sense the opposite of that of a
compressor. Noise gates are useful for microphones which will pick up noise which is not
relevant to the program, such as the hum of a miked electric guitar amplifier or the rustling of
papers on a minister's podium.
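
In its simplest form the behaviour is just a per-sample comparison. A deliberately crude Python sketch (real gates add attack, hold and release times so the decay of a drum hit isn't chopped off):

    def gate(samples, threshold):
        """Pass samples whose magnitude is at or above the threshold;
        silence everything below it."""
        return [s if abs(s) >= threshold else 0.0 for s in samples]

    print(gate([0.02, 0.9, 0.5, 0.01], threshold=0.1))
    # [0.0, 0.9, 0.5, 0.0] -- quiet bleed is silenced, direct hits pass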

Noise gates are also used to process the microphones placed near the drums of a drum kit in
many hard rock and metal bands. Without a noise gate, the microphone for a specific
instrument such as the floor tom will also pick up signal from nearby drums or cymbals. With
a noise gate, the threshold of sensitivity for each microphone on the drum kit can be set so
that only the direct strike and subsequent decay of the drum will be heard, not the nearby
sounds.

Effects

Reverberation and delay effects are widely used in sound reinforcement systems to enhance
the mix relative to the desired artistic impact of the program material. Modulation effects
such as flanger, phaser, and chorus are also applied to some instruments. An exciter "livens
up" the sound of audio signals by applying dynamic equalization, phase manipulation and
harmonic synthesis of typically high frequency signals.

The appropriate type, variation, and level of effects is quite subjective and is often
collectively determined by a production's engineer, artist, or musical director. Reverb, for
example, can give the effect of signal being present in anything from a small room to a
massive stadium, or even in a space that doesn't exist in the physical world. The use of reverb
often goes unnoticed by the audience, as it often sounds more natural than if the signal was
left dry. The use of effects in the reproduction of modern music is often in an attempt to
mimic the sound of the studio version of the artist's music.

 Power Amplifiers

Power amplifiers boost a low-voltage signal and provide the electrical power to drive a
loudspeaker. All speakers, including headphones, require the low-level signal to be amplified.
Most professional audio amplifiers also provide protection from overdriven
signals, short circuits across the output, and excess temperature. A limiter is often used to protect
loudspeakers and amplifiers from overload.

Like most sound reinforcement equipment products, professional amplifiers are designed to be
mounted within standard 19-inch racks. Many power amplifiers feature internal fans to draw air
across their heat sinks. Since they can generate a significant amount of heat, thermal dissipation is
an important factor for operators to consider when mounting amplifiers into equipment racks.
Active loudspeakers feature internally mounted amplifiers that have been selected by the
manufacturer to be the best amplifier for use with the given loudspeaker.

In the 1970s and 1980s, most PA amplifiers were heavy Class AB amplifiers. From the late 1990s,
power amplifiers in PA applications became lighter, smaller, more powerful and more efficient due
to the increasing use of switching power supplies and Class D amplifiers, which offer significant
weight and space savings as well as increased efficiency. In installations such as railroad
stations, stadiums and airports, their high efficiency allows them to run with minimal additional
cooling and at higher rack densities than older amplifiers.

Digital loudspeaker management systems (DLMS) that combine digital crossover functions,
compression, limiting, and other features in a single unit have become popular since their
introduction. They are used to process the mix from the mixing console and route it to the various
amplifiers in use. Systems may include several loudspeakers, each with its own output optimized for
a specific range of frequencies (i.e. bass, midrange and treble). Bi-, tri-, or quad-amplifying a
sound reinforcement system with the aid of a DLMS results in more efficient use of amplifier power by
sending each amplifier only the frequencies appropriate for its respective loudspeaker. Most DLMS
units that are designed for use by non-professionals have calibration and testing functions such as a
pink noise generator coupled with a real-time analyzer to allow automated room equalization.
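
The crossover part of a DLMS can be approximated in a few lines of Python using the third-party NumPy and SciPy libraries. This is an illustrative two-way split with fourth-order Butterworth filters; production systems typically use Linkwitz-Riley alignments and add delay, limiting and EQ on top:

    import numpy as np
    from scipy.signal import butter, sosfilt

    def two_way_split(signal, fs, crossover_hz=200):
        """Send content below crossover_hz to the subwoofer amplifier
        and content above it to the mid/high amplifier."""
        low  = butter(4, crossover_hz, btype="lowpass",  fs=fs, output="sos")
        high = butter(4, crossover_hz, btype="highpass", fs=fs, output="sos")
        return sosfilt(low, signal), sosfilt(high, signal)

    fs = 48_000
    t = np.arange(fs) / fs
    mixdown = np.sin(2 * np.pi * 60 * t) + np.sin(2 * np.pi * 2_000 * t)
    lows, highs = two_way_split(mixdown, fs)  # 60 Hz goes low, 2 kHz goes high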

The amount of amplifier power used in a performance setting depends on a number of factors, such
as the desired Sound Pressure Level of the performers, whether the venue is indoors or outdoors,
and the presence of competing background noise. The following list gives a rough "rule of thumb"
for the amount of amplifier power used in different settings:

"Small Vocal" system - About 500 watts

"Large Vocal" system - About 1,000 watts

"Small Club" system - About 9,000 watts

"Large Club" system - About 18,000 watts

"Small Stadium" system - About 28,000 watts

 Output transducers (Main loudspeakers)


A simple and inexpensive PA loudspeaker may have a single full-range loudspeaker driver, housed in
a suitable enclosure. More elaborate, professional-caliber sound reinforcement loudspeakers may
incorporate separate drivers to produce low, middle, and high frequency sounds. A crossover
network routes the different frequencies to the appropriate drivers. In the 1960s, horn loaded
theater loudspeakers and PA speakers were almost always "columns" of multiple drivers mounted in
a vertical line within a tall enclosure.

The 1970s to early 1980s was a period of innovation in loudspeaker design with many sound
reinforcement companies designing their own speakers. The basic designs were based on
commonly known designs and the speaker components were commercial speakers. The areas
of innovation were in cabinet design, durability, ease of packing and transport, and ease of
setup. This period also saw the introduction of the hanging or "flying" of main loudspeakers
at large concerts. During the 1980s the large speaker manufacturers started producing standard
products using the innovations of the 1970s. These were mostly smaller two-way systems
with 12", 15" or double 15" woofers and a high frequency driver attached to a high frequency
horn. The 1980s also saw the start of loudspeaker companies focused on the sound
reinforcement market. The 1990s saw the introduction of line arrays, in which long vertical
arrays of loudspeakers with a smaller cabinet are used to increase efficiency and provide even
dispersion and frequency response. This period also saw the introduction of inexpensive
molded plastic speaker enclosures mounted on tripod stands. Many feature built-in power
amplifiers which made them practical for non-professionals to set up and operate
successfully. The sound quality available from these simple 'powered speakers' varies widely
depending on the implementation.

Many sound reinforcement loudspeaker systems incorporate protection circuitry, preventing
damage from excessive power or operator error. Positive temperature coefficient resistors,
specialized current-limiting light bulbs, and circuit-breakers were used alone or in
combination to reduce driver failures. During the same period, the professional sound
reinforcement industry made the Neutrik Speakon NL4 and NL8 connectors the standard
input connectors, replacing 1/4" jacks, XLR connectors, and Cannon multipin connectors
which are all limited to a maximum of 15 amps of current. XLR connectors are still the
standard input connector on active loudspeaker cabinets.

The three different types of transducers are subwoofers, compression drivers, and tweeters.
They all feature the combination of a voicecoil, magnet, cone or diaphragm, and a frame or
structure. Loudspeakers have a power rating (in watts) which indicates their maximum power
capacity, to help users avoid overpowering them. Thanks to the efforts of the Audio
Engineering Society (AES) and the loudspeaker industry group ALMA, power-handling
specifications became more trustworthy, although adoption of the EIA-426-B standard is far
from universal. Around the mid 1990s trapezoidal-shaped enclosures became popular as this
shape allowed many of them to be easily arrayed together.

A number of companies are now making lightweight, portable speaker systems for small
venues that route the low-frequency parts of the music (electric bass, bass drum, etc.) to a
powered subwoofer. Routing the low-frequency energy to a separate amplifier and subwoofer
can substantially improve the bass-response of the system. Also, clarity may be enhanced,
because low-frequency sounds take a great deal of power to amplify; with only a single
amplifier for the entire sound spectrum, the power-hungry low-frequency sounds can take a
disproportionate amount of the sound system's power.

Professional sound reinforcement speaker systems often include dedicated hardware for
"flying" them above the stage area, to provide more even sound coverage and to maximize
sight lines within performance venues.

The number of speaker enclosures used in a performance varies a great deal, but the
following list gives a rough idea of how many cabinets are used in a typical venue:

"Small Vocal" system - Two full range speakers mounted on tripod stands.

"Large Vocal" system - Four full-range speakers for wide-area coverage.

"Small Club" system - Two subwoofers and two mid/high speakers.

"Large Club" system - Four subwoofers and four mid/high speakers.

"Small Stadium" system - Four subwoofers, four mid-bass speakers, and four mid/high speakers

Foldback or Monitor Speakers

An important part of most sound reinforcement systems is the foldback system, also known as
the monitor system. Foldback is a separate mix just for the performers, so they can hear
themselves.

Although the general idea is for the performers to hear the same thing as the audience, in
practice foldback is usually tailored for each performer. For example, the vocalists might hear
a mix with their own voice louder than everything else to help them monitor their own sound.

Foldback Speakers

The most recognisable foldback speaker enclosures are wedge-shaped cabinets often seen at the
front of the stage. These are referred to as stage monitors, floor monitors, or simply wedges.
Wedges are often designed to be placed at different angles or stood on end depending on the
stage setup.

Of course, conventional speaker cabinets can also be used for foldback.

Another popular method for delivering monitor feeds is the wireless headset.

The Foldback Mix

There are several ways to create the foldback mix:

Use the same mix as the front-of-house. This is the easiest method and simply provides a duplicate
feed of the main mix (i.e. what the audience hears).

Use auxiliary channels on the main mixing console to create a foldback mix. This mix will typically be
sent to the stage via a multi-core cable.

Use a separate mixing console for the foldback mix. This console could be located on or near the
stage, often just off-stage.

In-Ear Monitors
In-ear monitors are headphones that have been designed for use as monitors by a live
performer. They are either of a "universal fit" or "custom fit" design. Universal fit in-ear
monitors feature rubber or foam tips that can be inserted into virtually anybody's ear. Custom
fit in-ear monitors are created from an impression of the user's ear that has been made by an
audiologist. In-ear monitors are almost always used in conjunction with a wireless
transmitting system, allowing the performer to freely move about the stage whilst
maintaining their monitor mix.

In-ear monitors offer considerable isolation for the performer using them, meaning that the
monitor engineer can craft a much more accurate and clear mix for the performer. A
downside of this isolation is that the performer cannot hear the crowd or other performers on
stage that do not have microphones. Larger productions remedy this by setting up a pair of
microphones on each side of the stage, facing the audience, which are mixed into the in-ear
monitor sends.

Since their introduction in the mid-1980s, in-ear monitors have grown to be the most popular
monitoring choice for large touring acts. The reduction or elimination of loudspeakers other
than instrument amplifiers on stage has allowed for cleaner and less problematic mixing
situations for both the front of house and monitor engineers. Feedback is easier to manage
and there is less sound reflecting off the back wall of the stage out into the audience, which
affects the clarity of the mix the front of house engineer is attempting to create.

Chapter 27

Music Production Procedures & Techniques



 MIDI Recording Techniques


The widespread use of MIDI sequencing programs with digital audio capabilities has changed the
way most composers and arrangers work. Many musicians are using the best of both worlds by
combining audio tracks with MIDI tracks. Let’s examine some practical recording techniques that can
help you work more efficiently and musically.

Technique #1: Select the appropriate metronome.

The way you play musical parts into the sequencer will be affected by the music you are hearing
during the recording process. Since most sequences are created by playing along with a metronome,
you should try to select a metronome source that is appropriate for the music you are sequencing.
There are several options for metronome sounds. For example, you might use either the side stick
sound or the closed hi-hat sound from the standard GM drum kit if you are transcribing a classical
piece. You might also use the closed hi-hat sound if you are composing a contemporary piece that
is going to be quiet and sparsely orchestrated. Playing along with a heavy drumbeat would make it
difficult to maintain the light feel you would want the song to have. It is usually very helpful to play
along with drumbeats for most contemporary music.

Here are three ways to create drumbeat metronomes for a sequence.

1. Import a drum loop audio file. There are many CDs of drum loops currently available from
companies such as East-West and Big Fish. These drum loops are audio recordings of both
live performances by drummers and electronic patterns created by drum machines and
samplers. To use drum loops as a metronome in a sequence, you use the sequencer’s
“import audio file” command to select the desired loops and to assign them to a specific
track. After you have imported the loops, you can repeat them as needed by either looping
the track or by copying and pasting. (Note: This method may not work for you if your song’s
tempo is much slower or faster than the loop’s original tempo. A substantial change of a
loop’s tempo can sometimes degrade the sound quality of the loop.)

2. Import a Standard MIDI File (SMF) drum loop or track. You can purchase a variety of drum
patterns saved as SMFs. These MIDI files can be played back using the GM drum kits on any
instrument. The MIDI data in these files can easily be edited to create a more customized
drumbeat. You can also create your own drumbeat SMFs by using a program such as Band-
in-a-Box. This program allows you to select drum parts such as intros, basic patterns and fills,
and to assign each part to a specific measure. When you have created the drum part for
your song, you can save the file in the SMF format and then import the file into your
sequencer. If you use this technique, you can enhance the drum track by recording
additional fills and percussion parts on other tracks.

3. Create your own drum part. This method is the most laborious and old-fashioned,
but it often produces the most musical results. I usually create a basic four measure
looped pattern as a metronome for the first few tracks. After I have recorded several
other tracks, I re-record the drum part and create a finished drum set sound by
recording the sounds in the following order:

 track 1- snare/bass
 track 2 – open/closed hi-hat and ride cymbal (one or the other, not both
simultaneously)
 track 3 – snare and tom fills as needed with crash cymbal
 track 4 – percussion as needed (shaker, tambourine, woodblock, etc.)
 track 5 – additional percussion or effects – snare doubles, cymbal rolls, etc.

If any parts are difficult for me to play, I slow down the tempo of the song while recording.

Before leaving this topic, I want to remind you that, in some instances, the best metronome is no
metronome. It sometimes works to record your first track without a metronome and then to use
that track as the conductor for your additional tracks. On one of my pieces, I recorded a piano part
without a metronome and played very spontaneously. Using the piano part as my metronome, I
then recorded a clarinet track and two string tracks. The spacious feeling of time created in this way
would have been almost impossible for me to create by playing along with a metronome and then
editing the tempo track of the sequence.

Technique #2: Create a dummy track. In certain sequencing projects, the ideal first track is one that
contains all of the basic elements of the song. For example, while composing a song I might use
some type of keyboard sound and play a bass part and chordal accompaniment on the first
track. This track would then provide me with a rough idea of the song’s chord changes, bass part
and rhythmic structure. On a second track I would use some type of melody instrument and create a
lead line over my rough dummy track. After I had created the melody, I would then mute the
dummy track and re-record the keyboard part and bass part on separate tracks.

If I were transcribing a classical piece such as “Every Valley” from Handel’s Messiah, I would first
record the tenor solo part using a melodic sound such as an oboe or clarinet. This would help me
create more dynamic accompaniments because I can react and respond to the solo part while I am
recording additional tracks.

Technique #3: If necessary, alter the arrangement to enhance the sequence. This technique is
especially important if you are transcribing a printed score. Some time ago I was asked to create a
sequence of an orchestral accompaniment for a choral piece. The music contained a passage where
the choir sings over a very soft accompaniment. When the choir sang with my sequence, however,
they couldn’t hear the sequence and ultimately lost the beat. The easy solution would have been to
make the sequence louder, but this would have destroyed the desired musical effect. To solve the
problem, I added a soft timpani hit on the downbeat of almost every measure of the quiet
section. The timpani reinforced the pitch of the cellos and basses, but it also acted as a soft
metronome. Everyone could hear it easily because it was below the range of the voices. In
situations such as this, I have found that synchronization problems can often be solved by adding a
soft percussive sound either below or above the range of the voice. (A soft triangle hit also could
have solved the problem in this example.)

You will also have to alter an arrangement if you are creating an accompaniment sequence for a
choral piece that has one or more “a cappella” sections – sections that are sung without
accompaniment. The problem is to provide cues for pitch and rhythm without destroying the a
cappella effect. The solution is to find ways to insert these cues into the sequence as unobtrusively as
possible. Try to use an instrumental sound that is already in the sequence. If the score includes
string parts, try adding soft pizzicato string lines. The pizzicato effect will clearly indicate both the
pitch and the beat. The singers will hear best when they are holding a note and when the part is
above or below the sound of their voices. In some cases, you might have to create a melody or bass
line to use as an anchor. There is no easy answer to this problem. By experimenting with different
sounds and melodic lines, however, you can usually find an aesthetically acceptable solution.

Technique #4: Adjust track volume levels when you have finished recording all of the
parts. It’s usually best to wait until all of your tracks have been recorded before you add
volume curves to tracks. It can be difficult to make a decision about a track’s relative
loudness if you can’t hear all of the tracks. Most sequencers allow you to draw a volume
curve using the mouse. However, you can often produce more musical results by assigning a
slider or knob to either cc#7 (volume) or cc#11 (expression) and overdubbing the volume
changes in real time as you are listening to the sequence.
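
Your sequencer normally records this CC data for you from a hardware slider, but it is simple enough to generate directly. Here is a sketch using the third-party mido library (an assumption; any MIDI library would do, and the file name is hypothetical):

    import mido

    track = mido.MidiTrack()
    # Write a fade on cc#7 (channel volume), one step every 24 ticks:
    for i, value in enumerate(range(100, 40, -4)):
        track.append(mido.Message("control_change", control=7, value=value,
                                  time=0 if i == 0 else 24))
    track.append(mido.MetaMessage("end_of_track", time=0))

    mid = mido.MidiFile()
    mid.tracks.append(track)
    mid.save("volume_fade.mid")  # hypothetical output file name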
Technique #5: Create a tempo track when you have finished recording all
of the parts. Adding small variations in tempo can enhance almost every style
of music. Increasing or decreasing a song's tempo by a few bpm (beats per
minute) can make a dramatic difference in the song's feel. The best way to do
this is to "conduct" your sequence in real time by recording your tempo changes
as you listen to your sequence. There are several ways to do this. Sometimes
you can use an assignable slider to record tempo changes. You might also be
able to use the "+" and "-" keys on your computer keyboard. This is a very
important way to add musicality to a sequence, so it's worth taking the time to
learn how to do this. Once recorded, tempo data can be copied, deleted and
edited in much the same way as other types of MIDI data.

 Music Production Procedures and Principles


Over the next few days Rhythm Creation is going to go way back down to basics by writing some
beginners' guides. There are loads of people out there who want to start creating and producing their
own music and need a little help to get started. The following guides will be of immense help to you.

I am an electronic musician, so you may find at times throughout this guide that it is more aimed
at musicians who are looking to produce electronic-based music, but I shall cover all genres and
styles of producing as best I can. You may also find at times that you disagree with me, as some
of the things I will talk about are my opinion only and there may well be other methods and
different ways of doing things. If you do disagree with me, write it in the comments of the post
and I shall bring your ideas and thoughts up from the comments and into the main articles.

Part 1 - Equipment.

To start producing your own tracks, the first thing you're going to need to think about is the
equipment you're going to need. Each musician is going to have a different set of equipment that
they use; obviously a guitar player is going to have his guitar, amplifier and cables, for example
(which are not on this list). But the equipment list below is what I think is the minimum for
anyone wanting to start to record and write music at home, no matter what instrument you play or
genre of music you are going to be creating. You may find that you already own some of these
items (for example the computer, as you must be using one to read this) and you may be surprised
at how small the list actually is to get started.

Hardware and Software

Music can be produced using both hardware and software. At one time music was produced solely
using hardware but due to advancements in computer speeds, all music production tasks can now be
achieved using software. Hardware is still used a lot in music production (Music producers love their
mixing desks, hardware effects units and especially their hardware synthesizers) but as computer
software can now compete extremely well with hardware and because you are new to music
production I would suggest you stick mainly with software for the moment. You will find that at a
later stage you can advance to using both software and hardware together, and because most software
is actually modelled on its hardware counterparts there will be no need to relearn anything you have
learnt, should you want to go the hardware route at a later date.

Even though we are going to go the software route you are still going to need some hardware
equipment to run the software as well as be able to record both instrument and note information
(MIDI Data - We will go into this later) into the software environment.

Computer

This is obvious, but yes, you're going to need a computer, and obviously you are using one to read
this. But is your computer going to be fast enough to run the music software you're going to be using?

Basically the faster the computer you have the better experience you are going to have producing your
music. Music software can take up a lot of system resources due to its complexity, and whilst it may
be written on the software box or web site that the recommended requirements are low, you may find
that once you start using the software and have lots of different channels all playing together that the
computer just isn’t going to cope with it. A fast processor and lots of RAM are needed to allow your
computer to cope better.

You are also going to need lots of hard drive space as music files and recordings can take up lots of
gigabytes. Hard drives are cheap these days so this shouldn’t be the problem it once was. You will
also want a DVD/CD writer so you can hand out CDs of your produced tracks and also back up your
work.

If you need to buy a new computer you may want to build the computer yourself (It’s not as hard as
you think). I did this for mine, and you will find there are some benefits to doing this as you
can choose the components yourself to create a better computer that’s specifically designed with
music in mind. For example you can find cooling systems without fans, quieter cases, quieter hard
drives and graphics cards without cooling fans on. All aimed at reducing the sound that the computer
makes in your audio environment.

Soundcard
The most important component of your computer if you’re a musician. The main thing to watch out
for is going to be latency which needs to be as low as possible. A high latency will make your
computer unusable for recording music, as everything you record or play in will be behind everything
else in your track. ASIO Support (Audio Stream Input/Output) is a must if you are going to be using
Windows and a nice amount of audio inputs (with low noise).

You may find you can use your current soundcard, and if you do have latency issues you may be able
to solve them by searching the web for better drivers. The KX Project drivers, for example, work
with EMU10K1 and EMU10K2-based sound cards such as the Soundblaster Live and virtually eliminate
latency issues.

Monitors/Speakers
If you are serious about music production you should spend a good amount of money on some
near-field studio monitors, as your tracks will benefit a great deal in sound quality and should
sound great no matter where they are played. Near-field monitors are different to normal
speakers/stereo systems as they are designed to show you exactly what your music really sounds
like without affecting the sound in any way. They are also designed to be listened to from closer
than conventional listening speakers. Active monitors already have a power amplifier; passive
monitors need an external power amp.

If you can't afford a decent set of monitors or don't want to invest just yet, you can still use a decent
normal speaker and amp or stereo system setup. But make sure you turn off any enhancements the
stereo or amplifier applies (usually called something like Bass Enhancement or Rock/Jazz/Dance
settings) and set any EQ (possibly labelled Bass and Treble) to its central position. With a standard
pair of speakers you are going to need to do a lot of listening on different systems, such as car
stereos, headphones and friends' systems, to get mixes that sound good wherever they are played.

Never use those small computer speakers; it won't be worth the time spent trying to get your tracks
sounding right. And use a good cable to connect the soundcard to the system/amplifier.

MIDI Controller

MIDI (Musical Instrument Digital Interface) is a data system used between different
instruments (and your computer) to send note and timing information. No audio recording (sound) is
contained within MIDI; it is only computer bits and bytes (digital data). A MIDI controller is typically
a musical keyboard with various other controls, such as faders and pads, that are used to play notes into
your tracks. If you are only going to be recording traditional instruments such as guitars and drums
with microphones, then you may not need one, but they aren't expensive and owning one will open
many new avenues for your music. If you are going to be producing electronic-based music then a
MIDI controller is an essential piece of kit.
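
To make the "bits and bytes" point concrete, here is a tiny illustrative sketch. The helper function
is my own, not part of any MIDI library; it just builds the three bytes of a Note On message, which
is what a keyboard sends when you press a key:

    # MIDI really is just bytes: a Note On message is three of them.
    # This hypothetical helper builds one by hand to show the structure.

    def note_on(note, velocity, channel=0):
        """Return the raw 3-byte MIDI Note On message."""
        status = 0x90 | (channel & 0x0F)   # 0x90-0x9F = Note On, channels 1-16
        return bytes([status, note & 0x7F, velocity & 0x7F])

    # Middle C (note 60) played fairly hard (velocity 100) on channel 1:
    msg = note_on(60, 100)
    print(msg.hex(" "))   # -> "90 3c 64" : no sound, just three bytes of data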

There are tons of different MIDI controllers available on the market today: some with musical
keyboard layouts, some with pads and faders; there are even guitars and drum kits which act as MIDI
controllers. But I'm specifically talking about a keyboard model with at least two octaves of keys
and some faders to control certain sound aspects as you play. Always make sure the keys have
velocity sensitivity and aftertouch; these features change the sound depending on how hard a note is
struck and held. (They are typically standard features, but check to make sure if you're buying a
cheaper or older second-hand model.)

You are also going to need a MIDI cable, if one doesn't come with your MIDI controller, to connect
it to your soundcard (the input may be called a joystick input on your soundcard).

Microphone(s)
If you plan on recording full drum kits then you are going to need quite a few microphones to get a
studio-produced sound. If you are going to be using drum samples then you shouldn't need as many;
in fact, you may get by with only one. Even if you are producing purely synthesizer- and sample-based
music, I can't stress enough that you should still have a good basic microphone to hand for
recording your own samples.

There are many good microphones on the market, but for the beginner music producer I would advise
getting a Shure SM57 (a bright-sounding vocal and instrument microphone) or a Shure SM58 (go for
this one if you are doing lots of vocals). You can't go wrong with these microphones: they are
classics, built to last, can take very loud sounds and don't need an external power source. They are
also very well priced and will give you very good sound quality for your money.

Recording Software
I shall mention some software packages in this section, but I must point out that every musician is
different: whilst some musicians will swear by a piece of software as an essential piece of kit,
others will find it completely wrong for what they require. The trick is to try out demos of each
piece of software you come across before buying, and to research well to make sure it is right for
the music you want to create. Ask other musicians who create a similar style of music what they use,
read the many reviews on the net, and try not to be sucked in by adverts from the software companies
(that piece of software might not be the perfect solution they want you to believe it is). Also check
whether there are free alternatives that may be sufficient for what you require.

So what I'm going to do in this section of our beginners' guide is point out the different types of
software available, talk about some of the features, and give a few examples of packages of each
type. I am not going to choose the software for you or insist on a particular piece of software for
your music; that is your job. The best piece of advice I can give you when choosing is to pick a
package you think you will enjoy using. If making music becomes a chore rather than fun, you will
either give up completely or your inspiration for the creative process of writing music will suffer.

The Different Types of Software


The following categories are how I would categorize the different types of recording software
available today. Some software could be classed in two of these categories, have features from other
categories, or sit in a subcategory, but these are the top-level categories.

Pre-Recorded Loop Based Mixing Software


This sort of software is the most basic type of music software available and so is great for beginners
and those not looking for anything too difficult to start with. These packages are very cheap to buy
but are also very limited. They basically work with pre-created loops and samples, usually supplied
by the same company within the software or as add-on packs, which you then mix together to create a
track. They are great for young people or those who have no experience with music, but if you want
to create your own sounds or plan on recording instruments and vocals, this category of software is
not for you. This is what I would call music gaming software (yes, some of them are available for
PlayStation).

Examples of this type of software: eJay

Sample/Synth/Loop Based Sequencers


Software in this category is the real fun stuff. These packages have nice, easy-to-use sequencers and
are geared towards creating your own sounds using the included software synthesizers (these can make
a wide variety of sounds, and most can easily compete with hardware synths in sound quality). They
are also heavily sample-based, with extremely good sample manipulation abilities and loads of great
effects that can be placed on your sounds. You will need to collect or make your own samples and
import them into the software, where you'll be able to use a MIDI controller to play, edit and create
some great-sounding pieces of music.

Some have facilities for VST plugins and instruments (extra effects and virtual instruments) and can
also be hooked up to other sequencer software (see below) using a technology called ReWire to create
a full-blown recording studio environment. This is the category you should look at very closely if it
is dance/electronica or sample-based music that you will be creating. If you're looking to record
instruments only, you may want to skip this category completely. These packages have a wide range of
users, as they can be very good for beginners to music production as well as more than capable of
producing professional tracks.

Check each piece of software in this category, as they can vary a lot. Reason, for example, emulates
a hardware environment extremely well with some amazing instruments; FruityLoops is more
loop/sample-based but with VSTi support can be expanded in many different ways; whereas Ableton
Live has been designed with live performance in mind and includes multi-track recording.

Choose carefully from this category and try before you buy to make sure it is right for you. Check
that any included synths can make the sounds you want to produce by listening to examples, to synth
presets in any demos, or to other people's music you know has been created with that software. I must
point out that if you choose to buy software from this category, you may also need a piece of audio
file recording and editing software (see below). This is so you can record your own samples as well
as clean up or edit any samples you get from other places, such as sample web sites.

Examples of this type of software: Reason, FruityLoops, Ableton Live

Recording/Instrument Sequencers with Plugin/Extendable Features


This is the category you should be looking at if your music is going to be more recording-based.
These packages have great recording facilities and emulate a professional recording studio in a
software environment. They can also be extended with plugins such as VST plugins and instruments;
these are effects and instruments (synthesizers, drum machines, etc.) that can be added on, and there
are loads of them available from a wide range of different companies.

These sequencers can usually integrate software from the previous category (Sample/Synth/Loop
Based Sequencers) via a technology called ReWire (a kind of virtual cable between the different
software packages), allowing you to get the best of both worlds. They all offer MIDI support too, and
with VST instruments can achieve the same results as the sample/synth/loop-based sequencers, but the
environments could be considered less fun and user friendly. Plus, you may have to fork out extra
money for the plugins to get the sound you want.

These pieces of software can range drastically in price and features, so make sure you get the right
version, as you will sometimes find there are cheaper "Lite" versions and more expensive "Ultimate"
versions. Think about whether you really need the extra features of the more advanced versions; you
may not need them now but may require them in the future, so investigate all versions of any piece of
software fully.

Examples of this type of software: Cubase, Sonar, Logic

Recording Sequencers with Hardware Interface Options

Software in this category is very similar to the category above in that it emulates a professional
recording studio, but it also offers specially designed hardware interfaces very similar to a classic
mixing desk. These link into the software directly, creating a very hands-on approach. If you go into
professional recording studios today, this is the system you will see set up.

The hardware options can be very expensive in this category and so if you are a beginner I would not
advise that you go for this type of software/hardware.

Examples of this type of software: Pro Tools

Audio Recording Software


Software in this category is usually seen as an addition to the above categories, as it is used to
record and edit samples or individual channels of sound by editing the waveform. These packages come
with effects and processing that can be applied to the sound (although usually not in real time like
the software sequencers above). They can also be used to apply effects and processing to your track
as a whole once you have completed it and exported it from other software (mastering).

Some software in this category can be used like a multi-track recorder, but cannot do nearly as much
as a proper sequencer. If you are just looking to record a couple of tracks, for example some vocals
and a guitar, you may find that a piece of software from this category is all you're looking for.
I've not recommended a specific piece of software in this guide, but I have to here: please give
Audacity a go, as it is free, very capable and may be perfect for your needs if you're exploring this
type of software.

Examples of this type of software: Audacity, Audition, Wavelab

Setting up your Studio


Your Room

For a home music production studio you are more than likely looking to transform an existing room
or area, such as a bedroom or study, into your personal studio. Whilst this is not an ideal solution
for a studio, we have to make the best of the space we have got. If you are lucky enough to have a
choice of rooms available, then your first task is to choose which one you're going to use. Here are
a few things you should think about when choosing a room:

 Your Neighbours - If your studio is going to be placed where your neighbours can hear you, then
you will end up annoying them or being unable to work at much volume. The further
away from any neighbours, the better for both of you. When making your tracks, the
neighbours are going to hear the same track played over and over as you change
things, and this will drive them crazy.
 Unwanted Noise - The room should have as little unwanted noise as possible, especially
if you are going to be recording with microphones. Unwanted noise could be, for example,
traffic, neighbours, pets, heating or air conditioning. Remove the unwanted noise or
choose a room away from it.
 The Shape and Size of The Room - Your room should ideally not be square, as square rooms
reinforce certain resonant frequencies more strongly than rectangular rooms do (see the
sketch after this list). In very small rooms the sound will bounce around more, so choose
the bigger room if you have the choice.
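
To see why the square room is the troublemaker, here is a minimal sketch of the standard axial
room-mode formula, f = n x c / (2 x L). The room sizes are illustrative, and 343 m/s is the speed of
sound in air:

    # Why square rooms are a problem: each room dimension reinforces a
    # series of resonant frequencies (axial room modes) at f = n*c/(2*L).

    def axial_modes(length_m, count=3, c=343.0):
        """First few axial mode frequencies (Hz) for one room dimension."""
        return [round(n * c / (2 * length_m), 1) for n in range(1, count + 1)]

    print(axial_modes(4.0))  # 4 m walls -> [42.9, 85.8, 128.6] Hz
    print(axial_modes(5.0))  # 5 m walls -> [34.3, 68.6, 102.9] Hz
    # In a 4 m x 4 m square room both dimensions pile up on the SAME
    # frequencies, so those bass notes boom; 4 m x 5 m spreads them out.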

If you are going for a more recording-based setup, you might want to think about having a recording
room completely separate from your equipment/mixing room, if this is possible. This means your
microphones will be away from any noise created by the computer, for example; or, if you play in a
band, the musician and instrument being recorded can be away from the rest of you. This is not a
requirement and is not suitable for everyone, and you may also need to buy extension cables and
thread them through your wall so you can plug in your microphones quickly and easily without wires
running all through your house.

The Sound Of Your Room

The sound of your room needs to be good for recording and mixing too. Basically, we want to hear
the sound of your music directly from the speakers, not the sound that has bounced off the walls of
the room. If you have ever removed all the items from a room when decorating, you will know how the
room's sound changes. The fewer furnishings in the room, the more reverberation can be heard. Some
people like a bit of room reverberation on their recordings, but most of the time you won't want any
at all. Reverberation can be added later in your mix via the software (or hardware), which gives us
more control over the final sound. Some reverberation in the room is fine; we just don't want too
much.

You can test what the room sounds like by clapping your hands. If you can hear the resonance ringing
on just after you clap, you may want to add some more soft furnishings, such as curtains, rugs or
cushions, which will all help to soak up these reverberations (dampen the sound).
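
If you want to put a rough number on what the clap test reveals, acousticians use Sabine's formula,
RT60 = 0.161 x V / A. The sketch below uses made-up room dimensions and illustrative absorption
coefficients (real values vary a lot by material), but it shows why soft furnishings shorten the
ring-out so dramatically:

    # Sabine's formula: RT60 = 0.161 * V / A, where V is the room volume
    # (m^3) and A the total absorption (m^2, metric sabins).

    def rt60(volume_m3, surfaces):
        """surfaces: list of (area_m2, absorption_coefficient) pairs."""
        absorption = sum(area * coeff for area, coeff in surfaces)
        return 0.161 * volume_m3 / absorption

    room = 4 * 5 * 2.5  # a 4 m x 5 m bedroom with a 2.5 m ceiling = 50 m^3
    bare = [(85, 0.05)]                  # ~85 m^2 of hard, bare surfaces
    furnished = [(65, 0.05), (20, 0.5)]  # swap 20 m^2 for curtains/rugs
    print(f"bare:      {rt60(room, bare):.2f} s")       # ~1.89 s, very echoey
    print(f"furnished: {rt60(room, furnished):.2f} s")  # ~0.61 s, much drier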

Reverberations tend to be more of a problem at higher frequencies; the problem you will have with
lower frequencies is vibration from objects around your room. To solve this, once all your equipment
is set up, turn it on and turn it up very loud. Set up your MIDI controller to control a very low,
bassy sound, or use anything else capable of creating a big bass sound, such as a bass guitar. Now
go up the notes from the lowest you can hear and listen for objects that vibrate around the room.
You need to locate these objects and remove them completely from the room, or if you can't do that,
stop them from vibrating somehow. These vibrations add unwanted noise to the sound of your room.

Speaker Locations

Your speakers need to be placed well for you to get the most out of them. They should face towards
you at ear level, with some distance between the left and right speakers and some distance between
you and the speakers, creating a triangle shape. The manual that came with your speakers or monitors
may specify optimum distances, and you should use them. If not, I would go for something like 1.5
metres apart from each other and 1 metre away from you; use your ears and set them up however you
feel comfortable.

The speakers should not be placed in the corners of the room, as this will accentuate the bass, and
there should be as few surfaces and objects between you and the speakers as possible, because
reflections (called early reflections) will bounce from the speaker off these surfaces to your ear.

The Rest Of Your Equipment

Your equipment needs to be set up to give you a comfortable and productive environment. Everything
should be within easy reach; you want to place your MIDI controller somewhere you can still see your
screen and play at the same time. Getting up and going across the room to play your parts in is not
what you want to be doing.

If your computer or equipment makes any fan noise, make sure you put it as far from recording
microphones as possible. If you have a uni-directional microphone (one that picks up what is in
front of it and not much from behind), make sure you place the computer behind it so that it is
picked up less. Placing your computer on the floor may also reduce the noise recorded.

Other Stuff To Do In Your New Studio


 Keep Your Studio Tidy - A tidy room will not only make you feel better, it will make your
music better, because you will enjoy being in the room and you will be able to find
things when you need them.
 Reduce Hum - Other electrical items can create hum in your equipment and cables, such as
microphone or guitar cables, so remove them from the room. Dimmer light switches are
particularly bad for this.
 Remove Phones and Distractions - Especially if you're going to be recording a lot. You might be
coming to the end of your greatest take ever when suddenly someone rings you.
 Get a Comfortable Chair - One that doesn't squeak :-). If you are comfortable you will spend
more quality time on your music, and your music will benefit from it.

What to Look for in a Professional Recording Studio

When booking a studio that is set up to record basic tracks, most of the recording resources
are readily available and good to go. However, don't assume this means that everything will go
perfectly as planned. When booking a track session, most studios will provide a list of
microphones and a floor plan showing the layout of the recording space, control room and
isolation booths. Make sure you are able to visit the space before recording and take
note of some very specific issues.

How big is the live room?

Do you notice any weird tonal changes to your voice when you talk?

Can you hit a snare or kick drum to get a sense of the tone of the room?

How many isolation booths are there?

Are there good sight lines between the musicians from the booths to the live room and
control room?

Does the studio stock any amps for guitar and bass?

Are they in good working condition?

Are all the microphones on the list included in the studio booking?

Are they shared with another studio in the facility?

If so, are they all available for the time you are booking?

Is setup and breakdown time included in the price of the booking?

Is an engineer included in the price?

Will a full time assistant engineer be available?

What happens if you go over time?

Can the gear be loaded into the studio the night before the session?

These are some of the basic questions that must be answered before booking a studio to
record basic tracks. Be very clear about what you get for the rate you are paying and what
will cost you extra. If you need to bring in additional resources from outside the studio, such
as your own personal microphones or amps, make sure they are clearly labeled so as not to be
confused with the studio's gear.

Is the Studio Designed to Get the Sound You Are Looking For?

For the novice recording artist or producer, the most difficult thing to judge when booking a
studio is the sound quality of the recording space and control room monitors. If you know
people who have used the studio for basic tracks, ask them what their experience was and
what to look out for. If you are using the studio's engineer, get a demo reel from them that
shows examples of tracking sessions done at the studio. Get a CD and listen at home if you
can; most studios have better monitors than you have at home, and you can easily be fooled.

A big recording space is not necessarily a good one for basic tracks, and it is important to
notice any strange tonal qualities in the recording space. If your voice suddenly sounds hollow
or makes the live room resonate excessively when talking to the studio personnel, this is a
bad sign. It means the recording space may have modal problems (specific frequencies at which
the room resonates). A good recording space should make your voice sound vibrant and
alive, not hollow. Walk around the space as you talk to get a sense of the acoustics of the
whole room.

Consulting professionals

If all of this information completely terrifies you, then you may need to hire a professional
engineer to help you find a suitable recording space. A professional engineer with recording
experience will quickly notice problems in a recording space and give you advice that will
save you hours of your time, loads of money and a lot of headaches.

The greatest asset an artist or producer can have in a recording situation is to consult a
professional. Never seek the advice of a studio owner or manager to determine what will
work best for your project. The reason is simple: in a very competitive market, they are
focussed on getting you to record in their facility. Once you are in, you are left to make
the basic tracks work with the resources that are available.

A professional engineer will guide you with information about what to look for when
booking a studio for the tracking session. If you are looking for a big drum sound like Led
Zeppelin, you will not get it in a small space no matter how reverberant the space may be. It
is important that you understand the parameters of what you are looking for when booking a
studio. Take an engineer out to lunch and pick their brain, get suggestions for studios that fit
the sound you are going for that work within your budget. Consider their advice and opinions
carefully. Remember, the difference between a good production and a great one is a lot of
subtle decisions that add up over the course of a production.

In the Studio

Because tracking sessions require a larger recording space and a lot of resources, most home
setups cannot effectively accommodate a full basic tracking session without compromise.
Aside from a suitable recording space that is free of environmental noise, one has to consider
acquiring extra microphones, cables, stands, headphones, preamps and more inputs to get into
the recording device. If your home recording setup cannot meet these basic needs you will
have to consider combining your resources with friends or renting the necessary equipment
from a local dealer to record your basic tracks.

Recording in a home studio environment presents many challenges that are not typically encountered
in the commercial recording environment. Most homes are built with rectangular rooms, and parallel
surfaces in a recording space create many problems, including flutter echo, standing waves and room
modes. It is important to place instruments carefully to avoid or minimize the effect of room modes
and standing waves.

The Home Tracking Session

Making it Work

There is no question that recording basic tracks in a home studio environment is vastly more
compromised and difficult than recording in a commercial facility. That does not mean,
however, that the results cannot be as good or even better. Even the best-designed
commercial recording facilities present their own challenges, and getting the sound you are
looking for will always require some careful planning.

The home environment also requires some ingenuity. It is more like MacGyver, however, than
CSI with all the latest gadgetry. Either way, you can accomplish your goal and get
great results for your basic tracks, with style points being the primary difference. To help you
along your path to a better home tracking session, here are some keys to success.

1. Carefully place the instruments:

In smaller recording spaces (anything smaller than 20 feet by 20 feet is a small space), I find
the best way to guide your drum sound is with the kick drum. In small recording spaces, the
kick is most obviously affected by room resonances, which are very pronounced in rooms
smaller than 20 by 20 feet. (More on this in Acoustics.) Move the kick drum around, facing the
center of the room, to every place you can put it, with one thing in mind: the rest of the drum
kit and the drummer must be able to fit where you set it up.

Hit the kick drum until you find a place where the drum tone is strongest without being
muddy or over-resonant. Basically, you are finding the placement that best resonates the drum
shell when struck. Set up the rest of the kit around this placement. The same approach applies
to all other instruments. When getting sounds for your basic tracks, always let your ear be the
judge of whether something is good or not. Even if it looks 'wrong', go with the sound
before aesthetics. Sometimes interesting sounds can be achieved 'accidentally' by allowing
your ears rather than your eyes to be the judge. Unless you are taking photos for the CD
artwork, no one will ever know or care how you got there.

Follow the same process when placing bass and guitar amps. If an amp sounds muddy, put it
on a chair or table to minimize the effect of the floor resonating with the amp. If the amp sounds
too thin, it can be moved closer to a wall or corner, where the early reflections will add extra
low frequencies to the tone.

2. Use of Gobos:

Gobos (short for go-betweens) are free-standing absorptive or reflective barriers that are
placed between instruments. They can be used to keep the reverb or resonances of the room
from overly affecting the sound going into the close mikes. Generally, it is a good idea to
create a semicircular wall around the drum kit from the back side. Never obstruct the sound of
the drum kit with gobos in front of the kit.

The idea here is to minimize early reflections back into the close mikes, which can negatively
color or flatten out the sound of the drums. If any part of your drum kit is closer than 10 feet
to any surface (other than the floor, of course!), you will have problems with early
reflections. These early reflections create a comb-filtering effect that makes the instrument
sound indistinct, muddy, thin or undefined. In a home recording situation you can use
mattresses, couch cushions or even suspended packing blankets to help minimize these
negative effects.
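
To put the comb-filtering effect in numbers: a reflection that arrives delayed by t seconds cancels
the direct sound at odd multiples of 1/(2t). The sketch below is illustrative only; it assumes sound
travels at about 343 m/s and, for simplicity, that the reflected path is a fixed distance longer than
the direct one:

    def comb_notches(extra_path_m, count=4, c=343.0):
        """First few notch frequencies (Hz) for a given extra path length."""
        delay = extra_path_m / c
        return [round((2 * k + 1) / (2 * delay)) for k in range(count)]

    # Snare near a wall, reflected path ~2 m longer than the direct path:
    print(comb_notches(2.0))   # [86, 257, 429, 600] Hz - right in the body
    # A 6 m extra path pushes the first notch lower and packs the
    # notches more closely together:
    print(comb_notches(6.0))   # [29, 86, 143, 200] Hz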

If you are recording multiple instruments in the same room for your basic tracks, using gobos
between the instruments will help to isolate the bleed from one instrument to the next and
give you more control of individual sounds in the mix.

3. Miking Techniques:

Whether you want to use three mikes or thirty for your basic tracks, you can get great
sounds by understanding what is most important to listen for. To me, the most important
mikes for a drum kit are the overheads, followed by the kick mic and finally the snare mic.
Everything else is filling in whatever is missing. If these basic mikes are not set up well,
everything else will typically create more chaos.

The overhead mikes should capture the essence of the drum sound for your basic tracks. It is
the only true stereo perspective you have of the kit. In addition to capturing the sounds of the
cymbals, they also capture the sound of the snare and kick in the room. Play with different
mic positions including X/Y, Spaced Pair and ORTF. See what works best for your recording
space. Sometimes, setting up mikes above or behind the drummer will give you a better
perspective of the kit as the drummer would hear it.

In a way, the more mikes you use, the more difficult it will be to get a good sound for your
basic tracks. Close mikes are there to add detail to the overhead mikes, but because they also
pick up every other part of the drum kit, you will get loads of off-axis phase issues. These
have a tendency to whittle down the fullness of individual elements of the kit. If you focus on
a great overhead sound, and then on the kick and snare respectively, everything else will need
only a minimum of effort, if any.

Don't forget the phase reverse switch. Most USB and Firewire interfaces do not have a phase
reverse switch built into the unit. This is a travesty if you are using more than one mic. Make
sure you have some phase reverse XLR turnarounds on hand at all times. Because signals
travel in waves over time and space, the signal reaching your overhead mic from the snare
drum is most often at the compression cycle (in phase) when the close mic is receiving the
rarefaction cycle (reverse phase) of the waveform. What the hell does this mean?

Basically, one mic is trying to push a lot of compressed air through your speaker at the same
time another mic is trying to create a vacuum of air particles. In other words, one mic is
trying to push the speaker outward while the other is trying to pull the speaker inward. This
results in a cancellation that leaves the snare most often sounding hollow and lifeless. Be very
aware of this as you go through your drum sounds. Be sure to check the phase of all mikes to
the kick and snare until the fullest sound is achieved.
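
You can see this push-pull cancellation directly in numbers. The sketch below is a pure-Python
illustration (no audio library involved): summing a signal with an in-phase copy doubles it, while
summing it with a polarity-reversed copy silences it.

    import math

    def sine(freq, sample_rate=44100, n=100, invert=False):
        sign = -1.0 if invert else 1.0
        return [sign * math.sin(2 * math.pi * freq * i / sample_rate)
                for i in range(n)]

    direct = sine(200)                  # snare fundamental at the close mic
    in_phase = [a + b for a, b in zip(direct, sine(200))]
    reversed_ = [a + b for a, b in zip(direct, sine(200, invert=True))]

    print(max(abs(x) for x in in_phase))   # ~2.0 -> full, reinforced sound
    print(max(abs(x) for x in reversed_))  # 0.0  -> hollow, cancelled sound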

4. Getting Sounds and Adjusting Levels:

Be sure to leave plenty of headroom when setting levels for your basic tracks. Many USB
interfaces clip well before the digital dBFS clip light goes on in the recording application.
There are many reasons for this, mostly inexpensive components and inadequate
power supplies. Remember, the performer will always play louder during the recorded
performance than when getting sounds. When getting sounds, set your levels at least 3 to 6 dB
lower than where you want them to end up.
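
Here is what that headroom advice means in amplitude terms, as a quick sketch of the standard
level formula, level in dBFS = 20 x log10(peak / full scale):

    import math

    def dbfs(peak, full_scale=1.0):
        return 20 * math.log10(peak / full_scale)

    def peak_for(dbfs_target, full_scale=1.0):
        return full_scale * 10 ** (dbfs_target / 20)

    print(f"-3 dBFS -> peaks at {peak_for(-3):.2f} of full scale")  # ~0.71
    print(f"-6 dBFS -> peaks at {peak_for(-6):.2f} of full scale")  # ~0.50
    # If the drummer then hits 6 dB harder during the take, a -6 dBFS
    # soundcheck level still leaves the converter just at full scale.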

Never attempt to set sounds for any song unless the musician is playing the exact part at the
correct tempo. This is the most common oversight I see with novice engineers recording
basic tracks. If the drummer is playing at 125 bpm, but the song you are going to record is at
90 bpm, you will tighten up the sounds too much and the drum sound won't breathe properly.
If you reverse the situation, your sounds will be too loose and open and when performing at
the faster tempo the sounds will become muddy and indistinct.

No one sound will work for every song; you will need to make adjustments for each track.
Try to record songs that are similar in tempo and vibe together, so your adjustments
from song to song are minimized. Adjust the acoustics of the drum room to compensate
for the tempo: the faster the tempo, the deader the room needs to be; the slower the
tempo, the more reverberant the room should be. Keep as many packing blankets, rugs,
pillows and mattresses at the ready as you can to make these changes.
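
The tempo rule is simple arithmetic: the gap between quarter-note hits is just 60/bpm seconds, so a
room tail that fits comfortably at a slow tempo smears into the next hit at a fast one. A tiny
illustrative sketch, using the two tempos mentioned earlier:

    def beat_gap_ms(bpm):
        """Time between quarter-note hits, in milliseconds."""
        return 60.0 / bpm * 1000

    for bpm in (90, 125):
        print(f"{bpm} bpm -> {beat_gap_ms(bpm):.0f} ms between beats")
    # 90 bpm  -> 667 ms : room tails have time to decay between hits
    # 125 bpm -> 480 ms : the same tail now smears into the next hit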

These are just a few tips to get you pointed in the right direction with your basic tracks.
As always, use your ears, not your head, when making decisions. Your head will talk you into
horrible decisions; your ears will tell you what is right or wrong. Don't be afraid to tear
everything down and start over if it just isn't working. Even the best-laid plans, designed by
professionals with years of recording experience, can yield horrible results. If everything you
try is just not working, start over with a completely different approach. What sense does
it make to waste hours trying to fit a round peg into a square hole? Aside from being
completely liberating, you will learn a ton of new ways to record!

5. Communication and Headphone Mixes:

There are very few things that can mess up a great recording setup more than bad headphone
mixes and a lack of good communication. It is worth the extra time to get a headphone mix
that works for everyone when laying the basic tracks. If that is not possible, then create two
or more, even if they have to be mono mixes. Make sure that everybody can hear themselves
as well as everybody else.

Talkback mikes are a must in the studio to allow free communication between takes. These
mikes do not have to be recorded and can be shared by musicians if necessary. It may be
necessary to run them through a small mixer so they are added to the headphone mixes. The
engineer is usually responsible for opening up the talkback between takes, but using
inexpensive mikes that have an on/off switch can sometimes be a more convenient solution.

6. Miscellaneous Thoughts:

Remember that your basic tracking session is meant to lay the foundation for the rest of
the recording. The most common issue with basic tracks is that musicians tend to overplay
because they are not hearing the whole production; they will naturally try to fill holes in the
song that are meant to be filled later by other instruments. If this is the case and you have a
demo with all of the planned parts recorded, use it as a reference in the studio and point out
that it is important to stick with the plan and not overplay your part in the production.

Make sure that everybody is comfortable with their headphone mixes and have everything
they need at the ready before you start recording. You want your musicians to be 100%
focussed on their performance, not on the fact that they can't hear themselves in the
headphones. Keep a close eye on this and ask regularly if there is anything anybody needs.

Finally, a little advice on using a click track for your basic tracks. Tread carefully when
asking musicians to play to click tracks. If the drummer practices regularly with a metronome
then this should not be an issue. If not, then use the click to introduce the desired tempo and
let the rest happen. You can easily edit a performance back to a click and still preserve the
feel if done carefully. You will never be able to make a lifeless performance sound great once
bludgeoned by a click track. If you find the drummer struggling to maintain consistency with
the click then take careful notice of how it affects the song. Maybe the song needs a faster
tempo to get the desired feel. It can be much harder to fight the natural tendencies of a
musician than to just go with it, get the feel you want, and deal with the rest later.

 The Music Production Process


Overdubbing is the next stage of the music production process, and a well-thought-out approach is
absolutely necessary when taking it on. The importance of the demo looms large in this step.
If you have already sorted out the majority of your ideas and the individual parts, the overdubs
will be primarily focused on capturing the sounds and performances that fill out the production.
Ignoring the demo stage in the music production process can easily turn your studio production
into a high-priced demo. A very common problem today...

What is Overdubbing?

Overdubbing, sometimes called "sweetening", is a process that allows performances to be
recorded synchronously with pre-recorded material. Imagine recording your band where each
instrument has a dedicated track or series of tracks. If each performer is isolated acoustically
from the others, they can be rerecorded at will without affecting the other musicians'
performances.

The benefits of overdubbing are tremendous. It means that a single bad musician in a band
will not ruin the whole recording, because their part can be replaced. In the days of mono and
early stereo recording, everybody was in the same room and recorded together. The inability
of the singer to perform well might mean that the band would have to play the song over and
over again till the vocalist got their performance right.

In the professional recording world this was the music production process until the invention
of Sel/Sync recording in the 60's. Sel/Sync stands for Selective Synchronization. A multitrack
recorder with Sel/Sync capabilities would allow additional tracks to be recorded
synchronously with the original performance on the same tape machine. Later, those
performances would be mixed into mono or stereo for the commercial release.

The invention of isolation booths in recording studios soon followed, allowing individual
musicians to be recorded with a minimum of bleed into the mikes of the other instruments. If
one person's performance was lacking, it could easily be rerecorded without affecting the
other musicians' performances. It also allowed more flexibility with processing during the
mixdown session.

Over the years, the number of tracks available to record on steadily increased allowing music
productions to get larger and more sophisticated. Overdubbing became the norm for almost
all music productions. Although some feel this has degraded the quality of music, very few
artists record without overdubbing.

So What are the Benefits?

The benefit of overdubbing is that it allows each individual part to be focussed on and
perfected to the artist's and producer's taste. This requires a lot of discipline and can sometimes
lead to performances that are technically perfect yet sterile and lifeless. It's not natural for
musicians to perform individually; this is why a tracking session requires the whole band to
perform together. The drummer needs something to respond to in order for his or her
performance to sound "live" and not programmed.

Overdubbing is a very difficult thing to get right. Because of the lack of visual cues that
would normally lead a performance from one section of a song to the next, the musician has
to record their part blind against the prerecorded band. The subtle pushes and pulls in a
performance that may be conducted by visual cues from the other musicians now
disappear. The overdubbing performer is left to guess, or to adapt their performance to
match what was captured in the tracking session.

This has naturally led most multitrack productions to use click tracks, which even out
the tempo of the tracking performance. With a click track, the overdubbing process is less of
a guessing game and more of a known quantity. Because of the difficulties overdubbing
presented in the recording studio, musicians who were good at it became hired guns to
quicken the production process. Many musicians have made very successful careers working
only on other artists' recordings in the studio.

Getting Into the Process

Multitrack recording is far more sophisticated than it may appear on the surface. If a song is
not thought out well enough in the demo stage, the music production can easily turn into a big
mess of overdubs in an attempt to find a magical part. This is the mud against the wall
approach. The engineer is then left to sort out all of this junk in an attempt to make it sound
professional.

In a professional music production, the overdubbing process must be very directed. If it is,
there will always be room for experimentation with the overdubs when called for. Many
music productions come to life in the overdubbing stage where key hooks in the song can be
created and developed. If the overdubs are created upon a foundation of quality work from
the tracking session, then a song can really take shape quickly. If not, the overdub stage is
relegated to a rescue mission in an attempt to save the song. Every part must be layered on
with a measured goal or what you will be left with, at best, is a good sounding demo instead
of a quality recording.

There are many stumbling blocks in the overdubbing process. Here is a list of the most
common ones encountered:

1. Easy to make everything too perfect. Performances can lack a vitality and freshness.
2. Layering too many parts usually makes everything sound smaller, not bigger and creates a
lot of extra work.
3. You can wear out a performer by having them repeat their performances too often.
4. Easy to lose perspective on the whole production. (forest through the trees syndrome)
5. Quality of sounds can become more important than the performance.
6. Easy to over complicate the process in an attempt to make a part sound unique.
7. The production can easily take on a "paint by numbers" feel.
8. Easy to accept average performances thinking they can be fixed with editing.

Where Has All The Time Gone

In a standard 10-15 song CD, overdubbing is the stage where the most time is spent in the
music production process. In a typical production that lasts about 3 months, 7-10 days are for
tracking, 10-14 days are for mixing, and everything in between is overdubbing. That's more
than 2/3 of the production time! You can see why it is critical to be well prepared for this
stage.

Today, many productions are taken on one song at a time. This is particularly true in modern
Hip Hop and R&B production, where most of the work other than the vocals is
programmed. The benefit of this style of production is that the resources available to you are
virtually limitless; you are not necessarily limited to what the performer can give you in
terms of acoustic sounds. In many ways, the one-song-at-a-time approach is much better. Each
song can be addressed, focussed on and finished individually without distraction.

Unfortunately, this is a less efficient approach when recording bands because it may take you
a whole day just to get the sounds right and ready to record. To go through this for each song
is impractical and expensive. It makes more sense to record all of the basic tracks for each
song at once, making adjustments to the sounds for each new song as required.

The issue at the overdubbing stage is that it is also more efficient to rerecord all the bass parts
together, all the guitar parts together, all the keyboard parts together, and so on. To keep 10-15
songs fresh in your head and really home in on the message and feeling of each can be a
difficult task for the producer. If you cannot change gears from one track to the next quickly,
the process can easily turn into a factory-mill production. The end result can be that no song
stands out as unique against any of the others. We've all heard records where every
song sounds the same, and the CD is just a complete blur.

A Better Approach

What's best is not always what's practical. Most budgets today do not allow for a song by
song production style. This is why the Demo stage is so important. If each song has a
properly recorded demo it will be much easier to organize your recording time to get what is
essential for each song. The demo serves as a reminder of the essence of each song and how
it should be presented.

In the demo stage you are forced to focus your attention on one song at a time. The idea is
that you are attempting to dig out the core essence of the song. The process forces you to find
out what makes the song tick. The resultant parts may not have the sound quality of the
professional recording session, but they carry something much more valuable. The feeling,
the vibe, and the truth of what the song is about.

So What's the Upshot?

If the music production process is carefully thought out and worked through step by step,
then the Overdub process is all about capturing the best performance. If the parts are already
sorted out in the Demo stage, you will not be wasting time in the studio trying to create them.
If worked out ahead of time, there will be more time left for experimentation if the inspiration
arises.

One focus of the overdub stage is to create great sounds that make each performance really
come alive. It is much more difficult to create a unique sound for each instrument in a
tracking session, because amps and instruments are often forced into small booths, soundlocks
and closets. Creating a big sound in a small space can be very challenging. Add the fact
that the engineer is trying to record several musicians, sort through headphone mixes and
talkback mikes, and get great drum sounds all at once. Not an easy feat, even for a seasoned
professional.

The Overdubbing Process

Preparation is always key in any recording situation. This is easier to consider with large
setups, but is equally important in the overdubbing stage. Preparation does not guarantee that
everything will go perfectly as planned, but it does allow you to adapt more quickly to the
unexpected events that do occur.

It's very easy to overestimate what you can accomplish on any given day of recording. No
matter how well you plan the recording date, there are always things that are out of your
control. If the vocalist comes into the studio with a cold that day, you might find yourself
with a lot of spare time. If they are having problems hitting a certain note, you may walk
away with vocals on one song instead of two.

Here are a few helpful hints to help make your overdubbing sessions go a bit more
smoothly:

1. Know exactly what you are working on that day.
2. Have all the resources you need available.
3. Make sure all parts have been rehearsed and that all performance issues have been sorted
out.
4. The overdubbing musician should play their part at the tempo of the song when getting
sounds.
5. Set up a comfortable space for the musician to work in.
6. Always have a plan B, in case everything you planned goes wrong.
7. Never rush through performances in an attempt to complete your goal for the day.
8. Never settle for average performances that will need to be fixed with editing.
9. When you capture the essence of a part, record it everywhere it needs to be without delay.
10. Take regular breaks, especially if there is frustration and confusion in the studio.
11. Always communicate with the musician immediately after a take, even if it's to tell them you
are not sure or need to listen to it again.

A Special Note on Vocals

Recording vocals is perhaps the trickiest of all the overdubbing processes you will undertake.
Because the vocal is typically the primary focal point of a music production, there will often
be added pressure on the quality of the performance. Depending on the personality type of
the artist, this can go easily or be a complete nightmare. Some artists will rise to the pressure;
some will collapse under it. How you manage these situations can and will make or break the
project.

There are two sides to the vocal recording process that help yield the best results: the
technical side and the emotional side. The technical side is easy for the most part, but it
is easy to overlook some subtle details that may affect the quality of the performance. Here
are a few technical setup tips:

1. Create a comfortable recording area for the vocalist.
2. Keep all cables and equipment neat and as out of the way as possible.
3. Make sure the vocalist has all their needs readily available. Music stand, pencil, lamp, hot
tea and lemon or honey, etc…
4. Position the mic as unobtrusively as possible. Make sure it does not interfere with the lyric
sheet.
5. Mark a clear, comfortable distance from the mic for the performer. Using a pop screen is an
easy way to accomplish this.
6. Make a great headphone mix. They must feed off the energy of the music and must hear
themselves clearly.
7. Make sure they are comfortable, ask often if they need anything.
8. Always communicate IMMEDIATELY after a take. NEVER leave the artist in the room
wondering what is going on.

The Psychological Side

The most unpredictable aspect of recording vocals is what it will bring up emotionally for
the artist in the recording studio. I have seen everything from downright panic, convulsions,
vomiting and total self-destruction to complete one-take performances that blow you away.
People will always show their true colors when under pressure. This is why it is critically
important to create a bond of trust when working with the artist through the production
process.

If you want to produce music for a living, the best advice I can give you is to study human
psychology. It's not your job, as a producer, to be the therapist. It's your job to channel the
artist's personality and issues into quality performances. Sometimes that means being a hard
ass; sometimes that means giving them a shoulder to bawl their eyes out on. The goal here is
to get great performances. How you respond to these situations will do more to make or break
a production than you could possibly imagine.

Here are a few tips to help set the stage for quality performances:

1. Create a comfortable recording environment for the artist.
2. Be sensitive to the artist's needs and look for any signs of discomfort. Address them
immediately.
3. Talk about the song before recording. Bring them into the feeling that inspired the song in
the first place.
4. Never overwork a vocalist. If they are not feeling it, move on to something else or take a
break to refocus their energy.
5. Allow the artist to express their frustrations between takes. These are usually blocks that
inhibit better performances, acknowledge them, talk about it, and get them out of the way.
6. Always respect the artist the same way you want to be respected.
7. Don't dwell too much on technical issues like pitch and timing. These problems are usually due
to trying too hard or an inability to feel the music through the headphones.
8. If all else fails, do something radically different. Have them perform with a handheld mic in
front of speakers if necessary.
9. The performance is always primary over the sound quality.

Sometimes, it's the unorthodox approach that works best in the studio. We all want to get the
highest quality vocal sound in our performances. If that quality comes at the price of a
compromised performance, then it is all for naught. NOBODY buys records because of the
quality of the mic used in a recording. They buy a record because the performance on that
recording speaks to them. It's an intangible quality that cannot be defined in terms of
frequencies and dynamic range. You just know it when you hear it.

 Editing the Recorded Work


Through every stage of the recording process, it is imperative that editing work be
addressed as soon after recording as possible. Editing left undone will compromise any
future overdubbing and, ultimately, the song. The editing process is fraught with critical
decisions: over-editing can lead to cold, lifeless performances, while under-editing can leave
your song sounding unfocussed and sloppy. In this article I want to address the editing
process and how to make the best decisions for your music productions.

What Does it Mean to Edit Music?

Before we take a closer look at the editing process, let's start by defining what editing is.
Computer technology and software development have blown the doors open on what is
possible when it comes to editing music. Today, we are doing things with audio that
were inconceivable a mere 20 years ago. To put this all in perspective, let's start with a little
history…

Analog Tape Editing (The Analog Era)

The concept of editing was not even an option to the audio engineer until the 50's when
analog tape entered the recording industry. From 1908 until the mid 50's, all recordings were
literally cut directly to lacquer. A lacquer is a softer version of the vinyl disc. A lacquer was
used to record a performance and later to create the stampers that physically pressed vinyl
discs for commercial release. A lacquer disc was good for one recording. No editing! At this
point all recordings were mono, and musicians performed in the same recording space
together. No room for mistakes.

Analog tape ushered in the world of editing music. If the first half of one take was great and
the second half of another take was great, the two performances could be spliced together
with a razor blade and some splicing tape. These simple rough edits changed the recording
process, because difficult to perform sections of a song could be recorded over and over until
a suitable take was achieved. That take could then be edited into the rest of the song. The
Beatles were famous for this type of editing work in the studio under the brilliant guidance of
Sir George Martin.

As analog tape recording technology developed, the ability to punch in on a performance
would also redefine the way performances were recorded on multitrack tape machines. If a
vocalist had difficulty singing a particular lyric or melody, the line could be rerecorded over
and over again on the same track until the desired result was achieved. By the 70's, this was a
standard production procedure.

As track counts increased, it was common to record many vocal performances of the same
song on different tracks and selectively choose the best performances, section by section, line
by line, word by word, and syllable by syllable. Using a process called bouncing, the best of
the best could be recorded onto another track and serve as the "compiled" master take, called
a "comp" for short.

Sampling

In the 80's, sampling started to take over as the preferred method for editing music. If a
performance in the first chorus of a song was better than in the subsequent choruses, the part
could easily be sampled, or recorded to another tape machine, and "flown in" to the other
choruses. This greatly simplified the recording process for background vocals that were
difficult to perform and required many tracks to capture. Rather than having the vocalists
record every section of the song with the same part, it was much easier to record it well once,
and then "fly" it to the other sections of the song where it was needed.

Digital Editing

In the late 80's and early 90's, digital recording technology forever changed the quality and
detail of editing music. Once a sample was loaded, it could also be adapted in terms of pitch
and timing. Although many of the tools were crude by today's standards and offered very
little in terms of visual editing, they were quite effective if the editor had good ears.
Digital processing eliminated many of the physical and technical issues associated with
analog processing technology.

Non Destructive Editing

Enter computers… Once professional audio recording with personal computers entered the
recording studio for real in the mid 90's, the world of editing music non-destructively was
born. The biggest issue with all tape-based recording was that it was destructive: once you
hit record, there was no undo button to get you back where you were. I often blame the lack
of hair on my head on the destructive recording I did throughout the 80's and 90's!

The ability to save and store a virtual infinite number of performances, takes, and overdubs
allowed them to be edited in a way never possible. Multiple takes could easily be copied,
pasted and moved around at will without ever affecting the original recorded performance.
Each kick and snare hit of a drum performance could be perfectly matched up to a click if
desired. The delicate timing of a guitar solo could be moved around with incredible accuracy
until the perfect feeling was achieved.
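
To make the idea concrete, here is a minimal Python sketch, with all names and numbers illustrative, of how a non-destructive editor can render a timeline from an "edit decision list" that only points back into the source audio:

    # The original audio is never modified; edits are just references.
    import numpy as np

    source = np.random.randn(48000 * 10)  # stand-in for 10 s of recorded audio

    # Each region: (start sample in source, length, destination sample)
    edit_list = [
        (0,     48000, 0),      # first second of the take at 0:00
        (96000, 48000, 48000),  # reuse a later second at 0:01
        (0,     48000, 96000),  # copy the first second again at 0:02
    ]

    def render(source, edit_list, length):
        """Build the edited timeline without ever touching the source."""
        out = np.zeros(length)
        for src_start, n, dst_start in edit_list:
            out[dst_start:dst_start + n] = source[src_start:src_start + n]
        return out

    timeline = render(source, edit_list, 48000 * 3)

Undoing an edit is then just removing an entry from the list, which is exactly why the undo button became possible.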

Pitch Processing

Throughout most of the history of professional recording, pitch and timing were always
subject to the ability of the performer. If you listen closely to many of the great artists of the
50's, 60's and 70's, you may be horrified to find how "off" pitch many of the vocal
performances were by comparison to today's standards.

Singing perfectly in pitch is not the deciding factor in the quality of an artist. The era of
multitrack recording, overdubbing and editing music led many artists down the trail of
trying to "perfect" their performances. This led to torturous sessions where parts were
sometimes recorded over and over hundreds of times. Although sampling technology allowed
some of this to be less work for the artist, the time consuming and tedious work just shifted to
the producer and engineer.

In 1997, a processor created by Antares, called Auto-Tune, forever changed the way people
recorded vocals. The difficulties of recording in the studio and singing in perfect pitch while
monitoring through headphones were alleviated. The producer could focus more on the
performance and attitude rather than the pitch being perfect. Once the best performance was
achieved, the pitch could be corrected to taste with a minimum of effort.
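
Auto-Tune itself is proprietary, but the core idea can be sketched. The Python fragment below, which assumes the librosa library and a hypothetical vocal_note.wav file, estimates the pitch of a sung note and nudges it to the nearest equal-tempered semitone:

    import numpy as np
    import librosa

    y, sr = librosa.load("vocal_note.wav", sr=None)  # hypothetical recording

    # Estimate the fundamental frequency of the sung note
    f0, voiced, _ = librosa.pyin(y, fmin=80, fmax=800, sr=sr)
    midi = librosa.hz_to_midi(np.nanmean(f0))

    # Distance, in semitones, from the nearest note of the chromatic scale
    correction = np.round(midi) - midi

    # Shift the whole note by that fraction of a semitone
    tuned = librosa.effects.pitch_shift(y, sr=sr, n_steps=float(correction))

Real pitch correctors work phrase by phrase and in real time, but the principle is the same: measure, compare to the scale, shift.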

Editing Music Defined

Although I have avoided it to this point, editing music must be defined as any process that
alters the original performance. This includes, but is not limited to, splicing, punching, flying,
comping, sampling, pitch correction, stretching and compressing, cut-copy-pasting, and any
other method used to alter the tempo, timing and pitch of a performance.

To Edit or Not to Edit, That is the Question…

The process of editing music has raised ethical questions in the minds of many artists and
consumers. Many feel that if you cannot actually perform your song in a live setting with the
same quality of performance as the recording, then you are merely a product of technology
and not a true artist. It's important to note that the vast majority of these artists do not record
live to stereo. They all use some form of editing technology. It's all a matter of where you
draw the line…

Many artists today use editing technology to create art that stretches the boundaries of what
is possible in acoustic only recordings. They are creating something new that can be as
compelling and artful as any acoustically recorded performance. To summarily dismiss these
artists because they are not "natural" amounts to a form of prohibition of the art of music.
Any restriction on any art form is completely unacceptable.

For those that disagree, let me be clear. Do what you do, the way you want to do it; nobody's
stopping you… If you create something that is worthwhile, people will buy it and you will
have a career. Never blame technology for lack of sales or success; use whatever technology
suits the type of music you make and take responsibility for the quality of your own work.

The Recording and Editing Music Process

There is not one single method of recording and editing music that will work with every artist
and every situation. There are many factors that lead to making the best decisions. The
amount of editing necessary will be based on many factors that have more to do with the
ability of the artist to perform well in a recording studio situation than with the level of their
talent.

Start With a Good Recording

When entering any kind of recording situation, you never really know what you are going to
get. A maze of issues can arise including inadequate monitoring, uncomfortable recording
environment, psychological issues, physical issues, equipment issues, time issues, etc… The
ability to minimize and control the effect of these problems in the studio will go a long way
in determining how much editing work is necessary.

In my personal experience, preparation, communication and making the artist comfortable are
my priorities. A producer or engineer cannot control how an artist will respond to a
recording situation. They have very limited control over the artist's abilities. They do,
however, have control in adapting the recording environment to give the artist whatever they
need to feel comfortable. Very few artists perform better in stressful, encumbered situations.

Always put the artist in the best situation to succeed. Make sure they are comfortable with
everything before you begin recording. Keep a close eye and notice if they are feeling
stressed or uncomfortable. Address it immediately before it takes over the session. Never
accept mediocre performances with the idea that you can edit them into something. When editing
music, there are only so many factors under your control. Attitude, energy and feeling cannot
be edited into a performance consistently.

Before you start editing, take a good listen through the song, section by section, and make
sure that you have enough of what you need to start the editing music process. If you find a
particular section of the song is weak by comparison to the others, it may be worth recording
a few extra takes with a more concentrated effort in that area. Once
you have a performer in the right frame of mind, it is better to capture them with as many
takes as you can. It is much more difficult to come back a day or a week later and capture the
exact same feel.

The Process of Editing Music

The amount of editing work you will do is dependent on the quality of the performances
you have captured. The better the performances, the less editing work will be necessary. If you
focus on the quality of performances first, the editing work will be a breeze. All the editing
work should follow the 3 step process outlined below. Depending on how well the recording
stage went, it may not be necessary to do all 3 steps. It is a process that will keep you from
diving in too deep, too quickly.

The 3 steps for editing music outlined here can be approached in 2 basic ways. One way is to
work section by section, with each step going as deep as is necessary. The other is to address
the song as a whole and work each step in the context of the big picture. I prefer the latter
approach, because it helps to prevent you from going down the rabbit hole of over-editing
and keeps the song in perspective.

The following process will be explained using the example of editing a vocal performance.
Because vocal editing typically requires the most attention, the example should be easy to
understand. Once understood, the same process can easily be followed using any instrument or
performance when editing music.

Step 1: General Editing

General editing work involves determining what works best on a global basis. If you have 3
vocal performances, start by determining which of the 3 is best overall. That will be the take
you build the rest of the editing work off of. Now determine if there are better performances
for sections of the song in the other takes. It may be that the 2nd take has a better bridge
section performance than the best overall take.

Continue this process, section by section, until you have the overall best of the best
performances. Once you are done, listen to the general edits you have made to determine if
the performance sounds coherent and believable. You may need to match levels between
edits so your decisions are not swayed by technical differences. Take note of sections that
need more detailed work before continuing on to the next stage of the editing music process.

To help you with the assessment process, it may be worth making a spreadsheet that has the
lyrics in one column and a separate column for each take. Write out the lyric line by line in
the first column and use the columns to the right for each take.

You can use a grading system (A, B, C, D), a numbered system (1-10), check and x, or
whatever system works for you. I prefer simple check marks for what works and an X for
unusable lines. With the right artist, I sometimes do this as I am recording. This way I can see
quickly if a certain line or section of the song needs more work before deciding to edit.
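
If you prefer to generate the sheet rather than draw it by hand, a few lines of Python can write a blank comp sheet as a CSV file that opens in any spreadsheet program; the lyric lines and take count below are placeholders:

    import csv

    lyrics = [
        "Line 1 of the first verse",
        "Line 2 of the first verse",
        "First line of the chorus",
    ]
    takes = 3

    with open("comp_sheet.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Lyric"] + [f"Take {i + 1}" for i in range(takes)])
        for line in lyrics:
            # One empty cell per take, to be graded while listening
            writer.writerow([line] + [""] * takes)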

Step 2: Medium Editing

Only enter this stage of the editing music process if there are lingering issues from the
General Editing step. Through the course of making the general edits, you may find that there
are certain sections, phrases or words that need a bit more attention. If you have made notes
about each section, take them out and start addressing them one by one.

Start by grabbing whole phrases if possible. Look for attitude and feel instead of perfect pitch
when gauging the quality of a performance. A little pitch correction on an otherwise good
performance will sound much better than a perfectly pitched average performance. If you
need to steal a word or two from another take, make sure that the timing and feel of the edit
sounds natural.

Remember that you can copy and paste the same performances from later or earlier sections
of the song. This is necessary if none of the existing performances in that section of the song
work. Make sure that the timing and melody are the same or at least work. Use the previous
performance as a gauge when matching up the timing.

I always try to avoid using the same exact performance in more than one section if possible.
Sometimes it is better to grab an alternate performance from another section if there is a good
one available. This way you can keep the subtle differences that occur from one section to the
next. This will add a sense that the song is more of a "live" performance.

Step 3: Fine Editing

Before you start to get into the detailed fine editing process, it is worth a fresh listen to
the overall song. When you listen, focus on the whole song and not on the vocal part.
Listening to the same part over and over can start to take you into the world of minutiae and
away from what is most important, the overall feeling. Too many musicians, engineers and
producers get so caught up in the small details, they forget that the larger picture is also
being affected.

The end result of this behavior can turn a vibrant performance into a plastic surgery case
where everything sounds perfect, but somehow seems wrong. Remember that the subtle
"imperfections" that make a song believable are part of the music experience. Once you
have assessed this situation, you can then turn to the finer details that trim everything out just
so.

Pick your spots. Start with the most obvious problem areas and work from there. Try to avoid
the "start from the beginning" approach where you edit the crap out of everything. It's
important to keep the perspective of the whole song in mind. Certain songs may require this
type of "bully" editing as part of the driving message of the song. If you have a hip hop track
that's all about being the greatest ever, heavy editing may be a necessary part of achieving
that effect.

 Music Mixing
The Music Mixing Mindset

The art of music mixing is by far the most elusive and difficult part of the music production
process to comprehend. Of all the engineering skills one could learn, mixing audio is by far
the most difficult to master. That's why, in the professional audio engineering world, it is by
far the highest paying job. The record companies are well aware of this critical part of the
music production process and will pay a premium for engineers that do it well.

I am often amused by home recording enthusiasts, musicians and students of engineering
when they fail to understand why their mixes don't measure up to what they hear on CDs. To
give an analogy that may put this in perspective, let's say that you are a guitar player who
idolizes Jeff Beck. You've been playing guitar for 1 year and can't understand why your
guitar playing is not as good as Jeff Beck's. Mixing is as much of an art as guitar playing. It
requires a lot of patience, knowledge, and practice.

In this article, I want to give you some insights that will help correct your approach to music
mixing. Without the right mindset, you will be embarking on a journey with no map and no
idea of where you are going. Mixing is not about processing, tricks, effects or EQ. It is all
about understanding how we perceive sound, and how to capture that essence in a pair of
speakers.

Zen and the Art of Music Mixing

The art of music mixing is very much the path of the zen master. The more present you are
when you mix, the more quickly you will work and the less you will fall prey to the trappings
that come from over-processing. You will only do what is necessary, no more, no less. The
biggest problem I see today with music mixing is that the mindset for mixing is completely
wrong. It's easy to get caught up using compressors, equalizers and effects processing on
everything without even listening to see how it affects the whole production and the message
of the song.

There are some basic rules I always use when mixing music. What's great about these basic
rules is how simple the concepts are. Essentially, your frame of mind, when music mixing,
will take you much farther than any plugin ever will. Playing Jeff Beck's guitar will never
make you play like Jeff Beck. Understanding how Jeff Beck approaches guitar playing will
not either, but it will at least send you off in the right direction.

A Humbling Perspective on Sound

The sense of hearing is one of five physical senses we have as human beings. For those who
have all five functioning properly, the most predominant sense is sight. Our ability to see
something has the greatest impact on our lives. Think about all the things we say, "You've got
to see it to believe it" or "seeing is believing". We are a "visionary" or have "foresight". We
want to "look" somebody in the eyes to see if they are lying to us. It is the sense we trust
most.

By contrast, sound is less trustworthy. We use phrases like, "That's just hearsay", "we'll play
it by ear" or you should be "seen and not heard". The term "phony" was coined with the
invention of telephones. It implied a lack of trust with the person on the other side of the
phone because you could not look them in the eye to judge if they were lying to you. In general,
our innate measure of sound is not a very trusting or positive one.

The truth is that sound is secondary to sight. Sound adds meaning and feeling to what we see.
It forewarns us of what to look for as we are out in the world. This understanding is very
important. To put it simply, everything we have heard throughout the existence of mankind is
related to something we can see or at least feel. It is a fundamental part of the design of our
brain.

Even with the invention of synthesis, sampling and processing technologies, the neurological
programming remains. We still have the ability to visualize what we hear. Once you
understand this fundamental design, you will start to "look" at your music instead of listen to
it. You will start to become conscious of the unconscious programming of the listening
audience. Your mixes will start to sound good on all speakers, not just on the ones in your
studio. Most engineers call this "imaging". Learning the skill of imaging for your music is a
process that requires a lot of listening, practice and a basic understanding of acoustics.

How We Hear

There are two basic aspects of hearing, the physical and the psychological. There is plenty of
information on how the hearing mechanism works and, while this is important, it is more or
less easy to understand. What I want to focus on here is the part that isn't as obvious or
understandable: the psychological.

It's All About Contrast

The programming of all of our senses has been developed primarily for one purpose: survival.
Our sight allows us to see things that are in front of us, like a truck passing through an
intersection. Our hearing forewarns us of things we cannot see, like a car racing around the
corner. The survival mechanism focuses on one basic principle: what is changing in front of
us, what stands in contrast to the environment.

If you are sitting in a room and hear the air conditioning turn on, you will notice it. After a
short time, your conscious awareness will shift to something else in the room that is
changing. You will forget about the AC because it stays the same. As soon as it turns off,
however, you will notice it again because it has changed.

This analogy is one of the most basic principles to understand when music mixing. In order
for something to stand out, it must be changing in a meaningful way. In other words, what
you want the listener to focus on must somehow contrast the environment of the rest of the
music. This basic principle works hand in hand with another key element of the way we
perceive sound.

One Thing At A Time

As much as we all like to believe that we can multitask, study after study continues to show
that it is impossible to effectively focus on more than one thing at a time. This is perhaps the
biggest reason why most people's mixes sound like crap. They are trying to get you to focus
on everything in the mix all at the same time.

If you've ever been in a situation where two or more people are trying to talk to you at the
same time, you can understand why mixing music with this same principle in mind does not
work. Your immediate reaction to such a situation typically is to step back and say, "wait a
minute, one person at a time…" To approach your mixing in this way is to make one of the
most basic music mixing errors. Remember, a song is essentially a story that can only be told
by one person or instrument at a time. The rest must support that message without extended
conflict.

A Deeper Understanding of Music Mixing

Music mixing is very much like moving into a house. The furniture and personal items you
bring in will determine how inviting your house will be to your guests, the listeners. Anyone
that plans to move into a new house or apartment will typically become aware of what other
people's places look like: the layout of the house, the positioning of the furniture, the size of
the TV, etc… You will notice things you like and things that you don't about each place you
go into.

If you extend these natural skills of curiosity to music, your job must then become to look at
the music mixes you like and try to figure out how they got to be that way. You may find
yourself studying the "mansions" of the music industry in your quest. There is nothing wrong
with that. In fact, it is critically important to study the best of the best in order to absorb the
highest level of the art.

Transforming Your Music Mixing Approach

Even though you may not be working with the same quality of songs, performances, and
equipment, you can still achieve very similar results. It's just like the reality show where the
host comes in and makes the crappy, beat-up studio apartment look like a great place to live.
The interior decorator studies mansions and great decorating through magazines and by seeing
great houses whenever possible. They study the intricate details that make a space convey the
feeling that is appropriate to the purpose of the room being decorated.

What I am attempting to do here is to perform the music mixing equivalent of an intervention,
much like the many makeover shows we see on reality TV. Every decision you make in a mix
directly affects the feeling and message of the lyric and song. Are you using bright EQ in a
song that is about depression? Are you adding too much low end to an upbeat fun song? You
must become sensitive to the technical aspects of what makes a song work, while being
sensitive to the feeling that results.

Study mixes of songs that you love. Grab a pad of paper and start by writing down what you
think the song is about. What is the prevailing sentiment: depression, love, jealousy? Is it a
party track or inspirational in nature? Next, you can write down every instrument that you
hear in the mix. On a scale of 1-10 (10 being loudest), how loud is each instrument or
element? Note where each instrument is panned in the speakers.

What effects do you hear: delays, reverb, chorusing or flanging? Close your eyes and try to
see the music. Where does each instrument in the mix sound like it is coming from? Is it far
away or close up front? Is it loud or low, clean or distorted? Use any adjective you can to
describe what you hear. If you notice certain types of effects, list them as best as you can. By
taking on this practice, what you are doing is creating maps for how music is mixed.

The Importance of Music Mixing Maps

I cannot overstate how important the process of studying mixes and making maps is to
becoming a good mix engineer. The purpose of this approach is not to duplicate everything
you hear in other mixes. The purpose is to create templates for the production style so that
you can get most of the mix done in an efficient manner. Once you have built a good
foundation, you can get creative to make the song unique.

Every style of music has certain music mixing principles that are fundamental to making it
work. You can make a dance mix with a small kick and bass sound that is low in level
relative to the other instruments, but no DJ will ever play it in the club. The more you
understand, from an engineering perspective, what makes a particular type of music tick, the
quicker you will be able to build that strong foundation from which to build a great mix.

The Music Mixing Process

Music mixing is akin to moving into a new house or apartment. Your furniture and
belongings represent all the individual performances that you have recorded in a song. Your
job is to situate those performances in a manner much like you would the furniture you are
moving to your new home. If done right, it is something that you can enjoy for years to come.
Whether the results are good or not depends on the decisions you make along the way.

Getting Started

Because every song is unique, it will require creative decisions that are impossible to
formulate. I can't tell you what color to use on the walls, where to place the furniture in the
room, or what types of shades or blinds to use on your windows. These are creative choices
you have to make based on the layout and square footage of the house. What I do hope to
accomplish here is to give you a fundamental process that underlies every decision you make.
This way, every decision you make will come from a fundamentally sound place.

To carry the moving analogy further, let's look at the fundamental process of music mixing.
There are many things in this process that are subject to interpretation. If you want to open all
the boxes with a sawzall, that's up to you. The beauty of mixing is that, unlike moving, at
least you can always hit the undo button if you butcher something! The process laid out here
is not entirely a linear way of working through a mix. You may find that you have to take
steps back in this process quite often in order to find your way to a great mix. When you step
back to make changes, continue to follow the outlined steps in order.

The Fundamental Music Mixing Process

1. Levels And Panning
2. Subtractive EQ And Editing
--Adjust Levels--
3. Compression
--Adjust Levels--
4. Effects Processing
--Adjust Levels--
5. Shaping EQ
--Adjust Levels--
6. Grouping Instruments
7. Automating Levels
8. Printing The Final Mix

Let's dive right in with the first step in the music mixing process.

Moving In (Levels and Panning)

Imagine yourself moving into a new house. You bring in furniture, loads of boxes full of
clothing, books, pots and pans, etc… You then go about the process of unpacking all of your
boxes, taking the packing blankets off of your furniture and starting to organize your
belongings. Boxes marked kitchen go into the kitchen. Boxes marked bedroom go to the
bedroom. Boxes marked office go into the office, etc… Move tracks that work together in
your mix next to each other. This will keep you from moving around too much in your mix
window.

Imagine now that each box and piece of furniture is an audio recording that is part of your
mix. The elements of your song that run throughout, drums, bass, guitars, and vocals are the
furniture. The added tracks that fill out the rest of the recording are the pots and pans, lamps,
books, paintings etc…

This is the Level and Panning phase of the music mixing process. You should carefully place
sounds, like your furniture, in a way that is complementary to the song. If the focus of a room
is the TV, then all of the furniture in that space should be pointing towards the TV and
arranged in such a way that you can enjoy watching it.

The mindset of levels and panning has to do with the relative distance (levels) and the
placement in the room (panning) relative to the TV. If the TV represents the lead vocal in your
mix, the same care should be taken to make sure the other instruments complement the placement
of the lead vocal. If they get in the way, the attention to the vocal will be lost.

Start with the big stuff first and move it into place. There is a reason that most engineers start
with Drums and Bass first before moving on to other instruments. Even engineers that start
with the vocals first (a top down mix) usually go straight to drums and bass after getting a
vocal sound.
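
For those who like to see the numbers, here is a minimal Python sketch of this stage: gain in decibels sets the relative distance, and a constant-power pan law places a mono track between the speakers. The track, gain and pan values are only examples:

    import numpy as np

    def place(track, gain_db, pan):
        """pan: -1.0 = hard left, 0.0 = center, +1.0 = hard right."""
        gain = 10.0 ** (gain_db / 20.0)
        angle = (pan + 1.0) * np.pi / 4.0    # map pan to 0..pi/2
        left = track * gain * np.cos(angle)  # constant-power pan law
        right = track * gain * np.sin(angle)
        return np.stack([left, right])

    # Example: a mono guitar 3 dB down, placed halfway to the right
    guitar = np.random.randn(48000)
    stereo_guitar = place(guitar, gain_db=-3.0, pan=0.5)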

Take Out the Garbage (Subtractive EQ and Editing)

Once you have finished unpacking and placing your furniture you will be left with piles of
empty boxes and packing materials that no longer serve a purpose. You may even realize that
you have extra stuff you don't really need anymore. This will become evident as you continue
the process.

The idea here is to strip away what is not necessary from the audio tracks. This includes, but
is not limited to, filtering off low frequency rumble or hiss, eliminating or muting
performances that don't work or cloud the production, applying subtractive EQ to recordings
that are muddy or indistinct, and editing out regions where no music is present.

This phase of the music mixing process yields enormous benefits. It allows you to better enjoy
the details of the individual performances. By removing the clutter, space will be created that
gives you the flexibility and room to shape the sounds any way you like.

After applying this process, adjust levels to compensate for the changes that you have
made. This is critical because removing the garbage from your tracks will mean that other
tracks are less covered up and may be louder in the mix. The tracks you apply subtractive EQ
to may also sound lower in the mix and need to be raised.
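
As one concrete example of taking out the garbage, the sketch below uses Python with scipy to filter low frequency rumble off a track with a gentle high-pass filter. The 80 Hz cutoff is only an example value, not a rule:

    import numpy as np
    from scipy.signal import butter, sosfilt

    def highpass(track, sr, cutoff_hz=80.0, order=2):
        """Filter off rumble below cutoff_hz (e.g. 80 Hz on a vocal)."""
        sos = butter(order, cutoff_hz, btype="highpass", fs=sr, output="sos")
        return sosfilt(sos, track)

    vocal = np.random.randn(48000)  # stand-in for a recorded track
    cleaned = highpass(vocal, sr=48000)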

Packing It In (Compression)

There is no perfect analogy between compression and the move into your new house. Compression
directly affects the perceived size and density of a sound. The closest analogy I can give is
based on how much furniture you have, and how big all of it is. The more furniture, the more
closely packed each piece may need to be placed. You may also decide to put a smaller couch
in the TV room if that is what the room allows.

Compression is by far the most misunderstood form of processing for the novice or
inexperienced professional in the music mixing process. It is difficult for most people to hear
simply because they don't really know what to listen for. If you look at what compressors do
in our everyday world, though, the picture should be a little clearer.

In general, compressors make things smaller and more dense. Pressing on the cap of a can of
compressed air will show you the effects of compression. If you apply this same principle to
audio, the track you apply the compression to will also become smaller and more dense. The
sound emanating from the speaker will also be projected more forcefully similar to the way
the compressed air escapes the can.

Compressors can serve many different purposes when music mixing and it's very important to
know what these basic uses are and when you need them. Never apply compression to a
sound unless it serves a specific purpose. If you don't achieve the desired effect you are
looking for, leave it out.

The Primary Functions Of A Compressor Are As Follows:

1. Even Out A Performance
2. Add Presence To A Performance
3. Control The Perceived Sustain And Groove Of A Performance
4. Add Aggressiveness To A Performance
5. Shrink The Size Of A Performance

All of these forms of compression have very specific purposes in music mixing. They allow
the sound to be moved around in the speakers, mostly front to back, but also up and down if
used within specific frequency areas. After the subtractive EQ is done, compression can serve
many of the purposes additive EQ will serve in a mix, but with an added benefit. EQ adds
volume to given frequency areas; compressors add density. One makes the track bigger, the
other more dense and powerful.

Compression, like any other form of processing, requires that you zoom your attention out to
the whole mix and adjust levels after processing. Notice how the adjustments you have made
affect the other tracks in the mix. The fundamental idea here is to always make sure all the
individual performances are working together in a mix.
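
The core behavior is easy to sketch in code. The fragment below is a bare-bones Python compressor: anything over the threshold is scaled down by the ratio, which shrinks the dynamic range and makes the track denser. Real compressors add attack and release smoothing that this sketch deliberately omits:

    import numpy as np

    def compress(track, threshold_db=-18.0, ratio=4.0):
        level_db = 20.0 * np.log10(np.abs(track) + 1e-12)
        over = np.maximum(level_db - threshold_db, 0.0)
        gain_db = -over * (1.0 - 1.0 / ratio)  # reduce only what exceeds
        return track * 10.0 ** (gain_db / 20.0)

    drums = np.random.randn(48000)  # stand-in for a drum track
    denser = compress(drums)        # smaller peaks, denser sound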

Size Matters (Effects Processing)

The size of your mix is defined by reverbs and effects processing in the music mixing
process. To carry the moving analogy further, the larger the house, the bigger the furniture
that goes into it can be. Most confuse size with frequencies, particularly with the amount of
low end. This could not be further from the truth. Size is a function of the perceived 3
dimensional space; EQ is primarily a 2 dimensional tool.

Like the bigger house, the size of the space you select determines how big you can make the
individual performances in your mix. Another common misconception is that the size of a
space is determined by reverb time. Actually, it's determined by the amount of pre-delay.

Think about what size space is appropriate for the type of music you are mixing. Classical
music does not generally sound great when placed in a small space. A punk rock record will
be a huge mess if placed in a very large reverberant space. These decisions are mostly based on
the musical style of the song. An aggressive song generally needs to be drier in order for it to
sound "in your face". A slower song requires more reverb and effects to fill in the empty
space.

The Benefit Of Effects Processing Before Additive EQ

Unless there are any pressing needs for additive EQ, I usually like to add effects before
shaping frequencies. The use of reverbs, delays and modulation effects like flanging in music
mixing can usually do more to help get the sound you are looking for. Effects processors add
Tone to a sound, not EQ. They also allow you to separate instruments by helping to create a 3
dimensional space in the speakers. When used properly, you can get a lot of perceived size
out of your mix without overloading frequency areas in the mix.

Let's take a quick look at some examples:

1. Tone: The tone of a harsh bright vocal can rarely be satisfactorily fixed with EQ. Using a
very short, warm sounding, reverb or early reflections program will instantly add body to the
voice. Throw a longer warm reverb like a hall program on top and you will have
accomplished the majority of the warming you want without losing the presence of the voice.
The same concept can be applied to a dull sounding voice by using a bright reverb to add
presence.

2. Space: Short room programs are a great way to add depth and space to a sound. How close
the dry original sound feels depends on the amount of pre-delay. Pre-delay is the amount of time
before the onset of reverb. The longer the pre-delay, the larger the perceived space. The wet/dry
balance determines how far back in the speakers the dry sound is from you when using a
reverb with no pre-delay. Using longer pre-delays will make the reverb sink back behind the
speakers; a short code sketch of this idea follows these examples.

3. Spread: A chorus effect panned in stereo will widen and thin out a sound that is too dense.
It can also spread a sound outside of the speakers to the left and right by panning the dry
signal to one side and the mono chorus effect to the other. The modulation effect can be
hidden by minimizing the depth and speed of the chorus.
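
The pre-delay idea from example 2 can be sketched in a few lines of Python: the reverb return is simply delayed relative to the dry signal before the two are mixed. The "wet" signal here is a stand-in array, not a real reverb:

    import numpy as np

    def with_predelay(dry, wet, sr, predelay_ms=40.0, wet_level=0.3):
        """Longer pre-delay pushes the reverb back behind the dry sound."""
        offset = int(sr * predelay_ms / 1000.0)
        out = dry.copy()
        out[offset:] += wet[: len(dry) - offset] * wet_level
        return out

    sr = 48000
    dry = np.random.randn(sr)
    wet = np.random.randn(sr)  # stand-in for a reverb return of the dry track
    bigger = with_predelay(dry, wet, sr)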

Effects processing can also be used to group instruments together or separate them from each
other. Using the same short room program on all the rhythm section instruments is a great
way to make them sound like they are performing together. Using a unique effect for the
vocal that no other instrument shares will make it stand out from the other vocals and
instruments. Always adjust levels after adding effects.

Shaping the Mix (Additive EQ)

With all your best efforts to set the tone, depth and balances of your mix, there is almost
always some work that remains. This is where the EQ is best used. I have rarely been
satisfied with mixes that are primarily EQ driven. Music mixing is a process better started by
creating the tone, depth and balance of your mix first using the tools we have already spoken
of. The EQ is a great tool for cleaning up what could not be accomplished by those tools.

The main problem with EQ is that when you add it to one track in your mix, it will cover up
something else that needs the same frequency area. Looking back to the moving analogy, if
your furniture is too big, there will be no room for you to walk around it and use it. If you put
a huge lampshade on the lamp next to the couch, you may find that it gets in the way of your
head and blocks the view to that side of the room.

More than any other form of processing, you need to be conscious of where you add EQ into
a mix. Decisions must be made regarding which instruments are dominant in the mix and
require certain frequency areas the most. For example, a bass guitar needs low midrange
frequencies more than a kick drum does to sound good. Scooping low mids out of the kick
may allow you to add them to the bass to achieve the warmth you are looking for.

The concept of give and take with frequencies in music mixing is a crucial one to
understand. The individual instruments are like puzzle pieces that must fit together in order to
form the whole picture. Remember that any form of additive EQ will require an adjustment of
levels throughout the mix. If you don't listen to how the EQ you added affects the other
instruments in the mix, it will soon fall apart like a house of cards.

Grouping Instruments

There are many forms of grouping in music mixing. The most common one used is Mix
Grouping. The idea is that you can change the level or mute state of all members of the group
by changing the level or mute state of any one member. It is usually not beneficial to enable a
mix group until your balances and sounds are very close to what you are looking for.
Otherwise, you may find yourself unwittingly changing balances to the other members of the
group.

The other type is Audio Grouping where all the members of the group are bussed into a
single stereo track so that they can be processed together in their stereo mix form. A group
buss is a form of combining amplifier where signals are combined into what is called a mix
stem or submix before being sent to the master stereo mix. This concept serves many
valuable purposes in the music mixing process.

1. Allows groups of instruments to be processed together.
2. Allows for the easy creation of mix stems for remixing.
3. Allows you to easily create simple variations of a mix.

Typically, mix stems are divided into four categories:

1. Drums and percussion with effects
2. All melodic/harmonic music instruments with effects
3. Backing Vocals with effects
4. Lead Vocals with effects

These four mix stems can then be sent to the Master Fader for the final mix output. They can
be selectively soloed or muted for the creation of a cappella mixes, instrumental mixes, TV
mixes (the mix minus the lead vocal), etc… It also allows quick and easy creation of "vocal up" or
"vocal down" versions. The individual mix stems can also be processed to make the sound
more coherent.
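
A rough Python sketch may help make audio grouping concrete: individual tracks are summed into the four stems, the stems sum to the master, and muting one stem yields the TV mix or the instrumental. The track contents are placeholders:

    import numpy as np

    n = 48000
    tracks = {
        "kick": np.random.randn(n), "snare": np.random.randn(n),
        "bass": np.random.randn(n), "keys": np.random.randn(n),
        "bgv": np.random.randn(n),  "lead_vocal": np.random.randn(n),
    }

    stems = {
        "drums": tracks["kick"] + tracks["snare"],
        "music": tracks["bass"] + tracks["keys"],
        "backing_vocals": tracks["bgv"],
        "lead_vocal": tracks["lead_vocal"],
    }

    full_mix = sum(stems.values())
    tv_mix = sum(v for k, v in stems.items() if k != "lead_vocal")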

Mix Automation

Adjusting levels after every stage of processing is a necessary step in the music mixing
process. At some point in your mix, however, you will need to automate those fader levels to
accommodate changes from section to section in the song. Additionally, automation can help
enhance the dynamics of the song by pushing and pulling levels as needed. Why is this
necessary?

The modern music production process does not typically have all musicians performing the
entirety of a production all at one recording session. Songs are typically recorded in stages as
outlined here in the Music Production Process. As a result, the musicians that layer on
overdubs can only respond to what has already been recorded. Even with the guidance of a
good producer, the musician will subtly push or pull in places that may not entirely support
the parts that will be layered on later.

The end result is that the dynamics of each performance will need to be adjusted based on
what else is going on at the same time in the song. The fundamental purpose of automation in
music mixing is to adjust these inconsistencies. The ability to move things in and out of the
listener's attention in this way helps to cover up the fact that the performances were recorded
separately.

When done properly, the song will sound like a complete performance and will take on a life
of its own. In the music industry this is called "making a record". It is the final step of the
music mixing process that takes all the individual performances and weaves them together
into the final product. It is an absolutely essential process for the production to sound
complete.

The 4 Step Basic Process For Applying Mix Automation:

1. General Levels
2. Section To Section Levels
3. Weaving Performances Together
4. Fine Tuning

Always start with the big picture, make sure the general levels work well. If you cannot get
good balances then you still have more processing work to do. Once the sounds and general
levels are good you will still find that certain parts are loud in some sections and soft in
others. Make these adjustments section to section. Take careful note of how your automation
affects everything else in the mix and adjust accordingly.
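
Under the hood, level automation is just an interpolated gain envelope. In this minimal Python sketch, breakpoints of (time, gain in dB) are interpolated sample by sample and applied to a track; the ride values are only an example:

    import numpy as np

    sr = 48000
    guitar = np.random.randn(sr * 8)  # stand-in for an 8-second track

    # Push the level up 2 dB for the solo in seconds 4-6, then back down
    times = np.array([0.0, 4.0, 6.0, 8.0])
    gains_db = np.array([0.0, 2.0, 2.0, 0.0])

    t = np.arange(len(guitar)) / sr
    envelope = 10.0 ** (np.interp(t, times, gains_db) / 20.0)
    automated = guitar * envelope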

The next step is to weave the performances together so that the instruments you need to hear
most prominently are not masked by other performances. When raising the level of one
instrument, I usually look to pull down the level of another instrument that occupies a similar
space or frequencies. This simple idea will keep you from crushing the whole mix just
because you wanted the guitar solo to be louder.

The fine tuning stage involves making sure that all the subtle nuances of a performance are
heard. This is typically done most with lead vocal rides. Riding in the subtle details of the
lead vocal will help to focus listener's attention on that performance. It's important not to do
this with every instrument. If you applied the same technique to every instrument, you may
find too many tracks fighting for attention from the listener.

Printing the Final Mix

Printing the mix is the last step in the music mixing process. Music mixing "inside the box"
has made this stage far less stressful than in times past. Since the era of multitrack recording
in the 60's, mixing has progressively become more complicated. The stress of getting it right
the first time was much higher because of the difficulty of restoring the exact same sound at a
later date.

A 48 track mix in Pro Tools will come back in a few seconds after loading the session file.
This allows a mix to be revisited at any time with the full expectation that everything will be
exactly as it was. To restore a 48 track mix on an analog console with an average amount of
external gear is at least a 4-5 hour process, even with very detailed notes.

Ironically, the biggest benefit of computer based music mixing is also the biggest problem.
Because it's so easy to bring the mix back up, the mixing process can go on forever. This can
easily lead to a final mix that's been stripped of all its character and uniqueness. At some
point you have to let it go and move on to creating new music.

When you have finally decided to commit to printing, make sure you print the individual mix
stems. These stems can come in very handy for many reasons. They can be sent out to
remixers who will want the groupings of instruments to be isolated for sampling purposes.
Because all the automation and effects are burned into the stems, the remixer won't have to
spend time trying to recreate the vibe of the original mix.

They are also handy to have around in case the mastering engineer has issues with your
mixes. The mix stems can easily be put together in the mastering computer where drums can
be processed separately from the vocals and other instruments. Almost any variation of a mix
for rehearsals, live performances, TV performances etc… can be created from these four
simple stems.

 Audio Mastering
The art of mastering audio has developed immensely since its start in the early 1900's. Up
until the creation of the analog tape machine, all performances were captured directly to a
form of disc called a lacquer. Once cut, the disc was processed to create what is called a
metal "stamper" used to press the melted vinyl into the actual discs played on a turntable.

Mastering, by technical definition, is the actual process of creating the stampers that are used
to press the vinyl discs. The mastering process has evolved over the years to follow the
changes in commercially released technology from the original 78 rpm discs. These changes
include 33 1/3 and 45 rpm discs, audio cassettes, 8 track tapes, CD's, mini discs and mp3s.
Each emerging technology presented new options to the consumer and new challenges to the
mastering engineer.

In the professional recording studios, the emergence of the analog tape machine in the 1950s
changed the way records were made forever. The process of recording to analog tape
removed many technical limitations of recording directly to a lacquer and added an important
new job, the transfer engineer. The transfer engineer is what we today call the mastering
engineer.

The Transfer Engineer and Pre-Mastering

The job of a transfer engineer was to take the analog tape master and transfer it to the lacquer
so that the metal stampers could be created. This extra step in the process took a load of
pressure off of the recording engineer, who could focus primarily on capturing the
performance and not have to worry about whether it would cut well to the lacquer. Overly
dynamic performances or excessive bass frequencies, that would normally destroy the
lacquer, could be more easily dealt with in the transfer process.

Soon, the job of the transfer engineer would become an art-form of its own. Technology
would quickly develop to accommodate the issues faced in the transfer process. As this
technology developed the term pre-mastering entered the lexicon of the audio world. Pre-
mastering is the preparation process before the actual mastering of the stampers takes place.

Special control rooms and consoles were created to aid the process. The addition of precision
equalizers and high-end compressors helped to address the increasing demand for sonically
superior records. The loudness wars began with the pressure to make every song louder than
all the others on the radio. Pre-mastering audio became the bridge from the recording studio
to the consumer and a critical part of the music industry.

The Process Of Mastering Audio

The process of mastering audio involves a series of steps that have not changed very much
over the decades. What has changed is the tools used, the medium worked with and the end
product that is released to the public. While the mediums have evolved and the number of
ways we can master audio has increased, the basic steps remain. Let's review those steps one
by one and show how they have developed over the years.

1. Prepare The Master Mixes
2. Transfer
3. Set The Song Order
4. Edit
5. Set The Space Between Songs
6. Processing
7. Levels
8. PQ and ID Coding
9. Dithering
10. Create The Final Production Master

Although the order in which some of these steps are taken has changed through the decades,
each step must still be carefully considered. The development of digital technology, in
particular, has increased the options of mastering audio exponentially. Today, mastering
engineers can do things that were not even conceivable just a few decades ago. Let's take a
quick look at each step in the process.

Prepare The Master Mixes

The final mixes must be brought into the mastering studio in some format. Since the start of
multitrack recording in the 1960s, this format was always analog tape. In the late 80s, many
final mixes were recorded onto digital tape in RDAT or Reel to Reel format. As computer
technology developed through the 90s, a data disc or hard drive became a suitable medium to
bring to the mastering engineer. With the development of the internet, FTP would also
become an acceptable method for supplying the final masters.

Whatever the delivery medium, the client must present it in a format that the mastering
engineer is capable of working with. The mastering engineer is responsible for managing the
masters with a careful, discerning ear. They must determine that the masters are suitable for
processing and decide the best method of transfer.

Analog tape masters must have a tone reel, used to align the tape machine electronics, in
order for the masters to be accurately transferred. Digital tapes must be carefully checked for
error counts, dropouts and clocking issues that may degrade the master. Computer based
mixes must be examined for sample rate, bit depth and file format to determine if the best
quality format has been presented.

The Transfer Process

The transfer process for mastering audio has been greatly simplified over recent years, as a
majority of final mixes are presented as digital audio files on a hard drive. This has largely
negated the need for analog to digital conversion. Since the 1980s, the conversion from analog
to digital has been seen as the weakest link in the mastering process. Because this technology
has received an enormous amount of attention over the decades, it is not uncommon for
digital files to be converted to analog for processing before being transferred back into the
mastering program.

Mastering audio for vinyl was a simple matter of organizing the final mixes onto 2 large reels
representing sides A and B. Any editing or spacing between songs would need to be done with
a razor blade and splicing tape before being sent to the mastering console for processing and
leveling. Once the processing and levels were determined for each song, the final masters
would be transferred to the lacquer, in real time, one side at a time.

For CD and downloadable releases, analog tapes may be processed first, using analog
compressors and equalizers, before the conversion process to digital. The decision to process
first before transfer would be based entirely on which method sounded best. The sample rate,
bit depth, quality of A/D converters and digital clocking source would all be careful decisions
to best preserve the original quality of the final mixes.

Setting The Song Order

Many mastering engineers will import songs in the order in which they would appear on the
CD. Sometimes there is a specific reason to import them in a different order due to the media
they are being transferred from or specific analog processing that works well only for
selected tracks. Once imported into the mastering program, changes of song order can easily
be made without affecting any other level of processing or editing.

In the days of vinyl records, the song order was a carefully weighed decision. Because there
are 2 sides to a vinyl disc, a decision must be made for each song to determine if it belongs
on side A or side B. Songs can be divided up in a variety of ways that capture a certain vibe
or feeling for each side. The total running time for each side also weighs in greatly and can
affect the audio quality if not divided equally. The more audio there is on one side of a vinyl
disc, the shallower each groove can be cut and the lower the quality will be.

When mastering audio for CD, the song order should be focused on the overall flow of the
entire CD. The CD must start with a strong song, but not necessarily the single that would be
released for radio play and promotion. If you stack all the best songs early, the listener may
never make it through the CD. Even though many will listen on an mp3 player and only keep
what they like, weaving a coherent flow of songs will eventually draw a fan into enjoying all
the songs on the CD, not just the singles.

Editing

Once the masters are transferred, the files will need to be edited so that the start and end of
each song is clean. There is usually a short breath of space left in at the beginning of a song,
with a fade-in, to smooth the transition from silence. End edits involve getting rid of extra
noises and chasing the ending with a fade-out to conclude the song naturally.
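
These head and tail edits are simple to express in code. The Python sketch below applies a short fade-in and a longer fade-out to a final mix; the fade lengths are example values:

    import numpy as np

    def fade(song, sr, fade_in_s=0.05, fade_out_s=3.0):
        out = song.copy()
        n_in = int(sr * fade_in_s)
        n_out = int(sr * fade_out_s)
        out[:n_in] *= np.linspace(0.0, 1.0, n_in)     # smooth in from silence
        out[-n_out:] *= np.linspace(1.0, 0.0, n_out)  # conclude naturally
        return out

    song = np.random.randn(48000 * 30) * 0.3  # stand-in for a 30-second mix
    edited = fade(song, 48000)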

When preparing your final mixes for mastering it is always best to supply mixes that have
extra room at the head and tail of each song. This way the mastering engineer has something
to work with. It is not uncommon for mixes to be presented to the mastering engineer with
the heads and tails clipped. This leads to extra editing work for the mastering engineer who
then has to find a way to make the start and end sound natural.

Setting The Space Between Songs

The space between songs will define the flow of the record from beginning to end. When
mastering audio, the producer and artist will help to define when the entry of the next song
sounds natural. You may need a longer space after a hard hitting track if the next song is
lighter in feel. Conversely, coming out of a softer song you may want the space to be short if
you want the next song to have more impact. Many dance records line up the next song to
start on a virtual downbeat as if the tempo from the previous song had continued through the
space in between.

Processing

Mastering audio can also involve a bit of processing when called for. The motto of the
mastering engineer when processing is always the same: do no harm. Processing generally
comes in only 2 forms, even though those 2 forms can serve a large variety of purposes. The 2
forms of processing are compressors and equalizers.

Compressors serve an enormous number of purposes when mastering audio. A compressor,
when used lightly, can add overall level and power to a mix. In the form of a peak limiter, it
can be used to control peak levels so that the overall gain of the song can be increased. In the
form of a multi-band compressor, it can be used to strengthen a frequency area that is
deficient in the mix.

Equalizers also serve many purposes in mastering audio. An EQ can be used to subtly shape a
frequency area of a mix to add clarity and depth. It can also be used to filter out low
frequencies that keep a mix sounding muddy or lacking in punch. A notch filter may be
employed to remove a troublesome frequency in a mix.

Levels

The next step in the mastering audio process is to make sure that the overall levels from song
to song are even. This is not as easy as it may seem on the surface. The frequency content,
density of frequencies and amount of compression can lead to uneven balances that require a
good ear to get right. Additionally, a fade in or fade out on one song can skew the perceived
level of the next. The difference between perceived level and actual level can easily lead to
bad decisions if only looking at the meters for reference.

The use of sonic maximizers can come in handy here. A sonic maximizer is a form of limiter
that controls transient peak signals and allows all signals below the threshold to be raised up
by the same amount of gain reduction. Although the design and options vary from plugin to
plugin, the essential mission is the same: make it as loud as you can without destroying or
distorting the mix. Today, it is the most commonly used, and abused, tool for getting perceived
loudness from a mix when mastering audio.
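
A crude Python sketch shows the trade-off a maximizer makes: the whole mix is driven up by a fixed gain, and peaks that would exceed full scale are limited. Real maximizers use lookahead and smoothed gain reduction; the tanh soft clipper here is only a stand-in for that stage:

    import numpy as np

    def maximize(mix, drive_db=6.0):
        driven = mix * 10.0 ** (drive_db / 20.0)
        return np.tanh(driven)  # soft-limits peaks to within +/-1.0

    mix = np.random.randn(48000) * 0.3
    louder = maximize(mix)  # higher average level, flattened peaks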

PQ Coding And ID Tags

The PQ coding and ID tagging process allows CD Text, ISRC codes, UPC/EAN and Copy
Protection data to be entered into the instructional data of a CD or downloadable file. ID
tagging allows downloaded digital audio files to be identified in terms of song name, artist,
songwriter, date recorded, musical style, etc… The tagging can also allow for ISRC and
UPC/EAN coding so that sales and radio play can be tracked by the owner of the recordings.
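
As an illustration, ID tags can be written to a downloadable file with the mutagen library, an assumed dependency here; the file name and tag values are placeholders:

    from mutagen.easyid3 import EasyID3

    tags = EasyID3("final_master.mp3")  # file must already carry an ID3 header
    tags["title"] = "Song Name"
    tags["artist"] = "Artist Name"
    tags["date"] = "2024"
    tags["isrc"] = "USABC2400001"  # lets sales and airplay be tracked
    tags.save()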

Dithering

A great way to preserve the quality of higher resolution masters is to apply a process called
dithering. Dithering is a process that involves adding low level random noise to the audio
when lowering the bit depth from 24 bit to 16 bit as required for CD mastering. The added
randomness helps preserve the sense of depth in a mix that is normally found with higher bit
depth masters. It is always the very last step of the mastering audio process before printing
the final production master.
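
Here is a minimal Python sketch of TPDF dithering, the most common variety: triangular noise of roughly one 16 bit step is added before quantizing, trading a tiny noise floor for the low level detail that plain truncation would destroy:

    import numpy as np

    def dither_to_16bit(mix):
        """mix: float samples in -1.0..1.0; returns 16 bit integer samples."""
        lsb = 1.0 / 32768.0  # one 16 bit quantization step
        noise = np.random.triangular(-lsb, 0.0, lsb, size=mix.shape)
        quantized = np.round((mix + noise) * 32767.0)
        return np.clip(quantized, -32768, 32767).astype(np.int16)

    mix_24bit = np.random.randn(48000) * 0.5  # stand-in high-resolution master
    cd_audio = dither_to_16bit(mix_24bit)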

Creating the Final Production Master

The final stage of the mastering audio process is to burn the final production master. The
final product of the mastering session can be a burned PMCD or a DDP file. PMCD stands
for Pre Mastered CD which is formatted specifically for the manufacturing plant and used to
create what is called a glass master. A high quality disc burner and CD media are an absolute
necessity to keep the error count low.

The DDP format (Disc Description Protocol) is a data file that contains all of the necessary
information for the creation of the glass master. The DDP file is saved directly to your hard
drive and can usually be uploaded to the manufacturing plant's web site. A DDP file is more
reliable and convenient than the PMCD and has become more widely accepted.

The Glass master is a glass disc with a thin film layered on it. The data from the DDP or
PMCD is burned into the film with a laser that creates the microscopic pits and lands that are
part of the physical creation of the audio CDs you buy in a store. From the glass master,
stampers are created to press the CD discs in a very similar manner as done with vinyl
records. This process creates a Red Book standard audio disc, known as CD-DA. CD's
burned with a computer, by comparison, are burned optically and can have significantly
higher error counts.
