Audio Pre & Post Production in Films
• Development. The script is written and drafted into a workable blueprint for a
film.
• Pre-production. Preparations are made for the shoot, in which cast and crew are
hired, locations are selected, and sets are built.
• Production. The raw elements for the finished film are recorded.
• Post-production. The film is edited; music tracks (and songs) are composed,
performed and recorded; sound effects are designed and recorded; and any other
computer-graphic 'visual' effects are digitally added, and the film is fully
completed.
• Sales and distribution. The film is screened for potential buyers (distributors), is
picked up by a distributor and reaches its theater and/or DVD audience.
Development
This is the stage where an idea is fleshed out into a viable script. The producer of the
movie will find a story, which may come from books, plays, other films, true stories,
original ideas, etc. Once the theme, or underlying message, has been identified, a
synopsis will be prepared. This is followed by a step outline, which breaks the story
down into one-paragraph scenes, concentrating on the dramatic structure. Next, a
treatment is prepared. This is a 25 to 30 page description of the story, its mood and
characters, with little dialog and stage direction, often containing drawings to help
visualize the key points.
The screenplay is then written over a period of several months, and may be rewritten
several times to improve the dramatization, clarity, structure, characters, dialogue, and
overall style. However, producers often skip the previous steps and develop submitted
screenplays which are assessed through a process called script coverage. A film
distributor should be contacted at an early stage to assess the likely market and potential
financial success of the film. Hollywood distributors will adopt a hard-headed business
approach and consider factors such as the film genre, the target audience, the historical
success of similar films, the actors who might appear in the film and the potential
directors of the film. All these factors imply a certain appeal of the film to a possible
audience and hence the number of "bums on seats" during the theatrical release. Not all
films make a profit from the theatrical release alone, therefore DVD sales and worldwide
distribution rights need to be taken into account.
The movie pitch is then prepared and presented to potential financiers. If the pitch is
successful and the movie is given the "green light", then financial backing is offered,
typically from a major film studio, film council or independent investors. A deal is
negotiated and contracts are signed.
Pre-production
In pre-production, the movie is designed and planned. The production company is created
and a production office established. The production is storyboarded and visualized with
the help of illustrators and concept artists. A production budget will also be drawn up to
cost the film.
The producer will hire a crew. The nature of the film, and the budget, determine the size
and type of crew used during filmmaking. Many Hollywood blockbusters employ a cast
and crew of thousands while a low-budget, independent film may be made by a skeleton
crew of eight or nine. Typical crew positions include:
• The director is primarily responsible for the acting in the movie and managing the
creative elements.
• The assistant director (AD) manages the shooting schedule and logistics of the
production, among other tasks.
• The casting director finds actors for the parts in the script. This normally requires
an audition by the actor. Lead actors are chosen carefully, often on the basis of
their reputation or "star power."
• The location manager finds and manages the film locations. Most pictures are
shot in the predictable environment of a studio sound stage but occasionally
outdoor sequences will call for filming on location.
• The production manager manages the production budget and production schedule.
He or she also reports on behalf of the production office to the studio executives
or financiers of the film.
• The director of photography (DP or DOP) or cinematographer creates the
photography of the film. He or she cooperates with the director, director of
audiography (DOA) and AD.
• The art director manages the art department, which makes production sets,
costumes and provides makeup & hair styling services.
• The production designer creates the look and feel of the production sets and
props, working with the art director to create these elements.
• The storyboard artist creates visual images to help the director and production
designer communicate their ideas to the production team.
• The production sound mixer manages the audio experience during the production
stage of a film. He or she cooperates with the director, DOP, and AD.
• The sound designer creates new sounds and enhances the aural feel of the film
with the help of foley artists.
• The composer creates new music for the film.
• The choreographer creates and coordinates the movement and dance - typically
for musicals. Some films also credit a fight choreographer.
Production
In production the movie is created and shot. More crew will be recruited at this stage,
such as the property master, script supervisor, assistant directors, stills photographer,
picture editor, and sound editors. These are just the most common roles in filmmaking;
the production office will be free to create any unique blend of roles to suit a particular
film.
A typical day's shooting begins with an assistant director following the shooting schedule
for the day. The film set is constructed and the props made ready. The lighting is rigged
and the camera and sound recording equipment are set up. At the same time, the actors
are wardrobed in their costumes and attend the hair and make-up departments.
The actors rehearse their scripts and blocking with the director. The picture and sound
crews then rehearse with the actors. Finally, the action is shot in as many takes as the
director wishes.
Each take of a shot follows a slating procedure and is marked on a clapperboard, which
helps the editor keep track of the takes in post-production. The clapperboard, with the
scene, take, director, director of photography, date, and name of the film written on its
front, is displayed for the camera. The clapperboard also serves the necessary function of
providing a marker to sync up the picture and the sound take: sound is recorded on a
separate apparatus from the film, and the two must be synched up in post-production.
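Since picture and sound run on separate machines, syncing in post comes down to lining up the visible clap with its audio transient. A minimal sketch of that arithmetic (the function name and all figures are illustrative, not taken from any real editing tool):

```python
# Hypothetical sketch: how far to shift a separately recorded sound take so
# that the clap transient lands on the frame where the sticks close.
# Function name and all numbers are illustrative, not from a real tool.

def clap_offset_samples(clap_frame, audio_clap_time_s, fps=24, sample_rate=48000):
    """Samples the audio must be shifted to align clap sound with clap image."""
    picture_clap_time_s = clap_frame / fps          # when the clap is seen
    offset_s = picture_clap_time_s - audio_clap_time_s
    return round(offset_s * sample_rate)

# Sticks close on frame 120 (5 s in at 24 fps); the clap transient sits at
# 4.5 s in the sound file, so the audio moves 24,000 samples later.
print(clap_offset_samples(120, 4.5))  # 24000
```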
The director will then decide if the take was acceptable or not. The script supervisor and
the sound and camera teams log the take on their respective report sheets. Every report
sheet records important technical notes on each take.
When shooting is finished for the scene, the director declares a "wrap." The crew will
"strike," or dismantle, the set for that scene. The director approves the next day's shooting
schedule and a daily progress report is sent to the production office. This includes the
report sheets from continuity, sound, and camera teams. Call sheets are distributed to the
cast and crew to tell them when and where to turn up the next shooting day.
For productions using traditional photographic film, the unprocessed negatives of the day's
takes are sent to the film laboratory for processing overnight. Once processed, they return
from the laboratory as dailies or rushes (film positives) and are viewed in the evening by
the director, above the line crew, and, sometimes, the cast. For productions using digital
technologies, shots are downloaded and organized on a computer for display as dailies.
When the entire film is in the can, at the completion of the production phase, the
production office normally arranges a wrap party to thank all the cast and crew for their
efforts.
Post-production
Here the film is assembled by the film editor. The modern use of video in the filmmaking
process has resulted in two workflow variants: one using entirely film, and the other
using a mixture of film and video.
In the film workflow, the original camera film (negative) is developed and copied to a
one-light workprint (positive) for editing with a mechanical editing machine. An edge
code is recorded onto film to locate the position of picture frames. Since the development
of non-linear editing systems such as Avid, Quantel or Final Cut Pro, the film workflow
is used by very few productions.
In the video workflow, the original camera negative is developed and telecined to video
for editing with computer editing software. A timecode is recorded onto video tape to
locate the position of picture frames. Production sound is also synced up to the video
picture frames during this process.
The first job of the film editor is to build a rough cut taken from sequences (or scenes)
based on individual "takes" (shots). The purpose of the rough cut is to select and order
the best shots. The next step is to create a fine cut by getting all the shots to flow
smoothly in a seamless story. Trimming, the process of shortening scenes by a few
minutes, seconds, or even frames, is done during this phase. After the fine cut has been
screened and approved by the director and producer, the picture is "locked," meaning no
further changes are made. Next, the editor creates a negative cut list (using edge code) or
an edit decision list (using timecode) either manually or automatically. These edit lists
identify the source and the picture frame of each shot in the fine cut.
Once the picture is locked, the film passes out of the hands of the editor to the sound
department to build up the sound track. The voice recordings are synchronised and the
final sound mix is created. The sound mix combines sound effects, background sounds,
ADR, dialogue, walla, and music.
The sound track and picture are combined together, resulting in a low quality answer
print of the movie. There are now two possible workflows to create the high quality
release print depending on the recording medium:
1. In the film workflow, the cut list that describes the film-based answer print is used
to cut the original colour negative (OCN) and create a colour timed copy called
the colour master positive or interpositive print. For all subsequent steps this
effectively becomes the master copy. The next step is to create a one-light copy
called the colour duplicate negative or internegative. It is from this that many
copies of the final theatrical release print are made. Copying from the
internegative is much simpler than copying from the interpositive directly because
it is a one-light process; it also reduces wear-and-tear on the interpositive print.
2. In the video workflow, the edit decision list that describes the video-based answer
print is used to edit the original colour tape (OCT) and create a high quality
colour master tape. For all subsequent steps this effectively becomes the master
copy. The next step uses a film recorder to read the colour master tape and copy
each video frame directly to film to create the final theatrical release print.
Finally the film is previewed, normally by the target audience, and any feedback may
result in further shooting or edits to the film.
Distribution
This is the final stage, where the movie is released to cinemas or, occasionally, to DVD,
VCD or VHS (though VHS tapes are less common now that more people own DVD
players). The movie is duplicated as required for theatrical distribution. Press kits,
posters, and other advertising materials are published and the movie is advertised.
The movie will usually be launched with a launch party, press releases, interviews with
the press, showings of the film at a press preview, and/or at film festivals. It is also
common to create a website to accompany the movie. The movie will play at selected
cinemas and the DVD is typically released a few months later. The distribution rights for
the film and DVD are also usually sold for worldwide distribution. Any profits are
divided between the distributor and the production company.
c) Breaking down all the sounds into diegetic and non-diegetic (on-screen &
off-screen) in the sound script according to situation, scenes and shots:
Diegetic sounds are sounds whose source belongs to the world of the story,
whether or not that source is visible in the frame. For example, if a school
bus passes through the shot, the audience obviously expects to hear the bus
and the children aboard it; these are on-screen diegetic sounds. If you also
add off-screen sounds such as a school bell ringing, the general ambience of
a school, children playing and fighting, or other vehicles passing in the
distance, it makes a great deal of difference: the sound surrounds the
audience, and even with the picture switched off the listener can still
follow the action. Sounds whose source lies outside the story world
altogether, such as the musical score or a narrator's voice, are called
non-diegetic. Together these layers build a sound stage and thus contribute
a great deal to the overall sound design of the film.
Take: The length of film exposed between each start and stop of the
camera. Thus, a shot that goes on for a long time without an edit is
called a "long take." During filming the same piece of action may
be filmed from the same camera setup several times (e.g., trying
for different emotions on the part of the actors); each time is called
a take.
Shot: A take, in part or in its entirety, that is used in the final edited
version of the film. In a finished film we refer to a piece of the
film between two edits as a shot. Whereas an edit can take the
story to a different time or a different place, the action within a
shot is spatially and temporally continuous. We can therefore
think of a shot as a "piece of time."
Shots are described by distance from the subject (ECU, CU, MCU,
MS, MLS, LS, ELS), by camera angle (low, high, eye-level), by
content (two-shot, three-shot, reaction shot, establishing shot), and
by any camera movement (pan, track, dolly, crane, tilt). The
average feature film contains between 400 and 1,000 shots.
Extreme long shot (ELS): The camera is very far away from the
subject, giving us a broad perspective. Often used to create an
"establishing shot," setting up a new scene.
Moving Shot: Produced when the camera moves. When the camera
remains fixed but swivels horizontally, it is called a pan;
when it swivels vertically, it is a tilt. When the camera
itself travels horizontally, it is a tracking shot. When the
camera travels in closer to a subject or away from a subject,
it is called a dolly shot. When the camera travels vertically,
it is a crane or boom shot.
Hand-Held Shot: The camera operator carries the camera while filming the
action; this has become possible over the last thirty years
with the invention of lighter cameras. Can be used with a
"Steadicam" system, a hydraulic harness device that keeps
the movement very smooth, almost as smooth as a dolly or
crane shot. Usually, however, hand-held shots are used
precisely for their lack of smoothness, to give the
impression of the point of view of a person walking, for
greater naturalism or to create suspense.
Zoom Shot: Technically not a moving shot, because the camera itself
does not move; the zoom is made by the zoom lens, which
has a variable focal length (see Section IV). The zoom
became a popular technique in the Sixties. On screen a
zoom-in resembles a dolly-in, but its telephoto optics
flatten the image as it moves in on the subject, unlike
the more realistic, dynamic look that a dolly or
hand-held shot retains.
Pan Shot: The view sweeps from left to right or from right to left.
Differs from the tracking shot in that the camera is not
mounted on a movable object but stays fixed. It pans on a
horizontal axis (short for "panorama"). In a Flash Pan or
Zip Pan the movement is very rapid, so that the filmed
action on the screen appears as only a blurred movement.
Tilt: Like a pan, but the camera tilts up or down along a vertical
plane.
Long Take: A shot that lasts a long time (as distinguished from a long
shot, where "long" refers to camera distance).
Mise-En-Scene: A theoretical term coming from the French, meaning, more
or less, "staging." In general, concerns everything within a
shot as opposed to the editing of shots; includes camera
movement, set design, props, direction of the actors,
composition of formal elements within the frame, lighting,
and so on. In film theory Mise-En-Scene is one of the two
major categories of film analysis; "Montage" (Editing) is
the other.
Cross-Cutting: Cutting back and forth between shots from two(or more)
scenes or locales. This alternation suggests that both
actions are occurring simultaneously.
Establishing Shot: Also called a master shot. A long shot usually at the beginning of
a scene, to establish the spatial relationships of the characters, actions, and spaces
depicted in subsequent closer shots.
Match Cut: A transition that involves a direct cut from one shot to
another, which "matches" it in action, subject matter, or
actual composition. This kind of transition is commonly
used to follow a character as s/he moves or appears to
move continuously. Film continuity is often dependent on
match cutting. Match cutting can also be used in a montage
sequence, to show a similar activity occurring over a
passage of time.
Montage: Editing; putting together shots and creating a "film reality."
Scene: A portion of the film in which all of the action occurs in the
same place and in the same time span. A scene may be
composed of any number of shots.
Splice: The physical point at which two shots are joined by glue or
tape during editing. A machine called a splicer aids in
creating a splice.
III. TRANSITIONS
Dissolve: The end of one shot merges slowly into the beginning of
the next; as the second shot becomes increasingly distinct,
the first slowly disappears. Traditional way of moving
from sequence to sequence.
Aspect Ratio: The proportions of the frame, the ratio of the width to the
height of the image area. The traditional aspect ratio for 35
mm. film is 1.33:1 and is known as Academy Aperture.
For wide-screen processes such as Cinemascope, the aspect
ratio may range from 1.65:1 to 2.55:1. All film gauges are
wider than they are high, a factor affecting formal
composition within the frame.
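The practical consequence of differing aspect ratios can be seen with a little arithmetic. The sketch below (function name and numbers are illustrative) computes the letterbox bars needed to fit a wide-screen image inside a narrower frame:

```python
# Illustrative arithmetic only: the black bars needed to fit a wide-screen
# image into a narrower frame while preserving the full image width.

def letterbox_height(frame_w, frame_h, image_ratio):
    """Pixels of bar at the top and at the bottom of the frame."""
    image_h = frame_w / image_ratio          # height the image occupies
    return round((frame_h - image_h) / 2)

# A 2.35:1 image inside a 1024x768 (1.33:1) frame leaves ~166 px of bar
# at the top and at the bottom.
print(letterbox_height(1024, 768, 2.35))  # 166
```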
Film Stock: The "raw," unexposed film that is loaded into the camera
for shooting. Film stock can be color or black-and-white,
"fast" or "slow."
Rack Focus: When the zone of sharp focus changes from foreground to
background (or vice-versa) within a single shot. The
viewer's attention is thus drawn from one plane to another.
Soft Focus: The image is softened by diffusing the light and reducing
the sharpness of the lens.
Superimposition: When two different shots are printed onto the same strip of
film. Every dissolve contains a brief superimposition.
V. MISCELLANEOUS TERMS
Sync sound (synchronized sound recording) refers to sound recorded at the time of the
filming of movies, and has been widely used in U.S. movies since the birth of sound
movies. The first animated film in which sync sound was used is Walt Disney's
Steamboat Willie. The characters and the boat dance in time with the music, and the gags
are sound-related.
In Hong Kong, sync sound was not widely used until the 1990s, as the generally noisy
environment and lower production budgets made such a method impractical.
Most Bollywood films from the 60s onwards do not employ this technique and for that
very reason the recent film Lagaan was noted for its use of Sync sound. The common
practice in Bollywood is to 'dub' over the dialogues at the Post-Production Stage.
Although the very first Indian talkie, Alam Ara (1931), saw the first use of sync sound in
India, and Indian films were regularly shot in sync sound until the 60s with the silent
Mitchell camera, the arrival of the Arri 2C, a noisy but more practical camera,
particularly for outdoor shoots, made 'dubbing' the norm, and the trend was never
reversed. [1]
Use of the technique has increased in recent times with developments in film technique
and instrumentation such as the blimped Arri camera. This system was used in the Hindi
movie Jodhaa Akbar (2008).
b) Going on location: the very first step in sync sound for a sound designer
is to go on location with the director to find out how noisy or quiet the
location is. On that basis he or she will decide whether to shoot at that
location at all, or which particular sequences will have to be dubbed.
c) Choosing microphones and recorders: depending upon the number of
characters, the location, dialogue overlaps and the post-production format
(i.e. mono, stereo or surround), the sound designer picks up all the
required equipment, such as a single-track or multitrack location recorder
like a Nagra or PD-6. If the sound engineer is only recording a pilot
(reference track), a single-track recorder will do; otherwise he or she
will use a multitrack recorder, with shotgun mics and lapel mics for
dialogue and room-tone pickup and dedicated stereo microphones for
ambiences and other effects recording. Depending on the type of location,
the sound engineer should also carry a sufficient quantity of
professional-quality cables, boom rods, windshields etc.
After the audiographer has recorded all the sounds, he or she should back up all the
data for post-production on a hard disk, DVD-RAM, CD or flash drive, whichever medium
is readily available and efficient. These days, digital hard-disk location recorders
such as the Cantar, Deva and PD-6 provide internal hard disks along with DVD-RAM and
flash-card support, so one can simply copy the data onto a flash card or DVD-RAM and
transfer it straight to a DAW for track laying and post-production.
Transferring data: Depending on which DAW (Pro Tools or Nuendo) you are using, sound is
first transferred to the internal hard drive of the machine you are working on. It is
highly recommended to keep the audio data on one partition and the video on another, to
avoid confusion and to maintain efficiency. After copying all the data to the hard
drive, an audio session is created in the DAW, with attributes such as sample rate, bit
depth, timecode and frame rate set so that the session locks to the picture. If the
audio files carry metadata (information embedded inside the files, such as scene
number, shot number, sample rate, bit depth, and number of channels, mono or stereo),
the files can be imported and sorted according to it. Once all the audio data has been
imported and a session is made, the tracks are organized into dialogues, ambiences and
FX to avoid confusion during further sound editing and cleaning. The video is also
imported, along with the pilot audio used to sync all the tracks; this video is a
telecine version of the actual shot footage, and we will be discussing telecine and
reverse telecine later.
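As a rough illustration of the kind of attributes a session must match before import, Python's standard `wave` module can report the basics of a WAV file. This is only a sketch; real DAWs read much richer metadata (scene and shot numbers, timecode) from Broadcast WAV chunks:

```python
# Sketch using Python's standard-library wave module to read the attributes
# a session must match before import. Purely illustrative; real DAWs read
# richer metadata (scene/shot numbers, timecode) from Broadcast WAV chunks.
import wave

def wav_attributes(path):
    """Return channel count, sample rate, bit depth and duration of a WAV file."""
    with wave.open(path, "rb") as w:
        return {
            "channels": w.getnchannels(),
            "sample_rate": w.getframerate(),
            "bit_depth": w.getsampwidth() * 8,
            "duration_s": w.getnframes() / w.getframerate(),
        }
```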
6) Learning about film formats:
Introduction to PAL and NTSC: In the 1950s, when the Western European countries were
planning to establish colour television, they were faced with the problem that the
already existing American NTSC standard would not fit the 50 Hz AC frequency of the
European power grids. In addition, NTSC demonstrated several weaknesses, including
colour-tone shifting under less-than-ideal transmission conditions. For these reasons
the development of the SECAM and PAL standards began. The goal was to provide a colour
TV standard with a picture frequency of 50 fields per second (50 hertz) and a better
colour picture than NTSC.
PAL was developed by Walter Bruch at Telefunken in Germany. The format was first
unveiled in 1963, with the first broadcasts beginning in the United Kingdom and
Germany in 1967.[1]
Telefunken was later bought by the French electronics manufacturer Thomson. Thomson
also bought the Compagnie Générale de Télévision where Henri de France developed
SECAM, historically the first European colour television standard. Thomson nowadays
also co-owns the RCA brand for consumer electronics products, which created the NTSC
colour TV standard before Thomson became involved.
The term "PAL" is often used informally to refer to a 625-line/50 Hz (576i, principally
European) television system, and to differentiate from a 525-line/60 Hz (480i, principally
North American/Central American/Japanese) "NTSC" system. Accordingly, DVDs are
labelled as either "PAL" or "NTSC" (referring informally to the line count and frame
rate) even though technically the European discs do not have PAL composite colour. This
usage may lead readers to believe that PAL defines image resolution, even though it
doesn't. The PAL colour system can be used in conjunction with any resolution and frame
rate, and various such combinations exist. NTSC, by contrast does define the video line
and frame format.
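One audible side effect of the 50 Hz standard is the so-called PAL speedup: 24 fps film is commonly played at 25 fps for television, shortening the runtime (and raising audio pitch) by 4 percent. A back-of-envelope sketch, with an illustrative function name:

```python
# Back-of-envelope sketch of the "PAL speedup" (function name and numbers
# are illustrative): running 24 fps film at 25 fps shortens runtime by 4%.

def pal_runtime_s(film_runtime_s, film_fps=24, tv_fps=25):
    """Runtime when every film frame is simply played at the TV rate."""
    total_frames = film_runtime_s * film_fps
    return total_frames / tv_fps

# A 100-minute feature runs 96 minutes on 50 Hz television.
print(pal_runtime_s(100 * 60) / 60)  # 96.0
```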
Telecine: Telecine (the same root as in 'cinema'; often pronounced "tele-seen") is the
process of transferring motion picture film into electronic form. The term is also used
to refer to the equipment used in the process.
Reverse telecine: Reverse telecine, in simple words, is the process of converting the
electronic form of a motion picture back to celluloid, i.e. transferring the electronic
form to physical film.
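For NTSC, the classic telecine scheme is 2:3 pulldown, which spreads four film frames over ten interlaced video fields to stretch 24 fps to roughly 30 fps. A toy sketch of the field cadence (not production code):

```python
# Toy sketch of NTSC 2:3 pulldown: four film frames (A, B, C, D) are spread
# over ten interlaced video fields, stretching 24 fps to ~30 fps.

def pulldown_fields(film_frames):
    """Repeat each frame for 2 or 3 fields in the alternating 2:3 cadence."""
    fields = []
    for i, frame in enumerate(film_frames):
        fields.extend([frame] * (2 if i % 2 == 0 else 3))
    return fields

print(pulldown_fields(["A", "B", "C", "D"]))
# ['A', 'A', 'B', 'B', 'B', 'C', 'C', 'D', 'D', 'D']
```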
ADR is recorded during an ADR session. An actor, usually the original actor on set, is
called to a sound studio equipped with video playback equipment and sound playback
and recording equipment. The actor wears headphones and is shown the film of the line
that must be replaced, and often he or she will be played the production sound recording.
The film is then projected several times, and the actor attempts to re-perform the line
while watching the image on the screen, while an ADR Recordist records the
performances. Several takes are made, and based on the quality of the performance and
sync, one is selected and edited by an ADR Editor for use in the film.
Sometimes, a different actor is used from the actual actor on set. One famous example is
the Star Wars character Darth Vader, portrayed by David Prowse. In postproduction, his
voice was replaced with that of James Earl Jones. Many fans therefore think of Jones as
"the man who played Darth Vader" rather than Prowse.
There are variations of the ADR process. ADR does not have to be recorded in a studio,
but can be recorded on location, with mobile equipment; this process was pioneered by
Matthew Wood of Skywalker Sound for Star Wars Episode I: The Phantom Menace.
ADR can also be recorded without showing the actor the image they must match, but
only by having him listen to the performance. This process was used for years at
Universal Studios.
An alternative method, called the "rythmo band" (or "lip-sync band"), was historically
used in Canada and France. This band provides a more precise guide for the actors,
directors and technicians, and can be used to complement the traditional headphone
method. The band is actually a clear 35 mm film leader on which the dialogue is written
in India ink, along with numerous indications for the actor (laughs, cries, length of
syllables, mouth sounds, mouth openings and closings, etc.). The lip-sync band is
projected in the studio and scrolls in perfect synchronization with the picture. Thanks
to the high efficiency of the lip-sync band, the number of retakes is reduced,
resulting in substantial savings in recording time (as much as 50% compared to
headphones-only recording).
Historically, the preparation of the lip-sync band is a long, tedious and complex process
involving a series of specialists in an old fashioned manual production line. Until
recently, such constraints have prevented this technique from being adopted
internationally, particularly in the United States.
Advanced software technology can now digitally reproduce the rythmo-band output in a
fraction of the time. This technology is being adopted in all markets and has been
shown to reduce the amount of studio time and the number of takes actors need to
achieve accurate synchronization.
Using the traditional ADR technique (headphones and video), actors can average 10-12
lines per hour. Using the newer digital rythmo-band technologies, actors can output
35-50 lines per hour, and much more with experience. Studio output with multiple actors
can therefore reach 200 to 400 lines per hour. dubStudio pioneered digital rythmo-band
technology, which has been used by several large dubbing studios.
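The throughput figures quoted above translate directly into studio time. A trivial sketch of that arithmetic (the rates are the article's own estimates, and the function name is made up):

```python
# Trivial arithmetic behind the throughput figures above; the rates are the
# article's own estimates, and the function name is made up.

def session_hours(lines, lines_per_hour):
    """Studio hours needed to record a given number of ADR lines."""
    return lines / lines_per_hour

# 400 lines at a headphones-only pace of ~11 lines/hour vs. a digital
# rythmo-band pace of ~40 lines/hour:
print(session_hours(400, 11))  # ~36 hours
print(session_hours(400, 40))  # 10.0 hours
```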
ADR can also be used to redub singing. This technique was used by, among many
others, Billy Boyd and Viggo Mortensen in The Lord of the Rings.
Adding or replacing non-vocal sounds, such as sound effects, is the task of a foley
artist.
8) Foley recording:
Intro to foley: Foley is a term used to describe the process of creating sound effects
to enhance the soundtrack of a film, video or multimedia work. The term foley is also
used to describe the place, such as the foley stage or foley studio, where the foley
process takes place. "Foley" gets its name from Jack Donovan Foley, a sound editor at
Universal Studios.
A foley studio usually consists of one big theatre with a desk inside, or at least two
separate rooms, where one serves as a control booth or recording room and the other is
used as a foley stage, where the sound effects are actually created. Separating the
foley stage from the recording room is very desirable because some foley effects, such
as smashing watermelons, can become very messy. However, a single room works far better
for communication, as it allows the editor, mixer and technician to be more involved in
the creation process, such as mic positioning and prop and surface preparation.
The need of replacing or enhancing sounds in a film production arises from the fact that,
very often, the original sounds captured during shooting are obstructed by noise or not
convincing enough to underscore and supplement the visual effect or action. For
example, fist-fighting scenes in an action movie are usually staged by the stunt actors and
therefore, such scenes do not have any sounds that are normally associated with fist
fighting. It is therefore necessary to enhance such fist-fighting scenes with artificially
created sounds that mimic fist fighting.
People who work in a foley studio are sometimes called foley artists or foley
technicians. Foley recording follows the same process as dubbing, but this time live FX
are recorded while watching the visual, taking care of sync and the other factors
explained above.
Music production for a film is handled by the music producer or music director, who is
responsible for the full musical soundtrack of the film. A single person may handle all
the music, or several composers may work on one film. The music composer creates the
melodies; the music arranger/programmer handles the musical arrangements, such as
keyboards, percussion, and other instruments, and puts them together to create a backing
for the main melody; the lyricist writes the lyrics for the songs; and the music
recordist/mixing engineer does the recording, editing, and mixing of the music tracks,
whether for a song or a background score. Background scoring differs from songs in that
it contains only a musical arrangement, without any lyrics. As soon as the main theme or
composition is ready, the composer gives a dummy or scratch version to the arranger,
who builds all the musical parts into the complete song. These days musical
arrangements are done mostly with MIDI technology. Once the arrangements are ready,
the music director calls the musicians into a studio; if they are not all available on the
same day or at the same time, the music is recorded on a multitrack system, part by part.
After all the score and songs are recorded, the music engineer edits and cleans each track
to suit the film. Finally the music is mixed and mastered, either in stereo for audio CDs
or in 5.1 surround for theatrical release.
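The same score often has to serve both the 5.1 theatrical release and a stereo CD release. A common way to derive the stereo version is to fold the centre and surround channels into the front pair. The sketch below uses the standard ITU-R BS.775 convention (the channel layout and the -3 dB coefficient of roughly 0.7071 are from that recommendation; the function itself and the sample data are illustrative, not a production tool):

```python
# Fold a 5.1 mix down to stereo using ITU-R BS.775-style coefficients.
# Each channel argument is a list of audio samples; by convention the
# LFE channel is discarded in a simple two-channel downmix.
def downmix_5_1_to_stereo(fl, fr, c, lfe, ls, rs, coeff=0.7071):
    left = [l + coeff * cs + coeff * s for l, cs, s in zip(fl, c, ls)]
    right = [r + coeff * cs + coeff * s for r, cs, s in zip(fr, c, rs)]
    return left, right
```

In practice a mastering engineer would also apply gain reduction or limiting afterwards, since the summed samples can exceed full scale.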
10) Sound design:
What passes for "great sound" in films today is too often merely loud sound. High
fidelity recordings of gunshots and explosions, and well fabricated alien creature
vocalizations do not constitute great sound design. A well-orchestrated and recorded
piece of musical score has minimal value if it hasn’t been integrated into the film as a
whole. Giving the actors plenty of things to say in every scene isn’t necessarily doing
them, their characters, or the movie a favor. Sound, musical and otherwise, has value
when it is part of a continuum, when it changes over time, has dynamics, and resonates
with other sound and with other sensory experiences.
What I propose is that the way for a filmmaker to take advantage of sound is not simply
to make it possible to record good sound on the set, or simply to hire a talented sound
designer/composer to fabricate sounds, but rather to design the film with sound in mind,
to allow sound’s contributions to influence creative decisions in the other crafts. Films as
different from "Star Wars" as "Citizen Kane," "Raging Bull," "Eraserhead," "The
Elephant Man," "Never Cry Wolf" and "Once Upon A Time In The West" were
thoroughly "sound designed," though no sound designer was credited on most of them.
Does every film want, or need, to be like Star Wars or Apocalypse Now? Absolutely
not. But lots of films could benefit from those models. Sidney Lumet said in an interview
that he had been amazed at what Francis Coppola and Walter Murch had been able to
accomplish in the mix of "Apocalypse Now." Well, what was great about that mix began
long before anybody got near a dubbing stage. In fact, it began with the script, and with
Coppola’s inclination to give the characters in "Apocalypse" the opportunity to listen to
the world around them.
Many directors who like to think they appreciate sound still have a pretty narrow idea of
the potential for sound in storytelling. The generally accepted view is that it’s useful to
have "good" sound in order to enhance the visuals and root the images in a kind of
temporal reality. But that isn’t collaboration, it’s slavery. And the product it yields is
bound to be less complex and interesting than it would be if sound could somehow be set
free to be an active player in the process. Only when each craft influences every other
craft does the movie begin to take on a life of its own.
What follows is a list of some of the bleak realities faced by those of us who work in
film sound, and some suggestions for improving the situation.
Pre-Production
If a script has lots of references in it to specific sounds, we might be tempted to jump to
the conclusion that it is a sound-friendly script. But this isn’t necessarily the case. The
degree to which sound is eventually able to participate in storytelling will be determined
more by the use of time, space, and point of view in the story than by how often the
script mentions actual sounds. Most of the great sound sequences in films are "pov"
sequences. The photography, the blocking of actors, the production design, art direction,
editing, and dialogue have been set up such that we, the audience, are experiencing the
action more or less through the point of view of one, or more, of the characters in the
sequence. Since what we see and hear is being filtered through their consciousness, what
they hear can give us lots of information about who they are and what they are feeling.
Figuring out how to use pov, as well as how to use acoustic space and the element of
time, should begin with the writer. Some writers naturally think in these terms, most
don’t. And it is almost never taught in film writing courses.
Serious consideration of the way sound will be used in the story is typically left up to the
director. Unfortunately, most directors have only the vaguest notions of how to use
sound because they haven’t been taught it either. In virtually all film schools sound is
taught as if it were simply a tedious and mystifying series of technical operations, a
necessary evil on the way to doing the fun stuff.
Production
On the set, virtually every aspect of the sound crew’s work is dominated by the needs of
the camera crew. The locations for shooting have been chosen by the Director, DP, and
Production Designer long before anyone concerned with sound has been hired. The sets
are typically built with little or no concern for, or even awareness of, the implications for
sound. The lights buzz, the generator truck is parked way too close. The floor or ground
could easily be padded to dull the sound of footsteps when feet aren’t in the shot, but
there isn’t enough time. The shots are usually composed, blocked, and lit with very little
effort toward helping either the location sound crew or the post production crew take
advantage of the range of dramatic potential inherent in the situation. In nearly all cases,
visual criteria determine which shots will be printed and used. Any moment not
containing something visually fascinating is quickly trimmed away.
There is rarely any discussion, for example, of what should be heard rather than seen. If
several of our characters are talking in a bar, maybe one of them should be over in a dark
corner. We hear his voice, but we don’t see him. He punctuates the few things he says
with the sound of a bottle he rolls back and forth on the table in front of him. Finally he
puts a note in the bottle and rolls it across the floor of the dark bar. It comes to a stop at
the feet of the characters we see. This approach could be played for comedy, drama, or
some of both as it might have been in "Once Upon A Time In The West." Either way,
sound is making a contribution. The use of sound will strongly influence the way the
scene is set up. Starving the eye will inevitably bring the ear, and therefore the
imagination, more into play.
Post Production
Finally, in post, sound cautiously creeps out of the closet and attempts meekly to assert
itself, usually in the form of a composer and a supervising sound editor. The composer is
given four or five weeks to produce seventy to ninety minutes of great music. The
supervising sound editor is given ten to fifteen weeks to smooth out the production
dialog; to spot, record, and edit ADR; and to try to wedge a few specific sound effects
into sequences that were never designed to use them, being careful to cover every
possible option the Director might want because there "isn't any time" for the Director
to make choices before the mix. Meanwhile, the film is being continuously re-edited. The Editor
and Director, desperately grasping for some way to improve what they have, are
meticulously making adjustments, mostly consisting of a few frames, which result in the
music, sound effects, and dialog editing departments having to spend a high percentage
of the precious time they have left trying to fix all the holes caused by new picture
changes.
The dismal environment surrounding the recording of ADR is in some ways symbolic of
the secondary role of sound. Everyone acknowledges that production dialog is almost
always superior in performance quality to ADR. Most directors and actors despise the
process of doing ADR. Everyone goes into ADR sessions assuming that the product will
be inferior to what was recorded on the set, except that it will be intelligible, whereas the
set recording (in most cases where ADR is needed) was covered with noise and/or is
distorted.
This lousy attitude about the possibility of getting anything wonderful out of an ADR
session turns, of course, into a self-fulfilling prophecy. Essentially no effort is typically
put into giving the ADR recording experience the level of excitement, energy, and
exploration that characterized the film set when the cameras were rolling. The result is
that ADR performances almost always lack the "life" of the original. They’re more-or-
less in sync, and they’re intelligible. Why not record ADR on location, in real-world
places which will inspire the actors and provide realistic acoustics? That would be taking
ADR seriously. Instead, like so many other sound-centered activities in movies, ADR is
treated as basically a technical operation, to be gotten past as quickly and cheaply as possible.
It seems to me that one element of writing for movies stands above all others in terms
of making the eventual movie as "cinematic" as possible: establishing point of view.
The audience experiences the action through its identification with characters. The
writing needs to lay the groundwork for setting up pov before the actors, cameras,
microphones, and editors come into play. Each of these can obviously enhance the
element of pov, but the script should contain the blueprint.
Let’s say we are writing a story about a guy who, as a boy, loved visiting his father at
the steel mill where he worked. The boy grows up and seems to be pretty happy with his
life as a lawyer, far from the mill. But he has troubling, ambiguous nightmares that
eventually lead him to go back to the town where he lived as a boy in an attempt to find
the source of the bad dreams.
The description above doesn’t say anything specific about the possible use of sound in
this story, but I have chosen basic story elements which hold vast potential for sound.
First, it will be natural to tell the story more-or-less through the pov of our central
character. But that’s not all. A steel mill gives us a huge palette for sound. Most
importantly, it is a place which we can manipulate to produce a set of sounds which
range from banal to exciting to frightening to weird to comforting to ugly to beautiful.
The place can therefore become a character, and have its own voice, with a range of
"emotions" and "moods." And the sounds of the mill can resonate with a wide variety
of elements elsewhere in the story. None of this good stuff is likely to happen unless we
write, shoot, and edit the story in a way that allows it to happen.
The element of dream in the story swings a door wide open to sound as a collaborator.
In a dream sequence we as film makers have even more latitude than usual to modulate
sound to serve our story, and to make connections between the sounds in the dream and
the sounds in the world for which the dream is supplying clues. Likewise, the "time
border" between the "little boy" period and the "grown-up" period offers us lots of
opportunities to compare and contrast the two worlds, and his perception of them. Over
a transition from one period to the other, one or more sounds can go through a
metamorphosis. Maybe as our guy daydreams about his childhood, the rhythmic clank of
a metal shear in the mill changes into the click clack of the railroad car taking him back
to his home town. Any sound, in itself, only has so much intrinsic appeal or value. On
the other hand, when a sound changes over time in response to elements in the larger
story, its power and richness grow exponentially.
In recent years there has been a trend, which may be an insidious influence of bad
television, toward non-stop dialog in films. The wise old maxim that it's better to say it
with action than words seems to have lost some ground. Quentin Tarantino has made
some excellent films which depend heavily on dialog, but he’s incorporated scenes
which use dialog sparsely as well.
There is a phenomenon in movie making that my friends and I sometimes call the
"100% theory." Each department-head on a film, unless otherwise instructed, tends to
assume that it is 100% his or her job to make the movie work. The result is often a
logjam of uncoordinated visual and aural product, each craft competing for attention, and
often adding up to little more than noise unless the director and editor do their jobs
extremely well.
Dialogue is one of the areas where this inclination toward density is at its worst. On top
of production dialog, the trend is to add as much ADR as can be wedged into a scene.
Eventually, all the space not occupied by actual words is filled with grunts, groans, and
breathing (supposedly in an effort to "keep the character alive"). Finally the track is
saved (sometimes) from being a self parody only by the fact that there is so much other
sound happening simultaneously that at least some of the added dialog is masked. If your
intention is to pack your film with wall-to-wall clever dialog, maybe you should consider
doing a play.
The contrast between a sound heard at a distance, and that same sound heard close-up
can be a very powerful element. If our guy and an old friend are walking toward the mill,
and they hear, from several blocks away, the sounds of the machines filling the
neighborhood, there will be a powerful contrast when they arrive at the mill gate. As a
former production sound mixer, if a director had ever told me that a scene was to be shot
a few blocks away from the mill set in order to establish how powerfully the sounds of
the mill hit the surrounding neighborhood, I probably would have gone straight into a
coma after kissing his feet. Directors essentially never base their decisions about where
to shoot a scene on the need for sound to make a story contribution. Why not?
It’s wonderful when a movie gives you the sense that you really know the places in it.
That each place is alive, has character and moods. A great actor will find ways to use the
place in which he finds himself in order to reveal more about the person he plays. We
need to hear the sounds that place makes in order to know it. We need to hear the actor’s
voice reverberating there. And when he is quiet we need to hear the way that place will
be without him.
Let’s assume we as film makers want to take sound seriously, and that the first issues
have already been addressed:
1) The desire exists to tell the story more-or-less through the point of view of one or
more of the characters.
2) Locations have been chosen, and sets designed which don’t rule out sound as a
player, and in fact, encourage it.
Here are some ways to tease the eye, and thereby invite the ear to the party:
Slow Motion
Raging Bull and Taxi Driver contain some obvious, and some very subtle uses of slow
motion. Some of it is barely perceptible. But it always seems to put us into a dream-
space, and tell us that something odd, and not very wholesome, is happening.
Whenever we as an audience are put into a visual "space" in which we are encouraged
to "feel" rather than "think," what comes into our ears can inform those feelings and
magnify them.
We, the film makers, are all sitting around a table in pre-production, brainstorming
about how to manufacture the most delectable bait possible, and how to make it seem like
it isn’t bait at all. (Aren’t the most interesting stories always told by guys who have to be
begged to tell them?) We know that we want to sometimes use the camera to withhold
information, to tease, or to put it more bluntly: to seduce. The most compelling method
of seduction is inevitably going to involve sound as well.
Ideally, the unconscious dialog in the minds of the audience should be something like:
"What I’m seeing isn’t giving me enough information. What I’m hearing is ambiguous,
too. But the combination of the two seems to be pointing in the direction of a vaguely
familiar container into which I can pour my experience and make something I never
before quite imagined." Isn’t it obvious that the microphone plays just as important a role
in setting up this performance as does the camera?
Walter Murch, film editor and sound designer, uses lots of unconventional techniques.
One of them is to spend a certain period of his picture editing time not listening to the
sound at all. He watches and edits the visual images without hearing the sync sound
which was recorded as those images were photographed. This approach can ironically be
a great boon to the use of sound in the movie. If the editor can imagine the sound
(musical or otherwise) which might eventually accompany a scene, rather than listen to
the rough, discontinuous, often annoying sync track, then the cutting will be more likely
to leave room for those beats in which sound other than dialog will eventually make its
contribution.
Sound’s Talents
Music, dialogue, and sound effects can each do many storytelling jobs: suggest a mood,
set a pace, indicate a place or a period, clarify the plot, define a character, heighten or
diminish realism, and more.
At any given moment in a film, sound is likely to be doing several of these things at
once.
But sound, if it’s any good, also has a life of its own, beyond these utilitarian functions.
And its ability to be good and useful to the story, and powerful, beautiful and alive will
be determined by the state of the ocean in which it swims, the film. Try as you may to
paste sound onto a predetermined structure, the result will almost always fall short of
your hopes. But if you encourage the sounds of the characters, the things, and the places
in your film to inform your decisions in all the other film crafts, then your movie may just
grow to have a voice beyond anything you might have dreamed.
This dream has been a difficult one to realize, and in fact has made little headway since
the early 1970s. The term sound designer has come to be associated simply with using
specialized equipment to make "special" sound effects. On "THX-1138" and "The
Conversation" Walter Murch was the Sound Designer in the fullest sense of the word.
The fact that he was also a Picture Editor on "The Conversation" and "Apocalypse Now"
put him in a position to shape those films in ways that allowed them to use sound in an
organic and powerful way. No other sound designers on major American films have had
that kind of opportunity.
So, the dream of giving sound equal status to image is deferred. Someday the Industry
may appreciate and foster the model established by Murch. Until then, whether you cut
the dialog, write the script, record music, perform foley, edit the film, direct the film, or
do any one of a hundred other jobs, if you shape sound, edit sound, or even consider
sound when making a creative decision in another craft, you are, at least in a limited
sense, designing sound for the movie, and designing the movie for sound.
Re-recording is also called the final mix of the film, and it is the last stage of film sound
production. When everything else is done, including track laying, sound design, and
music scoring, all the material goes to a final mix engineer, who mixes and masters the
film in stereo, DTS, Dolby Digital 5.1 surround, or whichever format the
producer/director wants. Let's assume the film is going to be mixed in Dolby Digital 5.1
surround. The basic preparation involves importing all the material, e.g. dialogues,
music, sound effects, and ambiences, into separate sessions and premixing each into a
single 5.1 stem (in simple words, a group), so that there is a separate stem for dialogue,
music, effects, and ambience. These individual stems are then imported into a master
session, where everything is mixed properly and the mix is captured in a final master 5.1
stem, which is transferred to an MOD (Magneto-Optical Disk) for mastering. The final
mastered MOD is then sent for optical transfer onto a sound negative, which is later
processed with the picture negative to produce a positive release print, called the married
print of the film, which is what we see in theatres.
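The stem-and-master structure described above can be sketched in miniature. In this hypothetical example (the stem names follow the text; the plain channel-by-channel summing is an illustration of the idea, not of how a real mixing console behaves), each premixed 5.1 stem is a mapping from the six channel names to equal-length sample buffers, and the master stem is simply the sum of the stems on each channel:

```python
# Sum premixed 5.1 stems (e.g. dialogue, music, fx, ambience) into a master stem.
# Each stem maps the six 5.1 channel names to equal-length lists of samples.
CHANNELS = ["L", "R", "C", "LFE", "Ls", "Rs"]

def mix_to_master(*stems):
    master = {}
    for ch in CHANNELS:
        buffers = [stem[ch] for stem in stems]
        # Add the stems sample by sample on this channel.
        master[ch] = [sum(samples) for samples in zip(*buffers)]
    return master
```

Keeping the stems separate until this final step is what lets the mix engineer rebalance, say, dialogue against music without touching the effects, and it also makes it easy to deliver alternate versions (a music-and-effects stem for foreign dubs, for instance).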
Post Production Workflow
It takes many hours of footage to make a typical two-hour movie. Many scenes will be
eliminated, or cut, in order to get the very best shots possible. Sound is recorded in
various locations and in different formats before, during, and after the filming, and
special effects are added after filming, too.
The post production workflow revolves around combining, or mixing, all these pieces
into one polished, beautifully executed picture. The post production workflow may take
longer than the actual filming process did.
Other motion picture productions – television shows and commercials, corporate and
music videos – follow the same basic post production workflow although on a smaller
scale.
It is during the post production workflow that the film editor works with the director to
pick and choose the best scenes possible or when to reshoot if necessary. Special effects
are added during the post production workflow.
The audio aspects of post production workflow become mixed with the video imagery
during this time. Sound and image are linked and reviewed to make sure the timing is
appropriate and that sound quality and volume are compatible with the filmed sequences.
The music director may have been at work on the score during the entire time of filming
but it's during the post production workflow that the score and the story come together.
Three aspects of sound are combined at this stage of production – the dialog, score, and
special sound and Foley effects. The precise timing required to mix the sound of these
three tracks into the visual imagery is a meticulous and time-consuming process but it is
vital to get this mixed as flawlessly as possible.
Once a final cut has been developed, the post production workflow turns to a larger
audience for feedback. The production will be screened by a targeted audience and
further shooting or editing will be made based upon feedback given by the target
audience.
Distribution and marketing are the final stages in the post production workflow involved
with making a motion picture. In the case of a movie, a premiere showing and party is
often a major publicity event accompanied by fanfare, press kits, posters, TV advertising,
reviews, and usually a website.
The very last stage in the post production workflow is the release of the production in the
form of DVD for sale to the general public. The DVD release often comes several months
after the movie's cinematic release.