Audio mixing explained

Audio mixing is the process by which multiple sounds are combined into one or more audio channels. In the process, a source's volume level, frequency content, dynamics, and panoramic position are manipulated or enhanced. This practical, aesthetic, or otherwise creative treatment is applied to produce a finished version that is appealing to listeners.
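In digital terms, the core operation described above amounts to scaling each source's samples by a gain, distributing them across the output channels according to a pan position, and summing the results. The sketch below illustrates this for mono sources mixed to stereo; the function names, the dict-based track format, and the choice of an equal-power pan law are assumptions made for illustration, not a standard API.

```python
import math

def pan_gains(pan):
    """Equal-power pan law: pan ranges from -1.0 (hard left) to +1.0 (hard right).

    Maps the pan position onto a quarter circle so that the combined
    acoustic power of the two channels stays constant as a source moves.
    """
    angle = (pan + 1.0) * math.pi / 4.0  # 0 .. pi/2
    return math.cos(angle), math.sin(angle)  # (left gain, right gain)

def mix(tracks):
    """Mix mono tracks to stereo.

    Each track is a dict with 'samples' (floats in -1.0..1.0),
    'gain' (linear volume scale), and 'pan' (-1.0..1.0).
    Returns (left, right) sample lists, hard-clipped to -1.0..1.0.
    """
    length = max(len(t["samples"]) for t in tracks)
    left = [0.0] * length
    right = [0.0] * length
    for t in tracks:
        gl, gr = pan_gains(t["pan"])
        g = t["gain"]
        for i, s in enumerate(t["samples"]):
            left[i] += s * g * gl   # scale by fader gain and pan gain, then sum
            right[i] += s * g * gr
    clip = lambda x: max(-1.0, min(1.0, x))  # crude protection against overload
    return [clip(x) for x in left], [clip(x) for x in right]
```

A real mixer would also apply equalisation and dynamics processing per track before summing, but the gain-pan-sum structure is the same.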

Audio mixing is practiced for music, film, television, and live sound. The process is generally carried out by a mixing engineer operating a mixing console or digital audio workstation.

Recorded music

See main article: Audio mixing (recorded music). Before the introduction of multitrack recording, all the sounds and effects that were to be part of a recording were mixed together at one time during a live performance. If the sound blend was not satisfactory, or if one musician made a mistake, the selection had to be performed over until the desired balance and performance were obtained. However, with the introduction of multitrack recording, the production phase of a modern recording has radically changed into one that generally involves three stages: recording, overdubbing, and mixdown.[1]

Film and television

During production, the dialogue recording of actors is handled by a person variously known as the location sound mixer, production sound mixer, or some similar designation. That person is a department head with a crew consisting of a boom operator and sometimes a cable person.

Audio mixing for film and television is a process during the post-production stage of a moving image program by which a multitude of recorded sounds are combined. In the editing process, each source's signal level, frequency content, dynamics, and panoramic position are commonly manipulated and effects added. In video production, this is called sweetening.

The process takes place on a mixing stage, typically in a studio or purpose-built theater, once the picture elements are edited into a final version. Normally the engineers will mix four main audio elements called stems: speech (dialogue, ADR, voice-overs, etc.), ambience (or atmosphere), sound effects, and music. As multi-machine synchronization became available, filmmakers were able to split elements into multiple reels. With the advent of digital workstations and growing complexity, track counts in excess of 100 became common.

Dialogue intelligibility

Since the 2010s, critics and members of the audience have reported that dialogue in films tends to be increasingly more difficult to understand than in older films, to the point where viewers need to rely on subtitles to understand what is being said. Ben Pearson of SlashFilm attributed this to a combination of factors, only some of which can be addressed through audio mixing.[2]

Live sound

See main article: Live sound mixing. Live sound mixing is the process of electrically blending together multiple sound sources at a live event using a mixing console. Sounds used include those from instruments, voices, and pre-recorded material. Individual sources may be equalised and routed to effect processors to ultimately be amplified and reproduced via loudspeakers.[3] The live sound engineer balances the various audio sources in a way that best suits the needs of the event.[4]

Notes and References

  1. Huber, David Miles (2001). Modern Recording Techniques. Focal Press. p. 321. ISBN 0-240-80456-2.
  2. Pearson, Ben (2021-11-30). "Here's Why Movie Dialogue Has Gotten More Difficult To Understand (And Three Ways To Fix It)". SlashFilm.com. Retrieved 2021-12-06.
  3. Leonard Audio Institute. "Mixing Principles". Retrieved 2013-01-03.
  4. Crosby, Tim (28 April 2008). "How Live Sound Engineering Works". Retrieved 2013-03-03.