
The Loudness War  (July 2018)


The Quest for Loudness

Undoubtedly, you have encountered the proclamation “More is better,” perhaps regarding the size of a flat screen TV, computer RAM, signal-to-noise ratio—you name it.  You have also encountered situations where “More is better” isn’t true (snow in one’s driveway comes to mind).


The recording industry hasn’t been immune to “More is better.”  There has been a long history of deliberately increasing loudness, based on the assumption that louder tracks are more likely to capture listeners’ attention and thereby increase sales.  By the 1920s, for example, some labels were producing louder records to take advantage of gramophones that had no volume knob.  (According to multiple sources, the expression “put a sock in it” first appeared in 1919 and referred to shoving a sock into a gramophone’s horn to reduce the volume of an overly loud record.)  Radio stations sought louder records to avoid the appearance of “dead air” during soft passages, fearing that listeners would bypass the station while searching for something to listen to.  With the advent of 45s in the 1940s, louder records were believed to grab the attention not only of listeners but also of radio station program managers and jukebox operators, who would be more likely to add them to their playlists and jukeboxes.  With compilation LPs, artists and record producers began insisting that their songs be remastered to match the volume of the other tracks, lest they be overlooked by listeners.  And with television commercials, the more-loudness-is-better tenet is blatantly obvious (making the Mute button one of the most useful inventions ever).


Interestingly, after all this pursuit of loudness, no study has established a correlation between loudness and sales.  But the lack of a demonstrable correlation hasn’t dampened the quest.


Loudness wasn’t much of an issue for listeners until the mid-1990s, since they controlled the volume knob (or socks).  That changed with the CD.  Unlike records, where excessive recording volume caused the tonearm to skip during playback, digital recordings weren’t constrained by the mechanics of the playback system.  CDs started getting noticeably louder in the early 1990s as amplitudes were pushed toward the digital format’s limits.  In the mid-1990s, CDs began appearing in which the quieter portions of audio tracks were raised relative to the louder portions by means of audio compression, thereby increasing the average loudness throughout the track.


The Downsides of Using Audio Compression to Increase Loudness

Using audio compression to alter loudness certainly has its uses, as described in the article Audio Compression.  Music that has been compressed to produce a more uniform loudness plays better in noisy environments such as dance clubs or sports arenas.  Unfortunately, compression applied solely for loudness’s sake (unless it is an intentional part of a musical style, as in heavy metal) has significant disadvantages.
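As a rough sketch of the mechanism (real compressors add attack and release envelopes, soft knees, and makeup gain; the threshold and ratio below are illustrative values only, not any product’s settings):

```python
import math

def compress(sample, threshold=0.5, ratio=4.0):
    """Downward compression: the portion of the signal above the
    threshold is attenuated by the ratio, shrinking the gap between
    loud and quiet passages."""
    mag = abs(sample)
    if mag <= threshold:
        return sample  # quiet material passes through unchanged
    return math.copysign(threshold + (mag - threshold) / ratio, sample)

# A loud peak of 0.9 is pulled down toward 0.6, while a quiet 0.3
# passes through untouched.
print(compress(0.9), compress(0.3))
```

Applying makeup gain after compression is what actually raises the average loudness: with the peaks reduced, the whole signal can be turned up without exceeding the system’s limits.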


The first downside is the reduced crest factor (the ratio of the track’s peak amplitude to its root-mean-square [RMS] average, usually expressed in decibels) and the loss of amplitude variability.  Music that plays at a similar volume throughout has lost its dynamics—the changes in volume intended by the composer and performer.  Various writers have described the result as a loss of shading, depth, clarity, and punch, or as tinny, lifeless, shrill, denuded, and emotionless, to the point of ruining the music altogether.
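Crest factor is easy to compute from the samples themselves.  The sketch below (a minimal illustration, not any standard metering tool) measures a pure sine tone, whose crest factor is the textbook √2, about 3 dB; heavily compressed material, with its peaks squeezed toward the average, yields a much smaller value.

```python
import math

def crest_factor_db(samples):
    """Crest factor: peak amplitude relative to the RMS level, in decibels."""
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(peak / rms)

# One second of a 440 Hz sine tone at a 44.1 kHz sample rate:
sine = [math.sin(2 * math.pi * 440 * n / 44100) for n in range(44100)]
print(round(crest_factor_db(sine), 2))  # 3.01
```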


To illustrate this, consider Maurice Ravel’s Bolero, the waveform for which appears in Figure 1.  During its 15 minutes, it begins very quietly, slightly increases volume at 2:00 and 4:15, noticeably increases volume at 6:30, 10:30, and 13:00, and reaches its climax beginning about 14:20 (the labels at the top of the display mark 30-second increments).

Figure 1.  Bolero (Ravel), uncompressed.


With compression applied (Figure 2), the volume increases are observable at the same points, but each increment is smaller than in the uncompressed version.  This is particularly apparent during the last 40 seconds:  While the climax is louder than what preceded it, it’s not louder by much and has lost its dramatic impact.

Figure 2.  Bolero (Ravel), compressed.


Figure 3 depicts what’s left of Bolero when the music is severely compressed (don’t try this at home, boys and girls!).  Volume increases are apparent in only three places instead of the original six, and there is virtually no difference in peak amplitude from about 10:30 to the end of the piece.  The climax has been obliterated.

Figure 3.  Bolero (Ravel), severely compressed.


The second downside is distortion.  In Figures 1-3, the maximum amplitude remained within the digital system’s range, regardless of the amount of compression applied.  Unfortunately, the quest for loudness has at times ignored the absolute limits of digital systems.  Adding more gain pushes wave peaks beyond the limits, resulting in clipping and noticeable, unpleasant distortion (see Figure 4, where wave peaks and troughs are flattened against the top and bottom of the graphs in several places).  For unknown reasons, some labels find it acceptable to release commercial recordings with distortion included.  (It should be noted that clipping is sometimes employed by composers or performers to intentionally produce distortion; it is one effect used to create “fuzz” for guitars, bass guitars, and other instruments.)

Figure 4.  Clipped waves.
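The flattening visible in Figure 4 is the result of a hard clamp at the system’s full-scale limit, which can be sketched in a few lines (the 6 dB of added gain below is an arbitrary amount chosen simply to force clipping):

```python
import math

def hard_clip(sample, limit=1.0):
    """Hard clipping: anything beyond the full-scale limit is flattened
    against it, squaring off the wave peaks and troughs."""
    return max(-limit, min(limit, sample))

# Doubling a full-scale sine (+6 dB of gain) drives its peaks past the
# limit; roughly two-thirds of the samples end up flattened.
gained = [2.0 * math.sin(2 * math.pi * n / 1000) for n in range(1000)]
clipped = [hard_clip(s) for s in gained]
flattened = sum(1 for s in clipped if abs(s) == 1.0)
print(flattened / len(clipped))
```

The abrupt corners of a clipped wave introduce high-frequency harmonics that weren’t in the original signal, which is what the ear hears as distortion.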


The third downside is that compressed music has been prevalent in commercial releases for so long that newer generations of listeners aren’t aware of how good a recording can sound when it hasn’t been compressed or pushed beyond the digital system’s limits.


Beware of “Digitally Remastered”

A lot of music re-released in digital form bears the prominent label “Digitally Remastered,” implying that this somehow makes the newer version better than the original.  This could indeed be true if the remastering involved softening notes that were inadvertently played or sung too loudly as captured in the original mix, adding gain to short passages that were played or sung too softly, removing pervasive noise, perhaps adding a bit of reverb to provide a greater perception of sonic depth, etc.


But “Digitally Remastered” could also reflect the intent to increase loudness throughout the track by applying audio compression.  If so, the track is subject to the negative consequences described above, and the remastered digital version may sound worse than the original analogue version.  The same risk exists with albums that have been re-released on vinyl, if the re-released version was made from a compressed digital master rather than the original tape master.


Turning of the Tide in the Loudness War

Between the mid-1990s and the mid-2010s, sound engineers, artists, audiophiles, and occasionally recording industry managers voiced growing objections in interviews, articles, and online campaigns:  loudness and the use of compression had gotten out of hand.  Continually loud recordings lacked the meaningful crest factor that would have enriched the music, listening to continually loud music was fatiguing, and the deliberate inclusion of distortion should never have approached the norm.


An interesting turn of events occurred with the 2008 release of Metallica’s album Death Magnetic.  While the album was criticized for its excessive compression, the same songs with significantly less compression had been released a year earlier in the game “Guitar Hero III:  Legends of Rock.”  Fans could hear the difference between the two versions, and thousands signed an online petition for the band to release a non-compressed version in digital media.


Audiologists have chimed in, expressing their concerns that the loudness of post-1995 albums increases the risk of damage to listeners’ hearing, especially that of children.  It’s not just that the peak volume is loud, but rather that the volume is continuously near peak.


The “war on loudness” has had some effect.  In response to listener complaints, radio stations attenuate loud music to remove volume conflicts with advertisements and speech.  By the middle of the first decade of this century, record labels had begun favoring mixes with less compression for release.  Of special note, the mixing engineer on Daft Punk’s 2013 album Random Access Memories opted for a less compressed version, and the album won several Grammy Awards, including Best Engineered Album, Non-Classical.  Recognizing that continually adjusting the volume to compensate for loudness variations among songs is inconvenient for customers, if not outright annoying, most online music services now perform loudness normalization, making the volume of all songs offered comparable.  A byproduct of normalization (in some cases intentional) is that any appeal based solely on loudness is negated; as a result, severely compressed songs sound insubstantial.  This may reduce the self-inflicted market pressure to severely compress music.
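As a rough illustration of how a service might compute the gain applied to each track:  real services measure perceived loudness in LUFS per the ITU-R BS.1770 algorithm, which frequency-weights the signal and gates out silence; the sketch below substitutes plain RMS level, and the −14 dB target is merely a commonly cited ballpark, not any particular service’s setting.

```python
import math

def normalization_gain_db(samples, target_db=-14.0):
    """Gain (in dB) that brings a track's RMS level to the target.
    Plain RMS stands in here for a true loudness measure (LUFS)."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return target_db - 20 * math.log10(rms)

# A quiet track (RMS -20 dB) is turned up by 6 dB.  Every track lands
# at the same target, so a severely compressed track gains no loudness
# advantage over its neighbors in the playlist.
quiet = [0.1] * 1000
print(round(normalization_gain_db(quiet), 1))  # 6.0
```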


Analyses of crest factors and dynamic ranges over time suggest that the loudness trend may have peaked around 2005 and has subsequently reversed itself, as a definite increase in crest factor has been observed in albums produced after 2005.  It would appear that momentum is favoring the war on loudness rather than the quest for loudness.


