I was really happy to see a link to the Loudness War Wikipedia page on Hacker News getting a lot of engagement. While one guy clearly didn’t bother reading the article and spent a good 250 words complaining about train announcement noise levels, there were still plenty of good, thought-provoking comments.
One centered on how normalization on YouTube and Spotify is killing the need to make tracks as loud as possible. More and more artists are introducing dynamic range into their tracks at the expense of perceived loudness because…well…it doesn’t really matter anymore. There are definitely still tricks for increasing perceived loudness on normalized services, but it’s just not a big factor anymore.
The goal of audio normalization is to make the perceived loudness of tracks the same. Even if two tracks peak at the same decibel level, one can sound materially louder than the other. The loudness of tracks is now best measured in LUFS, which stands for Loudness Units relative to Full Scale. dB measures the raw sound pressure a signal produces, while LUFS measures loudness the way our ears actually perceive it, weighting frequencies by how sensitive we are to them.
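To make the normalization part concrete, here’s a minimal sketch of the gain a player would apply once it knows a track’s integrated loudness. The function names are mine, and the −14 LUFS default is just the target Spotify reportedly uses; every platform picks its own number.

```python
def normalization_gain_db(measured_lufs: float, target_lufs: float = -14.0) -> float:
    """Gain (in dB) a player would apply to hit the target loudness."""
    return target_lufs - measured_lufs

def db_to_linear(db: float) -> float:
    """Convert a dB gain into a linear amplitude multiplier."""
    return 10 ** (db / 20)

# A track mastered "hot" at -7 LUFS gets turned DOWN by 7 dB...
hot_gain = normalization_gain_db(-7.0)
# ...while a dynamic track at -18 LUFS gets turned UP by 4 dB.
quiet_gain = normalization_gain_db(-18.0)
print(hot_gain, quiet_gain)  # -7.0 4.0
```

Which is the whole point: mastering louder than the target buys you nothing, because the player just turns you back down.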
For example, a super “loud” 30 Hz sine wave (sub bass) isn’t perceived as very loud by our ears, but it displaces a SHIT TON of air. You can feel it in your bones. However, if you were to displace the same amount of air with a 3000 Hz sine wave (a relatively high tone compared to bass, like a high note on a piano) you’d probably hurt yourself. The dBs of those two sounds are the same, but the perceived loudness is worlds apart.
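You can see the “same meter reading, different perceived loudness” point in code. In this rough sketch (helper names are my own), two sine waves with identical amplitude measure identically on a plain RMS meter, which is exactly why LUFS runs a frequency weighting (K-weighting) over the signal before measuring.

```python
import math

SR = 44100  # sample rate in Hz

def sine(freq_hz: float, seconds: float = 1.0, amp: float = 0.5) -> list[float]:
    """Generate a sine wave at the given frequency and peak amplitude."""
    n = int(SR * seconds)
    return [amp * math.sin(2 * math.pi * freq_hz * i / SR) for i in range(n)]

def rms_dbfs(samples: list[float]) -> float:
    """Average (RMS) level in dB relative to full scale."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms)

bass = sine(30.0)      # sub bass you feel in your bones
treble = sine(3000.0)  # high piano-ish tone that would hurt

# A plain meter can't tell them apart:
print(round(rms_dbfs(bass), 1), round(rms_dbfs(treble), 1))  # both ≈ -9.0 dBFS
```

Your ears, unlike this meter, would call the 3000 Hz tone far louder.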
Producers and engineers have been mixing tracks to sound louder for decades. They do this by reducing dynamic range, amplifying certain frequencies, and reducing others.
However, since this practice is dying, we’re seeing more and more dynamic range in tracks, which I think is cool. Dynamic range is the spread in volume between the different sounds in a track: for example, a small cymbal in the background compared to a snare drum or a lead. If the cymbal is super quiet compared to the lead, that’s a lot of dynamic range. If the volumes aren’t that different, the track has little dynamic range.
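One crude way to put a number on this is the crest factor: the gap between a track’s peaks and its average level. It isn’t a full dynamic-range measurement (EBU R128’s loudness range, LRA, is the real metric for that), but it captures the intuition. A toy sketch with made-up sample values:

```python
import math

def crest_factor_db(samples: list[float]) -> float:
    """Peak-to-RMS ratio in dB: a crude stand-in for dynamic range."""
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(peak / rms)

# A "squashed" track: everything sits near full scale.
squashed = [0.9, -0.9, 0.85, -0.88, 0.9, -0.9]
# A dynamic track: big snare hits over a quiet background cymbal.
dynamic = [0.9, -0.05, 0.04, -0.06, 0.05, -0.9]

print(round(crest_factor_db(squashed), 1))  # small number: little dynamic range
print(round(crest_factor_db(dynamic), 1))   # bigger number: lots of dynamic range
```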
The other comment that really interested me talked about how listening environments have changed significantly since the loudness war started. Many people now listen to music in environments that are loud compared to their homes, places where music just didn’t used to be listened to: on public transit, in the street, in open offices, grocery stores, etc. We’re listening in louder environments, and most headphones don’t have great isolation.
This makes it hard for listeners to hear the sounds in songs with a lot of dynamic range. There’s so much environmental noise that a quiet cymbal in the background every eight bars gets missed. When producers test their songs in louder environments, they realize that listeners aren’t going to be able to hear that pluck sound, or cymbal crash, or hi-hat. The solution is to compress the audio. This brings the loud peaks down and raises the quieter sounds closer to the louder parts of the track. Now, when you’re walking down I-5 with your phone up to your ear listening to Drake, you can hear Boi-1da’s hi-hat.
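A compressor is the tool doing that work. This is a deliberately naive sketch of just the gain math: a real compressor tracks a signal envelope with attack and release times, and usually adds makeup gain afterward to push the whole (now flatter) signal back up.

```python
import math

def compress(samples: list[float], threshold_db: float = -20.0, ratio: float = 4.0) -> list[float]:
    """Naive downward compressor: samples above the threshold are pulled
    back toward it so only 1/ratio of the overshoot survives."""
    out = []
    for s in samples:
        if s == 0:
            out.append(0.0)
            continue
        level_db = 20 * math.log10(abs(s))
        if level_db > threshold_db:
            # Above threshold: shrink the overshoot by the ratio.
            level_db = threshold_db + (level_db - threshold_db) / ratio
        out.append(math.copysign(10 ** (level_db / 20), s))
    return out

loud_peak = 0.8   # well above the -20 dB threshold: gets pulled down
quiet_hat = 0.05  # below the threshold: passes through untouched
print(compress([loud_peak, quiet_hat]))
```

With the peaks tamed, makeup gain can raise everything, so the quiet hi-hat ends up audible over the street noise.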
I don’t have a strong opinion on the loudness war, except that I feel it got very out of control back when Soundcloud had a huge influence on EDM. Soundcloud uses an algorithm to compress and increase the loudness of tracks added to its platform, and many users figured out how to exploit it. Songs on Soundcloud are significantly louder than on every other platform. If you notice yourself constantly adjusting the volume on Soundcloud, this is why.
I could talk about this for hours, but I have to cut myself off. Ultimately I think LUFS normalization is good for the industry, but I believe the algorithm should be an OSS standard used and supported by all platforms. Hell, even adding this shit in a club would be amazing for a sound guy, I bet. I’m not sure how real-time it is though 🤔.