At around 11 or 12 years old, I recognised that I had a choice to make about whether to go down the respective rabbit holes of computing or of music. I chose the former, which has worked out well for my career prospects, but it’s been at the expense of my experience with the latter.

I have a decent ear for music - I know what music I like, I know what sounds good and what doesn’t, and I have grade 1- or 2-level experience with the piano - but up until recently I had no experience whatsoever with how competent professionals make music or manage audio in the modern day. Eventually, after a couple of years of being inspired by listening to notable drum & bass artists, I shelled out for the fully-featured version of Ableton Live.

“Logarithmic” is more than just a fancy word

One of the first things I learned about sound in school is that the pitch of a tone is determined by its frequency: double the frequency, and the pitch rises by an octave. Lower-pitched sounds have a lower frequency (and a longer wavelength), and higher-pitched sounds have a higher frequency (and a shorter wavelength).
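You can see the doubling rule with a tiny Python sketch. I'm assuming standard concert pitch here (A4 = 440Hz), which isn't something the school lesson depended on - it just gives us concrete numbers:

```python
# Each octave up doubles the frequency.
# Assumes standard concert pitch: A4 = 440 Hz.
A4 = 440.0

for octave_step in range(4):
    freq = A4 * 2 ** octave_step
    print(f"A{4 + octave_step}: {freq:.0f} Hz")
```

Running this prints A4 through A7, each at double the previous frequency: 440, 880, 1760, 3520 Hz.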

Lately I’ve come to realise that this relationship affects the properties of sound in more subtle ways. It means that lower-pitched sounds like basslines are quite sensitive to small changes in frequency, whereas higher-pitched sounds like hats or synths are more robust and occupy their “audible location” on the frequency spectrum more solidly.

Writing this out sounds a bit weird and nebulous, so let me explain.

I’ve often heard warnings that bass tones can end up sounding “muddy” if you don’t treat them in the right way. This is one of those intuitive terms that I don’t really understand very well as an outsider. However, over time I’ve learned that “muddy” essentially means “not as well-defined”, or equivalently, “smeared across the sound spectrum”. The tones can start to interfere with and smother other sounds that are in close proximity to them in frequency.

The reason why this happens with bass tones, as far as I’m able to deduce, is related to the range of the frequency spectrum that they occupy. Say you have a C3 note at 131Hz (maybe not strictly bass, but this is just an example). If you go up an octave to C4, the tone becomes 262Hz - a difference of 131Hz, obviously. Each of the 11 semitones in-between sits on its own specific frequency, so the entire octave (12 semitones) is squeezed into just 131Hz.

If instead you consider G7 at 3136Hz and G8 at 6272Hz, the 12 semitones of the octave are now spread over a 3136Hz frequency interval. This means that “there’s a lot more Hz” between each note. As a result, a frequency shift of 5Hz, for example, will produce a more noticeable pitch shift in a lower note than it will in a higher note, because the notes are more tightly packed together at the low end of the frequency spectrum.
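A short sketch makes the gap concrete. In equal temperament, each semitone up multiplies the frequency by 2^(1/12), so the Hz distance to the next semitone scales with the note's frequency (I'm using the standard equal-tempered frequencies for C3 and G7 here):

```python
# Hz gap between adjacent semitones at the low vs high end of the
# spectrum, assuming equal temperament (adjacent semitones differ
# by a frequency ratio of 2^(1/12)).
SEMITONE = 2 ** (1 / 12)

for name, freq in [("C3", 130.81), ("G7", 3135.96)]:
    gap = freq * (SEMITONE - 1)  # Hz to the next semitone up
    print(f"{name} ({freq:.0f} Hz): next semitone is {gap:.1f} Hz away")
```

Around C3 the next semitone is only about 8Hz away; around G7 it's roughly 186Hz away - the same musical interval, over twenty times wider in Hz.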

The corollary of all of this is that small changes in parameters (like shifting a bassline tone by 2Hz) will be much more audible in low-pitched tones than in high-pitched ones. Low-pitched tones are more sensitive to sources of frequency interference than high-pitched tones are. Consequently, you have to be very precise with how you set up instruments used for a bassline, because small changes or inaccuracies will have bigger consequences for the quality of the sound. Conversely, you can throw a lot more crap at higher-pitched instruments, and so potentially get a lot more variation in the types of sounds you can create.
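One way to quantify this is to express a fixed Hz detune in cents (hundredths of a semitone, the usual unit for fine tuning). The helper function below is my own illustrative sketch, not anything from a particular DAW, using the example frequencies from above:

```python
import math

# Convert a fixed Hz detune into cents (1 cent = 1/100 of a semitone).
# Hypothetical helper for illustration; 1200 cents make up one octave.
def detune_in_cents(freq_hz, shift_hz=2.0):
    return 1200 * math.log2((freq_hz + shift_hz) / freq_hz)

for name, freq in [("C3", 131.0), ("G7", 3136.0)]:
    print(f"{name}: 2 Hz off is {detune_in_cents(freq):.1f} cents")
```

The same 2Hz error comes out at roughly 26 cents at C3 - a very audible quarter of a semitone - but only about 1 cent at G7, which almost nobody would notice.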