Frequency, Pitch, and Notes: The Building Blocks of Building Blocks

Intro:

Similar to the relationship between video games and programming, the experience of listening to, and even to some degree writing, music is built on layers of abstraction over 'lower level' concepts. Part of being a competent composer and musician is understanding those lower level concepts and being able to work at various levels of abstraction. At the highest level, just listening, you're not immediately concerned with things like form. Moving one layer down you may be focused on overall form, but not yet on specific chord progressions. From there the layers descend: chord progressions -> chord voicings (inversions) -> voice leading -> counterpoint (voice independence) -> intervals -> motifs/melodies -> notes -> pitch -> frequency -> silence.

Starting with Silence: (you will want to have MSPaint or some other drawing program open for this part)

We begin this topic with a single point (0, 0). From that point we'll extend a line going up as the Y axis and give it a range of 0 to 100, calling it percent loudness. Back at the point we'll extend a line to the right going out to infinity as the X axis, calling it frequency (Hertz, or Hz). At this moment we will not use it to describe any specific frequency, but all frequencies from audible to inaudible and to an infinite precision. The first adjustment we need to make to our graph is to trim it down so that instead of dealing with all frequencies we are only dealing with those within our range of hearing.

Starting with our initial point (0, 0) we will slide that point to the right along the X axis until it reaches 30Hz. On the other side, towards infinity, we will make a copy of our Y axis that intersects the X axis at a point labeled 20,000Hz. The reason for these adjustments is to bring our graph into the range of average human hearing. Many people, especially adults, won't be able to hear above 15,000Hz, but younger ears can generally hear higher. At this point it's worth making a key observation about what it means to make music.

Ignoring ethnic and cultural considerations that may shape specific practices, the act of making music is about "sculpting" the naturally occurring noise of the world. Some would argue that a purely scientific definition of music would be something along the lines of "sound organized in time," and to a degree that is correct.

Frequency:

Going back to our graph, it should look something like a box without a lid: the frequency axis at the bottom, and a loudness axis on either side closing in the frequency range. Obviously, when the box is empty it represents silence. Even if we draw a single point in the box it would still be considered silence; so how do we fill the box with sound? If we drew a line going from 50% on one Y axis to 50% on the other it would represent an impulse, which gives us sound, but only as a momentary instance of all possible frequencies happening at the same time (not very musical). We'll need a much more elaborate method to create sound from our box.

As some may already be screaming at their monitors as they read this, sound is not stationary, and frequency itself is not a static measurement but a measure of change per second. So if we were to imagine a small hole at a point on the frequency axis of our graph (let's say 100Hz), and then imagine moving a wooden dowel in and out of that hole at a rate of 100 times per second, we would suddenly find that our box is producing sound, because it's the change in loudness that we hear.
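The dowel analogy can be sketched numerically. Below is a minimal sketch (the sample rate of 1000 is an arbitrary choice, kept low so the numbers are easy to read) showing that a 100Hz oscillation is just a loudness pattern that repeats 100 times per second:

```python
import math

SAMPLE_RATE = 1000  # samples per second (hypothetical, chosen low for readability)
FREQ = 100          # the "dowel" moves in and out 100 times per second

def loudness(n):
    """Instantaneous displacement of the dowel at sample n."""
    return math.sin(2 * math.pi * FREQ * n / SAMPLE_RATE)

# One full in-and-out cycle takes SAMPLE_RATE / FREQ = 10 samples,
# so the value at sample n repeats at sample n + 10: 100 cycles per second.
print(round(loudness(3), 6) == round(loudness(13), 6))  # True: same phase one cycle later
```

Nothing here is sound yet; it's only a description of change over time, which is exactly the point: the rate of change is what we perceive as frequency.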

This isn't a foolproof analogy, as it's not entirely accurate to how sound is actually produced, but for now it's sufficient. The important takeaway so far is that we have a range of audible frequencies that represent observable changes in loudness at any given rate. Now, if we were to imagine an infinite number of dowels along the continuous frequency axis, all changing at their given rates at varying loudness, what we would hear is noise. Therefore, we need to continue narrowing down the frequency range so that what we are able to observe are distinct pitches that we can later describe and organize into a musical system.

Pitch:

Just as an infinite range of frequencies is not useful or practical because not all of them are audible, a continuous range of audible frequencies is unmanageable for use in a musical system. We need a mechanism that allows us to quantize the frequency range into components that can be controlled by whatever means we have available (human voice, analog/digital instruments, etc.). For that purpose we have the concept of pitch.

In western music you've probably seen alphanumeric symbols such as A1 or C4. These are used to identify specific pitches, but by themselves they don't mean much. There's a kind of encoding that happens under the hood of these symbols, usually described in contemporary methodologies as 'A4 = 440Hz'. The letter describes the note name (which will be discussed shortly), the number describes the octave, and 440Hz describes what frequency is assigned to the symbol. Whether it's formally described or not, most musical systems will have some similar feature.
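That encoding can be made concrete. A minimal sketch, assuming 12-tone equal temperament and the common MIDI-style numbering where A4 is note 69 (both conventions, not requirements of the symbols themselves):

```python
# Under 12-tone equal temperament, each octave doubles the frequency and is
# split into 12 equal steps. Anchoring A4 (MIDI note 69) at 440 Hz fixes
# every other pitch.
def pitch_to_freq(note_number, reference=440.0):
    """Frequency of a pitch, given its MIDI-style note number."""
    return reference * 2 ** ((note_number - 69) / 12)

print(pitch_to_freq(69))             # 440.0  -> A4, the reference itself
print(pitch_to_freq(57))             # 220.0  -> A3, one octave (12 steps) down
print(round(pitch_to_freq(60), 2))   # 261.63 -> C4, "middle C"
```

Change the `reference` and every pitch shifts with it, which is why 'A4 = 440Hz' is the whole encoding in one statement.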

For now, let's focus on creating our own system of pitch as an exercise in understanding how the concept of pitch can be used to create a base 10 scale. Back on our graph, let's assign our 100Hz dowel a pitch symbol, 0. Then we'll go up to 110Hz and label that 1. From there we'll continue up like that until we reach 9 = 190Hz. Effectively, what we've done is quantize the frequency range into parts that we can use in a musical system, and that we can expand on in several ways. For example, if we wanted more notes available in our scale we could include symbols like 1.5 for 115Hz, or we could extend the range by doing something like B1.0 = 210Hz.
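The whole toy system fits in a few lines. A sketch, assuming (from the B1.0 = 210Hz example) that the 'B' prefix simply starts a second range at 200Hz:

```python
# Base-10 pitch system: symbol 0 = 100 Hz, each whole step adds 10 Hz,
# so 9 = 190 Hz. Fractional symbols fall in between, and a 'B' prefix
# (an assumption inferred from the B1.0 = 210 Hz example) opens a second
# range starting at 200 Hz.
def symbol_to_freq(symbol):
    if symbol.startswith("B"):           # extended range: B0.0 = 200 Hz
        return 200 + float(symbol[1:]) * 10
    return 100 + float(symbol) * 10      # base range: 0 = 100 Hz

print(symbol_to_freq("0"))     # 100.0
print(symbol_to_freq("9"))     # 190.0
print(symbol_to_freq("1.5"))   # 115.0
print(symbol_to_freq("B1.0"))  # 210.0
```

Note the contrast with the western system above: this scale divides the range into equal steps of Hertz rather than equal ratios, which is a perfectly valid design choice for a made-up system.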

Obviously, here we've just inverted the western symbols, but any system can use whatever symbols meet its needs. The main function of pitch is to quantize the frequency range into a finite set that can be used within a musical system. Describing how those pitches relate to each other, however, is done via notes/note names.

Notes:

Pitch is a fantastic way to quantize and identify frequencies, but the symbols can get quite verbose if your system includes a wide range of unique pitches. Western music uses a cyclical system of octaves (12 notes per octave) with repeating note names (A, A#/Bb, B, C, C#/Db, D, D#/Eb, E, F, F#/Gb, G, G#/Ab) and octave indexes (-2 through 8). This allows a wide range of frequencies, short pitch names, and note names that describe the relationships between pitches even when they're not in the same octave.
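The cyclical naming can be sketched with simple modular arithmetic. A minimal sketch, using sharp spellings only for brevity and the common convention that each new octave index begins at C (the text's -2 through 8 indexing is one of several octave-numbering conventions):

```python
# 12 note names repeat every octave, so a name plus an octave index
# identifies any pitch in the system.
NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def note_name(semitones_above_c0):
    """Name of the pitch that lies this many semitones above C0."""
    octave, step = divmod(semitones_above_c0, 12)
    return f"{NAMES[step]}{octave}"

print(note_name(0))   # C0
print(note_name(57))  # A4 -- 57 semitones above C0
print(note_name(69))  # A5 -- same name, one octave (12 semitones) higher
```

Two pitches an exact multiple of 12 steps apart share a name, which is precisely the "relationship across octaves" benefit described above.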

That last benefit is probably the most important in regard to composition because it allows composers to stack pitches in a predictable way as long as they understand how note names relate. However, a downside is that it can force specific hierarchies, because the underlying pitches are quantized in a way that emphasizes specific relationships. Some have attempted to overcome this limitation by introducing microtones, but a notation system accommodating that many unique notes has yet to be agreed upon and formalized. That being said, any individual can build a unique system that meets their needs.

Conclusion:

I've glossed over the specifics of frequency because those will be covered in more detail when discussing sound design and mixing/mastering. For now, the important takeaway is that music systems use pitch/pitch symbols as a way to quantize the audible frequency range, and note names to describe how those pitches relate. This isn't limited to the western system, as any music system will inevitably go through this process as it develops techniques and instruments.
