Audio frequency, or simply frequency, is a measure of the number of times a periodic pattern such as sound vibration occurs per second.
Frequency is an important characteristic of sound because it shapes how humans perceive it.
For example, we can distinguish between low-frequency and high-frequency sounds and are sensitive to frequencies in the middle range.
If a sound has too much energy in the higher frequencies, it can mask the lower frequencies and come across as harsh. Similarly, if too much energy is concentrated in the lower frequencies, the higher frequencies can get buried and the sound turns muddy.
Understanding the basic principle of frequency helps musicians and audio engineers produce better music mixes. Music recorded at incorrect levels or with poor instrument placement can result in mixes that are muddy sounding and lack clarity. Selecting instruments and samples based on their frequency spectrum—or tone—is essential for producing balanced mixes that draw out each instrument’s own unique characteristics and blend them together with all other elements of a track. Additionally, mastering engineers use equalization (EQ) processes to control and shape these frequencies into an identifiable mix that exhibits clarity at every level while still maintaining overall balance.
What is Audio Frequency?
Audio frequency is the rate at which a sound wave oscillates or vibrates, measured in Hertz (Hz). Audio frequency affects the tonal quality and timbre of a sound. It is an important factor in the production of music as it determines how the different elements of a song sound. In this article, we’ll go over what audio frequency is and why it matters for music.
Audio frequency, measured in Hertz (Hz), is the range of sound frequencies audible to the human ear. Audio frequency starts at 20 Hz and ends at 20,000 Hz (20 kHz). This range of sound frequency constitutes what we refer to as “the audible spectrum”. The further down the audible spectrum we go, the more bass-like sounds become; while the further up we go on the spectrum, the more treble-like sounds become.
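To make the idea of “oscillations per second” concrete, here is a minimal Python sketch that generates the raw samples of a sine tone at a chosen frequency. The sample rate and function name here are my own illustrative choices, not something from the article:

```python
import math

SAMPLE_RATE = 44_100  # samples per second (CD quality)

def sine_tone(freq_hz, duration_s):
    """Raw samples of a sine wave oscillating freq_hz times per second."""
    n = int(SAMPLE_RATE * duration_s)
    return [math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE)
            for i in range(n)]

# A 100 Hz bass tone completes a full cycle every 441 samples,
# while a 10 kHz treble tone completes one every ~4.4 samples.
bass = sine_tone(100, 0.01)  # one hundredth of a second = one full cycle
print(len(bass))  # 441
```

The same code produces a bassy rumble or a piercing whistle depending only on the number you pass for `freq_hz`, which is the whole point: frequency alone decides where a sound sits on the spectrum.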
It is important to note that not all audio has equal perceived loudness across all frequencies — even for recordings with a flat frequency response. The human ear is most sensitive to frequencies in the midrange (roughly 2–5 kHz) and less sensitive at the low and high extremes, which is why a bass guitar often needs more actual energy in a mix than a violin to be perceived as equally loud.
Therefore, it’s important for music producers and sound engineers alike to understand this concept if they intend to create music or mix audio professionally. Dynamic EQs are commonly used during music production workflows to precisely sculpt out unwanted peaks in various frequency regions according to the desired musical goals. Additionally, compressors can be used alongside EQs for other tasks, such as increasing perceived volume levels within mixing and mastering sessions.
Audio frequency is an important aspect of sound and music production, as it determines the pitch and range of a sound. A frequency is related to how fast something vibrates – the higher the number, the faster it vibrates. It’s measured in hertz (Hz).
The human ear typically recognizes frequencies between 20 Hz and 20,000 Hz (or 20 kHz). Most musical instruments produce sounds within this range. However, not all sounds are audible to humans; some frequencies are too low or too high for our ears to detect.
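The link between pitch and frequency can be made concrete with the standard equal-temperament formula, in which each octave doubles the frequency and A4 is tuned to 440 Hz. A quick Python sketch (the function name is my own, for illustration):

```python
def note_to_freq(midi_note):
    """Frequency in Hz of a MIDI note number, using equal
    temperament tuned to A4 (MIDI note 69) = 440 Hz."""
    return 440.0 * 2 ** ((midi_note - 69) / 12)

# Each octave doubles the frequency: A4 -> A5
print(round(note_to_freq(69)))  # 440
print(round(note_to_freq(81)))  # 880
print(round(note_to_freq(60)))  # middle C, ~262 Hz
```

This doubling is why the audible spectrum of 20 Hz to 20 kHz spans roughly ten octaves, even though the numbers at the top end look so much larger.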
Audio signals can be divided into frequency ranges:
-Sub-bass: below 20 Hz (also known as infrasonic). These are frequencies we cannot hear but which digital recording equipment can still capture, enabling us to manipulate them to generate unique sound effects.
-Bass: 20–250 Hz (low frequencies)
-Low mid: 250–500 Hz
-Midrange: 500 Hz–4 kHz (this range contains most of the harmonic content of vocals and acoustic instruments)
-High mid: 4–8 kHz
-Upper treble/presence: 8–16 kHz (allows clarity in individual voice parts or instrumentation)
-Super treble/air band: 16–20 kHz (creates high end and openness)
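The ranges above amount to a simple lookup from a frequency to a band name, which can be sketched in a few lines of Python (the function name is mine; the boundaries follow the list above):

```python
def frequency_band(hz):
    """Name of the frequency band a given Hz value falls into,
    using the ranges from the list above."""
    bands = [
        (20, "sub-bass"),
        (250, "bass"),
        (500, "low mid"),
        (4_000, "midrange"),
        (8_000, "high mid"),
        (16_000, "presence"),
        (20_000, "air"),
    ]
    for upper, name in bands:
        if hz < upper:
            return name
    return "above the audible range"

print(frequency_band(60))      # bass
print(frequency_band(1_000))   # midrange
print(frequency_band(12_000))  # presence
```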
How Does Audio Frequency Affect Music?
The frequency content of a sound is an important factor in determining how a musical work will sound. Because audio frequency determines pitch and tone, the balance of frequencies in a recording can have a major impact on how a song sounds. In this section, we will explore how audio frequency affects music and why it matters when producing music.
Low frequencies make music feel heavier because they carry the low-end energy present in many instruments. Low frequencies can even be felt as a physical sensation through speakers and headphones. The full range of audio frequencies we listen to is between 20 Hz and 20,000 Hz, but in general, most people perceive sounds best in a narrower range of roughly 50 Hz to 10 kHz.
Low Frequency Ranges
The lower range of audible sound lies below roughly 100 Hz and is made up of bass notes — the lower octaves produced by instruments such as bass guitars, double basses, drums and pianos. These are felt as much as heard, because they produce vibrations you sense in your body, which adds power and fullness to a mix. Many songs concentrate low-end energy between 50 and 70 Hz for added heft.
High Frequency Ranges
The higher spectral range lies above 4 kHz and produces the clearer or brighter sounds from instruments like cymbals, bells, or the higher notes of pianos and keyboards. High frequency ranges produce higher pitches than lower frequency sounds — think about how much clearer a church bell sounds compared to thunder! Most ears can hear up to around 16 kHz or 18 kHz, and anything above 8 kHz is often referred to as the “air” or ultra-high frequency range. It helps isolate certain breaths or details from instruments mixed very close together that would otherwise get lost under one another at normal listening levels.
Mid frequencies tend to contain the most important elements in a track, such as the primary melody, lead and background instruments. In vocal recordings, the mid-range contains the all-important human voice. Between 250 Hz and 4,000 Hz, you will find the mid sections of your mix.
The same way that you can use EQ to cut out certain frequencies to make room for other elements in your mix, you can also use it to boost or reduce any of these midrange frequencies to better suit your musical needs. Boosting or reducing specific frequencies within this range can give tracks more presence or make them “sink” into their surroundings, respectively. It’s helpful when mixing a song that contains several melodic parts or multiple busy instruments playing at a similar frequency range; this allows you to focus on what is important while still maintaining a balanced sound.
In addition to adjusting individual frequencies in the mid section of your mix, it can also be advantageous (under certain circumstances) to use an equalizer plugin that adds presence or clarity to every frequency within this range (e.g., Aphex Aural Exciter). By doing so, you will be able to capitalize on all of those mid-range harmonics and create a more rounded overall soundscape with better definition between different instrumental components and elements located within this frequency range.
High frequencies, or treble, consist of the highest audible sounds (roughly above 2,000 Hz). A balance of high frequencies alongside mid-range and low frequencies often leads to a clearer sonic image. They are responsible for brightening up a track and give clarity to higher register instruments like cymbals and woodwinds.
In mixes with too much high-frequency content, instruments can start to sound harsh on your ears. To avoid this, try reducing certain frequencies in the high-end spectrum. Using subtle filters around 10 kHz will reduce the harshness while making sure you don’t lose any of that ‘shine’ from percussion or strings.
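As a sketch of how this kind of high-frequency taming works under the hood, here is a minimal one-pole low-pass filter in Python. The function and parameter names are my own, and real EQ plugins use far more sophisticated filter designs; this only illustrates the basic idea of attenuating content above a cutoff:

```python
import math

def low_pass(samples, cutoff_hz, sample_rate=44_100):
    """One-pole low-pass filter: gently attenuates content
    above cutoff_hz while leaving lower frequencies intact."""
    rc = 1 / (2 * math.pi * cutoff_hz)  # filter time constant
    dt = 1 / sample_rate                # time per sample
    alpha = dt / (rc + dt)              # smoothing factor
    out, prev = [], 0.0
    for x in samples:
        prev = prev + alpha * (x - prev)  # exponential smoothing
        out.append(prev)
    return out

# A 10 kHz tone filtered with a 1 kHz cutoff comes out strongly attenuated:
tone = [math.sin(2 * math.pi * 10_000 * i / 44_100) for i in range(4_410)]
quiet = low_pass(tone, 1_000)
```

A first-order filter like this rolls off gradually (6 dB per octave above the cutoff), which is why it sounds gentle rather than surgical.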
Too little treble can cause songs to lose definition in the higher octaves of instruments like guitar or piano. EQ is often used to subtly introduce more highs by raising certain frequencies around 4–10 kHz for added clarity if needed. This helps bring out individual elements in a mix without causing them to sound piercingly harsh on your ears. Even a boost of a few dB in the high frequencies can make all the difference! To add more texture or ambience to a song, wider reverb tails with mostly high frequency content can be used as well; this gives rise to airy or dreamy effects that sit nicely above percussion tracks and other sounds in the mix.
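To put dB figures like these in perspective: decibels map to linear amplitude as 10^(dB/20), so a 6 dB boost roughly doubles the amplitude — quite a large EQ move, not a subtle one. A quick Python sketch (the function name is my own):

```python
def db_to_gain(db):
    """Convert a decibel change into a linear amplitude ratio."""
    return 10 ** (db / 20)

# 6 dB roughly doubles the amplitude; 3 dB is about a 1.4x change.
print(round(db_to_gain(6.0), 2))   # 2.0
print(round(db_to_gain(3.0), 2))   # 1.41
print(round(db_to_gain(-6.0), 2))  # 0.5
```

This is why experienced mixers reach for 1–3 dB moves first: the scale is logarithmic, and small numbers go a long way.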
In conclusion, audio frequency is an essential component of music production and proper sound engineering. It is the rate at which a sound wave oscillates, which determines the pitch we hear. Its range determines the range of notes the human ear picks up in a given piece of music, and the frequency content varies from one instrument to another. Understanding how this component works allows musicians, engineers and producers to get the best possible sound out of their recordings. With careful consideration given to a track’s frequency balance as it is being produced, a song can achieve the clarity, texture and range necessary for great sounding music. It is one piece of completing any professional-grade production.
I'm Joost Nusselder, the founder of Neaera, a content marketer, and a dad who loves trying out new equipment, with guitar at the heart of my passion. Together with my team, I've been creating in-depth blog articles since 2020 to help loyal readers with recording and guitar tips.
Check me out on Youtube where I try out all of this gear.