Audio Signal: What Is It And How Does It Work?

by Joost Nusselder | Updated on:  May 3, 2022


Hi there! I love creating free content full of tips for you, my readers. I don't accept paid sponsorships, and my opinion is my own, but if you find my recommendations helpful and you end up buying something you like through one of my links, I could earn a commission at no extra cost to you.

So how does audio actually work? How does it get from source to speaker so you can hear it?

An audio signal is an electrical representation of sound in the audio frequency range of 20 to 20,000 Hz. Audio signals can be synthesized directly, or they can originate at a transducer such as a microphone or an instrument pickup. Signal flow is the path the signal takes from source to speaker, where it is converted back into sound.

Let’s look at what an audio signal is and HOW it works. I’ll also discuss the different types of signal flow and how to set up a signal flow for a home audio system.


Understanding Audio Signal Processing

What is Audio Signal Processing?

Have you ever wondered how your favorite songs come together? Well, it’s all thanks to audio signal processing! Audio signal processing is the process of converting sound into digital formats, manipulating sound frequencies, and adding effects to create the perfect song. It’s used in recording studios, on PCs and laptops, and even on specialized recording equipment.

Getting Started with Audio Signal Processing

If you’re interested in learning more about audio signal processing, Warren Koontz’s Introduction to Audio Signal Processing is the perfect place to start. It covers the basics of sound and analog audio signals, sampling and quantizing to digital audio signals, time and frequency domain processing, and even specific applications like equalizer design, effect generation, and file compression.

Learn Audio Signal Processing with MATLAB

The best part about this book is that it comes with examples and exercises that use MATLAB scripts and functions. This means you can process audio in real time on your own PC and get a better understanding of how audio signal processing works.

About the Author

Warren Koontz is a professor emeritus at the Rochester Institute of Technology. He has a B.S. from the University of Maryland, a M.S. from the Massachusetts Institute of Technology, and a Ph.D. from Purdue University, all in electrical engineering. He spent over 30 years at Bell Laboratories developing digital transmission systems, and after retiring, he joined the faculty at RIT to help create an Audio Engineering Technology option. Koontz has continued his research in the field of audio engineering and has published and presented results of his research.

The Science Behind Alternating Currents

What is AC?

Alternating current (AC) is like the wild child of electricity – it never sits still and it's always changing. Unlike direct current (DC), which flows in just one direction, AC constantly switches between positive and negative. That's why it's used for audio signals – it can recreate complex sounds with accuracy.

How Does it Work?

AC audio signals are modulated to match the pitch of the sound being reproduced, just like sound waves alternate between high and low pressure. This is done by changing two values – frequency and amplitude.

  • Frequency: How many times per second the signal swings from positive to negative and back, measured in hertz (Hz). This corresponds to pitch.
  • Amplitude: The strength or level of the signal, which corresponds to volume and is often expressed in decibels.
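If you like to tinker, here's a tiny Python sketch of that idea (the function name and numbers are my own, purely for illustration): a sine wave is the simplest AC-style signal, and its two knobs are exactly frequency and amplitude.

```python
import math

def sine_wave(freq_hz, amplitude, sample_rate=44100, duration_s=0.01):
    """Generate a sine wave - the simplest AC-style audio signal."""
    n = int(sample_rate * duration_s)
    return [amplitude * math.sin(2 * math.pi * freq_hz * t / sample_rate)
            for t in range(n)]

# A 440 Hz tone (concert pitch A) at half amplitude
tone = sine_wave(440, 0.5)
```

Turn `freq_hz` up and the pitch rises; turn `amplitude` up and it gets louder – that's the whole trick behind AC audio.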

Why is AC so Great?

AC is like the superhero of electricity – it can do things that other forms of electricity can’t. It can take complex sounds and turn them into electrical signals, and then turn them back into sound again. It’s like magic, but with science!

What is Signal Flow?

The Basics

Signal flow is like a game of telephone, but with sound. It’s the journey a sound takes from its source to your ears. It could be a short trip, like when you’re listening to your favorite tunes on your home stereo. Or it could be a long, winding journey, like when you’re in a recording studio with all the bells and whistles.

The Nitty Gritty

When it comes to signal flow, there are a lot of stops along the way. The sound might pass through a mixing console, external audio equipment, and even different rooms. It’s like a big ol’ audio relay race!

The Benefits

The beauty of signal flow is that it can help make your sound better. It can help you control the volume, add effects, and even make sure the sound is going to the right place. So, if you’re looking to get the most out of your audio, then you’ll want to get to know signal flow.

Understanding Audio Signals

What are Audio Signals?

Audio signals are like the language of your speakers. They’re the ones that tell your speakers what to say and how loud to say it. They’re the ones that make your music sound awesome, your movies sound intense, and your podcasts sound like a professional recording.

What Parameters Characterize Audio Signals?

Audio signals can be characterized by a few different parameters:

  • Bandwidth: This is the range of frequencies that the signal can carry.
  • Nominal Level: This is the average level of the signal.
  • Power Level in Decibels (dB): This is the measure of the signal’s strength relative to a reference level.
  • Voltage Level: This is the measure of the signal’s strength relative to the impedance of the signal path.
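The dB figures in that list follow one simple formula. Here's a hedged Python sketch (the function names are mine): power ratios use 10 times the log of the ratio, while voltage or amplitude ratios use 20 times the log.

```python
import math

def power_db(power, ref_power):
    """Power ratio in decibels: 10 * log10(P / Pref)."""
    return 10 * math.log10(power / ref_power)

def voltage_db(volts, ref_volts):
    """Voltage (amplitude) ratio in decibels: 20 * log10(V / Vref)."""
    return 20 * math.log10(volts / ref_volts)

print(round(power_db(2.0, 1.0), 1))    # 3.0 (doubling power adds about 3 dB)
print(round(voltage_db(2.0, 1.0), 1))  # 6.0 (doubling voltage adds about 6 dB)
```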

What are the Different Levels of Audio Signals?

Audio signals come in different levels depending on the application. Here’s a quick rundown of the most common levels:

  • Line Level: This is the standard level for professional mixing consoles.
  • Consumer Level: This is a lower level than line level and is used for consumer audio equipment.
  • Mic Level: This is the lowest level and is used for microphones.
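For the curious, those nominal levels map to actual voltages. Here's a rough Python sketch using the standard references (0 dBu is defined as 0.7746 V RMS and 0 dBV as 1 V RMS); professional line level is nominally +4 dBu and consumer level -10 dBV, while mic level varies too widely to pin to one number:

```python
import math

DBU_REF_V = 0.7746  # 0 dBu is defined as 0.7746 V RMS
DBV_REF_V = 1.0     # 0 dBV is defined as 1 V RMS

def db_to_volts(db, reference):
    """Convert a dB value back to volts for a given reference voltage."""
    return reference * 10 ** (db / 20)

pro_line = db_to_volts(4, DBU_REF_V)     # professional line level, +4 dBu
consumer = db_to_volts(-10, DBV_REF_V)   # consumer line level, -10 dBV
print(round(pro_line, 3), round(consumer, 3))  # roughly 1.228 V and 0.316 V
```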

What Does All This Mean?

In a nutshell, audio signals are like the language of your speakers. They tell your speakers what to say, how loud to say it, and how to make your music, movies, and podcasts sound awesome. So if you want your audio to sound its best, you need to understand the different parameters and levels of audio signals.

What is Digital Audio?

What is it?

Digital audio is the digital form of an audio signal. It’s used in all sorts of audio plug-ins and digital audio workstation (DAW) software. Basically, it’s the information that passes through the DAW from an audio track, into a plug-in, and out of a hardware output.

How is it Transported?

Digital audio can be sent over a variety of cables, including:

  • Optical fiber
  • Coaxial
  • Twisted pair

Plus, a line code and communication protocol are applied to render a digital signal for a transmission medium. Some of the most popular digital audio transports include:

  • ADAT
  • TDIF
  • S/PDIF
  • AES3
  • MADI
  • Audio over Ethernet
  • Audio over IP

So What Does All That Mean?

In layman’s terms, digital audio is a way of sending audio signals over cables and through the air. It’s used in all sorts of audio plug-ins and digital audio workstation (DAW) software. So, if you’re a musician, producer, or audio engineer, chances are you’ve used digital audio at some point in your career.
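To make that concrete, here's a minimal Python sketch (standard library only; the file name is my own choice) that writes a short beep as 16-bit PCM – the rawest form of digital audio a DAW shuffles around:

```python
import math
import struct
import wave

sample_rate = 44100
# 0.1 seconds of a 440 Hz sine wave, scaled to 16-bit integer samples
samples = [int(0.3 * 32767 * math.sin(2 * math.pi * 440 * t / sample_rate))
           for t in range(sample_rate // 10)]

with wave.open("beep.wav", "wb") as f:
    f.setnchannels(1)            # mono
    f.setsampwidth(2)            # 2 bytes per sample = 16-bit
    f.setframerate(sample_rate)
    f.writeframes(struct.pack("<%dh" % len(samples), *samples))
```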

Manipulating Audio Signals

What is Signal Processing?

Signal processing is a way of taking an audio signal, like a sound, and changing it in some way. It’s like taking a sound, plugging it into a computer, and then using a bunch of knobs and dials to make it sound different.

What Can You Do With Signal Processing?

Signal processing can be used to do all sorts of cool things with sound. Here are some of the possibilities:

  • High or low frequencies can be filtered out.
  • Certain frequencies can be emphasized or minimized with an equalizer.
  • Harmonic overtones can be added with distortion.
  • Amplitude can be controlled with a compressor.
  • Musical effects like reverb, chorus, and delay can be added.
  • The overall level of the signal can be adjusted with a fader or amplifier.
  • Multiple signals can be combined with a mixer.
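Two of those operations – the fader and the mixer – are simple enough to sketch in a few lines of Python (treating a signal as a plain list of samples; the function names are my own):

```python
def apply_gain(signal, gain):
    """Fader/amplifier: scale every sample by a gain factor."""
    return [s * gain for s in signal]

def mix(*signals):
    """Mixer: sum the corresponding samples of several equal-length signals."""
    return [sum(frame) for frame in zip(*signals)]

a = [0.1, 0.2, 0.3]
b = [0.3, 0.2, 0.1]
quiet_a = apply_gain(a, 0.5)  # half amplitude, roughly 6 dB quieter
combined = mix(a, b)          # each sample is about 0.4
```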

What Does All This Mean?

In a nutshell, signal processing is a way to take a sound and make it sound totally different. You can make it louder or softer, add effects, or even combine multiple sounds into one. It’s like having a sonic playground to play in!

What is Transduction?

The Basics

Transduction is the process of converting energy from one form to another – in audio, that means turning sound waves into electrical signals and back again. It’s like a magical bridge between the acoustic and electrical worlds.

The Players

There are two main players in the transduction game:

  • Microphones: These transducers take sound waves and turn them into electrical signals.
  • Speakers: These transducers take electrical signals and turn them into sound waves.

The Types

When it comes to transduction, there are two main types of audio signals: analog and digital. Analog is the original sound wave, while digital is the 0s and 1s version.

The Process

The process of transduction is pretty simple. First, a sound wave reaches a microphone capsule, which converts the mechanical energy of the vibration into an electrical current. That current is then amplified and, in a digital system, sampled and converted into a digital signal. Finally, the signal is converted back into an electrical current that drives a speaker, which turns it back into a sound wave.
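That "converted into a digital signal" step is sampling plus quantization. Here's a hedged Python sketch of the idea (the function is my own invention, not any particular converter's behavior):

```python
import math

def digitize(analog, sample_rate, duration_s, bits=16):
    """Sample a continuous signal and quantize each sample to an integer code."""
    max_code = 2 ** (bits - 1) - 1  # 32767 for 16-bit audio
    n = int(sample_rate * duration_s)
    return [round(analog(t / sample_rate) * max_code) for t in range(n)]

# A 1 kHz sine at full scale, sampled at 8 kHz for one millisecond
codes = digitize(lambda t: math.sin(2 * math.pi * 1000 * t), 8000, 0.001)
```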

The Funky Science

Our ears also transduce sound into electrical signals, but these are auditory signals, not audio signals. Auditory signals are for hearing, while audio signals are for technology.

So there you have it – a quick and easy guide to transduction. Now you can impress your friends with your knowledge of the magical process of turning sound waves into electrical signals and back again!

Understanding the Decibel Scale

What is a Decibel?

When you look at a signal meter, you’re looking at decibel information. Decibels measure the loudness or amplitude of sound. It’s a logarithmic scale, not a linear one, which means it can measure a huge range of sound power levels. The human ear is an amazing device that can detect the sound of a pin dropping close by, as well as the roar of a jet engine in the distance.

Noise Measurement Units

When you measure noise levels with a sound level meter, you measure the intensity of noise in decibel units (dB). A sound meter uses a display with a decibel range and resolution that approximates the ear’s dynamic range. It would be impractical to build a sound level meter with a linear display covering that whole range, so a logarithmic scale with 10 as its base is used.
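Concretely, sound pressure level in dB is computed against a reference pressure of 20 micropascals – roughly the quietest sound a healthy ear can detect. A small Python sketch (the helper is my own):

```python
import math

P_REF = 20e-6  # reference pressure: 20 micropascals, defined as 0 dB SPL

def spl_db(pressure_pa):
    """Sound pressure level in dB: 20 * log10(p / p_ref)."""
    return 20 * math.log10(pressure_pa / P_REF)

print(round(spl_db(2 * P_REF), 1))  # 6.0 (doubling the pressure adds about 6 dB)
```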

Decibel Levels of Common Sounds

Here’s a list of decibel levels of common sounds:

  • Near-total silence — 0 dB
  • A whisper — 15 dB
  • A library — 45 dB
  • Normal conversation — 60 dB
  • Toilet flushing — 75–85 dB
  • Noisy restaurant — 90 dB
  • Peak noise on a hospital ward — 100 dB
  • Baby crying — 110 dB
  • Jet engine — 120 dB
  • Porsche 911 Carrera RSR Turbo 2.1 — 138 dB
  • Balloon popping — 157 dB

Types of Decibels

When it comes to audio, there are several types of decibels:

  • SPL (Sound Pressure Levels): measures real-world (non-signal) sounds, measured with a specialized SPL meter.
  • dBFS (Decibels Full Scale): how digital signal levels are measured in the world of 0s and 1s, where maximum signal strength is 0 dBFS on the meter and everything below it reads as a negative number.
  • dBV (Decibels Volt): mainly used in analog equipment or digital software that emulates analog gear. VU meters register average audio levels, as opposed to peak meters, which show only the loudest momentary peaks. In the early days of analog audio, magnetic tape couldn’t record as much signal as the tape produced decades later, so recording above 0 – up to +3, +6, or even higher, depending on the tape being used – became acceptable.
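For dBFS, the math is the same 20-log formula, just measured downward from the digital maximum. A simplified Python sketch (conventions for full scale vary slightly between systems; this assumes 32768 for 16-bit):

```python
import math

def dbfs(sample, full_scale=32768):
    """dBFS of a 16-bit sample value: 0 dBFS at full scale, negative below it."""
    return 20 * math.log10(abs(sample) / full_scale)

print(round(dbfs(32768), 1))  # 0.0, the digital ceiling
print(round(dbfs(16384), 1))  # -6.0, half of full scale
```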

Understanding Audio Formats

What is an Audio Format?

When you record audio, you need to decide how it will be stored. This means choosing the right audio format, bit depth, and sample rate. It’s like picking the right camera settings for a photo. You can choose a JPEG quality (low, medium, high) or record the maximum amount of detail in a RAW file.

Audio formats are like image formats – .png, .tif, .jpg, .bmp, .svg – but for sound. An audio format defines how much data is used to represent the audio, whether it’s compressed or not, and what type of data is used.

Uncompressed Audio

When it comes to audio production, you’ll usually want to stick with uncompressed audio. That way, you can control how the audio is distributed. Even if you’re using a platform like Vimeo, YouTube, or Spotify, you’ll want to master the audio in an uncompressed format first.

Compressed Audio

If you’re working with music, you may need to compress the audio file if it’s too big for the distribution platform. For example, Distrokid only accepts files up to 1GB. So if your song is really long, you’ll have to compress it.

The most common file formats for producing music are WAV and FLAC. FLAC is a lossless compression format: it shrinks the file without throwing away any audio data, unlike lossy formats such as MP3. Spotify recommends using the AAC format.

Exporting Audio

When you’re exporting audio as part of a video, you’ll usually have a few presets to choose from (e.g. YouTube, Vimeo, Mobile, Web, Apple Pro Res.). The audio will get compressed along with the video based on your export settings.

If you have a use case that doesn’t fit the presets, you can do some extra research online to figure out the best settings.

File Size Comparison

Here’s a comparison of file sizes across different audio formats:

  • WAV: Large
  • FLAC: Medium
  • MP3: Small

So, there you have it! Now you know all about audio formats.

What is Bit Depth?

Bit depth is a technical term that describes the dynamic resolution of a sound’s waveform. It’s a bit like the number of digits used to write down each sample: the more digits you allow, the finer the detail you can capture. That makes it a key factor in the overall quality and resolution of a sound.

The Basics of Bit Depth

Bit depth is all about the range of values used to represent the loudest and quietest signals that can be recorded in a digital medium. Here’s a quick rundown of the basics:

  • Bit depth values represent the dynamic resolution of a sound’s waveform.
  • Bit depth also defines how many binary digits (0s and 1s) are used to represent each sample of the audio file.
  • The most common bit depth standards are 16-bit and 24-bit. The more bits used, the larger the sound file, and the higher the quality or resolution it will be.
  • CD audio is defined as a 16-bit medium, whereas DVDs can play 16, 20 or 24 bit audio.
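A handy rule of thumb ties those bit depths to dynamic range: each bit buys you roughly 6 dB. A quick Python sketch of that approximation:

```python
def dynamic_range_db(bits):
    """Approximate dynamic range: each bit adds roughly 6.02 dB."""
    return round(6.02 * bits, 1)

print(dynamic_range_db(16))  # 96.3 dB, CD-quality audio
print(dynamic_range_db(24))  # 144.5 dB, typical studio recording
```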

Bit Depth as a Creative Parameter

Bit depth isn’t just a technical term – it can also be used as a creative parameter. For instance, there’s a whole genre of electronic music called Chiptune which emulates the way audio sounded when played on earlier generations of computers with 8-bit processors.

So if you’re looking to add a bit of lo-fi flavor to your sound, bit depth is definitely something to consider. Just remember that the more bits used, the larger the sound file and the higher the quality or resolution it will be.
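If you want to hear that lo-fi flavor for yourself, a "bit crusher" is easy to sketch in Python (treating samples as floats between -1 and 1; the function is my own, not a real plug-in):

```python
def bit_crush(signal, bits):
    """Re-quantize samples to a coarser bit depth for a crunchy, lo-fi sound."""
    levels = 2 ** (bits - 1)
    return [round(s * levels) / levels for s in signal]

smooth = [0.0, 0.33, 0.66, 1.0]
crunchy = bit_crush(smooth, 3)  # snapped to only 8 levels: [0.0, 0.25, 0.75, 1.0]
```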


Now you know all about the audio signal: a REPRESENTATION of sound in the form of electrical or mechanical vibrations. It’s how we hear music and how we record it. It’s how we share it with others and how we enjoy it on our devices.

So, don’t be afraid to get started with it and have some fun!

I'm Joost Nusselder, the founder of Neaera: a content marketer and dad who loves trying out new equipment, with guitar at the heart of my passion. Together with my team, I've been creating in-depth blog articles since 2020 to help loyal readers with recording and guitar tips.

Check me out on Youtube where I try out all of this gear:
