Noise Gate Pt. 5

So here’s where we’re up to. If you haven’t been following the blog up to this point, WELCOME! We’re building a Noise Gate plugin by working through JUCE’s introductory tutorial. You can find it on JUCE’s website here. You can follow that directly, or you can go back and read all my nonsense as I stumble through it.

Today we will be implementing the start of the processBlock, the spot where all the magic happens. To begin with, we’ll have to have a look at audio programming, how it works, and why so much audio programming information is super outdated. Not saying mine won’t also be outdated, but at least it will be in one place.

[Screenshot: the processBlock function in PluginProcessor.cpp]

The processBlock. The mother of all shit happening in this audio program. Before we start writing in what we’ll be doing, let’s discuss the processBlock a bit, talking our way through the documentation.

The Documentation: Renders the next block.

The Translation: How audio programming works.

Audio Programming

WELL. Here’s the crux of audio programming. It requires a short and convoluted foray into computer programming, so here we go. Rather than subject you to my ramshackle explanation, I went looking for a better overview, and I fell into the master’s video, TIMURRRRR

So Timur talks about how audio programming is different to most other C++ in that it’s real-time, and it’s lock-free. When code runs, the computer is churning through many, many instructions, and anything that isn’t straight-line work (locks, allocations, system calls) can take a highly variable amount of time. C++ in itself has no understanding of what audio is, which is why we lean on third-party libraries like JUCE and Maximilian to do the heavy lifting for us.
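To make the lock-free idea concrete, here’s a tiny sketch of my own (not from Timur or the tutorial; gainFromUi and renderAudio are names I made up): the UI thread writes a value, and the audio thread reads it without ever taking a lock.

```cpp
// My illustration, not tutorial code: the audio thread reads a value that
// the UI thread writes. std::atomic<float> gives a lock-free read, where a
// mutex could block the audio thread for an unknowable amount of time.
#include <atomic>

std::atomic<float> gainFromUi { 1.0f };    // written by the message/UI thread

void renderAudio (float* samples, int numSamples)
{
    const float gain = gainFromUi.load();  // lock-free, bounded time

    for (int i = 0; i < numSamples; ++i)   // straight-line work: predictable
        samples[i] *= gain;
}
```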

So the main point of audio: representing the waveform displacement of air as numbers between -1 and 1. All of our music lives between these two figures. How? Because there can be MILLIONS of distinct values between them, depending on how many decimal places you go to. So technically, we’re looking at -1.0f to 1.0f as our range. Just as sound displaces air, and analog gear creates a varying voltage that follows the same shape, so too does digital audio follow that shape with numbers. We use our sample rate to measure this as fast as we can, guided by some mathematics known as the Nyquist theorem (sample at least twice your highest frequency). All of this stuff can be better explained elsewhere, so I’m just flying through it.
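If it helps to see the numbers, here’s a throwaway sketch of mine (plain C++, nothing JUCE about it) that fills an array with the start of a 440 Hz sine wave:

```cpp
// A minimal sketch (mine, not the tutorial's): a stretch of a 440 Hz sine
// wave stored the way digital audio stores everything, floats in [-1, 1].
#include <cmath>
#include <cstdio>

int main()
{
    const double sampleRate = 44100.0;         // measurements per second
    const double frequency  = 440.0;           // the A above middle C
    const double pi         = 3.14159265358979;

    float samples[64];
    for (int i = 0; i < 64; ++i)
        samples[i] = (float) std::sin (2.0 * pi * frequency * i / sampleRate);

    // Every value lands between -1.0f and 1.0f; that range IS the waveform.
    std::printf ("sample 0: %f, sample 1: %f\n", samples[0], samples[1]);
    return 0;
}
```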

Let’s go with this guy, he looks pretty smrt.

Moving on.

To avoid getting 44100 callbacks per second (or 48000, 88200, 96000 or 192000, or others), we process in larger audio buffers, taking in a big chunk of samples as an array of audio floats. This is referred to as the buffer size.
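The arithmetic is worth seeing once. With numbers I’ve picked purely for illustration:

```cpp
// Quick arithmetic, with numbers I picked for illustration: how often does
// the soundcard come knocking for a given sample rate and buffer size?
#include <cstdio>

int main()
{
    const double sampleRate = 44100.0; // samples per second
    const int    bufferSize = 512;     // samples handed over per callback

    const double callbacksPerSecond = sampleRate / bufferSize;           // ~86
    const double millisPerBuffer    = 1000.0 * bufferSize / sampleRate;  // ~11.6

    std::printf ("%.1f callbacks per second, %.2f ms per buffer\n",
                 callbacksPerSecond, millisPerBuffer);
    return 0;
}
```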

Here’s the bit where Timur is talking about block sizes

So the idea in audio programming is that audio is constantly travelling between the soundcard and your plugin/software. All of our programming happens inside these audio buffers: we fill one, send it to the soundcard, the soundcard asks for the next block, and this constant to-and-fro ensures the correct information is being passed in REAL TIME (the spooky bit).
Making audio, whenever you need it, as fast as you can possibly make it.

The actual calling of audio information happens in the processBlock function!

When this method is called, the buffer contains a number of channels which is at least as great as the maximum number of input and output channels that this processor is using. It will be filled with the processor's input data and should be replaced with the processor's output. This is to be transferred to the soundcard as I said earlier, so we need to stay in the correct format.

The number of samples in these buffers is NOT guaranteed to be the same for every callback, and may be more or less than the estimated value given to prepareToPlay(). Your code must be able to cope with variable-sized blocks, or you're going to get clicks and crashes!

If you have implemented the getBypassParameter method, then you need to check the value of this parameter in this callback and bypass your processing if the parameter has a non-zero value.

[Screenshot: the processBlock declaration]

So this is the function processBlock itself, which takes an AudioBuffer (a JUCE class) and a MidiBuffer (though we won’t be using that at this stage).
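In text form, and assuming your processor class is called NoiseGateAudioProcessor (yours is whatever the Projucer named it), the shape is roughly this. Note how it asks the buffer for its size every single call, exactly as that documentation warning demands:

```cpp
// My sketch of the shape, not the tutorial's finished code: never bake in
// the size you were promised in prepareToPlay; ask this block's buffer.
void NoiseGateAudioProcessor::processBlock (juce::AudioBuffer<float>& buffer,
                                            juce::MidiBuffer& midiMessages)
{
    juce::ignoreUnused (midiMessages);               // no MIDI at this stage

    const int numSamples  = buffer.getNumSamples();  // can change every call!
    const int numChannels = buffer.getNumChannels();

    for (int channel = 0; channel < numChannels; ++channel)
    {
        float* data = buffer.getWritePointer (channel);

        for (int i = 0; i < numSamples; ++i)
            data[i] *= 0.5f;   // placeholder work until our gate arrives
    }
}
```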

[Screenshot: the getBusBuffer lines, declared with auto]

Let’s first chat about auto. Apparently it was a pile of shit until recently.

I heard that here. A variable declared auto understands what it is from inference: the compiler deduces its type from whatever you initialise it with. Neat!

In these lines of code, we’re separating the sidechain buffer from the main I/O buffer for separate processing in subsequent steps. Again, we don’t want to be writing over the same buffer multiple times, nor do we want to access all the channels as one undifferentiated lump, so instead we’re separating them with getBusBuffer().
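In case the screenshot above is hard to read, the two lines are, as best I can transcribe them from the tutorial:

```cpp
// From the tutorial, as best I can transcribe it: carve the one incoming
// buffer into a view of the main bus and a view of the side-chain bus.
auto mainInputOutput = getBusBuffer (buffer, true, 0); // input bus 0: main I/O
auto sideChainInput  = getBusBuffer (buffer, true, 1); // input bus 1: side-chain
```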

[Screenshot: a getBusBuffer definition from the JUCE documentation]

We’re using these to separate out the sidechain from the main input/output delivery system. However, the tutorial wants a definition of getBusBuffer that takes 3 arguments: a buffer, a boolean and a bus index to separate them out. Searching that out gives us…

[Screenshot: the three-argument getBusBuffer definition]
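For the record, the definition that search turns up looks roughly like this. I’ve collapsed the template down to the float case, so treat it as my paraphrase of the docs, not gospel:

```cpp
// Paraphrased from the JUCE docs (template collapsed to float): a member of
// AudioProcessor that returns a view of one bus's channels within the big
// buffer handed to processBlock.
AudioBuffer<float> getBusBuffer (AudioBuffer<float>& processBlockBuffer,
                                 bool isInput, int busIndex) const;
```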

So we now have two separate busses to run our audio through, meaning we can measure one (the sidechain) and apply the result to the other a little later.

Diggin’ through the documentation has also revealed these functions to me:

[Screenshot: related bus functions from the JUCE documentation]

So we could have also used these, unless I am mistaken. For now we’ll stick with the tutorial, but I might come back and implement these too.

The next lines of code feed our algorithm that’s coming up, where we’ll need access to those variables we made earlier in the .h.

[Screenshot: copying the parameter values inside processBlock]

Looking back to the original alpha and threshold,

[Screenshot: the alpha and threshold declarations in the header]
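For anyone who can’t make out the screenshot, this is roughly where the header has been sitting since the earlier parts of this series (class name assumed from my project; see the older posts for the real thing):

```cpp
// A rough reconstruction, not a verbatim copy: the two parameters live as
// raw pointers in the private section of the processor's header.
class NoiseGateAudioProcessor : public juce::AudioProcessor
{
public:
    // ... constructor, processBlock and the other overrides ...

private:
    juce::AudioParameterFloat* threshold;   // the gate's threshold
    juce::AudioParameterFloat* alpha;       // envelope smoothing coefficient
};
```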

We declared these in the private section of the header as AudioParameterFloat pointers. Therefore, we need a member function to get the value out of these AudioParameters. Looking at the JUCE API for AudioParameterFloat we find…

[Screenshot: AudioParameterFloat’s get() in the JUCE API]

Which we can use to read the values at that moment. Because we call this inside our processBlock, we pick up the latest values every time we’re asked for audio, which keeps our parameters current even if they’re changed between blocks. As for why we copy: I’m fairly sure this is about what the C++11 standard calls a “data race”, which is what you get (undefined behaviour, officially) when two threads touch the same memory at once and at least one of them is writing, and the message thread can absolutely change these parameters while the audio thread is running. So we duplicate alpha into a new variable named alphaCopy and use alphaCopy whenever we need alpha from here on; the rest of this code runs on the audio thread, so it works from one consistent value for the whole block instead of poking at the shared parameter in multiple spots.
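Transcribed as best I can from the tutorial, the copies look like this:

```cpp
// As best I can transcribe it from the tutorial: one copy of each parameter,
// taken at the top of processBlock, used everywhere below it.
auto alphaCopy     = alpha->get();      // AudioParameterFloat::get() -> float
auto thresholdCopy = threshold->get();
```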

God that’s a lot of information for right now. Check out the video, watch it the whole way through, and then maybe watch a few more just to get your head around how angry people are about real-time audio and how difficult it is.

And for those of you that have done these ones and want some more to go on with, I’m having a look at the Audio Developer Conference 2017, as their entire conference is online. I believe there are 4 pathways and I intend to watch them all and nod like I understand what all these geniuses are saying.




