# Noise Gate Pt. 6

We’re moving on with our Noise Gate plugin today to finish the bulk of our Audio Processing. We’ll then have quite a bit of GUI to do to tidy it all up, and then we’ll either look at expanding, OR we’ll take a step down JUCE tutorial lane, bite off another tutorial and chew it till the grounds are chunky and we need a toothpick to pull all of the malloc calls out from between our teeth.

You can find the previous work completed on this project here, so you can follow step by step!

I’ll be adapting the tutorial from the main JUCE tutorial page, so if you are looking to skip my explanations of each line of code, go here;

https://docs.juce.com/master/tutorial_plugin_examples.html

We’re a few lines inside the processBlock now, and hitting our first for loop in the project.

This is a line of code that we’ll see really, really often in audio programming, as it iterates through each of our samples and each of our channels so we can write to every sample before sending the audio buffer on to the processor. If you’re looking to learn more about that, check out Blog 5 for a detailed explanation of the audio buffer/programming workflow and what’s actually happening in the processBlock.

Allocating memory for our buffer in samples.

You can find the blog related to that here;

www.scowlingowlsound.com/blog

So our for loop runs through, starting from 0, up to buffer.getNumSamples().

What is buffer.getNumSamples()? It returns the number of samples in the buffer we were handed, so by using it as the loop’s upper bound we never read past the memory that was allocated in the last step. The documentation for getNumSamples also asks us to check out a couple of other related functions while we are at it.

There’s some advice from the JUCE’ean overlords to check out getReadPointer and getWritePointer. So let’s do that.

The methods getReadPointer and getWritePointer return pointers to the data inside a buffer, and these pointers can be used in C++ (and C) like arrays. These functions would let us operate on the whole buffer at once, but instead we usually write to the buffer sample by sample. Because we are using the pointers this way, we must ensure that our index never goes out of bounds of the buffer. The ScopedPointer class implements a pointer that is automatically deleted when the program leaves its scope, so we don’t have to worry about the pointer being accessed in some illegal way (for example, when the object it points to has already been deleted). We’ll come across those from time to time as well.

We are also iterating through the buffer, so each time the loop body finishes we ++j (add 1 to j) until we hit the limit of our buffer, at buffer.getNumSamples(). The outer loop walks the individual samples in the audio buffer block while the inner loop walks the channels in an interleaved manner. This lets us keep the same state across every channel while processing a single sample.

Moving back to the tutorial, JUCE says;

[8]: For each channel in the sidechain, we add the signals together and divide by the number of sidechain channels in order to sum the signal to mono.
[9]: Next we calculate the low-pass coefficient from the alpha parameter and the sidechain mono signal using the formula y[i] = ((1 - alpha) * sidechain) + (alpha * y[i - 1]).

So, let’s deal with the sidechain first. The sidechain will be controlling our source, comparing its volume against our threshold to determine the gating behaviour.

# The Mono Sidechain

First up we’ll need a variable to sum our sidechain down to a mono signal. We could have just specified in our input requirements that the sidechain must be mono, but we’re looking to make things really easy for the user, and they shouldn’t have to think about it. Summing to mono is pretty easy for us; it’s stereo-izing things that’s the really hard one to do. Maybe we’ll have a go at that soon!

So for each channel in the sidechain, we’re using getReadPointer, that function from earlier, with two indices, i and j. The documentation will tell us exactly what getReadPointer expects, but we can already take a guess. Seeing as we’ve just used i as the loop variable in the sidechain channel for loop, let’s assume it’s the channel number, and since we decided a little earlier that we’d use j for our sample, we can try to write out in English what this line is doing.

Because we don’t know how many channels our sidechain will have, the loop adds sample j from every channel into one running total, landing us in mono. Hooray! The only other part to explain is the usage of += in our code

                mixedSamples += sideChainInput.getReadPointer (i) [j];

This just means

mixedSamples = mixedSamples + sideChainInput.getReadPointer (i) [j];

So that they can continue adding properly.

# The Clear Function

I’m not entirely sure why the tutorial doesn’t include a clear function, so we’ll throw one in and test it out. It might help, it might not. Whenever the processBlock starts, if we don’t initialise the buffer and clear it out, we could end up spitting really bad garbage out of our speakers. I’ve found through my research that it seems like best practice to clear it out. So let’s take a look.

Looking into the documentation for our AudioBuffer class shows us this clear function.

We can implement this like so;

buffer.clear (j, 0, buffer.getNumSamples());

However for now I will have it commented out, just until we get a function up and running where we can test if it’s really necessary or not. Seeing as most of the time we’re culling the audio, I don’t know at this stage if it’s required, or if it just feels like a hangover of reading and hearing about how it should be cleared for circular buffers and what not.
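For reference, here’s a hedged plain-C++ analogue of what a call like buffer.clear (channel, startSample, numSamples) does: zero out a run of samples on one channel. clearChannel is a made-up name, and JUCE’s internals may differ — this is just the idea:

```cpp
#include <cassert>
#include <cstring>

// Made-up analogue of AudioBuffer::clear (channel, startSample, numSamples):
// zero a run of samples on a single channel's data.
void clearChannel (float* channelData, int startSample, int numSamples)
{
    // All-zero bytes are 0.0f in IEEE 754 floats, so memset is safe here.
    std::memset (channelData + startSample, 0, sizeof (float) * (size_t) numSamples);
}
```

So the first argument picks the channel, and the other two pick which stretch of samples gets silenced — clearing the whole block means passing 0 and the full sample count.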

# Static Cast

We’re getting there! We’re most of the way through the functionality at this stage, and we’re almost set up to do some maths to calculate the threshold, and then move on. Hopefully you’re enjoying the musings, the wine ages well and the coffee cools nicely.

        mixedSamples /= static_cast<float> (sideChainInput.getNumChannels());

So what in the world is static_cast? It’s actually not too freaky (if I understand it correctly): it converts a value from one type to another. Because getNumChannels() returns an integer and our samples are floats, we use static_cast&lt;float&gt; on the channel count so the division happens in floating point, finishing off our mono-ing function.

(a /= b) can be written as (a = a / b)
https://www.geeksforgeeks.org/operators-c-c/

So writing this out again we’ve got

mixedSamples = mixedSamples / sideChainInput.getNumChannels() (cast to a float!)

So if there are two channels, we add the two samples together and then divide by two. If there’s one, we divide by one. Either way we end up with one value, which is our mono channel in the end, mmmkay?

# The Spooky Maths Bit

We’re up to one of those bits in the audio process. The JUCE tutorial is very helpful with an explanation;

## [9]: Next we calculate the low-pass coefficient from the alpha parameter and the sidechain mono signal using the formula y[i] = ((1 - alpha) * sidechain) + (alpha * y[i - 1]).

Not entirely sure what that means? I know, right. I’m scouring the internet to find out exactly what this algorithm is. Now I’m not particularly well versed in DSP (okay, I have no idea at this stage), but I’m working at it. I’ve got to go through some maths textbooks to work all this shit out (I’m a Psychology and Music major, for chrissakes!) but we’ll see how we go.

I’ve been reading the Wikipedia page for a Low Pass Filter, as I was told by the tutorial that that is what we’re trying to accomplish.

https://en.wikipedia.org/wiki/Low-pass_filter

One day, this will all make sense to me. For now, I just sort of leave it on my computer to make me look smrt to my students. When I’m not making sic beats.

Reading a tiny bit further, a kind soul has refactored this nightmare (somehow) into some pseudocode that we can use to build our C++.

So now we’re getting somewhere. I’ve not the faintest idea how this whole thing has been discretized or whatnot, but hey, maybe I’ll learn at some point during this blog. We’re currently building an Infinite Impulse Response filter.

That is, the change from one filter output to the next is proportional to the difference between the previous output and the next input. This exponential smoothing property matches the exponential decay seen in the continuous-time system. As expected, as the time constant increases, the discrete-time smoothing parameter α decreases, and the output samples respond more slowly to a change in the input samples; the system has more inertia. This filter is an infinite-impulse-response (IIR) single-pole low-pass filter.

Back to JUCE’s algorithm.

[9]:  y[i] = ((1 - alpha) * sidechain) + (alpha * y[i - 1]).

and we’re implementing that in Xcode with;

lowPassCoeff = (alphaCopy * lowPassCoeff) + ((1.0f - alphaCopy) * mixedSamples); 

where y is our lowPassCoeff (y[i - 1] being its value from the previous sample) and sidechain is our mixedSamples.

I feel like they rearranged it just for fun.

lowPassCoeff = ((1.0f - alphaCopy) * mixedSamples) + (alphaCopy * lowPassCoeff); 

This form will also work.
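To convince ourselves the two forms behave the same, here’s a hedged standalone version of the smoother — smooth is a made-up helper, and in the real plugin lowPassCoeff would live on as member state between samples:

```cpp
#include <cassert>

// One-pole IIR smoother: y[i] = ((1 - alpha) * sidechain) + (alpha * y[i - 1]).
// lowPassCoeff carries y[i - 1] in and y[i] out; alphaCopy is the smoothing amount.
float smooth (float mixedSamples, float& lowPassCoeff, float alphaCopy)
{
    lowPassCoeff = ((1.0f - alphaCopy) * mixedSamples) + (alphaCopy * lowPassCoeff);
    return lowPassCoeff;
}
```

With alphaCopy at 0 the output tracks the input exactly, at 1 it never moves off its old value, and anywhere in between you get the exponential smoothing the Wikipedia excerpt describes — more alpha, more inertia.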

We’re almost there! We’ve just got to compare the signal level to the threshold, and if it doesn’t make the cut, we write 0’s to our output buffer and send that on to get some god damn peace and quiet in this house. Thanks so much for reading!

Resources for this one;

https://github.com/passivist/GRNLR/wiki/Playing-the-Buffer

Cool granular project. I haven’t actually built it yet, but I’ll definitely be giving it a go. Anyone that uses SuperCollider is a friend of mine. Also, he had some great explanations I borrowed.

The Audio Programmer builds an IIR plugin. I couldn’t actually get this to work for some reason when I last tried it, but it builds on some of the concepts here with IIR filters (obviously). Much of the code is pre-built and you can just download a file, but I think it’s best to start on some of his earlier videos.