Noise Gate Pt. 8

We’re puttering along pretty smoothly at this point, with the ability to actually use our Noise Gate finally coming to fruition. On the JUCE “tutorial” side, this is where it ends.
It’s all up to US now! So what! I bet we can build a purty looking GUI.

If you’re looking for the Project up to this point, you can find it here.

This is where we’ll diverge once more from the way we regularly do things, by creating a basic GUI that doesn’t take a lot of work. There are two ways to look at generating GUI in JUCE (that I know of). There’s a great talk on it here. Julian talks all about the wacky and wonderful things you can do with the GUI in this, and in part 2, which I’ll link below. And you can learn how to make a house here;

https://docs.juce.com/master/tutorial_graphics_class.html

I don’t know exactly why you’d do this, but hey, what do I know. For this, you can also look at Martin Robinson’s book, “How to do everything in JUCE except audio”, or “JUCE for Beginners”.

Much of the way we make GUI in larger projects is with the sort of things Julian talks about here. We’ll see if we can make ours in two different ways: the original, basic way, and the more advanced, purtier way. We start with the basic way.

The Basic Way

If we go to the documentation and look up GenericAudioProcessorEditor, this one does what it says it does on the box. It creates;

A type of UI component that displays the parameters of an AudioProcessor as a simple list of sliders, combo boxes and switches.

So we can use this to take anything that we added to the AudioProcessor, and spit out a basic enough GUI to check functionality. While we can definitely add to this, it’s a good enough start to check that everything works the way that we want it to. For instance, in our version currently, we don’t actually have any control over any of the levels. They are set and are not able to be changed in most instances (some DAWs give you a generic audio editor by default, but I’m in Logic and Pro Tools, so it won’t currently do that work for me). This is the way the tutorial does their graphics.
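In code, wiring this up is tiny. Here’s a sketch of createEditor, assuming the processor class is called NoiseGateAudioProcessor (yours will differ, and depending on your JUCE version the constructor may take a pointer rather than a reference):

```cpp
// PluginProcessor.cpp — sketch only; the class name is an assumption
juce::AudioProcessorEditor* NoiseGateAudioProcessor::createEditor()
{
    // Builds a plain list of sliders/combo boxes/switches from whatever
    // parameters the processor exposes — no custom drawing needed.
    return new juce::GenericAudioProcessorEditor (*this);
}
```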
For debugging, this is probably the way you want to start. In our instance, I don’t think anybody would be particularly interested in leaving it here. I’d like to look into creating a custom editor, to learn how to map sliders and such so that we can control it, and change all the colours and such ourselves!

The Better Way

Our graphics start here in the paint function. This will draw shit all over the screen.

We’ve got g.fillAll (the g is for gRaPHics), and it will paint the entire window. A good way to start, as the JUCE GUI is layered, meaning we can paint, and then paint something else, somewhere else, in another colour (or the same colour) if we want to.

We’re painting with “getLookAndFeel().findColour”, which has to do with a specific sort of template code that you can set. getLookAndFeel is super useful when painting lots of elements, as it lets you essentially set your own defaults for buttons, text and whatnot, and simply refer to the template rather than having to re-write the RGB (or whatever colour code you use) each time. Which can be a pain if you’ve got difficult colour schemes, unlike me, Mr. Primary Colours.

Once we’ve painted everything, we change our colour with setColour (and set it to the JUCE white). Then we change our font size to 15. I’d LOVE to show you how to implement a custom font at some point, when I work out the best way to do it.

Then we drawFittedText, which is text that fits inside a particular textbox. If you’ve used Adumbo InDesign, this’ll help you to imagine it. We don’t have to use drawFittedText, as there are plenty of other ways to write text, detailed in the documentation here;

https://docs.juce.com/master/classGraphics.html#a41c5a930dfc9b8cdd8c8a464f7e11b46

So that’s our basic model. We’ll get rid of this crud so we can start anew.
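For reference, here’s roughly what that stock paint boilerplate looks like (the editor class name is an assumption):

```cpp
void NoiseGateAudioProcessorEditor::paint (juce::Graphics& g)
{
    // fill the whole window with the LookAndFeel's background colour
    g.fillAll (getLookAndFeel().findColour (juce::ResizableWindow::backgroundColourId));

    g.setColour (juce::Colours::white);
    g.setFont (15.0f);

    // draw text fitted into the whole window, centred, on one line
    g.drawFittedText ("Hello World!", getLocalBounds(),
                      juce::Justification::centred, 1);
}
```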
Here’s the reason that our GUI looks the way it does. If you’ve got the paid version of JUCE, you won’t get that splash screen that fades up for the first few seconds when opening, but I’m too cheap to splash out for it at this stage. My oh my, I can’t wait till I can get it though!

Looking up graphics in the documentation, this code says that we paint our editor using a Graphics context (g).
We can use the Graphics object to make lines, change colours, fill the screen, draw shapes and much more. The simplest use of this is to draw some stuff ourselves. Delete everything within the brackets so your paint function looks like so;
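So the emptied-out function is just (class name assumed, as before):

```cpp
void NoiseGateAudioProcessorEditor::paint (juce::Graphics& g)
{
    // nothing painted yet — we'll build it back up from here
}
```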
Painting Graphics to the Object

Looking in our PluginEditor.h, we have;
With that, we can see that we have declared a paint function (the one we’re using) and a resized function, which we aren’t really using, but we’re overriding nonetheless. If we check out paint in the Component class documentation…
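Those declarations look roughly like this in the header:

```cpp
// PluginEditor.h — the two overrides we're talking about
void paint (juce::Graphics&) override;
void resized() override;
```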
This has more to do with the actual computer than it does our plugin, as it’s how the plugin is actually displayed. We also have some cool functions in here, like mouseEnter and keyPressed and whatnot; we’ll explore those as soon as we get it all working.

As we’ve got paint and resized going, let’s use paint for now.

We start in our paint method by writing g. (as this was the parameter name for the Graphics reference in the documentation). We can actually use any of the public member functions from the Graphics class; it really depends how we’d like our text to look.
To stick as close as we can to the basic version, I’m going to go with drawFittedText, just like the original text. It takes the string to draw, the area to fit it in, the justification, and the maximum number of lines (1, for a single-line string).
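As a sketch, the call looks like this (the string here is a placeholder):

```cpp
g.drawFittedText ("Noise Gate",            // the string to draw
                  getLocalBounds(),        // the area: the whole window
                  juce::Justification::centred,
                  1);                      // maximum number of lines
```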
When that’s completed, with getLocalBounds setting my draw area to the entire plugin window, and the text centred in the middle of that, it should read as follows.
Now our plugin looks like this;

Pretty gorgeous, no? Now we could fix the text, make it white, change the background and do any number of crazy things. Again, this is just one way to write text. You can probably imagine how we want to build our GUI now. You’re probably thinking that you’d just use some Photoshop PNGs to paint the whole look of the GUI, and that’s true too, but these are the fundamentals, and it’s also a pretty robust form of GUI. You can get away with a lot more than you think with this!

To finish up this part, I’m going to get it to match my regular sort of branding and colours: the yellow background, with black and white font.

Colours and Branding

If we Command+Ctrl click on Colour, we’ll get taken to the JUCE documentation of their implementation of Colour. There are a number of different ways to define a colour, whether using RGB or using JUCE’s own colour system (which isn’t half bad!).
     If we use setColour, then the next actions we take (until we change colour) will be using our colour. I want to fill the background, then write the text, so we’ll setColour to yellow, and then setColour to Black, and then finally setColour to white. It’s somewhat tedious, but hey, we learn this way.  For now, let’s use the JUCE colours for the text, and then I’ll go in and grab the RGB for the yellow if it looks shit.     
     For ease of use, I’m going to use fillAll, with the current brush. That means I first set the colour, then I fill all, then I change the colour again if I want to write something else.     
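Sketched out, that sequence is (colours here are placeholders):

```cpp
g.setColour (juce::Colours::yellow);   // brush is now yellow
g.fillAll();                           // fillAll with the current brush
g.setColour (juce::Colours::black);    // switch brush before drawing text
```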
Sweet baby gebus. So it looks absolutely horrible. I mean, we could probably look into making sure the yellow isn’t so awful, and the font is different and better. There’s a lot, really…

You can see I’ve just used Justification::centredRight, to sort of mixed levels of success. As far as I’m aware, you can’t actually change colour halfway through some text, so this might be the best we have right now.

In terms of justification, it’s another good one to have a look at in the documentation. In terms of using it, you’re probably thinking that using pixels for exact positions is better, and it CAN be, but justifications and getLocalBounds sort of allow us to make it so that if the user resizes the window it still looks good. I’m yet to work out just how much of this kind of positioning is used, though I know splitting the window into boxes and drawing your positioning from that is incredibly popular.
Who knows. We can at least improve it by beefing up the size a bit, changing that yellow, and the font.

The Right Yellow, and Font… and Size

To change our size, we can call a couple of different functions, depending on whether we just want the next text to change (kind of like setColour) or whether we want the default drawing text to change. Looking into our Graphics class documentation…
     We can use this implementation to change the size. Let’s do dat. In this part, I’ve used setFont(60.0f) because setFont takes a float in this implementation. I’ve also moved the Justification to centredLeft and Right, AND changed the size of the actual window to something a little better than a plain ol’ square.     
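Putting that together, here’s a sketch of those changes (the window size and the two strings are my stand-ins, not the post’s actual values):

```cpp
// in the editor's constructor — a wider window than the default square
setSize (600, 300);

// in paint()
g.setFont (60.0f);   // this Graphics::setFont overload takes a float height
g.drawFittedText ("Left text",  getLocalBounds(),
                  juce::Justification::centredLeft,  1);
g.drawFittedText ("Right text", getLocalBounds(),
                  juce::Justification::centredRight, 1);
```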
And here’s the paint code now. I thought I’d try goldenrod, but I’m also going to do the RGB in a moment.
     While that looks a bit better, the colour is really killing me. Jumping into the colour with Control + Command and clicking on Colour, I find what I’m looking for.     
     So I can use this by typing;     
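Sketched with placeholder RGB values (swap in your own yellow):

```cpp
// Colour::fromRGB builds a colour from 0-255 components;
// these numbers are a placeholder yellow, not the post's actual values
g.fillAll (juce::Colour::fromRGB (255, 209, 0));
```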
     Now we’re starting to look pretty damn good! All we’ve got to do now is get the spacing right, and the font! We’re so close!     
I think I will in fact separate out the font element, as it involves diving into the Projucer quite a bit to get it all working. Well, you’ve got to drop in a file and set a LookAndFeel and whatnot. But I think to begin with, we’ll look at sliders, and making sure we can send across those values to get something that pretty closely resembles a working, actual plugin.

If you’re diggin’ this, or if you had any errors, shoot me a message and I’ll pretend I can help you (I will try!). See you next time!

For those of you that are interested, here’s part 2 of Julian’s talk on Graphical User Interfaces with JUCE.
Noise Gate Pt. 7

Lucky number seven! Woohoo! Okay, enough of that. We’ve got to get moving on this noise gate, as it’s approximately a page on the JUCE tutorials, yet I’ve managed to expound eight and a half thousand words on this sucker. Let’s see how big-ass we can make this little-ass plugin. We’ve still got to do the GUI, so I’ll dump a bajillion words on that, I’m sure.
If you’re just joining us on this project, boy have you missed out. But you can pretend to follow along with this link. You’ve just got to download the GitHub repository. Don’t clone it or branch it or commit or whatever, as I haven’t the foggiest what that means, and I’m sure it’ll end up with my source control more fucked than the time I tried to use Perforce.

I even tried to get the GitHub desktop app, and my, what a waste that was. I got it all worked out, but none of my currently existing repos are in there. So instead of re-writing the others, MEH, I’ll keep doing it the way I meant to for now, and hope that there’s some overarching folder structure I can fit it into later.
According to our JUCE tutorial;

[10]: If this coefficient is greater than or equal to the threshold, we reset the sample countdown to the sample rate.

So for some reason they have picked an EXTREMELY CPU intensive version. Or rather, of course they have, like most bloody audio tutorials. I’m going to see how another noise gate works.

How does this one actually f*&king work?

So we set our mono sidechain and our alpha to be our lowpass coefficient. Then, if our coefficient is higher than the threshold, we print the incoming samples to our outgoing buffer to be passed to the engine. If it’s lower, we literally just keep printing 0’s to the buffer.
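That pass-through-or-silence idea, boiled down to plain C++ (the names here are mine, not the tutorial’s JUCE code):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// If the smoothed sidechain level clears the threshold, the input sample
// passes through unchanged; otherwise we write a 0 (silence).
std::vector<float> gateSamples (const std::vector<float>& input,
                                const std::vector<float>& sidechainLevel,
                                float threshold)
{
    std::vector<float> out (input.size(), 0.0f);

    for (std::size_t j = 0; j < input.size(); ++j)
        out[j] = sidechainLevel[j] >= threshold ? input[j] : 0.0f;

    return out;
}
```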
ARGGG WHY SHOW ME A CRAP WAY TO DO THIS.

For each channel;

We getWritePointer(channel, sample), and write to it;

IF the sampleCountDown is greater than 0, then get the readPointer

OR

If not, we write 0’s to the buffer (resulting in silence).

The ? and : ternary operators let us get this all done in one line, which is rad.

sampleCountDown > 0 ? *mainInputOutput.getReadPointer (i, j) : 0.0f

Here’s a bit from Miller Puckette’s book (or my hackjobbed abbreviation of it) on companders and noise gates. I think it’d be interesting to take this same example and turn it into a compressor. I’d say we’d just do this by multiplying the end result of the writePointer by the inverse of the threshold control, so maybe it’s worth jumping into a bit of DSP knowledge, as I haven’t looked at that yet.

“A compander is a tool that amplifies a signal with a variable gain, depending on the signal’s measured amplitude. The term is a contraction of “compressor” and “expander”. A compressor’s gain decreases as the input level increases, so that the dynamic range, that is, the overall variation in signal level, is reduced. An expander does the reverse, increasing the dynamic range. Frequently the gain depends not only on the immediate signal level but on its history; for instance the rate of change might be limited or there might be a time delay.

By using Fourier analysis and resynthesis, we can do companding individually on narrow-band channels.

This technique is useful for removing noise from a recorded sound. We either measure or guess values of the noise floor f[k]. Because of the design of the gain function g[m,k], only amplitudes which are above the noise floor reach the output. Since this is done on narrow frequency bands, it is sometimes possible to remove most of the noise even while the signal itself, in the frequency ranges where it is louder than the noise floor, is mostly preserved.

The technique is also useful as preparation before applying a non-linear operation, such as distortion, to a sound. It is often best to distort only the most salient frequencies of the sound. Subtracting the noise-gated sound from the original then gives a residual signal which can be passed through undistorted.”

It’s interesting to see how similar a lot of this stuff is. Though it’s also got a slab of this;
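Back to the code: the per-sample countdown behaviour from a few paragraphs up can be sketched in plain C++ like so. In the JUCE version this happens per channel via getWritePointer/getReadPointer; here it’s a single channel with names of my own choosing:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// While sampleCountDown is above zero the input is copied through and the
// counter decremented once per sample; once it hits zero, we emit silence.
void applyGate (const float* in, float* out, std::size_t numSamples,
                int& sampleCountDown)
{
    for (std::size_t j = 0; j < numSamples; ++j)
    {
        // the same ternary shape as the tutorial's one-liner
        out[j] = sampleCountDown > 0 ? in[j] : 0.0f;

        if (sampleCountDown > 0)
            --sampleCountDown;
    }
}
```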
How Miller Puckette is as good at this as he is at music is mindblowing to me. Fuck, he’s so good. I got to see his opera at the Bendigo Festival of Exploratory Music and it was fucking rad.

I also looked for information on how to actually write a noise gate plugin in a more effective way than described here (as JUCE just loved to tell me about how shit their idea was). But having looked through these three books;

Miller Puckette’s book

Understanding Digital Signal Processing

Lazzarini’s bible

It’s good. So good.
So if someone could enlighten me what better ways there are, that’d be swell.

A Better Solution

There’s a solution on KVR here that we could use, with a hardcoded version. It’s not quite JUCE, but probably wouldn’t be hard to adapt.
Finally, we’ve got to make sure that as long as we’re writing to the buffer, we’re decrementing sampleCountDown by 1 each sample, to make sure it doesn’t get all fucked up.
Not entirely sure what’s the best way to tackle it. I’ve found Fabian’s original thoughts, as the author of the code, on what he meant by it being inefficient, and it’s not as bad as I thought.
“My comment there was more referring to repeatedly calling getWritePointer which is unnecessary. Also I think the code should be restructured in a way to calculate beforehand how many samples will be zeroed and then apply this to every channel in bulk (as @jimc mentions).”
— Fabian (JUCE)
Isn’t it easier to just call getArrayOfWritePointers() at the start and work with data[chan][sampleNumber] anyway? Apparently this is the answer, though I’m not sure how you’re supposed to know how many 0’s you’d like to write. This apparently also works well with a SIMD implementation.

With that, we’re actually finishing up the processing side of what we have!

There’s probably some cool things we could do here. I’d really like to get an Attack and Release envelope working, as at the moment it really hammers down the volume completely once it hits lower than the threshold. We’ll probably be able to fine tune this a smidge once we get the GUI going.

If we load this bad boi up, we get the desired behaviour! It’s loud up to a certain point, and then it totally cuts off, depending on the level of the sidechain.
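For what it’s worth, here’s a plain-C++ sketch of the “zero in bulk” idea from Fabian’s comment (function and variable names are mine): work out up front how many samples stay open, then copy nothing and zero the tail of every channel in one go, instead of branching per sample.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Zero the tail of each channel in bulk: samples [0, open) pass through
// untouched, everything after becomes silence.
void gateBulk (std::vector<std::vector<float>>& channels, int sampleCountDown)
{
    for (auto& ch : channels)
    {
        const std::size_t open = sampleCountDown > 0
            ? std::min (ch.size(), (std::size_t) sampleCountDown)
            : 0;

        std::fill (ch.begin() + (std::ptrdiff_t) open, ch.end(), 0.0f);
    }
}
```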
     Let’s make it look less like poo in the next blog.
Noise Gate Pt. 6

We’re moving on with our Noise Gate plugin today to finish the bulk of our Audio Processing. We’ll then have quite a bit of GUI to do to tidy it all up, and then we’ll either look at expanding, OR we’ll take a step down JUCE tutorial lane, bite off another tutorial and chew that till the grounds are chunky and we need a toothpick to pull all of the malloc calls out from between our teeth.

You can find the previous work completed on this project here, so you can follow step by step!

I’ll be adapting the tutorial from the main JUCE tutorial page, so if you are looking to skip my explanations of each line of code, go here;

https://docs.juce.com/master/tutorial_plugin_examples.html

We’re a few lines inside the processBlock now, and hitting our first for loop in the project. This is a shape of code that we’ll see really, really often in audio programming, as it’ll iterate through each of our samples and each of our channels to write to each sample before sending the audio buffer on to the processor. If you’re looking to learn more about that, check out Blog 5 for a detailed explanation of the audio buffer/programming workflow of what’s actually happening in the processBlock.
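In outline, the loop shape we’re describing is this (the sideChainInput name is an assumption; the body gets filled in over the next few steps):

```cpp
// outline only — outer loop walks samples, inner loop walks channels
for (auto j = 0; j < buffer.getNumSamples(); ++j)
{
    for (auto i = 0; i < sideChainInput.getNumChannels(); ++i)
    {
        // per-sample, per-channel processing goes here
    }
}
```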
Screenshot: allocating memory for our buffer in samples.
You can find the blog related to that here;

www.scowlingowlsound.com/blog

So our for loop runs through, starting from 0, up to buffer.getNumSamples().
What is buffer.getNumSamples()? With it, we’re checking the limits of the memory we just allocated in the last step, so the loop never runs past the end of the buffer. The documentation for getNumSamples also asks us to check out a couple of other related functions while we’re at it.
     There’s some advice from the JUCE’ean overlords to check out getReadPointer and getWritePointer. So let’s do that.     
getReadPointer and getWritePointer

The methods getReadPointer and getWritePointer return pointers to the data inside a buffer, and these pointers can be used in C++ (and C) like arrays. These functions would let us replace the whole buffer in one go, but instead we usually write the buffer sample by sample. Because we are using the pointers in this way, we must ensure that our index is never out of bounds of the buffer. The ScopedPointer class implements a pointer that is automatically deleted when the program leaves scope, so we don’t have to worry about the pointer being accessed in some illegal way (for example, when the object it points to has already been deleted). We’ll come across them from time to time as well.

We are also iterating through the buffer, so that each time the loop closes, we ++j (add 1 to j) until we hit the limit of our buffer, at buffer.getNumSamples(). The outer loop processes the individual samples in the audio buffer block while the inner loop processes the channels in an interleaved manner. This allows us to keep the same state for each channel within a single sample’s processing.

Moving back to the tutorial, JUCE says;

[8]: For each channel in the sidechain, we add the signals together and divide by the number of sidechain channels in order to sum the signal to mono.

[9]: Next we calculate the low-pass coefficient from the alpha parameter and the sidechain mono signal using the formula y[i] = ((1 - alpha) * sidechain) + (alpha * y[i - 1]).

So, let’s deal with the sidechain first. The sidechain will be controlling our gate, comparing its volume against our threshold to determine the gating behaviour.
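That formula in [9] is small enough to sketch as a stand-alone function in plain C++ (the name lowPassStep is mine; ‘state’ carries y[i - 1] between calls):

```cpp
#include <cassert>
#include <cmath>

// One step of the tutorial's one-pole low-pass:
// y[i] = ((1 - alpha) * sidechain) + (alpha * y[i - 1])
float lowPassStep (float sidechain, float alpha, float& state)
{
    state = (1.0f - alpha) * sidechain + alpha * state;
    return state;
}
```

Higher alpha means more smoothing: the output leans harder on its own history and reacts more slowly to the sidechain.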
     The Mono Sidechain  First up we’ll need a variable to convert our sidechain into a mono signal. We could have just specified in our input requirements that the sidechain must be mono, but we’re looking to make things really easy for the user. As they should not have to think about things, summing to mono is pretty easy for us. It’s stereo-izing things that’s the really hard one to do. Maybe we’ll have a go at that soon!  So for each channel in the sidechain, we’re using getReadPointer, that function from earlier, taking in two arguments, i and j. These two arguments will be one of the implementations in the documentation, however we can already take a guess at what they will be. Seeing as we’ve just used i as part of the sidechain channel  for  loop, let’s assume that it’s related to the channel number, and seeing as we decided a little earlier that we’d use j for our sample, we can try and write out in English what this function is doing.      
   
     “ Make our new variable “mixedSamples” equal to the current channel and current sample of the sidechain input. ” 
   
  
 
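Before we get to the code itself, here's a framework-free sketch of that sentence (stand-in types and names are mine): for one sample index j, add that sample from every sidechain channel into mixedSamples, then divide by the channel count, with static_cast<float> keeping the division in floating point (channel counts are ints).

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Sum-to-mono for a single sample index j: accumulate the j-th sample
// of every channel, then divide by the number of channels.
float mixToMono (const std::vector<std::vector<float>>& sideChain, int j)
{
    float mixedSamples = 0.0f;

    for (const auto& channel : sideChain)
        mixedSamples += channel[(std::size_t) j];   // mixedSamples = mixedSamples + sample

    return mixedSamples / static_cast<float> (sideChain.size());
}
```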
     Because we don’t know how many channels our sidechain will have, it’ll now add the corresponding sample from each channel into the same variable, landing us in dual mono, or mono. Hooray! The only other part to explain is the usage of += in our code                   mixedSamples += sideChainInput.getReadPointer (i) [j];   This just means   mixedSamples = mixedSamples + sideChainInput.getReadPointer (i) [j]   so that the samples keep accumulating properly.  The Clear Function  I’m not entirely sure why the tutorial doesn’t include a clear function, so we’ll throw one in and test it out. It might help, it might not. Whenever the processBlock starts, if we don’t initialise the buffer and clear it out, we could end up spitting really bad garbage out of our speakers. I’ve found through my research that it seems like best practice to clear it out. So let’s take a look.  Looking into the documentation for our AudioBuffer class shows us a clear function that takes a channel index, a start sample and a number of samples.     
     We can implement this like so;   buffer.clear (j, 0, buffer.getNumSamples());   However for now I will have it commented out, just until we get a function up and running where we can test if it’s really necessary or not. Seeing as most of the time we’re culling the audio, I don’t know at this stage if it’s required, or if it just feels like a hangover of reading and hearing about how it should be cleared for circular buffers and what not.  Static Cast  We’re getting there! We’re most of the way through the functionality at this stage, and we’re almost set up to do some maths to calculate the threshold, and then move on. Hopefully you’re enjoying the musings, the wine ages well and the coffee cools nicely.           mixedSamples /= static_cast<float> (sideChainInput.getNumChannels());   So what in the world is static_cast? It’s actually not too freaky (if I understand it correctly) in that it just converts one type into another, here so that the division happens in floats. Because the channel count is an integer and our samples are floats, we’ll use static_cast<float> so the division gives a float answer, finishing off our mono-ing function.   (a /= b) can be written as (a = a / b)
 https://www.geeksforgeeks.org/operators-c-c/    So writing this out again we’ve got   mixedSamples = mixedSamples / sideChainInput.getNumChannels() (now cast to a float!)  So if there are two channels, we’ll be adding the two samples together and then dividing by two. If there’s one we’ll be dividing by one. So we end up with one channel, which is our mono channel in the end, mmmkay?  The Spooky Maths Bit  We’re up to one of  those  bits in the audio process. The JUCE tutorial is very helpful with an explanation;  [9]: Next we calculate the low-pass coefficient from the alpha parameter and the sidechain mono signal using the formula y[i] = ((1 - alpha) * sidechain) + (alpha * y[i - 1]).  Not entirely sure what that means? I know right. I’m scouring the internet to find what exactly this algorithm is. Now I’m not particularly well versed in DSP (okay, I have no idea at this stage), but I’m working at it. I’ve got to go through some maths textbooks to work all this shit out (I’m a Psychology and Music Major for chrissakes!) but we’ll see how we go.   I’ve been reading the Wikipedia page for a Low Pass Filter, as I was told by the tutorial that that is what we’re trying to accomplish.   https://en.wikipedia.org/wiki/Low-pass_filter      
     One day, this will all make sense to me. For now, I just sort of leave it on my computer to make me look smrt to my students. When I’m not making sic beats.  Reading a tiny bit further, a kind soul has refactored this nightmare (somehow) into some pseudocode that we can use to build our C++.     
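Hedging heavily (the smoothing factor here comes from a timestep dt and an RC time constant, the way Wikipedia defines it, rather than directly from our plugin's alpha parameter), that pseudocode might translate to C++ along these lines:

```cpp
#include <cassert>
#include <vector>

// Single-pole low-pass smoothing: y[i] = a*x[i] + (1 - a)*y[i-1],
// where the smoothing factor a is derived from the timestep dt and
// the RC time constant. Remember this smooths *data* (our level
// estimate); it isn't an audible tone filter.
std::vector<float> lowPass (const std::vector<float>& x, float dt, float RC)
{
    std::vector<float> y (x.size());
    const float a = dt / (RC + dt);

    if (! x.empty())
        y[0] = a * x[0];

    for (std::size_t i = 1; i < x.size(); ++i)
        y[i] = a * x[i] + (1.0f - a) * y[i - 1];

    return y;
}
```

With RC = 0 the factor a becomes 1 and the input passes straight through; with a bigger RC the output chases the input more sluggishly, which is the "inertia" the Wikipedia quote below talks about.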
     So now we’re getting somewhere. I haven’t the faintest idea how this whole thing has been discretized or whatnot, but hey, maybe I’ll learn at some point during this blog. We’re currently building an Infinite Impulse Response filter.    That is, the change from one filter output to the next is    proportional    to the difference between the previous output and the next input. This    exponential smoothing    property matches the    exponential    decay seen in the continuous-time system. As expected, as the    time constant    increases, the discrete-time smoothing parameter α decreases, and the output samples respond more slowly to a change in the input samples: the system has more    inertia   . This filter is an    infinite-impulse-response    (IIR) single-pole low-pass filter.   Back to JUCE’s algorithm.  [9]:   y[i] = ((1 - alpha) *    sidechain   ) + (alpha * y[i - 1]) .  and we’re writing that in Xcode with;   lowPassCoeff = (alphaCopy * lowPassCoeff) + ((1.0f - alphaCopy) * mixedSamples);    y = lowPassCoeff  sidechain = mixedSamples  I feel like they rearranged it just for fun.   lowPassCoeff = ((1.0f - alphaCopy) * mixedSamples) + (alphaCopy * lowPassCoeff);    This form will also work.     
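Just to convince myself that the rearranged line really is the same thing, a quick sanity check (the values are arbitrary):

```cpp
#include <cassert>

// Both orderings of the smoothing step as written above; they're the
// same two products, just added in the opposite order.
float formA (float alphaCopy, float lowPassCoeff, float mixedSamples)
{
    return (alphaCopy * lowPassCoeff) + ((1.0f - alphaCopy) * mixedSamples);
}

float formB (float alphaCopy, float lowPassCoeff, float mixedSamples)
{
    return ((1.0f - alphaCopy) * mixedSamples) + (alphaCopy * lowPassCoeff);
}
```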
     We’re almost there! We’ve just got to compare the signal level to the threshold, otherwise we write 0’s to our output buffer, and send that on to get some god damn peace and quiet in this house. Thanks so much for reading!  Resources for this one;   https://github.com/passivist/GRNLR/wiki/Playing-the-Buffer   Cool granular project. I haven’t actually built it yet, but I’ll definitely be giving it a go. Anyone that uses Supercollider is a friend of mine. Also, he had some great explanations I borrowed.  There’s also a video where The Audio Programmer builds an IIR plugin. I couldn’t actually get this to work for some reason when I last tried it, but it builds on some of the concepts here with IIR filters (obviously). Much of the code is pre-built, and you can just download a file, but I think it’s best to start on some of his earlier videos.

Noise Gate Pt. 6: getReadPointer / getWritePointer, the Clear Function, Static Cast
      Noise Gate Pt. 5     
     So we’re up to here so far. If you haven’t been following the blog up to this point, WELCOME! We’ll be building the introductory template for JUCE here on the Noise Gate plugin. You can find it on JUCE’s website  here . You can follow that, or you can go back and read all my nonsense as I stumble through the tutorial.   Today we will be implementing the start of the processBlock, the spot where all the magic happens. To begin with, we’ll have to have a look at Audio Programming, how it works, and why so much Audio Programming information is super outdated. Not saying mine won’t also be outdated, but at least it will be in one place.     
     The processBlock. The mother of all shit happening in this audio program. Before we start writing in what we’ll be doing, let’s discuss the processBlock a bit, talking our way through  the documentation.   The Documentation : Renders the next block.  The Translation : How audio programming works.  Audio Programming  WELL. Here’s the crux of audio programming. It requires a short and convoluted foray into computer programming, so here we go. I’m going to try and find a better overview rather than my ramshackle explanation. I fell into the master’s video, TIMURRRRR.  So Timur talks about how audio programming is different to most other C++ in that it’s real-time, and it’s lock free. When coding, there are many many many instructions being completed by the computer, and anything that’s not a straight line could take a highly variable amount of time. C++ in itself has no understanding of what  audio  is, which is why we lean on 3rd party libraries like JUCE and Maximilian to do the heavy lifting for us.  So the main point of audio: representing the waveform displacement of air as numbers between -1 and 1. All of our music is just between these two figures. How? Because there may be MILLIONS of numbers between these, depending on the decimal places you go to. So technically, we’re looking at -1.0f and 1.0f as our possible bounds. As sound causes displacement and analog gear creates differing voltage to follow the same shape, so too does digital audio. Now we use our sample rate to measure this as fast as we can, based on some mathematics known as the Nyquist theorem. All of this stuff can be better explained elsewhere, so I’m just flying through it.  Let’s go with this guy, he looks pretty smrt.  Moving on.  
To avoid getting 44100 callbacks per second (or 48000, 88200, 96000 or 192000, or others) we process in larger audio buffers, taking in a big chunk of samples as an array of audio floats. This is referred to as the buffer size.   Here’s the bit where Timur is talking about block sizes      
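As a toy model of why buffering helps (all names and numbers here are mine, nothing JUCE-specific):

```cpp
#include <cassert>
#include <vector>

// At 44100 Hz with a block size of 441 samples, the host only needs
// 44100 / 441 = 100 callbacks a second instead of one per sample.
int callbacksPerSecond (int sampleRate, int blockSize)
{
    return sampleRate / blockSize;
}

// Toy block callback: the host hands over a chunk of floats in the
// [-1.0f, 1.0f] range and we process them in place.
void processChunk (std::vector<float>& block, float gain)
{
    for (auto& sample : block)
        sample *= gain;   // stand-in for whatever per-sample work we do
}
```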
     So the idea in audio programming is that the audio is being sent to and from the soundcard and your plugin/software. So all of our programming is done inside these audio buffers, and we’re sending them to the soundcard, which then asks for the next block, and there’s this constant to and fro to ensure that the correct information is being passed in REAL TIME (the spooky bit): making audio, whenever you need it, as fast as you can possibly make it.  The actual calling of audio information happens in the processBlock function!  When this method is called, the buffer contains a number of channels which is at least as great as the maximum number of input and output channels that this processor is using. It will be filled with the processor's input data and should be replaced with the processor's output. This is to be transferred to the soundcard as I said earlier, so we need to stay in the correct format.  The number of samples in these buffers is NOT guaranteed to be the same for every callback, and may be more or less than the estimated value given to   prepareToPlay()  . Your code must be able to cope with variable-sized blocks, or you're going to get clicks and crashes!  If you have implemented the getBypassParameter method, then you need to check the value of this parameter in this callback and bypass your processing if the parameter has a non-zero value.     
     So this runs the function processBlock, which takes an AudioBuffer (a JUCE class) and a MidiBuffer (though we won’t be using that at this stage).     
     Let’s first chat about  auto . Apparently it was a pile of shit until recently.  I heard that  here.  An auto variable understands what it is from inference. It’s a neat type!  In these lines of code, we’ll be separating the sidechain buffer from the main IO buffer for separate processing in subsequent steps. Again, we don’t want to be re-writing over the same buffer multiple times, nor do we want to try and access the information together as one lot, so instead we’re separating with getBusBuffer().      
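A tiny illustration of that inference (plain C++, nothing JUCE-specific; the names are mine):

```cpp
#include <cassert>
#include <type_traits>

// auto asks the compiler to deduce the type from the initialiser;
// the static_asserts below prove the deduction at compile time.
auto blockSize = 512;     // deduced as int
auto gain      = 0.5f;    // deduced as float

static_assert (std::is_same<decltype (blockSize), int>::value,   "blockSize is an int");
static_assert (std::is_same<decltype (gain),      float>::value, "gain is a float");
```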
     We’re using these to separate out the sidechain from the input/output main delivery system. However in the tutorial, we’re looking for a definition of getBusBuffer that takes 3 arguments. We should be getting a definition that has a  buffer,  a  boolean  and a bus index to separate them out. Searching that out gives us…     
     So we now have two separate busses through which to put our audio code, so we can measure each one and apply the value a little later.  Diggin’ through the documentation has also revealed these functions to me;     
     So we could have also used these, unless I am mistaken. For now we’ll stick with the tutorial, but I might come back and implement these too.  The next lines of code are for our algorithm that’s coming up, where we’ll need access to those variables we made earlier in the .h.     
     Looking back to the original  alpha  and  threshold,       
       We declared these in the private header file as AudioParameterFloat pointers. Therefore, we need to use a special function to get the value of these AudioParameters.  Looking at the JUCE API for  AudioParameterFloat , we find the get() method (plus an implicit float conversion) for reading the parameter’s current value.     
     Which we can use to check out the values at the time. Now because we’re running this in our process block, we will be calling this each time we look for audio, which should keep our parameters correct, even if we change them in the middle of a block. I’m fairly sure that we have to make copies of these to use them in multiple places, as the C++11 standard warns about the “data races” that happen if you try and access the same memory from multiple threads at once. So here we are duplicating the alpha as a new variable named alphaCopy, so that while the value is the same, we’ll be accessing alphaCopy whenever we need alpha inside the block; all of this is written on the audio thread, so it won’t be accessing alpha in multiple spots at once.  God that’s a lot of information for right now. Check out the video, watch it the whole way through, and then maybe watch a few more just to get your head around how angry people are about real-time audio and how difficult it is.  And for those of you that have done these ones and want some more to go on with, I’m having a look at the Audio Developer Conference 2017, as their entire conference is online. I believe there are 4 pathways and I intend to watch them all and nod like I understand what all these geniuses are saying.
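Before we wrap up, here's a framework-free sketch of that copy-it-once idea (std::atomic is my stand-in for whatever AudioParameterFloat does internally; the names are mine):

```cpp
#include <atomic>
#include <cassert>

// The GUI/host thread may change the parameter while the audio thread
// is mid-block, so the audio thread takes one local copy up front and
// uses that copy for the whole block; every sample sees the same value.
std::atomic<float> alphaParam { 0.8f };

float processOneBlock (int numSamples)
{
    const float alphaCopy = alphaParam.load();   // single read per block
    float acc = 0.0f;
    for (int j = 0; j < numSamples; ++j)
        acc += alphaCopy;                        // stand-in per-sample work
    return acc;
}
```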

Auto types, Audio Programming Overview, and the processBlock

     Noise Gate Part 4   If you’re just joining us or if you futzed your project due to my harebrained instruction, here’s the file for where you  should  be at. You should (if you’ve followed the blog series so far) understand every line of code, where it comes from in the documentation and what it’s supposed to be doing.  We’re getting to the bit I dread. The actual Audio programming part.  How we know these particular functions need to be written is another story that’s developing as we go! Sit tight, grab a croissant and stare lovingly across the room at that coffee shop barista as we get into the nitty gritty stuff of prepareToPlay.   If we move to our PluginProcessor.cpp and find the    void NoiseGateAudioProcessor::prepareToPlay (double sampleRate, int samplesPerBlock)   Then hold ⌘ + Click on the function, it’ll open a small dropdown box that lets us select which definition we’d like to take a look at. For our purposes right now, we’re going for the source, so we can find the actual meaning of prepareToPlay. Jump to the definition in the audioProcessor and we’ll find the function a little ways down (or your selection should have taken you straight to it).      
     There’s a wee bit of terminology here that’s important, but we’ll start by talking about virtual functions. prepareToPlay is a  virtual  function, which means that we are allowed (and in this case expected) to redeclare it in our own class, so that our version overrides the base class’s. This has to do with memory allocation and something known as polymorphism, but let’s steer clear from that for right now. There are a few virtual functions in AudioProcessor, let’s see if we can find them, and then check our preset code to see if they show their ugly mugs.  Virtual Functions in AudioProcessor  So we have;    prepareToPlay    releaseResources    memoryWarningReceived    processBlock    processBlockBypassed    reset    setNonRealtime        When you create a new class that's a child of an existing class, it inherits all of the public methods from its parent. This leads to two common situations: 
 The child must override one of its parent's methods in order to gain new functionality. C++ makes this easy by allowing you to automatically override the original method by giving the new method the same name, inputs, and outputs. 
 Your program needs to work out whether it's the parent or a child, and C++ makes this easy by allowing you to return pointers to any type of class (parent or child) and cast them as pointers to the parent class. (more on this later) 
 If both of these things happen at the same time, it leads to an ambiguity: if a particular method is called, should it be the method of the parent class (based on the pointer type) or the method of the child class (based on the actual object)? C++ defaults to the parent's method unless you declare the parent's method as virtual, e.g.:  virtual void parentMethod();  
 With virtual functions, there's no more ambiguity. The program will definitely use the child's method in the case of virtual methods, and will definitely use the parent's method in the case of non-virtual methods. The GUI uses virtual methods extensively, for example in the case of the GenericProcessor class, which contains numerous methods that must be overridden by its child classes. 
     So JUCE is building much of the functionality behind the play, so that we can inherit much of the same functionality without having to rebuild it all the time (basically)  WHY DO I CARE?  Because JUCE (and C++) has one more important feature.  Pure  virtual functions (untainted by assholes). So, when we run into a Pure Virtual Function, we HAVE to override it in our class; if we don’t, our class stays abstract, can’t be instantiated, and the compiler gets very unhappy. Let’s break some hearts.     
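Here's a toy parent/child pair showing both flavours (nothing to do with JUCE's actual classes, just the mechanics):

```cpp
#include <cassert>
#include <string>

// processName() is *pure* virtual (the "= 0"), so every child MUST
// override it or the child can't be instantiated at all; greeting()
// is plain virtual, so overriding it is optional.
struct ToyProcessor {
    virtual ~ToyProcessor() = default;
    virtual std::string processName() const = 0;          // pure virtual
    virtual std::string greeting() const { return "hi"; } // virtual, has a default
};

struct ToyGate : ToyProcessor {
    std::string processName() const override { return "NoiseGate"; }
};

// Calling through a reference-to-parent still runs the child's method.
std::string nameOf (const ToyProcessor& p) { return p.processName(); }
```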
     In the documentation, we’ll find pure virtual functions marked in the right hand column.  If we look  here  and scroll down to our prepareToPlay method, we’ll find that it is indeed a pure virtual function. Which means we can make things very cross by removing it…but we wouldn’t do that…  Would we…?     
     If you’re still in the class declaration for prepareToPlay and you’re lost and confused, fear not. In the file hierarchy in the top bar, click the left arrow to go back to where you came from.  Now that we are back in the PluginProcessor.cpp, comment out the lines that have the entire prepareToPlay method. In C++, we don’t usually want to just delete what we’re writing, and so often what we would like to do is simply remove it for a time. On top of that, if we’d like to tell other programmers about the dumb shit we’re doing in our codebase, we want to be able to talk to them, rather than have to print strings (words) to the terminal.  Use the //  Simply comment out the crap you’d like to hide on each line with two slashes like so //     
     Or, if you’d like to remove an entire block of code, we can use the /* at the start of what we want to hide, and */ when we are done, like so.     
     Which you choose is up to you. Once it’s all green and commented out (or if you’re in dark mode… well, it’s still green. I’m not a big fan of dark mode you see, it doesn’t gel with my colour scheme). Anyway, once you’ve commented it out, build the project again, which should fail.     
     Ugly error right? This is one of those that gets very scary, very fast. But I’ll hold your hand and we’ll make it through it. Hopefully this demonstrates why we have to pay attention when writing out our own functions and keeping up with pure virtual functions. In our case, the Projucer has actually formed some of this for us, so we couldn’t have missed them, but once we’re finished this tutorial and we start writing more, we’ll start to realise that if we just check out the documentation, we’ll work it out.   For instance, let’s find another pure virtual function, and delete that, and see what happens. Scrolling up I find that getName() is also a pure virtual. Let’s find it, and break it. Here it is, a little further up in the documentation. Notice that I skipped over getProgramName, although that’s a virtual function too!     
     Either one that we comment out totally kills Xcode, and we’ll be left scratching our heads.   Moving on from pure virtual…  Now that we’ve discussed this, let’s dissect what’s going on with prepareToPlay     
     So the prepareToPlay function is getting ready to start, and we’re initializing two variables, lowPassCoeff and sampleCountDown.  The low-pass coefficient will be calculated from the sidechain signal and the alpha parameter to determine the gating behaviour. It’s important to remember that we’re not actually doing any  audio  filtering, but just data filtering. The terms are quite common in scientific circles. So when we’re thinking about using a low-pass coefficient, we don’t mean one of these fat boiz.     
     We’re just talking about allowing only values lower than the threshold point. This means that once the sidechain input hits the threshold, we’ll be getting NO audio. Not a lower amount (it’s not a compressor), and not a filtered thing (though that would be swell). NADA. That would require a WHOLE LOT of other programming and things like IIR and FIR filters. That’ll probably be the next one I tackle. Maybe a compressor. Not sure. Please let me know your thoughts and that’ll make the decision for me!  The other variable, sampleCountDown, allows us to keep track of the number of samples left with regards to the sample rate before the gating occurs. I guess this is to stop clicks and pops, though we’ll have to keep looking further into the code to work it out.    Looking a little further. The tutorial itself actually lays it on us that;     
     Because of course they are, and I haven’t yet worked out why they couldn’t  show  us one of them. But I guess they’ve got to keep that? Or do they? I really don’t know at this point, but we’ll roll with it.  There are a few things I’d like to expand upon in this project already. As we get to the end, we’ll look at adding in some GUI to allow us to change the threshold, as once we have it working, it’ll be pretty ON or OFF sort of working until then.
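In the meantime, here's my hedged guess at how the smoothed level and sampleCountDown might eventually combine, sketched without JUCE (the structure and names here are my assumption, not the tutorial's actual code): while the level is over the threshold we keep re-arming a countdown of one second's worth of samples, the gate stays open while the countdown is positive, and we write zeros once it runs out.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Guessed countdown gate: level[j] is the smoothed sidechain estimate
// for sample j; whenever it crosses the threshold we re-arm the
// countdown, and while the countdown is positive the input passes.
std::vector<float> gate (const std::vector<float>& input,
                         const std::vector<float>& level,
                         float threshold, int sampleRate)
{
    std::vector<float> out (input.size());
    int sampleCountDown = 0;

    for (std::size_t j = 0; j < input.size(); ++j)
    {
        if (std::fabs (level[j]) >= threshold)
            sampleCountDown = sampleRate;          // re-open the gate

        out[j] = (sampleCountDown > 0) ? input[j] : 0.0f;

        if (sampleCountDown > 0)
            --sampleCountDown;
    }
    return out;
}
```

The countdown is what should stop the hard ON/OFF chatter: the gate hangs open for a moment after the sidechain dips back under the threshold.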

Noise Gate Pt. 4: Pure virtual functions, and comments in code

      Noise Gate Pt 3  How exciting! A third one! I’ve now blown out my word count by quite a bit, but I think it’s better to see the whole thing, don’t you? No? Well… I guess I’ll keep doing it anyway. So we’ve had a bit of a play with the Constructor/Destructor, and we know what variables are. I’d say we are  flying ! But I guess that’s what motion sickness will do to you.  We are now at our third block of code from the tutorial, and I really do feel they have missed out a smidge, although the tutorial is for “intermediate”. It’s a bit like a beginner JUCE book that I read at one stage that was a HUNDRED AND EIGHTY FUCKING PAGES before they talked about Audio. We get it, you can make a calculator out of JUCE, and make MsPAINT. Amazing. Not exactly what we want though is it? I’m going to try and keep this as Audio Centred, though we’ll undoubtedly solve puzzles and make Zork at some point.  This next line of code is;     
     In programming minds, this is a pretty well thought out process. I’m told by  JUCE  that in the  isBusesLayoutSupported()  function, we ensure that the number of input channels is identical to the number of output channels and that the input buses are enabled.   So we have   bool   (can only be true or false, 0 or 1)  isBusesLayoutSupported  Let’s have a look at isBusesLayoutSupported in the documentation. If we move to AudioProcessor::Bus, we’ll see what we are looking for.   https://docs.juce.com/master/classAudioProcessor_1_1Bus.html      
     Look at all my purty highlights! We could use this command, or a few others to check what’s going on.   ◆  isNumberOfChannelsSupported()  ◆  supportedLayoutWithChannels()  These are some of the other functions that do SIMILAR but not   THE SAME   thing.   What we’re trying to do is assume that the entire Layout (stereo, with Channel 1 L and Channel 2 R) is supported, NOT that it has two channels, OR that it has an L and an R, but that it exists in this configuration.  The Code So Far…  Our current project that JUCE spat out looks lyk dis;     
     Which looks pretty convoluted, and you’re probably afraid to delete anything from this. Luckily, you have my ham-fisted approach behind you! And together we can break anything!  Now, we aren’t using a MIDI Effect, so we can go ahead and can a bit of this straight away. I suppose we could set it so that our NoiseGate is triggered by some MIDI, maybe that would be a fun expansion later on. For now, we’ll get rid of;    #if JucePlugin_IsMidiEffect          ignoreUnused (layouts);         return true;    #else   When we fuck that off, we get a bright red line, which means    YOU MUST CONSTRUCT ADDITIONAL PYLONS       
     This is because we removed an #if at the start. Now, you’re probably wondering why there’s a big ol octothorpe in front of the if statement. This is because it’s preprocessor code: it gets evaluated before compilation, and the JucePlugin_ flags it checks are set up by the Projucer. Re-generating files can make this messy, because code in Projucer-managed sections can be erased by the Projucer occasionally. What we want to do is ensure that what  WE  want is there, because JUCE ain’t the boss of us.  So let’s get rid of all the #endif in these points, taking special attention to  eradicate  the one outside the curly braces towards the end of the function. By removing these, we’ll probably get the “Control may reach end of non-void function” warning, which doesn’t sound great, but we’ll deal with that as we go; we’ve not yet written the code, so Xcode can calm the fuck down for now.  Now the sidechain we don’t care too much about, as that will proc the function we are looking for anyway, but it’s a good idea to ensure our input/output is the same, so we don’t pump sick beats off into the abyss (and crash our plugin/DAW). Now when we pop in their (JUCE) code, we get an error, and we get our first Build Failed (or maybe our tenth?)   Expected function body after function declarator    This fella has to do with how the function has been declared.       
   
     “ The override keyword serves two purposes:  It shows the reader of the code that “this is a virtual method, that is overriding a virtual method of the base class.”  The compiler also knows that it’s an override, so it can “check” that you are not altering/adding new methods that you think are overrides. ” 
   
   — https://stackoverflow.com/questions/18198314/what-is-the-override-keyword-in-c-used-for 
 
As you can see, there’s a great deal of inheritance and virtual methods and shit going on here. Really a Gregor Mendel pile of shit that I am not totally familiar with at this stage, but we’ll get to it. I’m fairly sure at this stage that it’s more to do with parent and child relationships (or the equivalent of) when it comes to constructing abstracted classes by inheriting from a base class.

Gosh I sound like a programmer already.

For now, we’ll just get rid of override. We could change the function declaration, but let’s keep moving, and see if it messes us up later on or not.

The Function’s Function…

Because our function is checking the compatibility of channels and the coherence between the input/output channel count, we’re looking to return the answer from layouts (of the Bus class). If you’re not sure how to get to the Bus class, you can hold ⌘ and click on what you’re looking to find. For instance, ⌘-click on BusesLayout (in the parentheses of isBusesLayoutSupported (const BusesLayout& layouts)) and it’ll take you to the Bus class declaration, where the masters of JUCE have written some stuff that’s relevant to this usage of Bus. We can see that we’re using a pretty stock version of the function in our own code, as it pretty closely matches the declaration. It is, after all, pretty standard to check these sorts of things.

It gets a SMIDGE hairy where we look at the rest of the code of this line, with && and ! in it, meaning AND and NOT respectively. Boolean is, as I’ve said, all true and false, so we’re expecting it to read the code like a sentence, and answer YES or NO, essentially.

When I have read programming books, one of the issues I have is that when they introduce types like int (integer, whole numbers), float (floating point, decimal place numbers) and the like, they hit you with a storm of information. Unsigned int and char and string and bool and float and this flurry of technical terminology. I get that it’s great to know, but instead of proceeding that way, I am going to explore the code through these tutorials, and make sure that we learn what we have to. Being on a “need to know” basis seems pretty intuitive to me, because otherwise there’s this veritable tome of information about C++ programming standards and books to read that are really dry and full of stuff we’re not interested in. We are into audio programming, so I want to try and just introduce concepts as they come. It might take a little longer to get to the finished product, but I think it’ll leave us with a much greater understanding of what’s occurring when we write the lines of code. Otherwise we’ll be stuck knowing what a float, an int and such are, but not where to use them or why. I’ll try my best to expound on these concepts as I find them out, and if it doesn’t work for you, drop me a line. I’ll consider it, as I know that there are plenty of industry downsides to only learning what you need.

Back to the tutorial.

return layouts.getMainInputChannelSet() == layouts.getMainOutputChannelSet()
    && ! layouts.getMainInputChannelSet().isDisabled();

So the line is actually saying “Tell me that the Main Input’s Channel Set matches the Main Output’s Channel Set AND that it’s NOT disabled.” If this is false, we’ll close this shit down, as we don’t want crazy stuff happening with mismatched input/output numbers, or we’ll throw an error.

The Wrap Up

So we’ve had a look today at Boolean logic, and at all the different types of Input/Output mappings that JUCE is capable of. Having a look through the documentation showed us the multiple ways of declaring pieces of code, and their differing arguments, and we pretended to learn a bit about override. We’ll keep moving and, as we get to the scarier parts of the code (the actual audio processing), we’ll start making some serious headway. If you’re diggin’ it, let me know. If not, I guess don’t?


Boolean and Bus functions in the AudioProcessor
Noise gate pt. 3


      BUILDING A NOISE GATE PT. 2  We left off last time with just about NOTHING done in our noise gate. But that’s okay! Hopefully you’ve had fifteen moments to get lost in that JUCE tutorial by Timur about thread safety, and hopefully your day had a few of  these  moments in it.
     If you’re just joining the blog, welcome! Here’s a project with where we are currently up to (we’ve literally built a new project and added 4 lines, and joked about things all this time. Really, it’s lucky you’re here to keep us on track)  Now that we have completed our declaration of the private member variables (nod your head and pretend you understand, it will make you look real smrt) we can move to the .cpp file to add some functionality in.  Now the JUCE tutorial says;      
   
     “ In the class constructor, we initialise the plugin with three stereo buses for the input, output and sidechain respectively [1]. We also add two parameters namely threshold and alpha [2] as shown below. ” 
   
     Let’s break down what that means.     
The Constructor

The class constructor does the initialisation that the JUCE tuts are talking about. In our version, we’ll be adding this here.

A class constructor is a special member function of a class that is executed whenever we create new objects of that class. A constructor has exactly the same name as the class, and it does not have any return type at all, not even void. Constructors can be very useful for setting initial values for certain member variables.

The return type they are talking about could be all sorts of things, but we’ll get to that. This line of code tells us what needs to be created when being constructed, and the squiggle (tilde, ~) tells us what needs to be removed at the end, because it is the destructor.

A destructor is a special member function of a class that is executed whenever an object of its class goes out of scope, or whenever the delete expression is applied to a pointer to an object of that class. A destructor has exactly the same name as the class, prefixed with a tilde (~), and it can neither return a value nor take any parameters. Destructors can be very useful for releasing resources before coming out of the program, like closing files, releasing memory, etc.

We can tell that these are the Constructor and Destructor, because they have the same name, with the exception of the ~. Now you might ask;

Why do we put these here and not in the header file?

What’s a Constructor?

Where did these come from?

Well, two of these are expected, but if you’re asking about the constructor… I can only assume (as I am just developing these ideas as we go, and because JUCE does not help us with this AT ALL) that it’s because, if we look it up in the documentation, BusesProperties is a struct;
It is for this reason we’ll add it to the constructor. I guess constructors make structures; easy way to remember. Knowing the logical nature of code development, that’s probably not far off the actual purpose. If we now go diggin’ into the Class documentation and click on BusesProperties, we can find a few bits of the roadmap that might help us decipher this Kafka-esque nightmare.

One thing I will say: it takes a long time to get to the point where you understand all the relevant components for an audio plugin. Even more so when you get into more algorithms. I am certainly excited to learn more, as I feel it happens more and more when you really dig in and try and find this stuff yourself. There are a lot of tutorials that just sort of tell you “Obviously we need an AudioProcessorValueTreeState, so let’s do one of those and then createAndAddParameter…” and they just trail off and make mouth noises until you stop watching the video.

ANYWAY. Let’s see how we go.

Buses Properties

Inside this window we can see BusesProperties’ withInput and withOutput. These are addressed by the . after BusesProperties(). So we are starting to understand how to get within these data hierarchies, and get to the point where it makes sense! Let’s focus on breaking down the entirety of the first assignment in our constructor.
So JUCE have this as our first channel, keeping in mind we’ll need an Audio Input, an Audio Output, and IN THIS INSTANCE an additional Sidechain input. Unfortunately this time we won’t be using said sidechain to make SICK BEATZ, but instead we’ll use it as the controller to cut low level noise.

In the documentation we have;

withInput (const String &name, const AudioChannelSet &defaultLayout, bool isActivatedByDefault=true) const

So we need a String (which is basically some words, characters, letters, WHATEVER, inside “ ”).

Then we need a comma.

Then we need a const (unchangeable) AudioChannelSet &defaultLayout. Clicking on this brings us to a whole new circle of hell;

https://docs.juce.com/master/classAudioChannelSet.html

Which shows us all of the channel configurations that JUCE is capable of. Which is actually really, really exciting! It lets us use Ambisonics and 7.1 and all sorts of crazy things that I have never heard of (like octagonal, and 35th ambisonic channels, and bottom left side) for whatever plugin we are generating. Maybe next time we can make a surround sound plugin! (we won’t!)

So we’re just looking for a plugin that can take in stereo. We could make it mono if we wanted to, but let’s keep it stereo.
     Now all we’ve got to do is work out how to use this information in JUCE. Again, we’re already inside the channel using .withInput, so we just add AudioChannelSet::stereo()  So, let’s open up PluginProcessor.cpp  Typing it all out     
Woohoo! How exciting! So type it all in, and then we hit ⌘ + B…
     ….  And WE DID IT.   If we made a mistake, let’s not lose heart. Sometimes errors look incredibly spooky, sometimes not so much. For instance, we might just be missing a curly brace (computer speak for { and } ) or a semicolon ( ; ) so we can fix that up!     
     Now it’s all getting exciting, (after all, we’ve completed ONE task!) but now we have to refocus. It’s the extra bit, using addParameter to add those threshold and alpha so that we can use it. If we go back to the documentation for AudioProcessor and search for addParameter, we’ll find this;     
So this then tells us that all we need to do to add a parameter to the audio processor (that being one that is managed by the Audio Processor, being deleted automatically when it is removed and such) is call AudioProcessor::addParameter (taking in an AudioProcessorParameter*). However, we are looking for something with the ability to have decimal places, so we will be looking for AudioParameterFloat, as in the tutorial. Searching this in the documentation gives us this page. Searching through that and moving to the correct documentation of the function gives us;
You may also find a different definition with a little 1/2 next to it. Right now we’ll use 2/2, as that format fits the tutorial one, but don’t freak out, we’ll get to the other entry (far inferior) ((but not really)) a bit later.

So we’ll need;

a parameterID (usually just a name to call on, somehow different to the parameter name, not entirely sure why, someone tell me..)

The parameter name to use (that definitely won’t be the same as the parameter ID…)

The range from min->max

the default value to begin with

aaaaand a bunch of optionals that we’ll probably skip most of the time (sorry Jules..)

Completing the Code

addParameter (threshold = new AudioParameterFloat ("threshold", "Threshold", 0.0f, 1.0f, 0.5f)); // [2]

So in our example we call addParameter and set the threshold variable we used earlier (the value declared in the header file (.h)) to;

ParameterID = “threshold”

Name = “Threshold”

Min value = 0.0f (the f is there to tell us it’s of type float, meaning it can have decimal places, so it’s 0.0f not 0)

Max value = 1.0f

and a default value of 0.5f

Pretty standard for a lot of controls to move between 0.0 and 1.0 instead of 0-255 or something insane like SOME people. By the way, the reason we’re using AudioParameterFloat and NOT just an AudioParameter of another type is for the NormalisableRange aspect. The big benefit here is that a NormalisableRange converts the entire mapping of the parameter to 0.0->1.0. This range does not have to be linear either, as we have the opportunity to set the skew factor (that is, a point which changes the slide between values).

For instance: the documentation of AudioParameterFloat gives us two entries. (You’ll see AudioParameterFloat 1/2 and AudioParameterFloat 2/2. This means we can either initialise it with the arguments inside 1/2 or 2/2, depending on what we’re looking for. The implementation we used above used 2/2.) If we wanted to do the same-ish thing with 1/2 (a different implementation), we’d use THAT version.
However, the 1/2 version means we need to write something more like;

addParameter (threshold = new AudioParameterFloat ("threshold", "Threshold", NormalisableRange<float> (0.0f, 1.0f), 0.5f));

because 1/2 takes a NormalisableRange<float> in place of the separate min and max values. This would ALSO allow us to change the category of the parameter (in case we wanted to build something that had a big chain, perhaps), and it also allows us to skew the range, as I said earlier; have a look into the NormalisableRange documentation, and we can change the linearity of the parameter.

Anyway, let’s go back to our version from above;

addParameter (threshold = new AudioParameterFloat ("threshold", "Threshold", 0.0f, 1.0f, 0.5f));

Hitting build and we should see…
Hooray! We did it! Hopefully that explained what exactly is going on when we’re typing these things out, instead of being a tutorial that just tells you to do something without explaining what in the fuck. Anyway, hopefully you learnt something in this, I know I did. See you next time! If you’re diggin’ this, let me know! Drop a comment or jump onto the mailing list.


C++ Concepts for beginners, the class constructor and preparing to play.

Noise Gate Plugin Pt. 2


      Blog 1 - JUCE FOR BEGINNERS

Have you tried and failed to do some Audio programming, only to be told to learn C++, then told to learn DSP, then C, then Csound, then read a 15 year old book and work it out yourself? ME TOO. This is for you, you poor, poor soul.

After fighting my way through most of the Kontakt 6 manual (and by most, I mean about 6 chapters, but hey, it’s rough going), I’ve decided to take the not so logical step of moving to JUCE. I thought I’d start this process by doing an exercise with the hive mind of the internet, doing some of the tutorials I’ve found, as well as developing and explaining some basic principles of C++ and JUCE (an audio programming thingo). I’ve been fiddling with JUCE for a little while now, but haven’t managed to get very far. Most of the time, I’m doing a little in JUCE, lots in Tensorflow, lots of angry googling, and then reverting to Max/MSP.

After some time with Max, and gen~ (which allows you to export max patches straight to JUCE; not an easy task, but maybe I will get some time to help out, as it DOES work), I’ve decided to bite the bullet and really get into Audio Programming, the proper way. What I will be doing here is explaining my whole process. A process in which I am by no means an expert, and will probably make horrible mistakes before I make correct decisions. In this way, it would be radical if people wanted to correct me.

What I seem to be finding online is a stream of “Well, this is how you do it basically, it’s not how we would do it, but it’s how you could, in this VERY PARTICULAR example. But you shouldn’t do it this way. Ever. MOVING ON!”. On top of this, it seems that Audio programming is reserved for the seasoned computer science professionals, and, while I’m well aware of the detriment to the field I will have, I’m going to give it a go anyway.

SO!
JUCE

I’m not going to go through the Install process, as it’s all quite straightforward (you can install it here), and you’ve also got to install an IDE. For my purposes, I will be on OSX, so we’re using Xcode, which you can grab in the App Store or here.

I’m going to build the noise gate from here. It’s the second one, after the arpeggiator. I’m aware it gets mega complicated, but we’ll see how we go. I’m going to first try and write it without copping out and downloading the PIP package (a package that will give me the Projucer file, that will then build the plugin) and instead try and decipher the bullshit advice I’m given.
   
     “ The noise gate is an audio plugin that filters out the input sound below a certain sidechain threshold when placed as an insert in a DAW track. ” 
   
  
 
     It’s quite exciting actually, because what i’d really like to do is reverse engineer this, and take out everything ABOVE a certain threshold, like the ducker plugin in Logic. I’ve used this to great effect to grab all the breaths and such BEFORE speaking, which I think sounds great. Inevitably, I will end up getting frustrated and giving up on this reverse engineering, so I guess a side-chained noise gate could be interesting as well. If you haven’t played with the Ducker in logic, load up an AUX track and hold option as you select the plugin slot.     
       So, once you’ve fiddled with that. Let’s move on. In the  NoiseGate  class, we’re suppose to have defined several private member variables to implement our noise gate behaviour as shown below: (keep in mind, don’t worry about the “ at the start and end, as i’m just trying to be  fancy  with Squarespace.      
   
     “ private:
    //=================
    AudioParameterFloat* threshold;
    AudioParameterFloat* alpha;
    int sampleCountDown;
    float lowPassCoeff; ”
   
  
 
We define these private member variables in our header file.

A New Project

Open the Projucer, click on Audio Plug-In (because that’s what we’re making) and name it NoiseGate. The capitalisation is important, the naming is, it all is. If you want to name it something else, that’s cool too, you will just increase your chance of fucking up by about a billion, and you’ll have fun trying to copy code examples once the naming gets creative. I am using a Mac, so I will be going through Xcode hell; if you’re lucky enough to not have to do that, yippee for you.

Once you’re done with that, Save and open in IDE (next to the exporter and the play button at the top of the window, still in Projucer). The rest of the Projucer stuff we’ll get into, as it’s pretty great. It lets you add binary files (whatever they are), images, other JUCE libraries and even build bits of the UI. It’s pretty neat, even though it’s only cursory knowledge on my part so far.

We’ve got two sets of files in our Source folder (which is the C++ source code): PluginProcessor and PluginEditor. The way audio programming works, as far as I understand so far, is that the audio processing and the graphics processing HAVE to be separate. If they aren’t, there are all sorts of clicks and pops and terribleness that goes into the audio whenever you try and change a graphics element while moving the knob or whatever. In our instance, I will try and break it down: PluginProcessor is how it works, PluginEditor is how it looks. We keep them separate, so that there are no clicks and pops as we change graphics, and the whole thing stays optimized and effective without jamming or stopping up. This apparently gets harder and harder, but all the Timur Doumler tutorials on malloc mean nothing to me at this stage. There are tons of great audio links I will hopefully be sharing, and if you’ve got more, that’s swell.
So there are also two parts to these files in our Source folder: .cpp and .h files. The .h (which stands for header) file is where you tell the computer WHICH functions or classes you’d like to use. We’ll get into what the difference is. For now, just think: you want a slider? You tell the computer you want a slider in the .h file. The flip side is the .cpp file, where you tell the computer WHAT the slider does. Or whatever else you are programming. It’s explained EVEN BETTER HERE.

Our Exercise

Dive into the Source folder and find the PluginProcessor.h file. This is where we will declare some separate elements to build into the functions. As you can see from the finished product;
     It’s quite UI Lite. We can tell we’ll need the text “Threshold” and “Alpha”. We’ll need a number readout of the value, and a slider. I guess the slider will also need min/max values, and we can probably futz with the colour a bit, but that’s the gist of the whole thing.  So let’s start with the plugin functionality, because the JUCE tutorial told me to. Praise the Sun.     
Crack open the PluginProcessor.h and scroll on downtown. The Projucer has written all of this for us (good little Projucer), so we’ve only got ourselves and it to blame in a moment. Writing out this at the bottom of our file sets it as a private variable.

C++ offers the possibility to control access to class members and functions by using access specifiers. Access specifiers are used to protect data from misuse. Public class members and functions can be used from outside of a class by any function or other class. You can access public data members or functions directly by using the dot operator (.) (or the arrow operator -> with pointers). Private class members and functions can be used only inside of the class, and by friend functions and classes.

So we use public variables all over the place, and private ones in very small but well connected ways. Because we want our threshold, alpha and such to be used only in the processor, we declare them privately.

.CPP and .H

When you get bored of this tutorial, and you are absent-mindedly flipping between the .cpp and the .h, you’ll see a number of interesting things. For instance, when you look at the .h you’ll see the declaration of producesMidi().
Then, when you look at the .cpp file, you’ll see the definition: NoiseGateAudioProcessor::producesMidi().
COINCIDENCE? I THINK NOT

This is what C++ is built for. We can see the declaration of the function in the .h file, and some of the implementation in the .cpp file. There’s a slight difference in how they are used. The reason we have to use NoiseGateAudioProcessor::producesMidi() and not just producesMidi() is because producesMidi() has been declared thusly;
     So in fact it’s part of the CLASS NoiseGateAudioProcessor, not just a floaty function. I think for me this was one of the hardest things moving from something like C# and visual programming languages to C++. Obviously it’s simple for a computer science major, but hey, that ain’t me, so hopefully i’m saving you a WORLD of hurt trying to work it out. Something that a lot of people will do (me included) when they first pick up JUCE is to  crack open the documentation , and then go on a large foray of wasted time head-scratching not understanding WHERE all this code comes from. When you’re starting to build your own things in JUCE, this is where you’ll be living, although you really need to understand the Hierarchy of C++ in order to get the most out of it. So in this instance, the class  AudioProcessor  is at the top of the tree, the  NoiseGateAudioProcessor  is an abstracted subclass of that, and then we have derived our functions  producesMidi();  below that. So now you’re probably wondering where producesMidi came from, and what else you can do with an audioprocessor. No? Well. Fine then..  Seen any good movies lately..?  OKAY GREAT LETS LOOK AT THE AUDIO PROCESSOR.  That be here;   https://docs.juce.com/master/classAudioProcessor.html   If you flick through the documentation you’ll see all sorts of different ones, whereas we are looking for this one. You’ll see a HUGE number of different functions, and the really cool thing is that we can write ANY of these in our program. So this is what JUCE is ACTUALLY FOR. Someone else (thank you Jules and Co.) has actually written most of this stuff out so that we don’t have to, and we simply inherit from their hard work, write our own Abstract versions of their classes, and use those!     
     You’ll see in this instance it says “virtual bool” but don’t worry too much about that at this stage. It’s really exciting because we can add all this stuff! For instance, if we scroll a bit more we’ll find things like getBypassParameter() and getPlayHead for doing tempo/DAW synced stuff. How exciting! Now a few of these might need to be declared differently, so we’ll stick to this version so far, as this blog entry has turned quite long as it is.  Ensure that you have added those private variables and hit ⌘ + B to build it, hoping for no errors.      
Now if you do get an error, that’s okay. It’s USUALLY a typo issue. Go back and ensure that you’ve dotted your t’s and crossed your i’s; it’s usually a semicolon or a simple typo, particularly in the function part of the typing. For instance, you can call a slider whatever the hell you want, but you can’t declare a Slidler instead. If not, it could have been something you changed in the Projucer before you started the project. As we’ve started really clean, there really shouldn’t be any errors, but hey, this may be a totally defunct blog post in 2 years like the rest of the literature on Audio Programming, so if not, drop me a comment and I will pretend I can help you out. If you run this, you should just have an empty box with “Hello World!” written in it. We’ll focus on changing that soon, but for now don’t stress.

Moving Forward

I may actually leave this at this point for now, as there’s quite a bit to look into so far. Have a read through some of the AudioProcessor documentation, and make sure to read up on PluginProcessor.h, .cpp and PluginEditor.h and .cpp. We’ll continue this plugin very shortly, so don’t get mad at me, just click on the next blog!

If you liked this blog, and you’d like to hear more, please hit “Subscribe via email” to be notified of new posts when I write them. I won’t be spamming you, but it would be brilliant to develop a resource that’s for those in Audio trying to get into programming, as many of the resources are more to do with programmers learning audio.


In the first tutorial, we explore some basic C++ concepts, how to get started Audio Programming, looking through JUCE resources and installing Xcode/JUCE.

Noise Gate Plugin Pt. 1
