Thursday, 9 April 2020

Completing the 'Smooth' Suite - Max For Live plug-ins for Ableton Live

It started with MIDIrandomA, which provided several different types of 'constrained randomness' triggered by either MIDI events or a built-in LFO, and then allowed it to control parameters in other Ableton Live devices using what Ableton calls 'remote control' but most people associate with the 'Map' button. Blog reader hems suggested that it would be good if this could produce more than one mappable output, which is how MIDIrandomABC was conceived. But then, after further reflection, the smoothing function that happens in MIDIrandomA seemed to be useful in a broader context, and so I produced MIDIsmoothR, where you can input any 'control voltage' rather than solely random noise, and so smooth/process any LFO or MIDI Controller...

However, MIDIrandomA and MIDIsmoothR are big, complex, flexible, versatile Max For Live devices. They can be daunting for a new user because there's a lot to tweak! So although MIDIsmoothRRR with three mappable outputs was an obvious follow-up, it seemed like this was a good time to also release the opposite: simple, minimalistic utility devices that just do the 'smoothing' function, plus offsetting and scaling. And so, the 'Smooth' Suite was born:

- MIDIsmoothR - single mappable output, sophisticated 'control voltage' smoothing and processing.

- MIDIsmoothRRR - three mappable outputs of sophisticated 'control voltage' smoothing and processing.

- MIDIsmoothY - single mappable output, smoothing only.

- MIDIsmoothD - just a scrolling display of the 'control voltage'.

- MIDIsmoothYD - single mappable output, with the scrolling display in the background.

These last four devices complete the Suite. MIDIsmoothD allows any 'remote control' 'control voltage' to be viewed graphically, and MIDIsmoothY is small and easy to use. For those who like stuff to look cool, there is MIDIsmoothYD's scrolling background.
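For the curious, the core 'smooth, then offset and scale' behaviour shared by these utility devices can be sketched in a few lines of Python. This is a one-pole smoother with made-up coefficients and parameter names of my own choosing, not the actual MaxForLive internals:

```python
class Smoother:
    """Sketch of the 'smooth, then scale and offset' idea behind MIDIsmoothY.
    All values are assumed to be normalised 'control voltages' in 0.0..1.0.
    (Hypothetical coefficients - not the device's actual implementation.)"""

    def __init__(self, amount=0.9, scale=1.0, offset=0.0):
        self.amount = amount      # 0 = no smoothing, towards 1 = heavy smoothing
        self.scale = scale
        self.offset = offset
        self.state = 0.0

    def step(self, cv):
        # exponential (one-pole) smoothing: move a fraction towards the input
        self.state += (1.0 - self.amount) * (cv - self.state)
        # then scale and offset, clipped back into the 0..1 range
        out = self.offset + self.scale * self.state
        return min(1.0, max(0.0, out))

smooth = Smoother(amount=0.9)
step_input = [1.0] * 20           # a step input, like the edge of an S&H waveform
trace = [smooth.step(v) for v in step_input]
```

Fed with a hard step, the output glides towards the target instead of jumping, which is exactly the 'rounding off' effect described above.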

In the (imperfect!) screen capture above, the LFO waveform is sent to the three 'Smooth' Suite devices: first MIDIsmoothY, then MIDIsmoothYD, and finally MIDIsmoothD.


Here's a simple infographic showing all of the members of the 'Smooth Suite':

In use

The screen capture and diagram above show an LFO controlling the 'CV in' rotary control of MIDIsmoothRRR via 'remote control' mapping. The triangle wave is turned into a rather nice 'shimmery flame' waveshape by the B channel, and this is then sent to the MIDIsmoothD device to display it.


There was one previous blog post covering the first device in the 'Smooth Suite' - MIDIsmoothR. But this was a variant of an earlier series of devices: the 'Random' series.





In the past, I have produced 'dark' and 'light'-themed UI versions of several of my devices, just to see which was more popular. The downloads so far (to 10th April 2020) are:

                   Dark       Light
KeyMon              400         348
Field Echo         1293         870
Sine3Generator      941         629
SpecD/PanEcho      1371        1225

For the 'shim' 'Smooth Suite' utility devices, the initial downloads indicate that the 'bare-bones' MIDIsmoothY is the most popular, then the 'background display' MIDIsmoothYD, and the 'display only' MIDIsmoothD has had the fewest downloads. Of course, none of these come close to one of my devices, which has had no downloads at all, ever!

Getting the devices in the 'Smooth' Suite

You can get MIDIsmoothR_mr02 here:

You can get MIDIsmoothRRR_mr02 here:

You can get MIDIsmoothY_mr01 here:

You can get MIDIsmoothYD_mr01 here:

You can get MIDIsmoothD_mr01 (the display only) here:

Here are the instructions for what to do with the .amxd file that you download from

(In Live 10, you can also just double-click on the .amxd file, but this puts the device in the same folder as all of the factory devices...)

Oh, yes, and sometimes last-minute fixes do get added, which is why sometimes a blog post is behind the version number of

Modular Equivalents

In terms of basic modular equivalents, then implementing MIDIsmoothR_mr02 requires some quite sophisticated processing of a random noise source, so it probably isn't straightforward to do from off-the-shelf analogue modules, and is probably easier to do digitally. Assuming that a maths/data processing module can do the required computation, then there's one noise generator, one processing module, some triggering logic, an LFO for the free-running version, and a sequencer for parameter storage, giving an ME of 4 or 5!

MIDIsmoothRRR is just additional CV scaling and offsetting, plus two more patch cables! So an ME of 7.

MIDIsmoothY, MIDIsmoothD, and MIDIsmoothYD require only three modules: a slew rate limiter, a CV scaler and offset processor module, and an oscilloscope module. So the ME is 3.


If you find my writing helpful, informative or entertaining, then please consider visiting this link:

Tuesday, 7 April 2020

Sound Design: Ping-Pong sound in Ableton Live

In a complete break from the traditional content on this blog, here's a quick bit of sound design.

How to make a sound that works well to accompany a ping-pong video where a bat hits a ball... 

Piano roll a G3 and then a G4, a bar apart. For some reason, octave intervals work well for this type of sound... Try changing the intervals and see! I think it is something to do with the two different pitches being perceived as being at different distances or positions, but I've never managed to find any published research on this topic. (Which doesn't mean there isn't any, of course - one of the fascinating things about the InterWeb is that you can't always find things... Searches are not perfect, or deterministic...)


Take a sine wave, give it a fast attack, 80 ms decay, sustain zero, no release, and use a pitch envelope to pull it up from about 6 semitones down at the start of the note, pretty quickly.
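If you prefer to see the recipe as code rather than synth settings, here's a rough Python sketch of the same idea. The parameter names and exact numbers are mine, not Ableton's:

```python
import math

SR = 44100  # sample rate in Hz

def ping(freq=392.0, decay_s=0.08, pitch_drop=6, sweep_s=0.01):
    """Sketch of the recipe above: a sine with an instant attack,
    ~80 ms exponential decay, zero sustain, and a pitch envelope that
    starts ~6 semitones low and glides up quickly to the target pitch.
    (392 Hz is approximately G4; all values are illustrative.)"""
    n = int(SR * decay_s * 4)      # render about four decay time-constants
    phase, out = 0.0, []
    for i in range(n):
        t = i / SR
        # pitch envelope: start 'pitch_drop' semitones down, glide up over 'sweep_s'
        glide = min(t / sweep_s, 1.0)
        semis = -pitch_drop * (1.0 - glide)
        f = freq * 2 ** (semis / 12)
        phase += 2 * math.pi * f / SR
        # amplitude: instant attack, exponential decay, no sustain
        amp = math.exp(-t / decay_s)
        out.append(amp * math.sin(phase))
    return out

samples = ping()
```

Write `samples` out to a WAV file (or plot it) and you get the short, pitch-swept 'blip' that the synth settings above produce.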

The sine wave is used purely because many real world objects have a tendency to oscillate with the simplest possible waveform (and arguably the most efficient: the sine wave uses the least energy to wobble!)

The fast attack is because the transfer of energy from a hard (-ish) bat to a (hard) ball happens quickly. Compare and contrast the sound made by the strings on a tennis racket when it hits a fluffy tennis ball.

The slightly slower decay is just long enough that you can hear it (80ms is about ten times the shortest sound that you seem to be able to perceive, which is why a 'fast' 5ms attack time seems like it is fast!) and not long enough so that you become too distracted by the pitch.

The rising pitch envelope at the start of the sound adds to the natural-ness of the sound. Real-world sounds often seem to take time to get into a stable oscillation, and so this is a way of sign-posting that this is a sound that is meant to be used in a naturalistic context.

(Of course, for a true 'real' sound, then a recording of the actual sound would appear to be the best one to use... But this assumes that the actual sound is what people expect and is effective! I'm reminded of the sound of a soft drinks can being opened and the frothy liquid being poured into a glass that was synthesized by Suzanne Ciani many years ago...)

Which produces:

You might have noticed that the Noise generator is turned on, but with a very low cut-off on the 'Color' low-pass colour filter. This is very 'red' noise, and is used to add a bit of extra 'bat hits ball' randomness to the start of the sound. A more sophisticated implementation would use two synthesizer sound sources: one for the sine wave, and another for the noise burst (probably with a faster decay). For this simple example, I have just added a bit of noise to the sine wave.
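That 'very red' noise can be approximated by running white noise through a heavy one-pole low-pass filter. Here's a minimal Python sketch; the cutoff coefficient is an arbitrary choice of mine, not the actual mapping of the 'Color' control:

```python
import random

def red_noise(n, cutoff=0.02, seed=1):
    """'Red'-ish noise: white noise through a heavy one-pole low-pass,
    a rough stand-in for a Noise generator with its colour filter
    turned right down. (The cutoff value is illustrative only.)"""
    rng = random.Random(seed)
    y, out = 0.0, []
    for _ in range(n):
        white = rng.uniform(-1.0, 1.0)
        y += cutoff * (white - y)   # one-pole low-pass smoothing
        out.append(y)
    return out

burst = red_noise(1000)
```

Mix a short burst of this (scaled down) into the start of the sine 'ping' and you get that slightly random 'contact' quality at the attack.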

This 'tone plus noise' technique is usually credited to the French composer Jean-Claude Risset, and some drum synthesizer methods are often referred to as 'Risset' drums (for example, there's a plug-in included in the Audacity audio editor software). This works very well - using band-pass or resonant low-pass filters to filter noise so that it adds uncertainty to a low frequency tone is very good for emulating many drum sounds, and is used on some classic 70s (and 80s, even though samples were becoming increasingly popular) drum machines.

(Over time, I have become increasingly cautious about facts - history often only captures the first person who publicises something, or the most authoritative source, or the best publicist, or the person with the most money to promote themselves, or just someone who got lucky, which is why many 'facts' aren't always what they appear to be... Look up Clement Ader in the context of powered flight, for example, or Alexander Graham Bell and the invention of the telephone, or Marconi and early radio...)

So that's the 'generation' part of the sound done. Note that the filter is wide open - using a resonant low-pass filter on a sine wave is usually spectacularly unimpressive. Now to the 'processing' part.


Next, apply a bit of hard saturation with soft clipping, followed by compression to tighten it up. Finally, wet reverb in high quality mode with early reflections, reflect and diffuse full on, and sized to taste.

The saturation-based waveshape 'distortion' is to add a bit of non-linearity to the louder parts of the sound. A pure sine wave sounds boring, and so adding a bit of 'over-drive' makes it sound more 'real'. Imperfections are often what turns a synthetic sound into one that is more interesting and less 'synthetic'. The compression enhances the decay, and it also sounds like a compressor - which is another interesting imperfection: your ear knows what a compressor sounds like, and so putting a compressor in the sound tells your ear that you are hearing a sound that has been recorded. Adding in an artificial noise floor might be another method of adding fake cues for 'reality'.
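A common way to get 'saturation with soft clipping' in code is tanh waveshaping. This is a generic sketch of the technique, not the actual curve used by Ableton's Saturator:

```python
import math

def soft_clip(x, drive=2.0):
    """Soft clipping via tanh waveshaping: 'drive' pushes the signal
    harder into the curved part of tanh, and the division normalises
    so that an input of +/-1.0 still maps to +/-1.0.
    (Illustrative curve - real saturator plug-ins differ.)"""
    return math.tanh(drive * x) / math.tanh(drive)

# quiet parts are boosted slightly and stay nearly linear,
# loud parts are progressively squashed towards +/-1.0
print(soft_clip(0.1), soft_clip(0.9))
```

The non-linearity adds harmonics to the louder part of the sine's decay, which is the 'over-drive' imperfection described above.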

(The real experts in using subtle cues to make audio sound real are the people who add sounds to movies and TV. For animations in particular, there are no sounds with the pictures, and so everything needs to be added: rustles, bangs, knocks, footsteps, soft drinks cans being opened, kissing, cutting bread, slurping coffee and more. This process is called 'Foley', named after Jack Foley, who was one of the people who used a wide variety of props to add sounds to moving pictures. Of course, a Foley artist would probably use a bat and ball for this particular piece of sound design...)

The reverb is really two things: the early reflections and the 'space'. The rapid echoes known as reflections accentuate the sharpness of the attack, and they emphasise that energy has been transferred. Blockbuster movies use a variety of busy, wobbly or low, growly sounds to indicate the movement of power, and these are so ingrained in what people expect that it feels wrong when you don't get them in real life. Spaceships in space don't make sounds, and yet you 'know' (and expect) that all of that pent-up energy required to thrust them into hyper-space just has to make a sound! Without the sound, it would feel 'fake'... The 'space' part of the reverb is to give the listener a sense that the focus of their attention should be the bat hitting the ball - everything else goes 'out of focus', but it also imposes an artificial spatial environment that isn't present in reality. The apparent big reverberant space gives the sound gravitas, importance, significance - it screams (gently): 'Watch me!'

Which all looks like this in Ableton Live stock audio effects:


The resulting sounds are available on SoundCloud. The demo track contains just two notes: a G3 and a G4 (using the Ableton 'C3=60' note naming convention). The result is not perfect (and what is?), but imho a good starting point for this type of sound... You should use this tutorial as a starting point for exploring your own personal variations - just copying what I've done will only get you part of the way along the lifetime quest that is sound design. For further study, you could compare and contrast my sound with commercial examples in music tracks and sound libraries...


If you find my writing helpful, informative or entertaining, then please consider visiting this link:

Max For Live 'Control Voltage' Smoothing device for Ableton Live...

I admit here and now that I don't know what to call the signals that go from an LFO to a mapped parameter in Ableton Live. If they weren't inside a Digital Audio Workstation (DAW) but were transferred by patch cables in a modular synth, then I would call them 'control voltages'. Ableton call the process 'remote control', but they don't seem to say what the signals are called. So in the absence of any authoritative guidance, I'm going to call them 'control voltages' but in quotes - that way I'm trying to indicate that they aren't voltages, but that I'm hijacking the phrase because I don't know what they should really be called...

But I do know the name of the subject of this blog post: MIDIsmoothR! A combination 'control voltage' smoothing / slew rate limiter, plus a random 'control voltage' source. Here's the story of how it came to be:

The story

When I published MIDIrandomA and MIDIrandomABC, they were intended to be interesting alternatives to the LFOs that are often used as sources for 'remote control' of parameters in Ableton Live. Particularly the 'random' 'noise' waveforms that old-school synthesists like me call 'Sample & Hold' or 'S&H', even though there's a whole unspoken abbreviation in there - we mean: 'the jerky segmented waveform that you get when you apply a Sample & Hold device to a Noise generator' and the source of the resonant filter cut-off sound cliche.
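For anyone who hasn't met it in code form, the whole of that unspoken abbreviation - 'apply a Sample & Hold to a Noise generator' - can be sketched in a few lines. This is a minimal illustration, not the MIDIrandomA implementation:

```python
import random

def sample_and_hold(rate=8, length=64, seed=42):
    """The classic 'S&H on noise' control signal: grab a new random
    value every 'rate' steps and hold it in between, giving the jerky
    segmented waveform described above. (Minimal sketch with uniform
    random values - real devices offer other distributions.)"""
    rng = random.Random(seed)
    out, held = [], 0.0
    for i in range(length):
        if i % rate == 0:          # clock tick: sample the noise source
            held = rng.uniform(0.0, 1.0)
        out.append(held)           # hold the value between ticks
    return out

steps = sample_and_hold()
```

Map a signal like this to a resonant filter's cut-off frequency and you get the cliche mentioned above.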

There often seems to be an assumption that noise comes in only three flavours: 'white', 'pink' and 'coloured'. (The terminology is derived from the same spectrum-based descriptions as for light. So 'white' light contains all of the visible wavelengths, just as white noise has the same intensity at every audible frequency. Pink light contains more longer wavelengths (at the 'red' end of the spectrum), and so pink noise contains more lower frequencies.) Anyway, just as there are lots of different colours of light, so there are many, many different types of noise - from rumbles to hisses, with wind and 'waves breaking onto the sand' somewhere in there as well.

MIDI effect devices with names containing 'LFO' almost always provide a 'Random' waveform. Sometimes there are two different versions: a pure noise waveform, plus a flat segmented 'Sample & Hold' version. The problem is that having just a single jerky segmented 'Sample & Hold' waveform assumes that the distribution of values is right for your application, and it might be that you do not want each possible level to happen with the same probability. Which is where MIDIrandomA and MIDIrandomABC's remit comes from - lots of different varieties of random, noisy 'control voltages'.

But sometimes that S&H waveform is too jerky, and you need something more rounded, which is where MIDIsmoothR comes in. It allows the 'control voltage' 'remote control' output of any LFO or other MaxForLive device to be smoothed with three different processing options. Just map the LFO or other device so that it is 'remote controlling' the 'CV in' rotary control in MIDIsmoothR, and set the 'Input/Random' selector switch to 'Input' so that the 'control voltage' will be processed inside MIDIsmoothR.

The 3 processing channels? A allows waveform quantisation and power-law distortion. B allows separate smoothing to be applied to the rising and falling parts of the incoming waveform (plus global smoothing as well). C allows you to remove rapid changes (below the limit set by the 'Delta' rotary control), and then allows that to be smoothed. You can choose which of these smoothing/processing options is sent to the output with the A/B/C switch, and then offset or scale the value that is sent to the parameter which has been selected by the 'Map' button. (Click on 'map' and then click on the parameter that you want to control...).
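To make the three flavours of processing more concrete, here are minimal Python sketches of each idea. These are my own simplified versions - the curves, ranges and parameter mappings in MIDIsmoothR will differ:

```python
# Sketches of the three processing channels described above,
# all working on normalised 0.0..1.0 'control voltages'.

def channel_a(cv, levels=8, power=2.0):
    """A: power-law shaping followed by quantisation to a fixed
    number of levels (illustrative parameters)."""
    shaped = cv ** power
    return round(shaped * (levels - 1)) / (levels - 1)

def channel_b(prev, cv, rise=0.3, fall=0.05):
    """B: separate smoothing coefficients for the rising and falling
    parts of the incoming waveform (asymmetric slew limiting)."""
    coeff = rise if cv > prev else fall
    return prev + coeff * (cv - prev)

def channel_c(prev, cv, delta=0.2):
    """C: ignore changes smaller than 'delta' (a change gate, like the
    'Delta' rotary control), otherwise pass the new value through."""
    return cv if abs(cv - prev) >= delta else prev
```

Channels B and C are stateful: you call them once per incoming value, feeding back the previous output, just as a modular slew limiter tracks its last output voltage.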

If you switch the 'Input/Random' selector to 'Random', then MIDIsmoothR behaves very similarly to MIDIrandomA, although the A, B and C channel processing/smoothing is slightly different. As the name suggests, MIDIsmoothR is designed for smoothing!

The big selection box on the upper left hand side chooses how the input is sampled. The 'Not synced' top option uses the LFO clock set by the 'Rate' control to grab the input value. The other options on this selector allow various MIDI messages to trigger the sampling:

- Any MIDI Note,
- Any change of MIDI Note number (so repeated notes will not trigger the sampling),
- MIDI note number 0 (the lowest MIDI note),
- MIDI note number 0 with a velocity of 1 (the lowest note and the quietest velocity value), or
- Any MIDI note with a velocity of one (the quietest velocity).

Three in One

It isn't immediately obvious when you first look at MIDIsmoothR, but it actually allows you to do three different things:

- Process 'remote control' 'control voltages' in various ways, including smoothing (sometimes called 'slew rate limiting' on modular synths)
- Generate random 'control voltages' and map them to controls in other devices (Ignoring the 'CV in' rotary control)
- Sample & Hold 'control voltages' from other devices (LFOs, MIDI Controllers, etc.) using MIDI event triggers and use that to control other devices

If I can think of anything else that I can squeeze in there, it will be in a future update... And on that topic:

Version 0.01 had a bug in channel B, which caused a fixed value to be output. This is fixed in version 0.02.

'Remote control' processing...

MIDIsmoothR is quite unusual - there aren't many 'remote control' 'control voltage' processing devices written in MaxForLive for Ableton Live (or indeed, native devices from Ableton!). Normally, you use the 'Map' button to send 'control voltages' over the 'remote control' system from a device that produces 'control voltages' (like an LFO, or MIDIrandomA!) to a control parameter in a device that you want to control (just about any parameter - rotary controls, sliders, buttons... - in just about any device). But MIDIsmoothR goes in-between those two devices, modifying/processing the 'control voltages'. (For a while, I did wonder if I should call it PROCsmoothR...)

Above is a diagram of a 'remote control' connection from an LFO to a Delay device. The LFO 'Map' button would show that it was controlling (for example) the time delay buttons in the Delay device.

Adding MIDIsmoothR to process the 'control voltage' looks like this in Ableton Live:

On the left side, the LFO 'Map' button shows that it is controlling the 'CV in' rotary control in MIDIsmoothR (and note that the input selector in MIDIsmoothR is set to 'Input'). On the right side, the MIDIsmoothR 'Map' button shows that it is controlling the time buttons in the Delay device.

(The 'L' is because this is where the mapping was set up - to the Left channel time delay buttons in Delay. But the 'sync' button is active in Delay, and so the right time delays are the same as the left buttons. You can see two channels of random 'control voltages' mapped to the left and right time delay buttons separately (sync is off) in the blog post about MIDIrandomABC... and you can hear the effect in this SoundCloud demo...) 

So the 'remote control' connections diagram now looks like this:

The LFO controls the CV In rotary control of MIDIsmoothR, which processes the LFO waveform and then controls the time delay buttons in the Delay device. Although you can't see the connections explicitly in Ableton Live, the text that replaces the 'Map' in the 'Map' button gives slightly cryptic clues...

In use

You could apply different random delay times to different notes in a sequence, which sounds really unusual.

Or you can randomise the detune of a two oscillator synth...

Or you can use a smoothed 'control voltage' to change the Size or Decay Time parameters of a Reverb, which can sound a bit like granular synthesis. Randomly changing the 'Diffuse' parameter in a Reverb sounds like a more sophisticated version of the classic 1980s 'gated reverb' effect...

Getting MIDIsmoothR_mr02

You can get MIDIsmoothR_mr02 here:

Here are the instructions for what to do with the .amxd file that you download from

(In Live 10, you can also just double-click on the .amxd file, but this puts the device in the same folder as all of the factory devices...)

Oh, yes, and sometimes last-minute fixes do get added, which is why sometimes a blog post is behind the version number of

Modular Equivalents

In terms of basic modular equivalents, then implementing MIDIsmoothR_mr02 requires some quite sophisticated processing of a random noise source, so it probably isn't straightforward to do from off-the-shelf analogue modules, and is probably easier to do digitally. Assuming that a maths/data processing module can do the required computation, then there's one noise generator, one processing module, some triggering logic, an LFO for the free-running version, and a sequencer for parameter storage, giving an ME of 4 or 5!


If you find my writing helpful, informative or entertaining, then please consider visiting this link:

Sunday, 5 April 2020

Three Mappable outputs of controllable Random-ness in Max For Live for Ableton Live

Comments are always interesting - once you've filtered the spam and adverts out, of course! So when blog reader hems reminded me in a comment that having just one mappable output in MIDIrandomA was quite limiting, it nudged me into a new variant of MIDIrandomA...

MIDIrandomABC has three separate mappable outputs that can each be assigned to any of the three built-in types of randomness: called A, B, and C for brevity. So you can now control three parameters in Ableton Live with the same value, or an inverted version, or a scaled and offset version, etc. This enables lots more control over what you randomise and how!

One application that I've been playing with (I've watched too many Ricky Tinez videos on YouTube) is to control the delay time for left and right channels separately in the stock Ableton Live 'Delay' plug-in (other delays are available) as well as the feedback amount. Using the 'Any Note' mode, the random values change for each note event in a clip, and so you get 'per note' changes to delay times and feedback. This sounds really rather nice - the sort of variability that tends to be more associated with modulars than DAWs... I can see that I will have to do a SoundCloud track and YouTube video when I have a moment...

Getting MIDIrandomABCmr02

You can get MIDIrandomABCmr02 here:

Here are the instructions for what to do with the .amxd file that you download from

(In Live 10, you can also just double-click on the .amxd file, but this puts the device in the same folder as all of the factory devices...)

Oh, yes, and sometimes last-minute fixes do get added, which is why sometimes a blog post is behind the version number of

Modular Equivalents

In terms of basic modular equivalents, then implementing MIDIrandomABCmr02 requires some quite sophisticated processing of a random noise source, so it probably isn't straightforward to do from off-the-shelf analogue modules, and is probably easier to do digitally. Assuming that a couple of maths/data processing modules can do the required computation, then there's one noise generator, two processing modules, some triggering logic, an LFO for the free-running version, and a sequencer for parameter storage, giving an ME of 6 or 7!


If you find my writing helpful, informative or entertaining, then please consider visiting this link:

Saturday, 28 March 2020

Music By 300 Strangers = 380 plus me

Every so often, I contribute to a project, safe and secure in the knowledge that no-one will ever hear of it, or hear it. Well, this time it might be different...

Over the last few weeks, a lot of musicians, who normally contribute virtual instruments, demos, and information to the web-site and forum, have been working on a collaborative 'systems music' project set off by Spitfire Audio's amazing Christian Henson (his Twitter picture should be on the left...) for Pianoday 2020 - the 88th day of 2020 (88 keys!). And, yes, I was a contributor...

Here's the original 'call to arms' from the end of February 2020...

Here's the splash screen from just before the YouTube video of the World Premiere, which was at 17:00 on Saturday the 28th March 2020 (the 88th day, of course):

Christian explains a lot about how the music was made here.

My contribution...

For my contributions I used my MaxForLive chord device, ProbablyChord, to automate the chord sequence provided by Christian, and used the constrained random controls to produce random inversions of the chords. The sounds that I used were produced by the 'Synthesizerwriter's 29 Bagpipes' virtual instrument that is available free on

Here's the bit in the main 'Music by 300 Strangers' where you can see my screenshot:

...and in the credits...

 ...there I am. My name in lights! Wow!


Pianoday 2020. The official web-page for Pianoday 2020.

Yamaha's Piano Day page. Yamaha's page on Pianoday 2020.

The Pianobook page.

The collaboration project. Christian's 'call to arms'.

systems music. What is 'systems music'?

ProbablyChord. My MaxForLive device that I used to make my contribution.

Synthesizerwriter's 29 Bagpipes. The source of the sounds that I used in my contribution.

YouTube video of the World Premiere. The World Premiere YouTube video...


If you find my writing helpful, informative or entertaining, then please consider visiting this link:

Monday, 23 March 2020

Swapping MIDI Note Number and Velocity Value in a Max For Live plug-in for Ableton Live

When I produced the MIDI Note Filter recently, I realised that the internal processing would allow some other interesting MIDI processing functions to be carried out. One thing that I have always wanted to experiment with spans across two different topic areas: music and steganography (hiding data in other data), and it builds on my recent explorations of Note On and Off messages, as well as a recent Twitter message that talked about security.

(I have also been busy producing material for Christian Henson's 'Pianobook Pianoday 2020' collaborative 'systems music' project due to be released on the 28th March 2020 as 'Music by 300 Strangers'. Search for '#pianobookpianoday2020' on YouTube to see some of the contributions... This has put me in a more experimental and explorative mode than my usual analytical one, so what follows is not my usual type of Max For Live plug-in... but this is a good thing!)

MIDIswapNVmr01 is a Max For Live plug-in that swaps the note number and the velocity value in MIDI Note On messages that pass through it, and it works with the implied Note Off messages that occur when the velocity value is zero. A button is provided that toggles between 'Swap' mode and 'Thru' mode, and the usual '!' panic button is also there to stop hanging notes (I have called it 'ANO' here because it flushes out hanging notes, but note that it does not send a MIDI 'All Notes Off' message - Ableton Live does that when you press 'Stop', which is why the dark blue ANO is shown by the monitor...). Yep, a minimal user interface!

One thing which I hadn't thought about until I created the MIDIfilterNOTEmr plug-in was what happens when you turn off a Max For Live plug-in using the 'Power' button in the top left hand corner. It turns out that if you turn a M4L plug-in off, then it is bypassed, so you can use the power button as a kind of secondary 'Thru' button. So I have included this in the screenshots!


The screenshot above shows the plug-in swapping note numbers and velocities. I have put two MIDI Monitor utilities before (on the left side) and after (on the right side) so that you can see what is happening. Looking at the very last note (just before the dark blue 'All Notes Off' ANO MIDI message), then you can see a Note On (shown as 'NON' in the Monitor utility) for E3 with a velocity of 65 goes through the plug-in and comes out as a Note On for F3 with a velocity of 64. You won't be surprised if I reveal that F3 is the note that corresponds to note number 65, so the incoming velocity of 65 has been converted to a note number (F3) as intended. The E3 incoming note is note number 64, and so this turns into the velocity of the outgoing note as 64.

Let's pause for a moment here whilst you get your head around this. The incoming note is E3 (note number 64) with a velocity of 65. The outgoing note from the plug-in is F3 (note number 65) with a velocity of 64. So the note number and the velocity value have been swapped.

A quick note about note offs. As I've mentioned several times recently, a MIDI Note On message with zero velocity is treated as if it is a Note Off message. So in MIDIswapNVmr01, a Note Number of 0 (C-2) turns into a velocity of zero, which means that whatever the value of the velocity of an incoming note, if the note number is zero (C-2), then the output will appear to be a Note Off message. As it happens, the 'hanging note' suppression mechanism that I use ignores Note Off messages that haven't ever had a preceding Note On, and so nothing comes out of the plug-in!
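The swap, and the C-2 caveat that follows from it, can be expressed in a few lines. This is a sketch of the behaviour described above, not the actual Max patch:

```python
def swap_note_velocity(note, velocity):
    """What MIDIswapNVmr01 does to a MIDI Note On, per the description
    above: the note number becomes the velocity and vice versa. A
    velocity of zero in the result is an implied Note Off, which the
    device's hanging-note protection then swallows - modelled here by
    returning None for 'nothing comes out'. (Sketch only.)"""
    new_note, new_velocity = velocity, note
    if new_velocity == 0:          # incoming note was C-2 (note number 0)
        return None                # implied Note Off, stripped away
    return (new_note, new_velocity)

# E3 (note 64) at velocity 65 comes out as F3 (note 65) at velocity 64:
assert swap_note_velocity(64, 65) == (65, 64)
# C-2 (note 0) at any velocity never makes it out:
assert swap_note_velocity(0, 100) is None
```

The A-2 example from the screenshot works the same way: note 9 at velocity 100 comes out as note 100 (E6) at velocity 9.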

I wasn't sure how to capture 'nothing' happening, so for the screenshot above I played several C-2s and then an A-2, and then C-2 again. As you can see, the only output from the plug-in is an E6 with a velocity of 9, which is because the incoming A-2 is note number 9, and the incoming velocity of 100 maps to E6 when it gets turned into a note number. The C-2 notes thus turn into 'Note On messages with zero velocity acting as Note Off messages', which are then stripped away by the hanging note protection, and so never even make it out of the plug-in.

So there's a caveat / warning / note for this plug-in:

If you input a C-2 into this plug-in, it will never get out!

The screenshot above shows the inputs and outputs in the 'Thru' mode - so the MIDI messages pass through with no changes.

And the 'Power Off' mode, where the power button in the top left hand corner is clicked to turn off the plug-in. This gives the same result as the 'Thru': the MIDI messages are not affected.

As an interesting extra, here's what happened when I tried the MIDI Monitor from the Max for Cats 'Gratis Hits' bundle pack of free Max For Live devices:

As you can see, the MIDIswapNVmr01 plug-in is disabled (powered off) and so the MIDI messages are unchanged, but the Monitor utility on the right has added an octave to each note number: C3 becomes C4, C4 becomes C5, and C5 becomes C6. I'm very used to seeing different octave numbers in MIDI software because there are various interpretations of the standard, and I tend to use whatever the software uses - which in the case of Ableton Live and Max is C3=60 - so this has probably been fixed in an update, and I just don't have that update! One area that I steer well away from is any discussion about Middle C and MIDI.

Because this post has talked a lot about MIDI Note Numbers, then this seems like a good place to share two charts that I use a lot when I'm playing/programming. The first is a classic that you have probably seen in various forms before:

So the table above has notes along the top, and octaves up and down, and lets you find the note number of any of the 128 MIDI notes. You can also use it in reverse - look up a number in the grid, and then read the note and octave from the two axes.
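The same lookup is easy to do in code. Here's a minimal Python version using the 'C3=60' convention that Ableton Live and Max use:

```python
NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']

def note_name(number):
    """MIDI note number to name, using the Ableton/Max 'C3 = 60'
    convention from this post (octaves run from -2 up to 8)."""
    octave = number // 12 - 2
    return f"{NAMES[number % 12]}{octave}"

# the examples from earlier: 60 is C3, 65 is F3, and 0 is C-2
print(note_name(60), note_name(65), note_name(0))  # prints: C3 F3 C-2
```

The modulo-12 arithmetic is the 'notes along the top' axis of the table, and the integer division is the 'octaves up and down' axis.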

The second is more unusual...

This is a kind of 'inverted' version of the first table. It has decades (10s (tens)) on the vertical axis, and units (1s (ones)) on the horizontal axis. So to look up 65 (note F3 from earlier) you go down to the row that starts with 60, and then go across that row until you are in the column with 5 at the top. For 65 you get F 3 in the two boxes (note/octave). What is interesting about this are the patterns that you get when you take the usual '2 and 3' pattern of accidentals that you see on a normal music keyboard and wrap it around decimal numbers. The columns have either 3 and 3, or 2 and 4 patterns of notes, and the diagonals have more patterns: 3 and 4, and 2 and 3. If you are into patterns, then blocks of 2 in the last row always repeat the octave in the cell underneath, whilst blocks of 4 repeat the octave value of the cell above the first row. Blocks of 3 do different things depending on whether they contain accidentals or not! If you are familiar with chess, then 'Knight's moves' on this table give you the cycle of fifths...

If you have ever seen one of my Tweets about cryptography, then my fascination with patterns now probably makes more sense. Music and cryptography both contain a lot of maths underneath, and so patterns are not unexpected!

Controllers and control

Swapping the note number and the velocity value makes it very difficult to play keyboards and MIDI controllers, because velocity is often hard to control with the same amount of precision as pitch on a keyboard, where each key produces a specific note number. Velocity is determined by the rate at which the key is pressed down, and so is based on a time measurement. By swapping these two values, keyboards and MIDI controllers gain precise control over velocity (what used to be pitch control is now velocity control!), but lose the precision of pitch control.


Steganography is all about hiding information in places where it is not visible. One example is in JPEG picture files, where it is possible to hide data by spreading it across the whole picture in ways that are not obvious. So there might be slight and gradual changes to brightness, colours, levels of noise or other parameters, that are not visible to the human eye, but that can be detected.

In this case, MIDIswapNVmr01 increases the precision with which changes can be made to the velocity value, but the pitch becomes less precise - in fact, trying to control pitch by how quickly you press a key spreads out pitch control into a very imprecise form! In graphical form, this might be represented by the amount of blurring (as a metaphor for the spreading out of data):

So a conventional music keyboard has precise control over pitch, so the word 'Pitch' is in focus, whilst the velocity control is less precise, and so the word 'Velocity' is blurred.

What MIDIswapNVmr01 does is swap the blurring - so now the word 'Pitch' is blurred, whilst the word 'Velocity' is in focus.

At Synthfest UK 2019, I spoke to Paul Ward about some of the FM sounds that I had programmed anonymously for the UK DX Owners' Club (many of which are widely available in various Public Domain collections - one set of my sounds has distinctive titles like '-=[V]=- 1', for example), where there was a high amount of velocity sensitivity built into many of them. Controlling them can be tricky for fast runs of notes, and so Paul said that he preferred to use controllers like Mod Wheels to give more precise control.

What MIDIswapNVmr01 provides is precise velocity control from a keyboard, but it sacrifices pitch control. With a fixed velocity from a keyboard, MIDIswapNVmr01 only outputs one pitch, but you have precise control over velocity - and in fact, even an 88-note keyboard is going to give you access to only part of the full 1-127 velocity range (in exactly the same way as it normally only lets you play 88 of the 0-127 range of MIDI note numbers!).
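As a quick sanity check on that range claim, here is a small Python sketch. It assumes the common mapping of an 88-key keyboard to MIDI notes 21 to 108 (A0 to C8) - individual keyboards may differ:

```python
# With note number and velocity swapped, the keys become your velocity
# control. Assuming the common 88-key mapping to MIDI notes 21-108
# (A0-C8), only part of the full 1-127 velocity range is reachable.
LOWEST_KEY, HIGHEST_KEY = 21, 108
reachable = set(range(LOWEST_KEY, HIGHEST_KEY + 1))
unreachable = sorted(set(range(1, 128)) - reachable)

print(len(reachable))    # 88 distinct velocity values
print(unreachable[:3])   # [1, 2, 3] - the quietest velocities are out of reach
print(unreachable[-3:])  # [125, 126, 127] - and so are the loudest
```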

Which is why I said right at the start that this was for experimental purposes. I'm not expecting a sudden change in workflow so that people enter pitches very precisely (as usual!) and then use MIDIswapNVmr01 to enable them to add precise velocity as well. But my thinking is that given recent developments with MPE, and with sophisticated controllers like the Roli Seaboard, Haken Continuum and Expressive E Osmose or Expressive E Touche (others are available!)  then it might make people think more about additional control other than the pitch, timing and a very imprecise velocity value that you get from a traditional music keyboard.

Real-world instruments often have a lot of ways that the timbre can be influenced in real-time, and their players know how to exploit this - so I reckon that electronic musical instruments should be controllable in multiple dimensions as well. One very interesting illustration of this is in sample libraries of sounds produced using FM synthesis - what you get are very nicely sampled 'snapshots' of specific timbres, but you lose a lot of the subtle velocity control that programmers like myself put into sounds, and so the timbral variation is missing. I always remember when the first mass-market sampled pianos came out in the 1980s (Technics et al) that they were described by many players as sounding like:

'A very good recording of a piano, but not a piano.' 


As I mentioned, this plug-in is based on the MIDI processing core of the MIDIfilterNOTEmr utility, so let's see how it works:

As usual, I have tidied my normally untidy code in the screenshot above. The right hand side is very similar to the MIDIfilterNOTEmr code, with the 'select' object switching between two values depending on the incoming value. But here the 'swap' object at the top is used to exchange the note number and velocity (as well as changing their 'right-to-left' processing order), and so the same 'Note On' or 'Note Off' switching still happens, but it now acts on the note number value, because that is what will end up in the middle 'velocity' input of the 'note out' object.

On the left hand side, the 'select' object is again doing switching, but this time it is making sure that the velocity value is stored. The velocity output of the 'swap' object carries one of two things: the velocity value from incoming Note On messages, or zero from Note Off messages. So the left hand side is all about capturing the non-zero velocity value, and making sure that it is still available when the zero velocity is output by the 'swap' object. The two grey message boxes are just two stores in series, and the trigger for the storing is produced by the 'select' object when it detects a Note On or Note Off message. Finally, that captured and stored velocity value is fed into the left hand 'Note number' input of the 'note out' object.
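To make the flow above easier to follow, here is a rough Python sketch of my reading of the patch's logic (an illustration, not the Max code itself): swap the note number and velocity, and store the last non-zero velocity so that Note Off messages (velocity 0) can release the correct 'swapped' note.

```python
# A sketch of MIDIswapNVmr01's core logic: swap note number and velocity,
# storing the last non-zero velocity so that a Note Off (velocity 0)
# releases the same 'swapped' note that the Note On started.
class SwapNoteVelocity:
    def __init__(self):
        self.stored_velocity = 0  # last non-zero incoming velocity

    def process(self, note, velocity):
        """Return the (note, velocity) pair sent to 'note out'."""
        if velocity != 0:                    # Note On
            self.stored_velocity = velocity  # capture for the matching Note Off
            return velocity, note            # swapped: velocity becomes the pitch
        else:                                # Note Off (velocity 0)
            return self.stored_velocity, 0   # release the note we started

swapper = SwapNoteVelocity()
print(swapper.process(60, 100))  # Note On C3, velocity 100 -> (100, 60)
print(swapper.process(60, 0))    # matching Note Off -> (100, 0)
```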

This approach could be extended to longer serial chains, and might open up additional possibilities for doing more complex MIDI processing. Because some previous posts have talked about Max's lower-level, sample-level Gen programming, the message box approach used here could be thought of as being kind of half-way between ordinary Max and Gen: MIDI-event-level, perhaps?


One unintentional side effect of MIDIswapNVmr01 is that it makes playing a music keyboard in the conventional way almost impossible... So if you are ever visited by a highly skilled concert pianist, and they want to play one of your keyboards, then using MIDIswapNVmr01 (to swap the MIDI note number and the velocity value of their playing) will considerably impede them...

Alternatively, sometimes changing a familiar constraint (like moving pitch control from the keyboard keys to velocity) can break you out of creative road-blocks... Inverted keyboards are another way to do this (I have produced a Max For Live plug-in that does 'proper' inverted MIDI keyboard mappings, and lots more!), but just using a Scale utility over an octave can achieve a similar 'writer's block' mitigation.

Actually, combining the ideas in the previous paragraph - adding a Scale utility after MIDIswapNVmr01 so that the MIDI notes are constrained to a specific scale - is a very practical and useful way of using many of my weirder plug-ins (like my MIDI note range expander/compressor and offset utility). What the Scale utility is doing is constraining the variability from the velocity measurement, which takes us to the several variations of 'constrained randomness' in another of my plug-ins.

I'm wondering if I can find time to do a 'how to use my plug-ins in combination' tutorial...

Getting MIDIswapNVmr01

You can get MIDIswapNVmr01 here:

Here are the instructions for what to do with the .amxd file that you download from

(In Live 10, you can also just double-click on the .amxd file, but this puts the device in the same folder as all of the factory devices...)

Oh, yes, and sometimes last-minute fixes do get added, which is why sometimes a blog post is behind the version number of

Modular Equivalents

In terms of basic modular equivalents, then implementing MIDIswapNVmr01 depends on your MIDI/keyboard interface. If that produces CV outputs for Note Number (Pitch) and Velocity, then a couple of patch cables crossed over and connected to the output (with a gate cable as well) will do this directly, giving an ME of 1! This is the lowest modular equivalent so far, if my memory serves me correctly...

Links from this post:

Roli Seaboard (Company web-site)

Haken Continuum (Company web-site)

Expressive E Osmose (Company web-site)

Expressive E Touche (Company web-site)

MIDIfilterNOTE (

MIDIfilterNOTE (Blog post)

Christian Henson (YouTube channel)

#pianobookpianoday2020 (YouTube search)


If you find my writing helpful, informative or entertaining, then please consider visiting this link:

Thursday, 19 March 2020

The Allegorist - musical stories available now on Soundcloud, iTunes and other digital streaming services...

I may have mentioned this before...but in case I haven't...

I was at Ableton Loop 2017 in Berlin, at the Funkhaus. In one of the studio sessions, Mandy Parnell gave her opinion on the submitted tracks from a mastering perspective... But one track submission got a totally different reaction: stunned silence! After the session, I provided links for various routes forward (because I was very impressed)...

So here we are three years and two albums later, and it seems like a good time to remind people that there are musicians out there who produce amazing innovative music that is light-years away from all of the usual 'sounds-the-same' stuff that you keep hearing everywhere.

The artist that I'm talking about here, who tells stories with music, is called The Allegorist.

Link to website:

Links to releases:

Links to live performances: