Wednesday, 27 January 2021

Music Hackspace 'Max meetup USA #1' event report... (Modular CV Interfacing)

A week after the Europe Max meetup, the Music Hackspace had its first 'USA-timing-friendly' online Max meetup. This time there were three short presentations, but the 'CV/Modular' breakout room afterwards was particularly interesting. The first question that was posed was simple: how to interface Max to a modular synth to make drum sounds (I think - please let me know if my recollection is faulty), but the answers were not so short or simple, and so I thought that it would be good to capture them here as a blog post. This is just part of the discussion that happened, so you should consider joining in next time...

Max Interfacing

Cycling '74's Max software can output audio, video, and MIDI, but outputting Control Voltages (CVs) and Gates/Triggers for controlling modular synths is less immediately obvious. There are some resources available on the Cycling '74 web-site, but they tend to only mention 'dc-coupled audio interfaces' or cover a specific device.

DC-coupled audio interfaces are special cases of the ordinary audio interfaces that are used to get audio in and out of a DAW.  There are also specialist Modular MIDI-to-CV converters which are audio interfaces that are specifically designed to be dc-coupled and output CVs. Let's look at these two variants first:

1. Audio interfaces

Audio Interfaces are perhaps the obvious starting point, given that many people have them. They are a popular purchase for anyone who wants to make music using a computer - and if we wind time back by a couple (or triple) of decades, then the solution was a 'sound card': a plug-in card (ISA-bus was one popular type) that provided better sound generation capabilities than the basic computer itself, as well as more 'music-making friendly' sockets than 3.5mm mono or stereo jacks. Sound input in those days was something that was very unusual in an off-the-shelf computer, and a sound card provided audio input capability - but the quality was not quite up to CD standards unless you spent a lot of money.

Nowadays technology has moved on a lot, and 'as good as or better than CD quality' audio interfaces are now typically external boxes that connect to USB, although curiously, the computer's own sockets remain stubbornly the same 3.5mm mono or stereo jacks rather than quarter inch jacks, RCA/Phono sockets or balanced XLRs. I have always thought that if a computer was really designed for music use then it would not have 3.5mm jack sockets for audio... There again, there's money to be made by selling audio interfaces, and there are lots of adverts reminding purchasers of DAWs, audio editors and other music software that one of the first follow-up purchases should be an audio interface.

An audio interface is just a converter from the digital numbers used to represent audio signals inside the computer, to the analogue audio signals that you find on quarter inch jacks or phono connectors when you hook a guitar or a synth to a pedal and then to an amplifier (or these days, more probably a software emulation of a vintage, distorting amplifier connected to an emulation of a vintage, slightly mis-used speaker cabinet, connected to a very clean amplifier). In other words, an audio interface contains an Analogue-to-Digital converter to input audio into the computer, and a Digital-to-Analogue converter to output audio from the computer. 

Audio interfaces normally get selected based on the number of inputs and outputs, the quality of the audio that they give, the highest sample rate (192 kHz for example), the number of bits that are used in the Digital-to-Analogue Converters (DACs) and Analogue-to-Digital Converters (ADCs) - 16 bits is meh (CDs), 24 bits is high - and whether they can run VST plug-ins (which also equates to expense). You might have noticed that 'Outputting control voltages for modular synths' wasn't in that specification list...

To output Control Voltages, you need an audio interface that has an unusual property in most audio systems. Audio signals are often quoted as being from 20Hz to about 20 kHz in frequency, from a low-pitched rumble to high pitched (kids can hear it but their parents can't) shrieks. The diagram above is impossibly perfect, but shows what an idealised frequency response might look like. As you go below 20Hz you feel wobbles rather than hear the audio, and eventually, at zero frequency, you get what is called direct current (DC) because it doesn't change (instead of current that changes all the time, which is called alternating current (AC)), which is where the wobbles stop and you just get a voltage (and a current flowing - there are various formulas that connect all of these things...). If you want a long explanation, just ask an electronics engineer why DC isn't called 'Direct Voltage'...

The problem with frequencies below 20Hz is that they are just wobbles, and you feel them rather than hear them. And getting a speaker to wobble can do nasty things to it - overheating, tearing itself apart, ripping the cone, warping in shape, etc. One way of experiencing DC is that thump you get when you power up amplifiers with the volume up high instead of at zero. So, to protect speakers (and people from being wobbled excessively), many audio systems don't go 'down to DC' (zero Hz) - they stop at about 20Hz. 

Unfortunately, frequencies below 20Hz, and especially zero Hz (which is stopped = a fixed voltage!), are exactly what is needed for CVs. Control voltages like Pitch or Modulation tend to change quite slowly (60 bpm = 1 Hz (!), which is one complete wobble per second), and so will not be output by an audio interface that has no response below 20 Hz.

So what you need for CVs is an audio interface that has a frequency response that goes all the way down to DC (Zero Hz!), which is often called DC-coupled (because electronics engineers have jargon just like any other profession). The dashed line in the frequency response diagram above shows a response that goes all the way down to DC, but the log scale makes it difficult to show... Here's an example list from 2019 that shows some 'dc-coupled' possibilities then (you will need to research current devices...):

If you look at the text in the Ableton Live 'CV Tools' device free download, then it says that you need to use a dc-coupled audio interface, but doesn't go into any more detail:

(Technically, it should be 'DC-coupled', but lower case is often used instead...)

If you want to check an audio interface, then looking for the phrase '20Hz-20kHz' in the specification is usually a good indicator that an audio interface is NOT DC-coupled. That low number: '20Hz', is the clue. My Focusrite Scarlett has exactly this phrase in its specification, and yep, it is not DC-coupled, and so isn't good for outputting control voltages. There again, the specs make it very clear, and I bought an 'Audio Interface', not a 'Control Voltage Interface'.  

Sometimes the specifications can be difficult to interpret. Here are the specifications for the Native Instruments Komplete AUDIO 6 audio interface:

As you can see, the phrase 'DC coupled' is there! But only in the output (and also notice that it doesn't say '0Hz-20kHz'! That would be far too obvious...). There again, the input doesn't mention the all-important phrase at all. There's a rule here:

If it is not in the spec, then there's probably a good reason why not...

This means that the output is DC-coupled, so you can use this audio interface to send CVs to your modular synth (or any synth with CV inputs), but that the input is NOT DC-coupled, which means that you can't use this audio interface to receive incoming CVs from a modular synth, a CV controller, or an analogue synth that outputs CVs. However, the inputs can have 48V applied to them, which is not recommended for connecting to most modular systems.

The specification has one additional, easily-overlooked 'feature'... There is an asterisk (*) after the 'for modular control' phrase. If you go to the end of the specification it says: '*Limited to +/-2V range due to the AUDIO 6 being USB powered.' Aha! So the range of voltages that can be output is limited - which gives us another rule:

Always check for asterisks - they often try to hide a catch... 

Something to be very aware of when looking for a DC-coupled audio interface is the actual output voltage range - and be careful to never assume anything. Even if an audio interface is DC-coupled, it doesn't necessarily mean that the range of voltages that it can output is appropriate. Let's look at a popular modular standard and see if that tiny little asterisk has any significance...

I'm going to concentrate on Eurorack modulars here, but there are other standards... Eurorack audio signals can be a maximum of 10V peak-to-peak, which is -5V to +5V (+/- is known as bipolar). Eurorack control voltages can be half that size (-2.5V to +2.5V), but can also be what is called 'Unipolar' and range from 0V to 8V. Control voltages that are used for pitch usually follow a 1V/Octave rule, although there are other ways of representing pitch, particularly on modular synths from the 'Sound Card Era' and even before that! Gate and trigger signals are usually 0V for Off, and 5V for On. All of these numbers mean that you may need to amplify the output of a DC-coupled audio interface in order to get the right voltage levels... so that Utility module may be useful after all!
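To make those numbers concrete, here's a minimal Python sketch of the 1V/Octave rule plus a range check. The choice of MIDI note 60 as the 0V reference is my assumption (modules and interfaces differ), and the +/-2V range in the example is just the USB-powered limit mentioned earlier:

```python
# Sketch: mapping MIDI notes to 1V/Octave pitch CVs, and checking them
# against an audio interface's output range. MIDI note 60 as the 0V
# reference point is an assumption - different modules use different
# reference notes.

REFERENCE_NOTE = 60          # MIDI note that maps to 0V (assumption)
VOLTS_PER_OCTAVE = 1.0       # the common Eurorack pitch standard

def note_to_cv(midi_note):
    """Convert a MIDI note number to a 1V/Octave pitch CV."""
    return (midi_note - REFERENCE_NOTE) / 12.0 * VOLTS_PER_OCTAVE

def in_range(volts, v_min, v_max):
    """Can an interface with this output range actually produce the CV?"""
    return v_min <= volts <= v_max

# One octave above the reference note is exactly +1V...
print(note_to_cv(72))   # 1.0

# ...but a +/-2V output (like the USB-powered example above) only
# covers two octaves either side of the reference:
print(in_range(note_to_cv(96), -2.0, 2.0))   # False: +3V is out of range
```

The range check is the important bit: a DC-coupled interface that can only swing +/-2V gives you a four-octave pitch span at best, which is why that asterisk matters.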

In the case of the Native Instruments Komplete AUDIO 6 (Why is it shouting 'Audio'?) then the control voltages are slightly smaller than the Eurorack range in bipolar mode, but way too small for unipolar mode. This could limit the range of, for example, a pitch CV, which might not be what you want. Worse, if you aren't aware of the limits of the output voltage, then you might spend time trouble-shooting a problem that seems to be in the modular when it is actually in the audio interface. 

Using audio signals to carry numbers is not new. Before broadband, modems used to turn the numbers in computer communications into frequencies so that they could be sent over telephone connections - and telephones are not DC-coupled! (300Hz-3.4 kHz for UK telephones). Data was (and still is) sent over radio by jumping between frequencies; early methods used pairs of frequencies, whilst modern systems use more complex 'constellations' of frequency, amplitude and phase.

One other important thing to remember is that price and external appearance aren't going to give you a reliable indication of an audio interface being DC-coupled. Check those specifications...  

In summary, then, audio interfaces come in two flavours: DC-coupled (which CAN be used to output CVs - but check the range), and Not-DC-coupled (which can't be used to output CVs). It is a good idea to stick a label onto your audio interface to indicate if it is DC-coupled (input, output or both, plus the range of voltages) or if it is not.

2. Modular MIDI-to-CV converters

A modular MIDI-to-CV interface is a purpose-designed converter that plugs into a USB socket and outputs Control Voltages (and sometimes inputs CVs and converts them to MIDI, although technically that would be a CV-to-MIDI converter!). So they go from DC up to the low wobbles (and maybe up above that, where you can actually hear the frequency), and there's no need to amplify the output: the CVs are modular-compatible by default. Take care: a MIDI-to-CV interface module for one modular standard might not be suitable for another, plus the power supply might be different, and the mechanics will be different... As before, in this post I will only cover Eurorack...

One often-mentioned modular MIDI-to-CV interface is the Expert Sleepers ES-8, which has 4 analogue inputs and 8 analogue outputs on the front panel, plus various expansion options for additional I/O.

- Expert Sleepers ES-8
- ES-8 Manual

There are other devices, of course!

- Mutable Instruments' Yarns
- Doepfer A-190-3 USB to CV/Gate

and plenty more... 

Note that some MIDI-to-CV modules have 5-pin DIN inputs rather than USB sockets, so make sure to read the specs, otherwise you may need a USB MIDI Interface (most audio interfaces also provide MIDI I/O...). 

The Arturia KeyStep 

And then someone suggested the Arturia KeyStep. It has Pitch CV, Mod CV and Gate outputs, as well as MIDI In and Out.

The manual says that incoming MIDI notes are used as transpositions for the sequence, and are also converted to Pitch CV. So I looked for the MIDI Implementation Chart to see more information. Except I couldn't find one. Not in the manual. Not on the web-site. Not from a Google search. So I compiled one by testing exactly what the KeyStep actually does. You can download it from here...

Here's a summary of what I discovered:

- The KeyStep outputs Pitch CV based on incoming MIDI notes, plus whatever note is played on the KeyStep's keyboard, plus any Pitch Bend from the KeyStep's Pitchbend strip controller. Incoming MIDI Pitch bend messages seemed to be ignored (but this could be my error - please let me know if there is a way to make it happen...). Even so, being able to convert MIDI notes to Pitch CV was very useful - and lots of people have a KeyStep. Being able to add Pitch Bend to incoming MIDI notes can add a lot to a plain 8 or 16 step sequence...

- The KeyStep outputs Mod CV based on the Mod source that has been selected in the MIDI Control Centre software from Arturia that is used to control the setup of the KeyStep (plus save sequences, etc...). Available sources are the Mod Wheel, Velocity and Aftertouch. So if the Mod Wheel is chosen, then incoming MIDI Modulation (Wheel) Controller messages (CC1), plus the KeyStep's Mod Strip, are added together and output as the Mod CV. If Velocity is chosen then the Velocity of incoming MIDI notes is added to the velocity of notes played on the KeyStep's mini-keyboard and output as the Mod CV. And finally, if Aftertouch is chosen as the source, then incoming MIDI Aftertouch message values are added to the Aftertouch values from the KeyStep's mini-keyboard and output as the Mod CV (cool for a modular, where people don't normally expect things to respond to Aftertouch). Lots of scope here for double keyboard possibilities, particularly adding Aftertouch to fast lead lines on a keyboard - where you don't have enough time to press on the keys to activate the Aftertouch.

- The KeyStep outputs Gates only when its mini-keyboard or internal sequencer/arpeggio outputs a note. I couldn't get it to respond to incoming MIDI notes. Now there is lots of scope for experimental error here - the MIDI Control Centre provides lots of control over how the KeyStep behaves (like choosing the source for the Mod CV - if you choose Velocity or Aftertouch, then it might appear that incoming MIDI Mod wheel messages are ignored...), and I might have missed a vital setting. So I'm happy for all of this to be a draft, and if anyone has any additional information about how the KeyStep responds to incoming MIDI messages, then please let me know and I can update the MIDI Implementation Chart (and this post).

As a workaround for the lack of a MIDI-triggered Gate output, you could pass the Mod Wheel, Velocity or Aftertouch Mod CV through a Utility module and create Gates using a threshold function. You could even use the value as a CV as well. You could also buy a MIDI-to-Gate/Trigger module! (GAS can be very bad with modular synths...)
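Here's a rough Python sketch of what such a threshold function might do. The 5V gate level matches the Eurorack convention mentioned earlier, but the threshold values and the hysteresis (two slightly different thresholds, so a noisy CV hovering near the threshold doesn't chatter) are my own illustrative assumptions, not anything from a specific Utility module:

```python
# Sketch: deriving a Gate from a continuous Mod CV using a threshold
# comparator with hysteresis - the same job a Utility or comparator
# module would do with voltages. Threshold values are assumptions.

GATE_HIGH = 5.0   # typical Eurorack gate 'On' voltage
GATE_LOW = 0.0

def cv_to_gates(cv_samples, on_threshold=2.5, off_threshold=2.0):
    """Turn a list of CV values into gate values. Hysteresis: the gate
    switches On above on_threshold but only switches Off again below
    the lower off_threshold, preventing chatter near the boundary."""
    gates = []
    state = False
    for v in cv_samples:
        if not state and v >= on_threshold:
            state = True
        elif state and v < off_threshold:
            state = False
        gates.append(GATE_HIGH if state else GATE_LOW)
    return gates

mod_cv = [0.0, 1.0, 2.6, 2.4, 2.2, 1.0, 3.0, 0.5]
print(cv_to_gates(mod_cv))
# [0.0, 0.0, 5.0, 5.0, 5.0, 0.0, 5.0, 0.0]
```

Notice that the values 2.4 and 2.2 stay 'On' even though they are below the 2.5V switch-on point - that's the hysteresis doing its job.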

The KeyStep is thus a partial solution to converting MIDI to CV so that Max can be used to control a modular synth, and it opens up some creative control possibilities that aren't normally very easy to do.

This is probably a good time to think about how closely related audio signals, control voltages, and gates/triggers are in a modular synth. An audio signal can be used as a fast LFO, whilst a fast LFO can be an audio signal. A pulse LFO can be used as a continuous series of gates or triggers, and so on. A MIDI-to-CV module emphasises the interchangeability by making numbers in Max appear as voltages in the modular synth - so numbers that go up and down from a cycle~ object could be an LFO or an audio signal, whilst a number that stays the same for most of the time, but occasionally jumps up to a higher value, and then jumps back to the original value again, could be used as a gate.
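A quick Python sketch of the 'it's all just numbers' idea - a stand-in for the cycle~ object, where only the frequency decides whether the output is an LFO or an audio signal, and a gate is just a list of numbers that mostly stays low. The sample rate and gate timing below are arbitrary assumptions:

```python
# Sketch: the same signal-generating code serves as an LFO or an
# audio oscillator - only the frequency changes. This is exactly why
# a DC-coupled output is so flexible: it's all just numbers.

import math

SAMPLE_RATE = 48000  # assumed sample rate

def sine_block(freq_hz, n_samples):
    """Generate n_samples of a sine wave at freq_hz (a cycle~ stand-in)."""
    return [math.sin(2 * math.pi * freq_hz * n / SAMPLE_RATE)
            for n in range(n_samples)]

lfo = sine_block(1.0, SAMPLE_RATE)      # one second = one slow 'wobble'
audio = sine_block(440.0, SAMPLE_RATE)  # the same code, at audio rate

# A 'gate' is just numbers too: mostly low, occasionally jumping high.
gate = [5.0 if 1000 <= n < 2000 else 0.0 for n in range(SAMPLE_RATE)]
```

Send any of these three lists out of a DC-coupled output and the modular doesn't care which one you intended - what the voltage 'is' depends on how the numbers change.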

What a voltage does is defined largely inside Max by how the numbers change, rather than by the modular synth - the modular bit is just the way of turning those numbers into sound. This is why modulars are more interesting than conventional fixed architecture synthesizers...

But a lot of the fun of electronic music is DIY, and so here's some information on other ways that you can interface Max to a modular synth or an analogue synth:

3. Other possibilities...

In electronics, there are often alternatives. If you have any electronic design experience, then a frequency-to-voltage converter could be an interesting way to use an ordinary (non-DC-coupled) audio interface and Max's audio generation capability to produce control voltages.

Frequency-to-voltage converters often use a pulse generator plus some sort of averaging circuit (a low pass filter, for example) - so for the averaging circuit you could have a leaky 'bucket' (which could be a capacitor with a resistor that causes the voltage to 'leak' away), and a pulse generator circuit could be just a way to fill the 'bucket' with cups of water. The faster you put cups of water (pulses) into the bucket, the higher the voltage level, and so the frequency determines the output voltage. 
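Here's a rough Python simulation of the cups-and-bucket idea. The charge and leak values are arbitrary assumptions chosen to make the behaviour visible, not real component values - the point is just that the average 'bucket level' tracks the pulse rate:

```python
# Sketch: a 'cups of water into a leaky bucket' frequency-to-voltage
# converter. Each input pulse tips a fixed charge into the bucket;
# the leak (like the resistor draining a capacitor in an RC low-pass
# filter) lets it drain away. All values are illustrative assumptions.

def f_to_v(pulse_rate_hz, seconds=2.0, step_hz=10000,
           charge_per_pulse=0.01, leak_per_step=0.001):
    """Simulate the settled 'bucket level' for a given pulse rate."""
    level = 0.0
    steps = int(seconds * step_hz)
    pulse_every = int(step_hz / pulse_rate_hz)
    for n in range(steps):
        if n % pulse_every == 0:
            level += charge_per_pulse       # a cup of water goes in...
        level -= level * leak_per_step      # ...and some leaks away
    return level

low = f_to_v(100)     # slow pulses -> low average level
high = f_to_v(1000)   # fast pulses -> higher average level
print(low < high)     # True: the output voltage tracks the frequency
```

The leak is also why these converters have latency: the bucket takes time to fill and drain, which is the 'averaging' that smooths the pulses into a steady voltage.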

There are chips designed to do frequency-to-voltage conversion, and all you would need to add is an input buffer and an output scaling amplifier (probably just an op-amp).

Here's some information about a few methods of converting F-to-V, mostly using dedicated chip-based Frequency-to-Voltage converters:

If you want to have something curious to think about, consider this: a Frequency-to-Voltage converter is just a reverse VCO. (A VCO turns a voltage into frequency...)

Because it comes from a legendary analogue circuit designer (Bob Pease), I'm inclined to forgive the blatant and incessant advertising on the following web-page (if I ever needed a reason to use an ad-blocker...):

You could use a Utility or Trigger module to threshold the output voltage and produce Gate or Trigger voltages, where low frequencies produce a low voltage output (Off) and higher frequencies produce a higher voltage output (On). Again, the input will need to be buffered and scaled, and maybe offset. The core part of this, an F-to-V device converting 0-10kHz to 0-10V, is available as a £6 circuit board. Given the price of many modules, then this is a (unipolar mode) bargain! Steampunk experimentation awaits brave synthesists!

The problem with using a chip or circuit board as a 'black box' is that you don't get any real feel for what is happening, so here's a circuit that does what the cups and bucket do:

...and here's how you could use the same circuit in a modular synth to make a simple Frequency-to-Voltage converter - you just solder a few components onto two 3.5mm sockets (or you could just cut a 3.5mm cable into two...).

For experimentation purposes, 'rat's-nest' style is fine by me, although you could use a prototyping bread-board if you wish. I'm always intrigued by modular owners who have modules for everything, but who never actually do any DIY circuits. A modular is a DIY synthesizer, so why not build your own circuits to process audio or CVs...

Using frequency-to-voltage converters may have other side effects: the latency might not be very low, but this might contribute to the appeal. For example, Buchla-style Low-Pass Gates have interesting time response characteristics which create a lot of their special sound. Modulars are very good at exploring these types of circuits - you could almost think of them as laboratory toolkits for audio electronics...

Frequency to voltage conversion is an old technique, hence the steampunk reference above. One of the first circuits that I ever had published was a variant of the diode pump, used to indicate if a clock was running or not... Frequency-to-voltage converters turn up in all sorts of equipment: radios, tachometers, speed controllers, and more...

Open, not closed...

Hopefully this post will help Max (and MaxForLive, PureData, and other similar programming environments...) users to control some of the real world beyond their screen. 

Interfacing Max to other devices, sources of numbers, other controllers, synthesizers, modulars and more opens up huge possibilities. One of the dangers of creating music on a screen is that the screen can become the only focus of the environment, and there is a strong temptation to put everything on the screen because of the immediacy, ease of editing, convenience... I believe that the most interesting challenges and opportunities in making electronic music come from the interfaces between the real and the virtual, the human being and the synthesis equipment, the possible and the 'to be solved', the screen and beyond, because that is where magic happens. 


If you find my writing helpful, informative or entertaining, then please consider visiting this link:

Buy me a coffee (Encourage me to write more posts like this one!)

Synthesizerwriter's Store (New 'Modular thinking' designs now available!)

Tuesday, 19 January 2021

Music Hackspace 'Max meetup Europe #1' event report...

Just occasionally (Spitfire Audio please note), I get invited to music business events, so I was very pleased when the Music Hackspace (based in Somerset House, London, although in these Covid-19 times, maybe 'Online' is a better location!) informed me about an interesting event in the middle of January... (I'm a MH 'mailing list' subscriber, and thoroughly recommend the Music Hackspace if you are into music technology...)

So 3pm GMT on Saturday, the 16th of January 2021, found me videoconferencing, taking part in the first 'Max meetup Europe Edition'. Max is the commercial 'visual programming' language for multimedia published by Cycling '74 (Miller Puckette, one of the original authors of Max at IRCAM in France, has also released an open source branch called PureData).

After the usual welcomes and intros, there were two short presentations on projects using Max: 

One (above) from Phelan Kane on using Weather metadata in a MaxForLive device to control music generation in Ableton Live (I loved the use of the 'dict.view' dict viewer object to give the hierarchical list), and another (below) from JB on exploring dual sampling and pitch manipulation using two instances of the 'groove~' object. Now I have to declare here that I'm a great fan of the groove~ object, and I have been working on a sample processing device using it for far too long, but that's another story. Here's a tease partial screen-shot showing one of the two groove~ objects...

My main 'go to' object at the moment is the live.grid object, but not with the chucker~ object that it is supposed to be used for. Instead I mis-use it to provide a neat user interface to some probability functions. And that is another story as well...


After this, attendees distributed themselves into breakout rooms (including chill rooms for those who didn't want to go too nerdy). I joined the MaxForLive breakout room because I've been doing more M4L than Max for quite a while. Now maybe I should do more Max, but that's another story, as I've been saying too much...

The conversation started around MIDI Controllers. There's something about people who program Max For Live - they often seem to have a keen interest in MIDI Controllers, interfacing them, emulating them, reimagining them in M4L inside Live, etc. As usual with any discussion of MIDI Controllers, the topic of 'Custom' came up. I'm not immune to this: I have a half-built custom MIDI Controller made using the Makey Makey device, and I backed the Kickstarter Ototo project with the aim of turning it into a custom MIDI Controller. But DIY hardware is tricky (although I do like the occasional mod here and there...) and so the latest incarnation, the 'we built your custom MIDI Controller for you' approach, was shared, and there was lots of 'oohing' as everyone imagined something custom... This set us along a thread of 'MIDI Controllers you may not have heard about', and it turns out that Yaeltex do some predefined controllers as well (like the 'MiniBlock2' shown here).

So that all of the discussion wasn't lost, I took some notes, and produced a database of most of the things mentioned, plus some others. You can view it either via the Music Hackspace Discord channel (max-meetups) or here: 

MIDI Controller database


Things then got a little bit philosophical as the discussion went into programming, particularly going 'deeper' than Max or MaxForLive. We talked about Gen, which took us to JUCE, and then to SOUL, then via Bela, and ended up with C++ or even DSP assembler. I think Axoloti was mentioned too, but no-one dropped in Faust. It struck me that this whole topic needed some sort of map, so I produced one:

I have deliberately avoided trying to position VSTs (or AUs, or...) or Faust on this mind-map, but it's a personal view of what part of the 'Audio Dev' landscape kind of looks like. I'm sure it isn't perfect, but it gives some positioning of technologies on that spectrum between 'Easy and fast to code, but middling performance' and 'Difficult and slow to code, but amazing performance'. It's a long time since I did Motorola 56000 DSP coding, and recently I've not gone any lower than Gen. I suspect that talking about this topic is going to be a regular feature of the Music Hackspace Max meetups - did I mention that they are monthly for Europe, and for the USA too, so that's fortnightly if you register for both.

Oh, and they are free! 

All you need is your time and Zoom (not the music electronics company from Japan, but the videoconferencing services provider...)

I have to say that I thoroughly enjoyed talking to other people about Max, and the conversation strayed well away from it as well, so it was more like a gathering of 'people who make music', and I'm always up for that. 

Here's a link to the Music Hackspace 'Upcoming Events' page, so that you can register for future events... 

I would like to thank the Music Hackspace for a fascinating and useful couple of hours spent in Max-land. I may well do it again!


Music Hackspace - The hosts of the Max meetup...
Cycling '74 - Max and more...
PureData - A very interesting alternative end-point to IRCAM research by Miller Puckette et al...
Phelan Kane - More about, and more from...
Yaeltex - Pre-built and custom built MIDI Controllers
Makey Makey - DIY MIDI Controller enabler - just add physical hardware...
MIDI Controllers Database - Some of the available MIDI Controllers (let me know about others!)
Gen - is one layer underneath Max...
JUCE - do just about anything audio on a computer...
SOUL - even deeper down the rabbit hole...
Bela - C++, PureData, SuperCollider...
Faust - an alternative to C++?
Music Hackspace 'Upcoming Events' page - Future Max meetup Europe Edition & USA Edition events, and more...




Thursday, 31 December 2020

Thoughts on Asynchronous Loops in MaxForLive for Ableton Live...

I ruthlessly prune comments. There are some people who think that every blog is a place where they can advertise for free, and so they add automatically generated comments with a clickable link somewhere and hope that people will click on it. I just delete these 'chancer' comments. But, sometimes, a genuine and interesting comment arrives...

ElDepleto wrote a comment recently at the end of the 'Non Euclidean...' blog post:

Hello. I have been reading your blog for a while and I really enjoy your devices. I am hoping you can help. I am looking for a m4l device that can play 4 asynchronous loops not tied to Ableton’s tempo or any tempo really. Thinking Discreet Music by Eno. I’d also like to be able do Reich style phasing with it. I am hoping you know of something that exists. Drag and drop would be ideal. Anyway, I hope you have a happy holiday and keep up the great work! -Brian

This one caught my attention, and I thought that it would be a good opportunity to do a one-off 'Adam Neely'-style 'Q&A' blog post, and you are reading it!

Asynchronous loops...

The blog post that the comment was attached to was for my Non-Euclidean, Non-Linear Sequencer Toolkit, and this can be thought of as being close to the opposite end of a spectrum of approaches that has the 'I'm looking for...' 'Asynchronous Loops' at the other end. My Non-U, Non-L sequencer produces MIDI notes where the timing can be phased/slipped relative to each other, and you can have up to four sets of sequences running plesiochronously at once. The 'plesio' prefix describes the case where two systems are not asynchronous, but they are not synchronous either. It turns out that async and sync are just extreme cases, and there are lots of 'partially synchronous' cases in between, hence 'plesio', meaning 'near' (as in 'nearly synchronous'). In this case, the minimum time interval is a 64th note, and so the phasing is coarse and quantised to Live's timing clock, but it can still give some interesting outputs.

The 'other end' is multiple loops (samples that repeat with some degree of seamlessness), where the ultimate 'analogue' form would be four separate tape loops, with the minimum time interval between them being down to an atom of iron on the magnetic tape. The digital equivalent could be implemented in a number of ways: fractions of a cent of pitch shift, perhaps, or just four different length loops played back at the same rate (although the sample rate imposes a 'quantisation' limit of time for this method), or different playback clocks... Of all of these, the analogue tape is probably my favourite - because it is simple and mechanical. So, not for the first time, digital technology has turned easily accessible DIY into something more like arcane complexity...
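The 'different length loops at the same rate' approach can be sketched in Python. The loop lengths and sample rate below are arbitrary assumptions, but they show the Reich-style property: two nearly-identical loops drift steadily out of phase, and take a surprisingly long time to line up again:

```python
# Sketch: 'Reich-style' phasing from two loops of slightly different
# lengths, played back at the same sample rate. They realign after the
# lowest common multiple of their lengths - lcm(a, b) = a*b / gcd(a, b).

from math import gcd

def realign_time(len_a, len_b, sample_rate=48000):
    """Seconds until loops of len_a and len_b samples line up again."""
    lcm = len_a * len_b // gcd(len_a, len_b)
    return lcm / sample_rate

# Two roughly one-second loops, one just 1ms (48 samples) longer...
print(realign_time(48000, 48048))   # 1001.0 seconds before they realign
```

So a one-millisecond difference between two one-second loops gives well over sixteen minutes of continuously evolving phase relationships - which is a lot of Discreet Music from very little material.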


There's an easy answer, of course. Just search the repository of M4L devices. Unfortunately, whilst there are a lot of M4L devices on MaxForLive.com, the sheer number of them can make it hard to find a specific instance (or instances). It isn't an accident that it took several different attempts at Internet Search Engines (AltaVista, for example) before we got the current giants of search, and it took a lot of thinking, invention, and several reworks of business models before we got where we are now. MaxForLive.com is a wonderful resource, but asking the developers of devices to describe their work isn't a guarantee of unbiased, accurate and consistent classification.

Databases are interesting pieces of software. Shoehorning data into a spreadsheet and expecting it to be usable as a powerful relational database is unlikely to pay dividends, and actually, that's the point - for a database to have value, it needs to have money spent on compiling it, on verifying the data in it, on making it accessible, on keeping it up-to-date, and more. Sometimes you can get people to do this themselves: Google Earth is an astonishing example of how ordinary people freely provide hugely valuable data updates. But it normally requires money up-front, and with payback later. The scale gets big very quickly. At the opposite end of the scale, there's a site where, in just over a year or so, over 400 sample-based virtual instruments have been freely donated as a common resource (looked after by volunteers), and there's already a problem of finding stuff, just like on MaxForLive.com.

400 is an interesting number. If you were asked to sort 10 numbers into order, then you would probably have no hesitation in doing it without any planning. For 100 numbers, you would probably spend a bit longer planning out your approach. 400 is getting big enough to think about asking a few other people. 1,000 would be quite daunting. 10,000 and you would be thinking about having to do a lot of planning and setup to achieve the task. 100,000 and most people would probably go to a specialist company to do it. So 400 looks like it might be close to the number where it starts to turn into something non-trivial, requiring real effort - and money. MaxForLive has almost 5,000 devices...

5,000 devices is a lot. If it takes you an hour or so to get thoroughly familiar with how something works and what it does, then you might do 5 or maybe 8 devices in a working day... This means that it will take you something like 2 years to get a good level of familiarity with all the devices, and that isn't taking into account updates and new device releases. I would be surprised if there are many people who have a good grasp of all of the devices on MaxForLive.

Back Catalogue...

My first thought was to look in my own back catalogue! I have about 100 M4L devices on MaxForLive, and there are some that get close to what is required. (And if not, then the temptation to make one of my own would be enormous!)

I didn't find exactly what was required, but they were related and still interesting...

26 November 2017 - dFreez

dFreez is the 'drone-performance-oriented' version of sFreez, a 4-channel sample player that uses a 4-phase LFO to cyclically fade between the four samples. It makes creating atmospheric washes of sounds (drones, etc.) very easy - just drag and drop four samples and off you go! The addition was a slow 'Fade Up/Fade Down' control that can take a long time to fade up or down...

20 October 2017 - sFreez

This is the original 4-channel sample player with a 4-phase LFO that fades cyclically between 4 dragged and dropped samples to create continually changing washes of sound.

24 January 2016 - Saw4Generator

4 channels, but Sawtooth oscillators, not samples. But I learnt a lot about how tricky it can be to control 4 channels of sound at once...

9 February 2019 - INSTsineATMOSPHERE

INSTsineATMOSPHERE uses 3 channels of FM oscillators and is another attempt to provide a simple user interface to a complex sound generator.

30 August 2016 - gFreez

This is the ancestor of sFreez and dFreez, and uses granular 'frozen' spectra as the source material. So you capture a spectrum from an incoming instrument or recording, and then that is replayed as a looped 'grain'. More slowly changing washes of sound...

So looking back through previous MaxForLive devices had some 'close approaches', but no direct hits.  

So I thought about it from an oblique angle...

Go local!

I realised that Ableton Live itself was originally designed as a MIDI sequencer, but that when sample replay was added to the 'Clip Launching' (Session) view, then something very different was the result. Live's Session View doesn't have any link to time in the upper part of the window - just a matrix of clips. It isn't at all like the 'Piano Roll' or 'Tape Recorder' views that had time on the horizontal axis, and pitch or tracks on the vertical axis. 

Clips in Live can do a lot. I've always been a fan of the slightly obscure 'Clip Envelope' functionality, which many people overlook. But when you have a sample as a Clip, and you loop it, then you get a lot of ability to do interesting things, all from when the Session view first appeared and Ableton let people play around with samples without any dependence on a time axis.

So if you create a Clip in Ableton Live, turn the 'Loop' button on, and leave 'Warp' off, then it will play as a continuous loop. The length of the sample determines the length of the loop, so it isn't tied to Ableton's transport clock. With 'Warp' off the clip plays asynchronously to Live's transport, but it also means that if your loop is seamless, then there is only one length that you can use - the one where it is seamless! So as long as your sample has silence at the end, you can reduce the length of the sample, and it will loop that reduced-length sample. (More about warping later.)

I'm sure that this is well-known, but I hope that my rediscovery may be useful to some people. 

Here's a Clip on a Track, set up as I have outlined:

The sample that I'm using is just one of the factory samples that comes in Live Suite, I think. This is just the raw 'dragged and dropped' sample. There are several things to note in the screenshot. First, the sample is not a whole bar in length - you can see the 1.1.2 marker just after the middle of the sample, but this is a view of the whole sample, as you can see in the tiny preview box at the lower edge of the screenshot. Secondly, the 'quantisation' is shown as 1/512 in the lower right hand corner, which is equivalent to 'no quantisation' in Live. Finally, there's no yellow 'Warp' marker in the grey 'warp' bar on the right hand side. In the purple bar you can see the end triangle, and in the light grey bar under that you can see the repeat triangle (or is it the other way round?), but the next bar down, the 'warp' bar, doesn't have any marker at all. All three of these signs indicate that this sample isn't tied to Live's transport. 

Track 2 has the same sample, but this time the length of the sample in the Clip has been adjusted:

The upper red ellipse on the far side shows that the length of the clip has been shortened - the purple and light grey bars now end, and there's a light grey area to the right of them. The darker grey 'warp' bar still doesn't have a yellow 'Warp' handle in it (this is good for this application!). When you change the length of a sample by dragging the end triangles, then they jump to specific places, so you don't have complete control over the length, and those places are related to the bar and beat positions. (But remember that the sample in Track 1 is NOT linked to bar or beat positions at all...)

Track 3 is just a slightly shorter sample:

The white triangle has now gone black, which indicates something related to warping, but note that there isn't any yellow warp handle, so this is just a shorter sample. Playing all three tracks at once gives exactly the asynchronicity that ElDepleto wanted! (You just need to add a fourth track and tweak it, of course!) Using the same sample makes it very easy to hear what is happening, but you can get very good results by transposing samples down or up, and by using the /2 and *2 halving/doubling buttons to change the length. Detuning samples gives asynchronicity at a finer level of detail, if you want. 


If you want, you can use/activate the Warp facilities and see how this changes things. Here's Track 2 with modifications in Tempo applied to the sample:

If you click on the far right hand side, in the darker grey warp bar, where the yellow 'Warp' handle would be if it was there, then you will find that it appears and you can move it so that the sample changes tempo to match the loop length. This does mean that it is now synched to Live's transport, but the loop length need not be whole bars, and so is 'plesiochronous'. You can see that the quantisation has now changed to 1/32, and the length is a whole number of beats. 

If we do the same with Track 3, then we get this:

Yes, a broader light grey region to the right hand side, and a shorter, tempo-tweaked clip...

You might like to try out both variants to see which gives the asynchronicity that you want. I have to apologise for not delivering a MaxForLive device that does this, but having the native functionality in Live is very useful. If you want seamless samples that are slightly different lengths, then the technique that I use is to overlap copies of the start and end of the sample and do a cross-fade, then merge and trim back to the original sample. This means that the sample fades out its end as it fades in its beginning. I wish that audio editing tools (Audacity, for example) would automate this type of editing function, but once you've got your head around it, it isn't difficult to do. 
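The cross-fade trick can be sketched in a few lines. This is an illustration of the editing technique described above, not any particular editor's implementation - a linear cross-fade of the sample's tail into its head, assuming the sample is a plain list of floats:

```python
def seamless_loop(samples, fade_len):
    """Cross-fade the tail of a sample into its head so the loop wraps
    without a click - a sketch of the editing trick described above,
    not any particular editor's algorithm. `samples` is a list of
    floats; the result is shorter by `fade_len` samples."""
    head = samples[:fade_len]
    tail = samples[-fade_len:]
    body = list(samples[:-fade_len])
    for i in range(fade_len):
        t = i / fade_len                        # 0.0 -> 1.0 across the fade
        body[i] = tail[i] * (1.0 - t) + head[i] * t
    return body
```

A linear fade is used here for simplicity; an equal-power curve often sounds smoother.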

I have done a video which shows the 'no warp' asynchronicity in action, and this is available on my YouTube channel: 

If I can find time, I may see what a stand-alone MaxForLive version would look like...

And thanks to ElDepleto for the comment! Much appreciated!


If you find my writing helpful, informative or entertaining, then please consider visiting this link:

Buy me a coffee (Encourage me to write more posts like this one!)

Synthesizerwriter's Store
 (New 'Modular thinking' designs now available!)



Wednesday, 30 December 2020

MIDI Distribution Processor - a different approach to making a sequencer

Step sequencers tend to follow a very well-defined blueprint, and getting away from fixed step timings, deterministic playback, fixed swing, limited probability controls, boring repeated velocity and other immutable articulations can be difficult. If you look back through my MaxForLive devices then you will find a few of my attempts to break free of the constraints, and this blog post aims to highlight my latest version.

When I created a tee-shirt design featuring a fictional modular synthesizer module with controls in the shape of a Christmas Tree, the plan was not to trigger the creative process. But my mind is strange, and that triangle shape got me thinking, and before I knew it, my brain had produced an idea for a very different type of sequencer - and it was different again from the Non-Euclidean, Non-Linear sequencer that I released only a few weeks ago.

The Christmas Tree looked like one of those marble-based binary decision trees that ends up with a Gaussian distribution of outputs, where the outermost 'bins' are the least likely to end up with a marble in them, whilst the innermost ones are the most likely, and there is the familiar bell-shaped curve connecting them. What struck me was that the 'tree' of decisions was allowing control over the distribution of the outputs, and it was like a light coming on inside my head: I couldn't think of any sequencer that allowed control over the distribution of notes, not even any of mine!
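The marble-run idea is easy to simulate. A quick sketch (the marble count and layer count here are arbitrary) showing that the inner bins collect far more marbles than the outer ones:

```python
import random
from collections import Counter

def galton(marbles=10000, layers=3, rng=random.Random(1)):
    """Drop marbles through `layers` of 50/50 left/right decisions;
    the bin totals approximate the bell-shaped binomial curve."""
    bins = Counter()
    for _ in range(marbles):
        # Count how many times this marble bounced to the right.
        bins[sum(rng.random() < 0.5 for _ in range(layers))] += 1
    return [bins[k] for k in range(layers + 1)]
```

With three layers the expected proportions are 1:3:3:1 - the outer bins each catch only about an eighth of the marbles.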


So I did some exploratory programming, then tidied it up through a couple of iterations, and finally smoothed a few rough edges. The result is MIDIdistPROC, a MIDI note distribution processor that allows you to explore what happens when you choose a set of notes, and can then control the frequency of occurrence of those notes. It is kind of like a sequencer where the concept of 'notes in a specific order' doesn't exist. 

So if you give it C, E and G (as MIDI Note Numbers!), then the simplest (and the default) distribution would be for each note to be equally likely, so you would get outputs like CEG, CGE, GEC, GCE, ECG...  If you increase the likelihood of the C, then you might get CGC, CEC, GCC, ECC...  Ultimately, if you raise the likelihood of C to the max, then you might just get CCC, CCC, CCC... That struck me as being too much like current pop song melodies, so the design deliberately doesn't allow you to go quite that far, and you will always get a few other notes sprinkled here and there. If you want to use it to generate pop song melodies then you will just have to edit those 'other' notes out. Sorry.
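The 'pool' idea can be illustrated with a weighted random choice - this is just the concept, not MIDIdistPROC's actual code. MIDI note numbers 60, 64 and 67 are C, E and G:

```python
import random

# A sketch of the 'pool' idea: notes drawn according to a per-note
# weight, with no fixed order. 60, 64, 67 are MIDI C, E and G.
rng = random.Random(0)
notes, weights = [60, 64, 67], [4, 1, 1]   # C is four times as likely
phrase = rng.choices(notes, weights=weights, k=300)
```

Over enough notes the output matches the weights, but no two runs of the phrase are the same - a distribution, not a sequence.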

Velocity uses the same principle: a 'pool' of velocity values where you control the distribution, but not the order. Over time, the values will fit the specified distribution, but without requiring a fixed sequence to happen.

Previously, I have looked at different 'flavours' of randomness, and a bit of experimentation resulted in another design decision: to provide a simple 'structured' source of notes where the user controls the amount of order or disorder. So there's a 'Mix' slider which has Random notes on one extreme (the left, of course), and a rising sequence of notes on the other extreme (the right), and so you can choose how much chaos you want to inject into the notes or velocities.
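One plausible reading of the 'Mix' slider is a per-step blend between a random value and a rising sawtooth - the function name and the exact scaling here are my assumptions, not the device's internals:

```python
import random

def mixed_source(steps, mix, rng=random.Random(42)):
    """Blend a rising sawtooth (order) with random values (chaos).
    mix = 1.0 is fully ordered, mix = 0.0 is fully random - an
    interpretation of the 'Mix' slider, not the device's exact code."""
    out = []
    for i in range(steps):
        saw = i * 128 // steps          # ordered: a rising ramp, 0-127
        rnd = rng.randrange(128)        # chaotic: uniform random
        out.append(round(mix * saw + (1 - mix) * rnd))
    return out
```

At the extremes you get a pure ramp or pure noise; anywhere in between injects a controllable amount of chaos.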

Following on from the many asynchronous clocks that I've been incorporating in designs for some time, I split the clocks for the notes and the velocities, which allows you to have different rates for notes and velocities, as well as different distributions and different mixes of random or ascending values. The note clock is the master clock for generating the MIDI notes, but isn't synched to Live's transport at all, so I should probably call this a 'toolkit' - because it is intended for exploration and experimentation.

From left to right, you have two sections: notes (light purple) and velocities (light grey). Apart from a few minor details, the two sections are very similar. At the top left hand side is the Rate rotary control (shown as BPM and Hz) for the beats. This is not synched to Live's transport clock. Directly underneath is a slider with 'MIX' in the centre. This mixes between a Random source of values and an Ordered source of values (from a rising sawtooth waveform), allowing you to choose between chaos on the left and order on the right, or any mix in between. As guitar pedal manufacturers like to say: 'We have designed it so that all settings will produce good results!' Underneath the Mix slider is a graphical representation of the past, which fades away into oblivion as it scrolls to the right. 

Most of the section is occupied by seven slider controls. These are arranged as a binary decision tree: the values from the Mix slider are sent to the left or right of the top-most slider (the little white lights show which way the values go...), and then go to one of the two sliders underneath that, where they are again sent either left or right depending on their value, and they finally end up at the lowest set of four sliders, where they are divided into 8 outputs, and these outputs can then be mapped to MIDI Note Numbers (or Velocities) in the number boxes at the lowest edge of the device. So, depending on the value that is produced by the Mix slider, a given value will end up at one of those 8 output boxes. And those 8 boxes can be set to either MIDI Note Numbers (light purple boxes) or MIDI Velocity values (light grey boxes).

The seven sliders are used to set how the values are distributed. If you press the 'Centre' button, then the sliders will be set to their default positions. The top slider will be at 63, which means that a value of 63 or less will go left, whilst any higher value will go right. This is why it is called a 'binary' decision tree: there are only two outputs at each layer, but the three layers result in a total of 8 final outputs. (Two for the first layer, then four for the middle layer, and then eight for the third/final layer.)
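The three-layer routing can be sketched as follows. The seven threshold values stand in for the sliders; the top value of 63 is from the description above, but the lower 'Centre' defaults shown here are my assumption that every slider halves its incoming range:

```python
def route(value, thresholds):
    """Route a 0-127 value through a three-layer binary decision tree
    and return the output bin (0-7). `thresholds` holds the seven
    slider values as [top, [left, right], [ll, lr, rl, rr]]."""
    top, layer2, layer3 = thresholds
    left1 = value <= top                         # layer 1: left or right?
    t2 = layer2[0] if left1 else layer2[1]
    left2 = value <= t2                          # layer 2
    branch = (0 if left1 else 2) + (0 if left2 else 1)
    left3 = value <= layer3[branch]              # layer 3
    return branch * 2 + (0 if left3 else 1)

# 'Centre' defaults: top slider at 63 (as described above); the lower
# values assume each slider halves the range it receives.
centre = [63, [31, 95], [15, 47, 79, 111]]
```

With these defaults, every value from 0 to 127 lands in one of the 8 bins, and each bin covers an equal 16-value range; moving a threshold skews the distribution towards one side of its sub-tree.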

Yep, the velocity section is very similar! (But greyer...)

But note that the timing, amount of randomness, and distributions are totally separate for notes and velocity values...  This means that a specific note might have very different velocity values each time it happens (or you could set all the velocities in the 'pool' of values to be the same, but that would be boring!). 


You will need to follow MIDIdistPROC with an instrument to make sounds... I used a Collision-based Marimba sound a lot during development. Remember that the 'Centre' button resets the sliders to their mid positions, which is usually a good place to start. You will find that the sliders tend to interact, so the best approach is to start at the top slider and work downwards. Extreme slider positions may give a single output value on one side or the other, or even a single value either way if the slider above is also at an extreme value. The Note 'RateN' rotary control sets the speed at which MIDI notes are generated, whilst the Velocity 'RateV' rotary control sets the speed at which the velocity values are generated, and so changes the volume or timbre of the notes, but not their timing... The rates can be varied between 30 and 300 BPM... (Beats Per Minute, which I show as 'bpm' on the UI because I think it looks cooler! It turns out that both 'BPM' and 'bpm' can be used, although BPM is often used to mean 'Business Process Management', which is very corporate-speak and not very musical...)

This is a toolkit, which means that further processing of the outputs will probably be required, so be prepared to capture the MIDI notes and change their timing. At one stage, I did contemplate including timing distribution as well, but that quickly got very complex and it seemed better to leave it to you - plus I'm not in Plaid's league when it comes to amazing uses for unusual time signatures!

Max For Live...

'Slider interactions' probably sounded interesting, so here's how the sliders are interconnected so that the upper ones affect the ones lower down. Only one layer is shown...

The upper slider has a range of 0-127 - the full range of MIDI notes from the Mix slider. The output is 63 when the default position is set by the 'Centre' button. The two sliders in the next layer down have different setups. The one on the left needs to have a range from 0 to the output of the upper slider, which is called 'n' in the diagram above. Sending a 'size $1' message to the slider will set its range to '0 to n' (where $1 has the value of 'n'). The slider on the right is slightly more complex: it needs to start at 'n' rather than zero (set by the 'min $1' message), and the range needs to be set to 128-n (so that the highest value is 127 on the far right hand side). So the 'size' message just needs to be driven through a '!- 128' (reverse subtraction) object to set the range correctly. 

I'm not perfect. The red 'X' shows how I made an error in logic and used the range to set the slider value - not a good idea. I located the problem and fixed it - after I did the screenshot composite shown above. So I ended up editing the M4L and the diagram! (The red cross is NOT a new MaxForLive object, of course!)

 The Asynchronous Clock is pretty straight-forward, and is included here because it shows the conversions to get the BPM and Hz values, which aren't as tricky as you might expect... The 'cycle~' object is very easy to use in this case!
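The BPM-to-Hz conversion really is simple - 60 BPM is one beat per second, so it is just a factor of 60:

```python
def bpm_to_hz(bpm):
    # 60 BPM is one beat per second, so divide by 60.
    return bpm / 60.0

def hz_to_bpm(hz):
    # ...and the inverse conversion multiplies by 60.
    return hz * 60.0
```

So the device's 30-300 BPM range corresponds to 0.5-5 Hz.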

 Using it

The sliders don't necessarily work the way you might expect. The more you move them across to the right, the more values will be sent to the LEFT, and vice-versa. You can watch the white indicator lights to verify this. Now that you know this, you should be okay, but you may find yourself accidentally moving the sliders the wrong way when your conscious brain hands over mousing to your subconscious brain. 

The three 'Preset' buttons for Notes and Velocities provide starting points for setting the 'pools' of output values. Feel free to use your own values! Note that the 'Octaves' preset illustrates very nicely that you do not need to have different values for all of the outputs, which is something that people tend to assume is the case. The presets also show why I didn't include any 'Ordered' waveforms other than the SawUp - you don't need them! You can change the output values to give the equivalent of any source waveform with 8 vertices. (This is more waveform choices than you get with most analogue monosynths - the MiniMoog, for example, has a mere six.) If the concept of waveforms being an emergent property at the end of a processing chain doesn't bother you, then you are in the right place!

I'm going to mention it again, because people are used to M4L sequencers that look a lot like MIDIdistPROC: This is NOT a conventional sequencer! There isn't any of the timing variation you might expect (all the notes are the same length), the notes and velocities aren't linked, and it isn't very good at repeating the same boring sequence over and over again. However, if you are interested in getting inspiration and breaking out of melodic cliches, then you may find it useful. (If you just realised why I have been referring to the output values as a 'pool' of values, then you are ready to exploit this device fully!)

One very useful piece of additional processing is the factory Ableton Live device called 'Scale', which is very good at transposing and constraining the output to a given range or scale. There are commercial plug-ins (like Scaler 2) that do similar things and more... You could also try my scale utility or my 'one control' to process the output of MIDIdistPROC. 

Getting MIDIdistPROC

You can get MIDIdistPROC here:

Here are the instructions for what to do with the .amxd file that you download from MaxForLive:

(In Live 10, you can also just double-click on the .amxd file, but this puts the device in the same folder as all of the factory devices...)

Oh, yes, and sometimes last-minute fixes do get added, which is why sometimes a blog post is behind the version number of the device on MaxForLive...

Modular Equivalents

In terms of basic modular equivalents, implementing MIDIdistPROC seems like it will be quite challenging - there are not many binary tree implementations that I'm aware of, but there are so many modules out there that I could easily have overlooked some. Worse, it may well be a hidden feature of a well-known module, so I may be completely wrong and it is a doddle to implement. 

Alternatively, there are various utility processing modules that could be used to produce an eight-segment transfer function, which would achieve the same end-result. So this might be only 1 or 2 ME. (Revised after I realised that this compresses all the layers into one!)

In reality, I suspect that MIDIdistPROC would probably be implemented in a very different way, by a super smart modular guru, by looking at the requirement from a totally different viewpoint. I would love to hear about it, by the way...


Non-Euclidean, Non-Linear sequencer     - MIDInonU

'one control'                                               - MIDIchronatixONE

'flavours' of randomness                            - MIDIrandomABC

my scale utility                                          - MIDINoteScalery


If you find my writing helpful, informative or entertaining, then please consider visiting this link:

Buy me a coffee (Encourage me to write more posts like this one!)

Synthesizerwriter's Store
 (New 'Modular thinking' designs now available!)

Wednesday, 9 December 2020

Seasonal and Almost On-Topic!

It is the time of year when some people celebrate by sending other people greetings or gifts, when some people contemplate what they bought in the annual 'Black Friday' ever-expanding sales, when some people suddenly play a very specific genre of music (on-topic!), when some people anticipate the end of one year and the beginning of another year (with various emotions), and when some people do none of these things. 2020 seems to have been a year of extremes, of change, of polarisation and increased uncertainty. I hope the coming year is different!

Apparently, putting people in lockdown has resulted in huge sales of musical instruments... To kind of reflect this, I have added a seasonal design to the Tee-shirts in my online store: Synthesizerwriter's Store. If you wanted a 'Christmas Tree'-themed tee-shirt with a modular synthesizer bias, then you might be in luck! And if you wanted something that says 'Synthesis' in other ways then there are alternative designs and items - there are even cushions! 

Genre-specific music...

Here's an example of some seasonal genre-specific music, less most of the repeats and lacking a huge production budget...

A link to the music...

One of the main instruments that I used is a Kontakt virtual instrument that has the dubious honour of being a submission to that has been 'lost in the system' - it IS there, but the only way to find it on the site is if you know the URL! (or you do a search...)


If you find my writing helpful, informative or entertaining, then please consider visiting this link:

Buy me a coffee (Encourage me to write more posts like this one!)

Synthesizerwriter's Store
 (New 'Xmas Modular' design now available!)

Friday, 4 December 2020

Non-Euclidean, Non-Linear Sequencer/Toolkit = NonU in MaxForLive for Ableton Live

I always try to explore the edges of things. When pre-11 Ableton Live wasn't into probability, I published lots of MaxForLive devices showing some ways of adding probability, and Ableton seem to have taken the hint in Live 11! One other thing that I have always been interested in is unusual timing - my Probably sequencer includes probabilistic micro-timing per note, which is kind of tricky to get your head around. But recently, I've been playing with the opposite of the many Euclidean sequencers that are available in MaxForLive circles. So here's a non-Euclidean, non-linear, 4-section step sequencer/toolkit for you to explore elastic time and polyrhythms. I say 'toolkit' very deliberately here, because this isn't an M4L device that you just drop into a track and make cool drum sounds or 'bleepy' sequencer riffs - rather, it requires experimentation, recording of the output, retiming, and more. Once again, it is giving you 'modular'-style functionality in a DAW - although I don't know of any direct hardware equivalent modules for Eurorack et al...


Above are the 'headlines' about MIDInonU, whilst below is the 'in use' shot where it is followed by a Drum Rack (note that it is very wide!):

From left to right, there is the clock part, where you can choose between a free-running 'Asynchronous' clock, and Ableton Live's own internal 'Synchronous' transport clock. Then there are two 4-step sequencers, then a 3-step, and finally a 5-step sequencer.

The 'Resync' button forces all the internal counters inside MIDInonU back to zero, and so resets all the timing. You will find that changing the Time rotary controls can cause a section to get 'out of sync' with the other sections - which can be avoided by only making changes when the 'Sync' clock is selected but Live's transport is not running (a red light in the 'Live Transport' indicator)... But stopping every time is not ideal, and tweaking timings live is good, so the 'Resync' button is there to get everything back on track. Unfortunately, there is a short delay whilst all those counters reset, though...


Euclidean sequencers distribute steps as evenly as possible over the looped time: 4 beats on the beat being a familiar example that often gets overlooked! MIDInonU takes this even time spacing as the starting point and then allows you to subvert it. So if you look at section A (there are 4 sections: A-D), then each of the 'Time' rotary controls is set half-way through its range, at a value of 16 (where the triangle points). You can change the time between 2 and 30, which allows you to move the steps backwards and forwards in time away from that default value of 16. 
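The 'as evenly as possible' part is the classic Euclidean (Bjorklund) distribution, which can be written in a few lines - this is the general algorithm, not MIDInonU's internals:

```python
def euclid(k, n):
    """Distribute k hits as evenly as possible over n slots - the
    Euclidean pattern that MIDInonU takes as its starting point.
    (The general algorithm, not MIDInonU's internals.)"""
    pattern, err = [], 0
    for _ in range(n):
        err += k
        if err >= n:              # time for the next evenly-spaced hit
            err -= n
            pattern.append(1)
        else:
            pattern.append(0)
    return pattern
```

euclid(4, 16) gives four evenly-spaced hits ('4 on the floor'), and euclid(3, 8) gives [0, 0, 1, 0, 0, 1, 0, 1] - a rotation of the familiar tresillo pattern.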

The loop time is on the right hand side (underneath the section character), and is 64 (and green) when the looped time is a 4/4 bar. When the value is NOT 64 it goes grey, because it is now longer or shorter in time than the 4/4 bar length. You can hear this by using section B to set up a 65-length sequence (as shown above), and you will hear the two sections drift out of time and back in again. (Read on to find out how to make this happen!)
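How long does that drift cycle take? The two loops only realign after the least common multiple of their lengths:

```python
from math import gcd

# A 64-micro-step loop against a 65-step one: they realign after the
# least common multiple of the two lengths.
a, b = 64, 65
realign = a * b // gcd(a, b)   # 4160 micro-steps
bars = realign // a            # 65 bars of the 4/4 (64-step) loop
```

So with 64 against 65 the pattern slowly phases through every offset and comes back into step after 65 bars.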

The screenshot above shows a simple starting position. It plays a single note (MIDI 36) from step 1 of section A, 1 note per bar. 

The blocks of control buttons to the right of the Velocity indicators are used to control the note velocities. 'Off' mutes that step output. 'Manual' allows you to use the Velocity sliders on the left hand side to control the velocity directly. 

You can only adjust the velocity sliders when the 'Manual' buttons are lit with light purple. For the rest of the time, the sliders show the algorithmically-generated velocity values...

There are two algorithmic controls, which both use the 'Count' rotary controls: 'Split' just outputs two velocity values - max and off, and it does this based on a counter that increments for each repeat and resets when it reaches the 'Count' number. So for a Count value of 1, then every step will have high velocity, whereas for a Count of 3, there will be one high velocity and two low velocity steps. The 'Count' button just provides a scaled 0-127 velocity value, derived from the counter. This means that a Count value of 4 will step through four descending values of velocity. Try adjusting the count values and the buttons to see what the effect is on the velocity sliders.
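The two algorithms might look something like this - a sketch of the described behaviour, where the exact 'max' and 'off' velocity values (127 and 0) are my assumption:

```python
def split_velocity(step, count):
    """'Split': maximum velocity once per `count` steps, off otherwise.
    (127 and 0 are assumed 'max' and 'off' values.)"""
    return 127 if step % count == 0 else 0

def count_velocity(step, count):
    """'Count': a descending 0-127 ramp that repeats every `count` steps."""
    phase = step % count
    return round(127 * (count - 1 - phase) / max(count - 1, 1))
```

So Split with a Count of 3 gives one accented step followed by two silent ones, and Count with a value of 4 steps through four descending velocities before wrapping.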

In section A in the screenshot above, the Time3 and Time4 rotary controls are set to 15 and 17, but the loop time is still showing 64 (16+16+15+17=64). This means that step 3 is slightly ahead of where a pure 4 beats per bar step would be, whilst step 4 is slightly later. You can move the steps ahead or behind in time, and you do not need to have the total add up to 64 - as long as you are okay with the loop length not matching the bar length in Live.  
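Step onsets follow directly from the Time values. Exactly how MIDInonU assigns each Time value to a step is my assumption here (each value is taken as the interval that follows its step):

```python
def step_onsets(times):
    """Onset position of each step, taking each Time value as the
    interval that follows its step (an assumption about MIDInonU's
    internals). Returns the onsets and the total loop length."""
    onsets, t = [], 0
    for dur in times:
        onsets.append(t)
        t += dur
    return onsets, t
```

With the default [16, 16, 16, 16] this gives onsets [0, 16, 32, 48] and a loop length of 64 - four on-the-beat steps in a 4/4 bar. Nudge one value and the total (and therefore the loop length) changes unless you balance it with another.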

One important thing is hidden at the lowest part of the window - the small boxes there allow MIDI note numbers to be assigned to each step, so the 36 in section A is a bass kick drum, for example. You can set a different note number for each step, as well as assign the same note number in different sections - so you could have a '36' kick in sections A, B, C, and D if you want. On the far right hand side of each section, in the 'All' column, are buttons labelled '=' that control all the buttons in that row. So the button to the right of the 'Off' row sets all the buttons to 'Off'. Similarly, the small box on the right of the note number boxes sets all of the note numbers to the same value (and the same for the 'Count' rotary controls). You might find the following useful at this point:

"Just because you can, it doesn't mean you have to!"

That should give you some of the flavour of MIDInonU. Yep. Quirky. Not your average step sequencer, and not the usual controls or the usual way of working. 

Sections C and D

Section C has only three steps, and so doesn't start out with 64 micro-steps. You can set it so that there are 64 steps by increasing the time between each step - you can think of the steps as being connected by elastic, if you increase the time between two steps then you will need to reduce it between some others to keep the overall length of 64. Unless you don't want 4/4 bars, of course. Each of the sections can have different lengths. 

Section D has five steps. This makes any nicely-equally-spaced timing difficult, which, in this context, is definitely good! It is probably a good idea to start out with section A first, then add section B, and get very used to how they work with or against each other before jumping into using sections C and D.


Sections B, C and D can each have their timing delayed (individually) from section A. This can be surprisingly musical! When you have discovered that playing a 64/64 loop against a 65/64 loop just sounds like a drum pattern gradually going out of time and then back again, then it can be very gratifying to discover that changing the delay can often immediately give a good-sounding output! 


The quick introduction above only scratches the surface of what you can do with MIDInonU. Note that you get two clock sources: an Asynchronous clock where the rate is not tied to Live's transport, and a Synchronous clock where the clock is the same rate as Live's transport. This means that you can explore timings that are not synced to Live if you wish, and if you are brave, you can change the timing source dynamically by mapping an LFO to the clock selector switch... Instead of using a drum instrument, try a synth instrument, for instance - and remember that you can assign a note number to any step in any section, or multiples of either. Try moving steps in pairs: increase the time on one, and reduce it on another - this keeps the loop length the same. Alternatively, deliberately have different loop lengths! Explore the 'Split' and 'Count' algorithms - the velocity accenting can change the feel of the sequence hugely. Basically, MIDInonU gives you the freedom to change a lot of things which conventional thinking normally stops you from changing. Enjoy! 

16 rotary controls...

A previous blog post featured my visual reminder accessory for my DJ Tech Tools MIDI Fighter Twister, which has 4 banks of 16 rotary controls, just like the 4+4+3+5 = 16 'Time' controls in MIDInonU. This is not a coincidence, and actually there are 16 'Count' controls as well, which could be mapped to a different bank in the Twister...

Getting MIDInonU

You can get MIDInonU here:

Here are the instructions for what to do with the .amxd file that you download from

(In Live 10, you can also just double-click on the .amxd file, but this puts the device in the same folder as all of the factory devices...)

Oh, yes, and sometimes last-minute fixes do get added, which is why sometimes a blog post is behind the version number of

Modular Equivalents

In terms of basic modular equivalents, implementing MIDInonU is interesting - there are many 3-, 4-, and 5-step sequencers that you could use, but the micro-timing is less common, and might be easier to do by processing the clocks for the step sequencers, probably using another sequencer. The velocity algorithms are going to require quite a sophisticated processing module because of those counters. One brute-force method might be to use more sequencers to do the counting, but this is probably not going to be straightforward. You could use a descending sawtooth LFO to replace the counters, of course, and then use a threshold gate for the 'Split' function.
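The sawtooth-plus-threshold idea is easy to verify on paper. This sketch (my own, with hypothetical names) samples a descending sawtooth once per step and applies a threshold gate, which splits the steps into accented and unaccented groups much like a counter comparison would:

```python
# Sketch of the LFO substitution above: a descending sawtooth sampled
# once per step stands in for a counter, and a threshold gate 'splits'
# the steps into accented (level above threshold) and unaccented ones.
def saw_split(num_steps, period, threshold=0.5):
    accents = []
    for step in range(num_steps):
        phase = (step % period) / period  # 0.0 up to just under 1.0
        level = 1.0 - phase               # descending sawtooth
        accents.append(level > threshold)
    return accents

print(saw_split(8, period=4))
# [True, True, False, False, True, True, False, False]
```

In modular terms, `period` is set by the LFO rate relative to the step clock, and `threshold` is the comparator's reference voltage - so two modules replace a whole chain of counting sequencers.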

One possible alternative would just be to use a sophisticated digital sequencer module, but then this is closer to a DAW and not quite in the spirit of DAWless 'analogue' modular... Tricky!

Overall, I reckon that MIDInonU would require an ME of about 20 minimum, which is quite high. I haven't tested this out myself, but I would be very interested to hear about it if anyone manages to get something like this functionality (or better!) working. 


Probably   A step sequencer that does a lot, and then some...

DJ Tech Tools MIDI Fighter Twister   One of my all time favourite MIDI Controllers!

Euclidean  Let's go to the source...


If you find my writing helpful, informative or entertaining, then please consider visiting this link:

Buy me a coffee (Encourage me to write more posts like this one!)

Synthesizerwriter's Store (New 'Modular thinking' designs now available!)