Sunday, 28 February 2021

The Synthesizerwriter M4L Tape (etc.) Suite - crafted for Christian Henson of Spitfire Audio

Every so often, Christian Henson, one of the two founders of Spitfire Audio, publishes a YouTube video where he talks about guitar pedals. In the past, he has talked about pedals like the Strymon Blue Sky, the Gamechanger Audio Plus pedal, and lots of tape emulation pedals, including a recent one inspired by 70s/80s videotape:

A Video Stomp Box...Really?

Christian's most recent video built on all of his back catalogue of using pedals to make music:

Making Cinematic Music with Guitar Pedals

Which got me thinking - why not put together a suite of my Max For Live devices for Ableton Live, specifically targeted at the distortions and modulations that are found in tape machines, digital echoes and other audio storage/processing/playback devices? So not just tape, but ANY analogue or digital processor. And hey, I could dedicate it to Christian Henson!

The Synthesizerwriter M4L Tape (etc.) Suite

The obvious starting point was my Ironic Distortion M4L device, which produces distortions and perturbations like aliasing, intermodulation, and quantisation noise, as well as mains power modulation - that can all be used to degrade audio in a variety of ways that can emulate analogue or digital processors. 

Ironic Distortion - blog post

Ironic Distortion -

There are plenty of Saturation devices in M4L and VST formats, so I leave that to your own preference, but there was one glaring hole in my plan: I was lacking something to do Wow and Flutter, essential for tape emulation, plus I didn't have anything that simulated a broken power supply driving a digital processor... So I created one, called Ferrous Modulation.

Ferrous Modulation

If a layout works, then re-use it. This rule works for guitar pedals, so I'm quite happy to re-use the legendarily crazy user interface from the Ironic Distortion M4L plug-in in Ferrous Modulation. So from left to right, you have Wow, Flutter, Mains Modulation, and Input sections. In each section, there is a slider/meter that sets the output level for that section, complete with a huge Mute button. 

So the Wow section has a mute button with 'W' on it, for 'Wow'. Above it is a control strip, with controls for the Frequency of the Wow and how much smoothing is applied to it, plus a display of the smoothed wow waveform. Then there's a stereo skew switch and rotary control, to emulate tape not being guided accurately or pulled inconsistently by the capstan and pinch wheel roller, and then two switches that take us above and beyond normal tape systems: a phase switch that lets you put the wow in phase or out of phase (extreme skewing and tape stretching), and a 'Sideband' switch which lets you choose single or double sideband outputs (tape machines will normally be single). Finally, there's a Gain control which sets the amount of wow that is applied, from subtle to overkill. Underneath the control strip are two real-time displays: the spectrum of the processed audio signal, and the sonogram (where time is horizontal, frequency is vertical, and spectral amplitude is colour). 

Next up is the Flutter section, this time with an 'F' on the mute button. I've categorised flutter as being more cyclic than the band-limited noise that I've used for wow - there isn't any really definitive classification that I could find (most modern approaches to measuring wow and flutter treat them as just two different aspects of the same frequency modulation). So the first rotary control is for the Frequency of the flutter waveform, then a Smooth control (which makes no sense for a sine wave, but there you go), then a waveform selector which provides 10 waveforms, plus smoothed variations, followed by a waveform display. Then there are the same Skew, Phase and Sideband controls as before, plus the Gain control. Oh, and of course, the slider/meter sets the amount of processed signal that goes to the output.
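To make the wow/flutter distinction concrete, here is a minimal numpy sketch - my own illustration, not the device's actual Max For Live DSP, and all function names here are invented. Wow is modelled as smoothed random noise and flutter as a cyclic LFO, and both frequency-modulate the audio by driving a fractional delay-line read position:

```python
import numpy as np

SR = 44100

def smoothed_noise(n, rate_hz, smooth=0.99, sr=SR, seed=1):
    """Wow source: random steps at rate_hz, smoothed by a one-pole filter."""
    rng = np.random.default_rng(seed)
    steps = rng.uniform(-1, 1, int(np.ceil(n * rate_hz / sr)) + 1)
    held = np.repeat(steps, int(sr / rate_hz))[:n]   # sample-and-hold
    out = np.empty(n)
    acc = 0.0
    for i, x in enumerate(held):
        acc = smooth * acc + (1.0 - smooth) * x      # one-pole smoothing
        out[i] = acc
    return out

def apply_wobble(audio, lfo, depth_ms=2.0, sr=SR):
    """Pitch wobble: the LFO modulates a fractional delay-line read position."""
    base = depth_ms * 1e-3 * sr                      # centre delay in samples
    delay = base * (1.0 + np.clip(lfo, -0.9, 0.9))   # keep the delay positive
    idx = np.clip(np.arange(len(audio)) - delay, 0, len(audio) - 1)
    lo = np.floor(idx).astype(int)
    hi = np.minimum(lo + 1, len(audio) - 1)
    frac = idx - lo
    return (1.0 - frac) * audio[lo] + frac * audio[hi]  # linear interpolation

t = np.arange(SR) / SR
tone = np.sin(2 * np.pi * 440 * t)
wow = smoothed_noise(SR, rate_hz=0.8)           # slow, random drift: wow
flutter = 0.3 * np.sin(2 * np.pi * 8 * t)       # faster, cyclic wobble: flutter
wobbled = apply_wobble(tone, wow + flutter)
```

The point of the sketch is the two modulation sources: band-limited noise for wow, a periodic waveform for flutter - exactly the split described above.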

The third section is the Mains Modulation section, which mis-labels the slider/meter as 'Level' instead of 'Mains' (which I will fix in the next update), but still has 'M' in the mute button. The Control strip this time has a selector switch for 50 or 60 Hz mains frequency, and the Single/Double Sideband toggle underneath. Then there's a Frequency rotary control, for those people whose mains power is not 50 or 60 Hz, then a 'Drive' control to control how much mains frequency modulation is applied to the audio, and a band-pass filter with a Q control to fine-tune the mains waveform (so you can over-drive it, and then tune high to just get harmonics of the mains). Underneath are the same spectrum and sonogram displays as the other sections.
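Again as a rough illustration rather than the device's real signal path, mains modulation can be sketched as ring modulation by an overdriven mains-frequency sine. The tanh stand-in for the 'Drive' control adds harmonics of the mains frequency, and plain ring modulation corresponds to the double-sideband setting (the band-pass filter and Q control are omitted here):

```python
import numpy as np

SR = 44100

def mains_modulate(audio, mains_hz=50.0, drive=4.0, sr=SR):
    """Ring-modulate audio with a tanh-overdriven mains-frequency sine.

    More drive pushes the sine towards a square-ish wave, adding odd
    harmonics of the mains frequency to the modulation.
    """
    t = np.arange(len(audio)) / sr
    hum = np.tanh(drive * np.sin(2 * np.pi * mains_hz * t))
    return audio * hum

t = np.arange(SR) / SR
tone = np.sin(2 * np.pi * 440 * t)
hummed = mains_modulate(tone, mains_hz=50.0)
# The output spectrum has sidebands at 440 +/- 50 Hz, 440 +/- 150 Hz, ...
```

Ring modulation produces sum and difference frequencies around every harmonic of the hum, which is why a 440 Hz tone comes out surrounded by 50 Hz sidebands.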

The final section is the 'Input' section, and this allows you to mix in the original unsullied audio signal - the dry signal. I didn't want to confuse users with a normal wet/dry control, because there are three wet signals, so I re-used this unusual scheme from the Ironic Distortion. Above the slider/meter are 15 storage boxes, where you can shift-click to save your favourite settings. I like to encourage people to develop their own presets, so I don't provide any at the moment. But I have made one device which has presets: Octave Remapper (blog) Octave Remapper -

Audio Chain

My recommended chain of devices in your track strip in Ableton Live is:

[Ferrous Modulation] -> [Saturator, etc.] -> [Ironic Distortion]

Remember that there are many saturation and distortion devices that can be used to introduce your own preferred amounts of harmonic distortion, compression, saturation, waveshaping, etc. 

Not a Pedal!

The Suite isn't a hardware pedal, and it isn't available via (now there's an idea!), but it is free and it is capable of some horrendously bad 'tape'-influenced sounds, plus lots of other 'processed' sounds, many of which are not from equipment as we know it, and some subtle tones as well. 

I'm sure Christian (and you) will have a great time with it!

Getting Ferrous Modulation

You can get Ferrous Modulation here:

Here are the instructions for what to do with the .amxd file that you download from

(In Live 10, you can also just double-click on the .amxd file, but this puts the device in the same folder as all of the factory devices...)

Oh, yes, and sometimes last-minute fixes do get added, which is why sometimes a blog post is behind the version number of

Modular Equivalents

In terms of basic modular equivalents, then implementing Ferrous Modulation just requires three sections of frequency shifting, with appropriate modulation waveforms: band-pass filtered noise, a VCO or LFO, and an LFO for the mains. 

Overall, I reckon that Ferrous Modulation would require an ME of about 7 minimum. You may be able to find a frequency shifter that has built-in modulation sources, in which case it might drop to 3 or 4 ME.


Ironic Distortion - blog post

Ironic Distortion -

Octave Remapper - blog post

Octave Remapper -

I would like to thank Christian Henson for his ongoing inspiration, enthusiasm, and for founding Spitfire Audio, who make wonderful samples!


If you find my writing helpful, informative or entertaining, then please consider visiting this link:

Buy me a coffee (Encourage me to write more posts like this one!)

Synthesizerwriter's Store (New 'Modular thinking' designs now available!)



Saturday, 27 February 2021

Not AI, and real AI - in music...

The people who post fake comments onto blogs annoy me. Except that many of them aren't even people - they are automated algorithms running on computers. An interesting example arrived in my Blog Comments Inbox recently, and it got me thinking about Artificial Intelligence (AI), and its more common predecessor: Artificial Stupidity.

The comment looked almost innocuous at first. It was reasonably well targeted to this blog, because it talked about electronics, and the name was actually a link to a website that claimed to reveal places where you could learn about electronics. The first giveaway was that URL instead of a name, but the web-site itself was a dead giveaway. At first glance it seemed to be a couple of A4 pages-worth of text, talking about how to learn electronics via resources on the InterWeb. 

Not AI

But then I read the first paragraph, and then the second. Each seemed like a generic introduction to the topic, but they didn't actually get to anything like any recommendations for sites, or URLs... And the second paragraph didn't follow on from the first. In fact, they read like two different authors writing on two slightly different topics. The next two paragraphs were worse in their lack of linking, and in the divergence of styles. As I continued reading, I realised that each paragraph was just text extracted from a search term something like 'Where can I study electronics online?' and then assembled together on a web page, with lots of associated adverts. Nowhere did it actually get to anything useful like a real URL linking to online resources, nor were there any summary tables of good resources; in fact, there wasn't any useful content anywhere on the page. The whole thing was designed to look good enough to fool someone into thinking it might be a useful thing for blog readers to know about, and to let the comment appear in the blog. Then the web-page would generate money for its owners whenever anyone clicked on the ads. In other words: a nasty parasite.

Well, I wasn't fooled, and I deleted the comment, and I would advise you to be cautious when you search for phrases like: 'How can I learn about electronics online?', because there are lots of leech sites like the one that I rejected. Alternatively, try these web-sites for proper learning relevant to this blog:

MIT circuits and electronics

MIT Practical Electronics

OU Intro to Electronics

Coursera EM Production

There, that's infinitely more genuine information than there was on that entire web-page. And there are lots more resources for you to find out there! Note that some of these are free, and some are not. The quality of some of the free ones (MIT, for example) is very high!

I reckon that the web-page that I rejected was probably not created by Artificial Intelligence (AI), it felt much more like a simple algorithm (Artificial Stupidity) with maybe some high level editing by a human being. So 'Not AI' rather than AI. But there are some interesting applications of real AI that are starting to appear that could affect how you make music in the future...or don't make music...

Real AI

The last couple of years have seen two big trends in electronic music: Cloud Sample Libraries, and AI Assistance. 

Subscription-based sample libraries like Splice, Noiiz, LoopCloud, and Roland Cloud provide access to huge numbers of ready-to-use samples, and mean that you don't need to fill a room with hardware synths, or even fill your computer's SSD or Hard Disk with VSTs. They aren't connected with AI, other than that they use simple background algorithms to learn what you like and try to sell you more of that. But I'm not a fan of them, because they typically require you to give them root-level access to your computer, which they justify by saying that they have to protect all of the valuable content which you can download. I'm not happy with something where you are giving them permission to do anything at all on your computer. After all, the news isn't full of repeated computer breaches where millions of User IDs, Passwords and Credit Card details are stolen by hackers, so there's no problem with giving deep unfettered access to your computer, is there? 

AI Assistance is more subtle, and I don't know of a generic word for it yet - there aren't enough similar instances of it for people to need a word for it, but this doesn't mean that there aren't lots of examples of it out there. It appears as drum machines, or melody generators, or chord suggestions, and it often provides easy access to generated patterns, melodies, chord progressions, etc. These are several steps up from the Randomisation generators that you got back in the 20th Century. 

AI Drum Machine - Algonaut Atlas

AI Beat Assistant - Rhythmic

VST Patterns - Sonic Charge Microtonic

One thing to be aware of is that a lot of the cheaper examples of 'AI Music' are actually just Machine Learning (ML), which has become very accessible recently to programmers, and allows a network of connected nodes (a 'neural network') to learn from pre-prepared training materials and then to output lots of variations of it - give me more 'like this'... ML is kind of 'entry-level' AI...

Unless you make movies, you might not have seen another application of AI that has been gradually increasing its advertising: AI-generated music for movies. In other words, if you have made a film or movie and you don't want to pay a human composer to write music for it, then you can get AI to do it for you... 

AI Music for Movies - Ecrett

Creative Assistant - Aiva

AI Music Generator - SoundRaw

Broad Application AI

To really appreciate where AI is going, you need to look beyond the 'specific applications' and the often very obvious ML experiments, and go for something more generic. One very good example is OpenAI. When you go to their web-site, they don't try and sell you a solution. Instead, they show you just a selection of overviews of things that you might be able to use their AI software to do. This isn't just one or two possible applications - you scroll on and on through lots of things they can do. It's a bit like going into a DIY store to buy a screwdriver, and discovering that they sell just a few other things as well... No, scrub that: imagine arriving at an out-of-town retail park where they have a car park surrounded on all sides by huge DIY, carpet, furniture, homeware, clothes, electricals, electronics, video gaming, computers and media stores... when you just wanted a screwdriver. OpenAI is an overwhelming place, and I think that it shows a glimpse not of the future, but of how lots of people will be making the future of what you do/buy/watch/play (etc.) next online.




AGI Innovations

Semantic Machines

I've tried to curate these so that the AI Robot-oriented companies aren't included, but if you want to see the state-of-the-art in robots, then Boston Dynamics are a good starting point...

And finally...

I'm always intrigued by companies whose advertising is based around a competitor. It doesn't seem to be a very 'Intelligent' thing to do. When I did a Google search for 'openai' then the top two paid slots, right at the top of the search, were from two other companies also selling AI 'solutions'. I have not included them in the list above. If I'm searching for 'Moog', then do I want the top result to be from another synthesizer manufacturer? (Oh, and when I did this search, then the top answers were all 'Moog'!)

(Oh, and why did I use those notes in the graphic near the start of this blog?)



Friday, 26 February 2021

Should I look at the Spectrum, or the Waveform? - [Single Cycle Part 4]

One of the 'useful things to remember' that I have always had in my mind is something that I learned reading through a pile of old 'Wireless World' magazines from a cupboard at the back of the Physics Lab at my school:

Spectra can be better diagnostics than waveforms 

(I'm using 'Spectra' here as the plural for 'Spectrum'. You can replace it with 'Spectrums' if you prefer... I won't tell anyone.)

It was from an article where they described how a project to recreate the sound of a church organ by reproducing the waveform failed, because the result sounded totally different. From the first part of this series, you may now be suspecting that they probably only matched up the 'top' 30 to 40 dB of the sound (the visible bit on a 'scope) - the '30 dB Rule', as I call it. When I've experimented with A/S (Analysis/Synthesis) - the iterative synthesis technique where you analyse the target sound/timbre, get a reasonably close synthesised version of it, then subtract the two to get a 'residual', and then synthesize that, and so on - I wondered if you could use this to keep removing layers of 40 dB or so of visibility, getting a better approximation each time...
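As a toy illustration of the A/S idea (my own sketch, not the article's method), one 'layer' of analysis/synthesis can be as simple as finding the strongest FFT bin, synthesising that partial, and subtracting it to leave the residual:

```python
import numpy as np

SR = 44100

def peel_strongest_partial(audio, sr=SR):
    """One step of toy analysis/synthesis: estimate the strongest sinusoidal
    partial from the FFT, synthesise it, and subtract it from the audio.
    Returns (residual, frequency_hz, amplitude)."""
    n = len(audio)
    spec = np.fft.rfft(audio)
    k = np.argmax(np.abs(spec[1:])) + 1            # strongest bin, skipping DC
    freq = k * sr / n
    amp = 2 * np.abs(spec[k]) / n                  # bin magnitude -> amplitude
    phase = np.angle(spec[k])
    t = np.arange(n) / sr
    partial = amp * np.cos(2 * np.pi * freq * t + phase)
    return audio - partial, freq, amp

# Two partials 40 dB apart: one pass recovers the loud one, leaving the
# quiet one as the 'residual' to be synthesised on the next iteration.
t = np.arange(SR) / SR
target = np.sin(2 * np.pi * 200 * t) + 0.01 * np.sin(2 * np.pi * 700 * t)
residual, freq, amp = peel_strongest_partial(target)
```

Repeating this on each residual is the 'removing layers of visibility' idea: each pass digs another few tens of dB down into the sound.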

Anyway, a reasonably good spectrum analyser is going to show you a lot about the spectrum of a sound - and the harmonics that it shows will give you detail well below 40 dB down. But the spectrum isn't perfect either, because it shows the magnitude of the harmonics in a sound but, generally, not the phase relationships. As was shown in part two of this series (Single Cycle 2), phase relates to the tiny timing difference between the same point on two waveforms - it could be zero crossings, or positive peaks: anywhere that is easy to compare. Although the horizontal axis is the 'time' axis, many people think of phase more in terms of the shape of the waveform 'sliding' horizontally - which kind of removes the link that is implicit in a 'time waveform'! But this 'sliding' approach does explain how it is possible to have phase differences that are not directly related to time - if you take a waveform and invert it, the two waveforms are then 'out of phase' even though neither of them has moved in time (although it might take a finite amount of time for the inversion to happen, of course!)

Where this gets interesting is when the waveform is not symmetric. If you invert a sawtooth, then what does 'phase' mean? The zero crossing position gives a reasonably neat alignment of the sawtooth waves, but using the positive peak is confusing, and it would be better to use the fast 'edge' between the positive and negative peaks - but is this then ignoring the time for that fast edge? So should the zero crossing in the middle of the fast edge be used? 

When the waveform is even less symmetric, then neither peaks nor zero crossings may be a viable choice for a reference point. In the example above, inverting the waveform means that the positive peaks are different, and there are two candidate zero crossings. When waveforms are this different, then phase starts to lose any meaning or value for me... Of course, you could use the fundamental of the two waveforms, in which case the inverted waveform would be seen as out-of-phase or inverted.

Phase is important in filter design (like in loudspeaker crossovers, for example), in noise cancellation (two anti-phase signals will cancel out to give silence, although getting two precisely out-of-phase signals is not very easy in a large volume in the real world), and in creating waveforms (in additive synthesis, for example). It turns out that the phase can be very important as a diagnostic tool: so a visually smooth filter cut-off might well be hiding a phase response that goes all over the place. 

Why is Phase Important?

The standard example to show why 'phase is important' is to take a 'square'-ish waveform made from a few odd harmonics, and to change the phase of one of them. Suddenly the square wave isn't square any longer... 

What has always fascinated me is the number of harmonics that are required to get waveforms that are close to the mathematically perfect, sharp, linear wave shapes that you see in text books. In the example above, 23 harmonics are used to make a 'wobbly' square wave - although, of course, because a square wave is made up of odd harmonics only, there are not 23 actual sine waves used to make it up, since just under half of them have zero amplitude. 

So when the phase of the third harmonic (three times the frequency of the fundamental) changes, then two things happen. Most text books will show the changed waveform, and will note that it still sounds like a square wave (the harmonics are the same...). But it is more unusual for there to be any mention of the change in the peak amplitude - the 'F3 out of phase' waveform on the right hand side is about 50% bigger, peak-to-peak, than the 'conventional' square wave approximation on the left hand side. It turns out that changes in the phase of harmonics can affect the shape and the peak-to-peak amplitude, and more: the phase of the harmonics can be used to optimise a waveshape for some types of processing, although this is normally used in applications like high power, high voltage electricity distribution rather than audio.
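This peak-to-peak effect is easy to verify numerically. The sketch below assumes the same construction as the text - odd harmonics up to the 23rd, at amplitudes 1/n - and flips the phase of the third harmonic, comparing the harmonic magnitudes and the peak-to-peak amplitudes:

```python
import numpy as np

t = np.linspace(0, 1, 44100, endpoint=False)   # one cycle, densely sampled

def square_approx(flip_third=False):
    """Sum odd harmonics 1..23 at amplitude 1/n; optionally invert the 3rd."""
    wave = np.zeros_like(t)
    for n in range(1, 24, 2):                  # the 12 non-zero odd harmonics
        phase = np.pi if (flip_third and n == 3) else 0.0
        wave += np.sin(2 * np.pi * n * t + phase) / n
    return wave

normal = square_approx()
flipped = square_approx(flip_third=True)

# Same harmonic magnitudes (so the same sound)...
same_spectrum = np.allclose(np.abs(np.fft.rfft(normal)),
                            np.abs(np.fft.rfft(flipped)), atol=1e-4)

# ...but a visibly bigger peak-to-peak amplitude.
pp_normal = normal.max() - normal.min()
pp_flipped = flipped.max() - flipped.min()
```

The magnitude spectra agree to within rounding error, while the flipped version's peak-to-peak amplitude comes out well over 20% larger - the same 'two things happen' observation as in the text.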

But this 'phase is important to the shape of the waveform' principle applies to any waveform, and this can give surprising results. Take a triangle wave: it has only odd harmonics, and they drop off rapidly with increasing frequency, so the triangle really is what it sounds like: a sine wave with a few harmonics on top. Now you are probably intrigued by this, and ready to explore it yourself, so there's a very useful online resource at: It is an additive synthesizer that lets you explore the amplitude (volume/size/value) of harmonics, as well as their phase! (This is called a Fourier Synthesizer, after the Fourier series, which is the mathematics behind adding different sine waves together to give waveforms...)

Here's a screenshot of a triangle wave produced using the Fourier Synthesizer from M J Ruiz:

I have edited the colours of the sliders to emphasize the harmonics which have zero amplitude (black), the harmonics which are 'in phase' with the fundamental (blue), and the harmonics which are 'out of phase' with the fundamental (orange). In-phase is shown as a value of 0 in the screenshot - meaning zero degrees of phase, where a complete cycle would be 360 degrees. Out-of-phase is shown as 180 degrees - half way round a cycle of 360 degrees.

 The screenshot above shows an unedited view of the same triangle wave, but with the phases changed so that all of the harmonics are in-phase. The result is more like a slightly altered sine wave than a triangle wave - but it sounds like a triangle wave...

Earlier I pointed out that the sound of a square wave with the third harmonic changed in phase was the same as a square wave with no phase change on the third harmonic. It turns out that your ears are not sensitive to phase relationships of this type, and so the square waves, and the triangle waves, all sound the same regardless of the phase relationships of the harmonics. BUT if you change the phase of a harmonic in real-time, then your ear WILL hear it. Static phase relationships between harmonics are not heard, but changes in phase are...

If you think about it, then this is not as surprising as it might at first sound. Your ears are very good at detecting changes of phase, because that's how they know what frequency they are hearing! But fixed differences in phase just change the shape, and your ears don't pick that up. One possible explanation for this is that your ears evolved as they did because the harmonic content of sounds was important for survival (maybe locating sources of food, or danger!), but the shape of the waveform was not. Discovering that the human hearing system is not optimised for sound synthesis may be a disappointment for some readers...

One other thing that you may have noticed in the Fourier Synthesizer screen-shots is the small amplitudes of the harmonics for the triangle wave. The sliders used to control the amplitudes are linear, whereas the way that harmonics are typically shown in a spectrum analyser is on a log scale: as dBs. 

The spectrum above shows this quite nicely (plus some other interesting things as well, most of which are because this isn't a 'real' triangle wave, but one that I constructed inside Audacity). The fundamental frequency of the waveform is 50 Hz and goes higher than the 0 dB reference level (maybe +5 dB), and the 3rd harmonic at 150 Hz is at about -24 dB, which translates to -19 dB when you add that +5 dB. But the Fourier Synthesizer showed this as 0.11 on the linear scale. It turns out that -19 dB is a voltage ratio of about 0.11. The 5th harmonic is at -33 dB, which is -28 dB when you add the +5 dB, and this is a voltage ratio of 0.04, which matches the Fourier Synthesizer value of 0.04. The 7th harmonic is -41 dB, which becomes -36 dB, which is 0.016, and the Fourier Synthesizer has 0.02. 
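All of these conversions follow from the standard amplitude relation: ratio = 10^(dB/20), and dB = 20·log10(ratio). A few lines of Python confirm the numbers in the text, using the triangle wave's theoretical 1/n² harmonic levels:

```python
import math

def db_to_ratio(db):
    """Voltage (amplitude) ratio for a level in dB."""
    return 10 ** (db / 20)

def ratio_to_db(ratio):
    """Level in dB for a voltage (amplitude) ratio."""
    return 20 * math.log10(ratio)

# A triangle wave's odd harmonics fall off as 1/n^2, so relative to the
# fundamental, the 3rd, 5th and 7th harmonics sit at:
for n in (3, 5, 7):
    amp = 1 / n ** 2
    print(f"harmonic {n}: amplitude {amp:.3f} = {ratio_to_db(amp):.1f} dB")
```

The 3rd harmonic comes out at 0.111 (about -19 dB), matching the 0.11 slider value and the measured level above; the 5th is 0.040 (-28 dB), and the 7th is 0.020 (close to the measured -36 dB figure).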

So the spectrum analyser harmonic levels are nice numbers, whereas the Fourier Synthesizer harmonics amplitudes are small numbers. I much prefer the spectrum analyser log scale, and here's a chart that shows how the spectrum analyser dBs down relates to the Fourier Synthesizer slider value:

Note that 0.5, which means that you set the Fourier Synthesizer slider to the half way point, corresponds to -6 dB. Anything below -40 dB is a slider value of below 0.01, which means moving that slider by 1 hundredth of its travel, which is a tiny distance. This table kind of reinforces why the 40 dB Rule mentioned in part 1 of this series exists - the slider values are just tiny, and this means that the harmonics are going to be tiny too. Probably too small to be seen on a screen!

So if I was going to be designing a Fourier Synthesizer, or an Additive Synthesizer, then I wouldn't use the slider values, because they are typically small and are going to be hard to set easily. Instead I would use the dB values, which are simple numbers and are going to be much easier to set correctly.

Conclusions (so far)

From this series so far, there's quite a lot of things that we now know about waveforms and spectrums:

- Anything below -40dB is going to be difficult to see on a screen

- A single cycle waveform might have unusual harmonics in it

- The waveform does not always tell you what harmonics are present

- The spectrum always tells you what harmonics are present

- Spectra can be better diagnostics than waveforms 

- Phase is important for waveshapes, but not what they sound like

- Your ears can only hear changes of phase

- Controlling the level of harmonics should use a log (dB) scale

In the next part, I'm going to talk about noise in single cycle waveforms, and why it doesn't do what you might expect.





Sunday, 31 January 2021

Scope 'Thru' Box for audio waveform monitoring...

Yes, I'm late with my 'Waveform or Spectrum' blog post: part 4 of the 'Single Cycle' series. In the meantime, to illustrate the preparation that goes into my posts, here's how I made a custom 'Thru' box for my low-cost 'build it yourself' digital oscilloscope to make it easier to look at audio waveforms on quarter inch jack plugs. (or 3.5mm mono jacks for Eurorack...)

'Build It Yourself...'

You may have seen them on Smile.Amazon... - there are quite a few low-cost, single channel, digital oscilloscopes with enough bandwidth for audio but not much more. I bought a kit for a DSO 138mini produced by JYE Tech ( that came with a clear plastic case, built it and it has served me well whilst I dither trying to decide which real scope to buy! Maybe I should do a blog post on how doing research aimed at finding the 'right' product can slow down GAS (Gear Acquisition Syndrome) to a crawl...

Anyway, the kit comes with a 'probe' cable which has a BNC connector on one end, and two small crocodile clips with rubber covers on the other. The BNC connector is THE standard input connector for 'scopes', and I don't think I've ever seen any other connector used for this... However, those croc clips aren't very good with audio connectors. It is just about possible to grab hold of a 3.5 mm stereo jack...

...but the grip on a quarter inch jack is precarious, and feels like it is going to spring off at any moment. 

What is needed is some sort of 'Thru' box - a way to connect to a quarter inch jack, whilst not interrupting the audio. So that's what I made...

Unlike previous 'mods' posts, this time I'm going to give a bit more detail about the construction. The circuit is simple: two jack sockets connected in parallel, and a BNC connected to the tip and sleeve. 

You may have noticed that I have used stereo jack sockets. My thought was to make a mono and stereo jack compatible 'Thru' box, but when I started figuring out the circuit, I realised that there was a problem with having a mono and a stereo jack plugged into the box if I used stereo sockets, so I reverted back to using the jack sockets as if they were mono.

The Circuit

The circuit is very simple: the sleeve and tip of the two sockets are connected together, and also to the BNC socket that connects to the oscilloscope. On the sockets, these are the two outer connections, which makes them easy to solder.

When a mono jack plug is inserted into a stereo jack socket, then the tip, ring and sleeve make connections to the metal connectors. But because the ring connections are not connected to anything else, then they can be ignored. 

I did contemplate going a bit 'British' with my design, by adding in a toggle switch to allow the selection of various different sockets to be connected to the BNC 'output' socket, but I eventually decided that the vast majority of sockets used in synthesizers and pedals (and my studio, which had a big influence) were quarter inch jacks.

For use with most modular synths, there isn't a problem - Eurorack, Moog, Roland (and others) all use mono jack plugs and sockets (1/4 inch or 3.5mm - so just replace the 1/4 inch jacks with 3.5mm!), whilst Buchla (and my own first modular synth) use Banana plugs (or 4mm, as I knew them), and a Thru connection panel for 4mm plugs and sockets is trivial. And yes, I know that there are other connection systems used in modular synths: Wiard use Bantam plugs, ARP uses either 3.5mm mono jacks (2600, Odyssey, Avatar, Little Brother, etc.) or matrix slide switch panels (2500, etc.), and EMS use patch pins (which are actually tiny jack-like connectors - you can put resistors between the tip and sleeve!) for their Ghielmetti matrix patch panels, and there are others. I'm sure there are ways to connect an oscilloscope to these other modulars, but it is out of 'scope' for this blog post. (Did you see that pun?)

If you want to make a 3.5mm Jack version for use with Eurorack, then most of what follows is probably going to be useful, only the size of the sockets is different!


Drilling holes for jack sockets is easy, but when they are going inside in a small cast metal box, then advance planning needs to be carried out to ensure that the jacks will actually fit, and so I pre-arranged the two jack sockets inside the (50x50x31mm) box and determined that they would fit, as well as the BNC socket. There are two sets of horizontal marks because I initially centred the sockets, and then realised that I also needed to leave room for the BNC socket, so I moved the two jack sockets lower to one side.

You can get a smaller box (52x38x31mm) than this one, and getting all three sockets into that would be more of a challenge - this would probably be worth looking into for a commercial design, because the smaller sized box is cheaper. I have always had a soft spot for these die-cast metal boxes, and it is interesting to see that some guitar pedals (at the 'boutique' end of the market, in many cases) make a point of deliberately using that bare metal look. 

Having decided that I could fit the two sockets inside the diecast box, I marked the positions for the holes and drilled small pilot holes first, then a larger hole, and finally the 10mm hole. I used a 10mm drill because it was the largest drill bit that I had immediately to hand, and so I needed to increase the size a little so that the 10.5mm diameter jack sockets would fit. For this, I mis-used my trusty de-burring tool - going slightly beyond just taking the rough edges off the drilled holes quickly opened the holes up enough for the sockets to pass through.

I then used the box and holes as a jig to check that I would be able to solder the connections. It is always a good idea to pre-assemble things before committing to the final assembly. 

 Here are the two sockets, using the drilled box as a jig, and you can see that it is easy to bend the legs (these are PCB-mount jack sockets, which I'm mis-using!). This version was my first attempt and so I bent all three pins of the stereo sockets. For the circuit as shown above, you only need to bend and solder the tip and sleeve.

When I marked up the holes inside the box before doing the drilling, I had noticed that space inside the box was quite cramped, and I worried that the other pins (the NC (normally closed) pins) would short against the case. This shouldn't be a problem, because the contacts open when you insert a jack, but this also seemed like a good point to mention heat-shrink insulation, which is the preferred way of insulating metal from touching other bits of metal, and of preventing fingers and other objects from touching metal. The thin plastic adhesive tape that is known as 'insulation tape' or 'electrical tape' is not really the ideal stuff for doing this, despite the name. The stickiness fades with time and the tape unravels. In more than 40 years working in electronics, I have never, ever seen insulation tape used to insulate anything!

In the above photo, you can see the un-shrunk heat shrink sleeving on the top right (just cut it to length), the heat gun at the left hand side, and the sockets in the middle, with the NC pins shrouded in heat shrink. A more conventional use for heat shrink would be inside a jack plug, where you would use it to cover the solder connection to the tip terminal, so that if the braid shielding should come loose with use, it can't touch any of the metal associated with the tip, where the signal is...

 I then used the box as a jig again, and soldered the pins of the jack sockets together. As I've already mentioned, you only need to solder the NO (Normally Open) tip and sleeve pins together. A perfectionist might put heat shrink sleeving on this pin...

Next, the BNC socket. This is a little more complex, because the required hole is not round. BNC connectors work by your fingers twisting the plug onto the socket, so there's a lot of twisting force on the socket, and if it isn't securely held in place, then it will gradually loosen with time. Here's the hole that is needed for the BNC socket that I got, plus how I made it:

Stage 3 shows the final hole. It has two flat parts that would not be present if you just drilled a 9mm hole. So, starting at stage 1, I drilled a pilot hole, then a 5mm hole, then an 8mm hole. I then used a rectangular cross-section needle file to remove the metal shown in yellow - the top and bottom of this hole would become the flat parts of the final hole. In stage 2, I used a half-round needle file to remove the metal in yellow - avoiding the two flat parts that were produced in stage 1.

This resulted in the final hole, as shown in stage 3 (and above!). When the BNC is mounted in this hole, the two flat parts of the hole prevent the BNC socket from twisting, and so ensure that it stays secure in the box.  

Most PCB-mount sockets (like the jack sockets) don't have flat areas on the holes they require, because the PCB holds them in one orientation. 

The strangely shaped pliers that I used to tighten up the jack sockets are called Wilkinson Quickgrips. They were originally produced for British Telecom engineers as an easy way to tighten nuts of a variety of sizes. These days they turn up at collectors' fairs sometimes - I got this pair at a fair in Rufford in Lancashire.

For the BNC socket, I soldered a wire to the centre terminal...

and another wire to the washer tag. I use Blu-Tack as a way of holding things down when I am soldering them. It helps prevent burnt fingers!

I then stripped the ends of the wires with a wire-stripper tool...twisted the wires...

...and 'tinned' the end of the wires by melting a little bit of solder onto them using a soldering iron. Tinning wires makes it easier to solder them to other bits of metal. In this case, I needed to connect these wires to the pair of jack sockets, so the tinning would make this easier.

Here's the inside of the box once all the soldering was done. 

I then tested the thru box with my Studiospares UCT2 cable testing box to make sure that I didn't have any short-circuits, bad solder joints or other problems. 

And here's the Thru Box connected to the BNC cable, ready for use.

And here the box is, being used to check a resonant single cycle waveform... In this case, I haven't used the 'thru' - I have just taken the output from a sample player and connected it to the Thru Box. If I wanted to hear it, then I would just use a jack cable to connect the other jack socket to an amplifier. 



If you find my writing helpful, informative or entertaining, then please consider visiting this link:

Buy me a coffee (Encourage me to write more posts like this one!)

Synthesizerwriter's Store
 (New 'Modular thinking' designs now available!)

Wednesday, 27 January 2021

Music Hackspace 'Max meetup USA #1' event report... (Modular CV Interfacing)

A week after the Europe Max meetup, the Music Hackspace had its first 'USA-timing-friendly' online Max meetup. This time there were three short presentations, but the 'CV/Modular' breakout room afterwards was particularly interesting. The first question that was posed was simple: how to interface Max to a modular synth to make drum sounds (I think - please let me know if my recollection is faulty), but the answers were not so short/simple, and so I thought that it would be good to capture them here as a blog post. This is just part of the discussion that happened, so you should consider joining in next time...

Max Interfacing

Cycling '74's Max software can output audio, video, and MIDI, but outputting Control Voltages (CVs) and Gates/Triggers for controlling modular synths is less immediately obvious. There are some resources available on the Cycling '74 web-site, but they tend to only mention 'dc-coupled audio interfaces' or cover a specific device.

DC-coupled audio interfaces are special cases of the ordinary audio interfaces that are used to get audio in and out of a DAW.  There are also specialist Modular MIDI-to-CV converters which are audio interfaces that are specifically designed to be dc-coupled and output CVs. Let's look at these two variants first:

1. Audio interfaces

Audio Interfaces are perhaps the obvious starting point, given that many people have them. They are a popular purchase for anyone who wants to make music using a computer - and if we wind time back by a couple (or triple) of decades, then the solution then was a 'sound card': a plug-in card (ISA-bus was one popular type) that provided better sound generation capabilities than the basic computer itself, as well as more 'music-making friendly' sockets than 3.5mm mono or stereo jacks. Sound input in those days was something that was very unusual in an off-the-shelf computer, and a sound card provided audio input capability - but the quality was not quite up to CD standards unless you spent a lot of money. 

Nowadays technology has moved on a lot, and 'as good as or better than CD quality' audio interfaces are now typically external boxes that connect via USB, although curiously, the computer's own sockets remain stubbornly the same 3.5mm mono or stereo jacks, rather than quarter inch jacks, RCA/Phono sockets or balanced XLRs. I have always thought that if a computer was really designed for music use then it would not have 3.5mm jack sockets for audio... There again, there's money to be made by selling audio interfaces, and there are lots of adverts reminding purchasers of DAWs, audio editors and other music software that one of the first follow-up purchases should be an audio interface.

An audio interface is just a converter from the digital numbers used to represent audio signals inside the computer, to the analogue audio signals that you find on quarter inch jacks or phono connectors when you hook a guitar or a synth to a pedal and then to an amplifier (or these days, more probably a software emulation of a vintage, distorting amplifier connected to an emulation of a vintage, slightly mis-used speaker cabinet, connected to a very clean amplifier). In other words, an audio interface contains an Analogue-to-Digital converter to input audio into the computer, and a Digital-to-Analogue converter to output audio from the computer. 

Audio interfaces normally get selected based on the number of inputs and outputs, the quality of the audio that they give, the highest sample rate (192 kHz, for example), the number of bits used in the Digital-to-Analogue Converters (DACs) and Analogue-to-Digital Converters (ADCs) - 16 bits is meh (CDs), 24 bits is high - and whether they can run VST plug-ins (which also equates to expense). You might have noticed that 'Outputting control voltages for modular synths' wasn't in that specification list...
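That 'number of bits' line in a spec translates directly into dynamic range. As a rough rule of thumb (for an ideal converter - real ones do slightly worse), an N-bit converter gives about 6.02N + 1.76 dB. Here's a quick Python sketch of the arithmetic:

```python
# Rule-of-thumb dynamic range of an ideal N-bit converter: 6.02*N + 1.76 dB.
def dynamic_range_db(bits: int) -> float:
    return 6.02 * bits + 1.76

for bits in (16, 24):
    print(f"{bits}-bit: ~{dynamic_range_db(bits):.1f} dB")
# 16-bit (CD) comes out at ~98 dB, 24-bit at ~146 dB
```

Which is why 24-bit converters are the 'high' end: the extra 8 bits buy you roughly 48 dB more headroom between the quietest and loudest representable signals.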

To output Control Voltages, you need an audio interface that has an unusual property in most audio systems. Audio signals are often quoted as being from 20Hz to about 20 kHz in frequency, from a low-pitched rumble to high pitched (kids can hear it but their parents can't) shrieks. The diagram above is impossibly perfect, but shows what an idealised frequency response might look like. As you go below 20Hz you feel wobbles rather than hear the audio, and eventually, at zero frequency, you get what is called direct current (DC) because it doesn't change (instead of current that changes all the time, which is called alternating current (AC)), which is where the wobbles stop and you just get a voltage (and a current flowing - there are various formulas that connect all of these things...). If you want a long explanation, just ask an electronics engineer why DC isn't called 'Direct Voltage'...

The problem with frequencies below 20Hz is that they are just wobbles, and you feel them rather than hear them. And getting a speaker to wobble can do nasty things to it - overheating, tearing itself apart, ripping the cone, warping in shape, etc. One way of experiencing DC is that thump you get when you power up amplifiers with the volume up high instead of at zero. So, to protect speakers (and people from being wobbled excessively), many audio systems don't go 'down to DC' (zero Hz) - they stop at about 20Hz. 

Unfortunately, frequencies below 20Hz, and especially zero Hz (which is stopped = a fixed voltage!), are exactly what is needed for CVs. Control voltages like Pitch or Modulation tend to change quite slowly (60 bpm = 1 Hz (!), which is one complete wobble per second), and so will not be output by an audio interface that has no response below 20 Hz.
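You can see this happening with a few lines of Python (a sketch, not Max - the sample rate and the 20Hz cutoff are just illustrative values): a one-pole high-pass filter, which is roughly what an AC-coupled output does, passes a steady 1V 'CV' for a brief moment and then lets it decay away to nothing:

```python
import math

def ac_couple(samples, sample_rate=1000.0, cutoff_hz=20.0):
    """One-pole high-pass filter: a crude model of an AC-coupled output."""
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    a = rc / (rc + dt)
    out, prev_in, prev_out = [], 0.0, 0.0
    for x in samples:
        y = a * (prev_out + x - prev_in)
        out.append(y)
        prev_in, prev_out = x, y
    return out

cv = [1.0] * 1000             # a steady 1V control voltage, held for 1 second
result = ac_couple(cv)
print(result[0], result[-1])  # large at first, then decays towards 0V
```

The steady voltage never makes it through: only the initial 'edge' gets past the filter, which is exactly why a non-DC-coupled interface is useless for pitch CVs.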

So what you need for CVs is an audio interface that has a frequency response that goes all the way down to DC (Zero Hz!), which is often called DC-coupled (because electronics engineers have jargon just like any other profession). The dashed line in the frequency response diagram above shows a response that goes all the way down to DC, but the log scale makes it difficult to show... Here's an example list from 2019 that shows some 'dc-coupled' possibilities then (you will need to research current devices...):

If you look at the text in the Ableton Live 'CV Tools' device free download, then it says that you need to use a dc-coupled audio interface, but doesn't go into any more detail:

(Technically, it should be 'DC-coupled', but lower case is often used instead...)

If you want to check an audio interface, then looking for the phrase '20Hz-20kHz' in the specification is usually a good indicator that an audio interface is NOT DC-coupled. That low number: '20Hz', is the clue. My Focusrite Scarlett has exactly this phrase in its specification, and yep, it is not DC-coupled, and so isn't good for outputting control voltages. There again, the specs make it very clear, and I bought an 'Audio Interface', not a 'Control Voltage Interface'.  

Sometimes the specifications can be difficult to interpret. Here are the specifications for the Native Instruments Komplete AUDIO 6 audio interface:

As you can see, the phrase 'DC coupled' is there! But only in the output (and also notice that it doesn't say '0Hz-20kHz'! That would be far too obvious...). There again, the input doesn't mention the all-important phrase at all. There's a rule here:

If it is not in the spec, then there's probably a good reason why not...

This means that the output is DC-coupled, so you can use this audio interface to send CVs to your modular synth (or any synth with CV inputs), but the input is NOT DC-coupled, which means that you can't use this audio interface to receive incoming CVs from a modular synth, a CV controller, or an analogue synth that outputs CVs. Note also that the inputs can supply 48V phantom power, which is not something you want to connect to most modular systems.

The specification has one additional, easily-overlooked 'feature'... There is an asterisk (*) after the 'for modular control' phrase. If you go to the end of the specification, it says: '*Limited to +/-2V range due to the AUDIO 6 being USB powered.' Aha! So the range of voltages that can be output is limited - which gives us another rule:

Always check for asterisks - they often try to hide a catch... 

Something to be very aware of when looking for a DC-coupled audio interface is the actual output voltage range - and be careful to never assume anything. Even if an audio interface is DC-coupled, it doesn't necessarily mean that the range of voltages that it can output are appropriate. Let's look at a popular modular standard and see if that tiny little asterisk has any significance...

I'm going to concentrate on Eurorack modulars here, but there are other standards... Eurorack audio signals can be a maximum of 10V peak-to-peak, which is -5V to +5V (+/- is known as bipolar). Eurorack control voltages can be half that size (-2.5V to +2.5V), but can also be what is called 'unipolar' and range from 0V to 8V. Control voltages that are used for pitch usually follow a 1V/Octave rule, although there are other ways of representing pitch, particularly on modular synths from the 'Sound Card Era' and even before that! Gate and trigger signals are usually 0V for Off, and 5V for On. All of these numbers mean that you may need to amplify the output of a DC-coupled audio interface in order to get the right voltage levels... so that Utility module may be useful after all!
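To see what the 1V/Octave rule and that asterisked +/-2V limit mean in practice, here's a small Python sketch. (The choice of 0V = MIDI note 60 is just a common convention I've assumed here - your own modular's calibration may put 0V somewhere else.)

```python
def midi_to_volts(note: int, zero_volt_note: int = 60) -> float:
    """1V/Octave: each semitone is 1/12 V. The 0V reference note is a
    convention (MIDI note 60 assumed here - check your own calibration)."""
    return (note - zero_volt_note) / 12.0

def in_range(volts: float, lo: float = -2.0, hi: float = 2.0) -> bool:
    """Will this CV survive an interface limited to +/-2V
    (like the USB-powered AUDIO 6 mentioned above)?"""
    return lo <= volts <= hi

print(midi_to_volts(72), in_range(midi_to_volts(72)))  # 1.0 True
print(midi_to_volts(96), in_range(midi_to_volts(96)))  # 3.0 False
```

So an octave above the reference note is fine, but three octaves up needs 3V - outside the +/-2V window, and silently clipped or scaled by the interface. That's the kind of 'mystery' pitch problem the asterisk rule is warning you about.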

In the case of the Native Instruments Komplete AUDIO 6 (Why is it shouting 'Audio'?) then the control voltages are slightly smaller than the Eurorack range in bipolar mode, but way too small for unipolar mode. This could limit the range of, for example, a pitch CV, which might not be what you want. Worse, if you aren't aware of the limits of the output voltage, then you might spend time trouble-shooting a problem that seems to be in the modular when it is actually in the audio interface. 

Using audio signals to carry numbers is not new. Before broadband, modems used to turn the numbers in computer communications into frequencies so that they could be sent over telephone connections - and telephones are not DC-coupled! (300Hz-3.4kHz for UK telephones). Data was (and still is) sent over radio by jumping between frequencies; early methods used pairs of frequencies, whilst modern systems use more complex 'constellations' of frequency, amplitude and phase.

One other important thing to remember is that price and external appearance aren't going to give you a reliable indication of an audio interface being DC-coupled. Check those specifications...  

In summary, then, audio interfaces come in two flavours: DC-coupled (which CAN be used to output CVs - but check the range), and not-DC-coupled (which can't be used to output CVs). It is a good idea to stick a label onto your audio interface to indicate if it is DC-coupled (input, output or both, plus the range of voltages) or if it is not.

2. Modular MIDI-to-CV converters

A modular MIDI-to-CV interface is a purpose-designed converter that plugs into a USB socket and outputs Control Voltages (and sometimes inputs CVs and converts them to MIDI, although technically that would be a CV-to-MIDI converter!). So they go from DC up to the low wobbles (and maybe up above that, where you can actually hear the frequency), and there is no need to amplify the output - the CVs are modular-compatible by default. Take care: a MIDI-to-CV interface module for one modular standard might not be suitable for another, plus the power supply might be different, and the mechanics will be different... As before, in this post I will only cover Eurorack...

One often-mentioned modular MIDI-to-CV interface is the Expert Sleepers ES-8, which has 4 analogue inputs and 8 analogue outputs on the front panel, plus various expansion options for additional I/O.

- Expert Sleepers ES-8
- ES-8 Manual

There are other devices, of course!

- Mutable Instruments' Yarns
- Doepfer A-190-3 USB to CV/Gate

and plenty more... 

Note that some MIDI-to-CV modules have 5-pin DIN inputs rather than USB sockets, so make sure to read the specs, otherwise you may need a USB MIDI Interface (most audio interfaces also provide MIDI I/O...). 

The Arturia KeyStep 

And then someone suggested the Arturia KeyStep. It has Pitch CV, Mod CV and Gate outputs, as well as MIDI In and Out.

The manual says that incoming MIDI notes are used as transpositions for the sequence, and are also converted to Pitch CV. So I looked for the MIDI Implementation Chart to see more information. Except I couldn't find one. Not in the manual. Not on the web-site. Not from a Google search. So I compiled one by testing exactly what the KeyStep actually does. You can download it from here...

Here's a summary of what I discovered:

- The KeyStep outputs Pitch CV based on incoming MIDI notes, plus whatever note is played on the KeyStep's keyboard, plus any Pitch Bend from the KeyStep's Pitchbend strip controller. Incoming MIDI Pitch bend messages seemed to be ignored (but this could be my error - please let me know if there is a way to make it happen...). Even so, being able to convert MIDI notes to Pitch CV was very useful - and lots of people have a KeyStep. Being able to add Pitch Bend to incoming MIDI notes can add a lot to a plain 8 or 16 step sequence...

- The KeyStep outputs Mod CV based on the Mod source that has been selected in the MIDI Control Centre software from Arturia that is used to control the setup of the KeyStep (plus save sequences, etc...). Available sources are the Mod Wheel, Velocity and Aftertouch. So if the Mod Wheel is chosen, then incoming MIDI Modulation (Wheel) Controller messages (CC1), plus the KeyStep's Mod Strip, are added together and output as the Mod CV. If Velocity is chosen, then the Velocity of incoming MIDI notes is added to the velocity of notes played on the KeyStep's mini-keyboard and output as the Mod CV. And finally, if Aftertouch is chosen as the source, then incoming MIDI Aftertouch message values are added to the Aftertouch values from the KeyStep's mini-keyboard and output as the Mod CV (cool for a modular, where people don't normally expect things to respond to Aftertouch). Lots of scope here for double keyboard possibilities, particularly adding Aftertouch to fast lead lines on a keyboard - where you don't have enough time to press on the keys to activate the Aftertouch.

- The KeyStep outputs Gates only when its mini-keyboard or internal sequencer/arpeggiator outputs a note. I couldn't get it to respond to incoming MIDI notes. Now there is lots of scope for experimental error here - the MIDI Control Centre provides lots of control over how the KeyStep behaves (like choosing the source for the Mod CV - if you choose Velocity or Aftertouch, then it might appear that incoming MIDI Mod wheel messages are ignored...), and I might have missed a vital setting. So I'm happy for all of this to be a draft, and if anyone has any additional information about how the KeyStep responds to incoming MIDI messages, then please let me know and I can update the MIDI Implementation Chart (and this post).

As a workaround for the lack of a Gate output, you could use Mod Wheel, Velocity or Aftertouch Mod CVs through a Utility module and create Gates using a threshold function. You could even use the value as a CV as well. You could also buy a MIDI-to-Gate/Trigger module! (GAS can be very bad with modular synths...)
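That 'threshold function' workaround is easy to sketch in code. Here's a Python illustration (the threshold voltages are arbitrary examples, not from any particular Utility module) which uses two thresholds - hysteresis - so that a noisy CV hovering near the threshold doesn't make the gate chatter on and off:

```python
def cv_to_gates(cv_samples, on_threshold=2.5, off_threshold=2.0):
    """Turn a slowly-varying CV into a gate signal using two thresholds
    (hysteresis): the gate only opens above on_threshold, and only closes
    below off_threshold, so noise between the two can't cause chatter."""
    gates, gate = [], False
    for v in cv_samples:
        if not gate and v >= on_threshold:
            gate = True
        elif gate and v < off_threshold:
            gate = False
        gates.append(5.0 if gate else 0.0)  # Eurorack-style 0V/5V gate
    return gates

cv = [0.0, 1.0, 2.4, 2.6, 2.2, 1.9, 3.0, 0.5]
print(cv_to_gates(cv))
# [0.0, 0.0, 0.0, 5.0, 5.0, 0.0, 5.0, 0.0]
```

Note that 2.4V doesn't open the gate, and the dip to 2.2V doesn't close it - that's the hysteresis doing its job.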

The KeyStep is thus a partial solution to converting MIDI to CV so that Max can be used to control a modular synth, and it opens up some creative control possibilities that aren't normally very easy to do.

This is probably a good time to think about how closely related audio signals, control voltages, and gates/triggers are in a modular synth. An audio signal can be used as a fast LFO, whilst a fast LFO can be an audio signal. A pulse LFO can be used as a continuous series of gates or triggers, and so on. A MIDI-to-CV module emphasises the interchangeability by making numbers in Max appear as voltages in the modular synth - so numbers that go up and down from a cycle~ object could be an LFO or an audio signal, whilst a number that stays the same for most of the time, but occasionally jumps up to a higher value, and then jumps back to the original value again, could be used as a gate.

What a voltage does is defined largely inside Max by how the numbers change, rather than by the modular synth - the modular bit is just the way of turning those numbers into sound. This is why modulars are more interesting than conventional fixed architecture synthesizers...

But a lot of the fun of electronic music is DIY, and so here's some information on other ways that you can interface Max to a modular synth or an analogue synth:

3. Other possibilities...

In electronics, there are often alternatives. If you have any electronic design experience, then a frequency-to-voltage converter could be an interesting way to use an ordinary (not DC-coupled) audio interface and Max's audio generation capability to produce control voltages.

Frequency-to-voltage converters often use a pulse generator plus some sort of averaging circuit (a low pass filter, for example) - so for the averaging circuit you could have a leaky 'bucket' (which could be a capacitor with a resistor that causes the voltage to 'leak' away), and a pulse generator circuit could be just a way to fill the 'bucket' with cups of water. The faster you put cups of water (pulses) into the bucket, the higher the voltage level, and so the frequency determines the output voltage. 
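The cups-and-bucket description can be simulated in a few lines of Python. This is only a sketch to get a feel for the behaviour - the 'leak' and 'cup' constants are illustrative numbers, not a real circuit design - but it shows the key property: double the pulse frequency and the average 'bucket' voltage roughly doubles:

```python
def f_to_v(pulse_hz, duration_s=1.0, sample_rate=10000, leak=0.999, cup=0.05):
    """Leaky-bucket frequency-to-voltage sketch: each pulse tips a 'cup'
    of charge into the 'bucket' (capacitor), and the bucket 'leaks' a
    little every sample (the resistor). Constants are illustrative only."""
    v, total = 0.0, 0.0
    period = sample_rate / pulse_hz   # samples between pulses
    next_pulse = 0.0
    for n in range(int(duration_s * sample_rate)):
        v *= leak                     # the resistor bleeds charge away
        if n >= next_pulse:           # a pulse arrives: pour in a cup
            v += cup
            next_pulse += period
        total += v
    return total / (duration_s * sample_rate)  # average output voltage

print(f_to_v(100), f_to_v(200))  # the 200Hz average is roughly double
```

The faster the pulses arrive, the less time the bucket has to leak between refills, so the average level settles higher - frequency in, voltage out.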

There are chips designed to do frequency-to-voltage conversion, and all you would need to add is an input buffer and an output scaling amplifier (probably just an op-amp).

Here's some information about a few methods of converting F-to-V, mostly using dedicated chip-based Frequency-to-Voltage converters:

If you want to have something curious to think about, consider this: a Frequency-to-Voltage converter is just a reverse VCO. (A VCO turns a voltage into frequency...)
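In code, the 'reverse VCO' idea is just an exponential and its logarithm. Here's a Python sketch of the pair (the 0V = middle C base frequency is an arbitrary choice of mine, not any particular standard):

```python
import math

def vco(volts: float, base_hz: float = 261.63) -> float:
    """1V/Octave VCO: 0V gives the base frequency (middle C here,
    an arbitrary choice), +1V doubles it, -1V halves it."""
    return base_hz * 2.0 ** volts

def f_to_v_ideal(freq_hz: float, base_hz: float = 261.63) -> float:
    """The exact inverse: recover the control voltage from a frequency."""
    return math.log2(freq_hz / base_hz)

print(vco(1.0))                 # one octave up from middle C: ~523.26 Hz
print(f_to_v_ideal(vco(2.5)))   # round trip recovers 2.5V
```

A real F-to-V chip approximates `f_to_v_ideal` with analogue parts (and usually a linear, not logarithmic, response), but the round trip is the point: voltage to frequency and back again.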

Because it comes from a legendary analogue circuit designer (Bob Pease), I'm inclined to forgive the blatant and incessant advertising on the following web-page (if I ever needed a reason to use an ad-blocker...):

You could use a Utility or Trigger module to threshold the output voltage and produce Gate or Trigger voltages, where low frequencies produce a low voltage output (Off) and higher frequencies produce a higher voltage output (On). Again, the input will need to be buffered and scaled, and maybe offset. The core part of this, an F-to-V device converting 0-10kHz to 0-10V, is available as a £6 circuit board. Given the price of many modules, this is a (unipolar mode) bargain! Steampunk experimentation awaits for brave synthesists!

The problem with using a chip or circuit board as a 'black box' is that you don't get any real feel for what is happening, so here's a circuit that does what the cups and bucket does:

...and here's how you could use the same circuit in a modular synth to make a simple Frequency-to-Voltage converter - you just solder a few components onto two 3.5mm sockets (or you could just cut a 3.5mm cable into two...).

For experimentation purposes then 'rats-nest' style is fine by me, although you could use a prototyping bread-board if you wish. I'm always intrigued by modular owners who have modules for everything, but who never actually do any DIY circuits. A modular is a DIY synthesizer, so why not build your own circuits to process audio or CVs...  

Using frequency-to-voltage converters may have other side effects: the latency might not be very low, but this might contribute to the appeal. For example, Buchla-style Low-Pass Gates have interesting time response characteristics which create a lot of their special sound. Modulars are very good at exploring these types of circuits - you could almost think of them as laboratory toolkits for audio electronics...

Frequency-to-voltage conversion is an old technique, hence the steampunk reference above. One of the first circuits that I ever had published was a variant of the diode pump, used to indicate if a clock was running or not... Frequency-to-voltage converters turn up in all sorts of equipment: radios, tachometers, speed controllers, and more...

Open, not closed...

Hopefully this post will help Max (and MaxForLive, PureData, and other similar programming environments...) users to control some of the real world beyond their screen. 

Interfacing Max to other devices, sources of numbers, other controllers, synthesizers, modulars and more opens up huge possibilities. One of the dangers of creating music on a screen is that the screen can become the only focus of the environment, and there is a strong temptation to put everything on the screen because of the immediacy, ease of editing, convenience... I believe that the most interesting challenges and opportunities in making electronic music come from the interfaces between the real and the virtual, the human being and the synthesis equipment, the possible and the 'to be solved', the screen and beyond, because that is where magic happens. 


If you find my writing helpful, informative or entertaining, then please consider visiting this link:

Buy me a coffee (Encourage me to write more posts like this one!)

Synthesizerwriter's Store
 (New 'Modular thinking' designs now available!)