Tuesday, 30 March 2021

MIDI Pitch Bend - A Tiny Inconsistency

Sometimes 'The Bears' really are lurking, ready to get you if you step on the cracks between the paving stones... 

One of the things that has been beaten into me, over many years of working with hardware, firmware and software, is a rule that has many forms, but which boils down to something like:

"Question everything. Measure everything at least twice. Always ask: 'Why?'"

It is an expanded version of the 'Never Assume Anything' rule. It has served me well. But you must never let your guard down...

Ever...

Photo by Synthesizerwriter

The MIDI Pitch Bend Inconsistency

As with all unexpected things, it crept up on me silently, unannounced, from a direction I wasn't expecting. When you have spent a long time with something, then you think you know about it. Since I got my first copy of the original MIDI Specification back in the mid 1990s, I have read it carefully and repeatedly. I spotted some of the things that were put in there by knowledgeable hardware people who really knew their stuff, like what a MIDI Clock message actually looks like 'on the wire' of a 5-pin DIN cable, and why it was defined like that. And since you are now intrigued, I'm going to leave that until another post...

So the original MIDI Specification 1.0 (1996 edition) has a section for Channel Voice MIDI Messages, starting with Note Off (0x8n in modern formatting, but shown in mid-90s style as 8nH, where 'H' means Hexadecimal and 'n' is the MIDI channel (0x0-0xF or 0H-FH for channels 1-16)), then Note On (0x9n, 9nH), through to Pitch Bend (0xEn, EnH). After that you have the System Common MIDI Messages, which all start with 0xF. So all of the 'highest bit set' status values are specified, from 0x8 to 0xF.

The Pitch Bend message is the last of the Channel Voice messages to be specified, and the specification contains just two paragraphs - the second of which is only two sentences of clarification about sensitivity. Here's that first paragraph:

This function is a special purpose pitch change controller, and messages are always sent with 14 bit resolution (2 bytes). In contrast to other MIDI functions, which may send either the LSB or MSB, the Pitch Bender message is always transmitted with both data bytes. This takes into account human hearing which is particularly sensitive to pitch changes. The Pitch Bend Change message consists of 3 bytes when the leading status byte is also transmitted. The maximum negative swing is achieved with data byte values of 00, 00. The center (no effect) position is achieved with data byte values of 00, 64 (00H, 40H). The maximum positive swing is achieved with data byte values of 127, 127 (7FH, 7FH).

There are quite a few important take-aways in this paragraph. Firstly: Pitch Bend messages are ALWAYS 14-bit resolution. Now I've done quite a lot of Max and MaxForLive devices, and Max is a very useful general purpose tool for exploring MIDI... In Max, there are two basic objects that are used specifically for receiving Pitch Bend messages (there are other, more generic MIDI 'parsing' objects...): 'bendin' and 'xbendin'. 'bendin' is the 'basic' object, and it returns 7-bit values for pitch bend of 0-127 (a single MIDI data byte), whilst 'xbendin' is the 'extra precision' object, and it returns 14-bit values from 0-16,383 (two MIDI data bytes).

7-bit and 14-bit Pitch Bend objects in Max

The next important thing here is that the 'bendin' object is throwing away the second byte, the Least Significant Byte (LSB), so the values that you get are just the raw 7-bit values (0-127) that are in the Most Significant Byte (MSB). As I'm sure you know already, individual MIDI 'bytes' only have 7 bits available for data, which is why the value doesn't have the range of 0-255. You need multiple MIDI 'bytes' in a message to get extra resolution. In the 14-bit-oriented way that MIDI represents higher resolution numbers, for a value represented with two 'bytes', the MSB is the top 7 bits, and the LSB is the bottom 7 bits. So the LSB contributes from 0x0000 to 0x007F (0 to 127) in steps of 1, whilst the MSB contributes from 0x0000 to 0x3F80 (0 to 16,256) in steps of 128. Add the two together and the maximum is 0x3FFF, which is 16,383 - so that's where the full MIDI Pitch Bend resolution of 0-16,383 comes from.
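If you like to think in code, the arithmetic looks like this - a little illustrative Python of my own (these helper names are hypothetical, not anything from Max or the MIDI Specification):

```python
# Hypothetical helpers showing how MIDI's two 7-bit data bytes combine
# into one 14-bit Pitch Bend value, and how to split one back apart.

def combine_14bit(lsb: int, msb: int) -> int:
    """Combine two 7-bit data bytes (LSB is sent first on the wire)."""
    assert 0 <= lsb <= 0x7F and 0 <= msb <= 0x7F
    return (msb << 7) | lsb          # MSB in steps of 128, LSB in steps of 1

def split_14bit(value: int) -> tuple[int, int]:
    """Split a 14-bit value back into (LSB, MSB) data bytes."""
    assert 0 <= value <= 0x3FFF
    return value & 0x7F, (value >> 7) & 0x7F

print(combine_14bit(0x00, 0x40))  # centre: 8192
print(combine_14bit(0x7F, 0x7F))  # maximum: 16383
print(split_14bit(8192))          # (0, 64)
```

Note that 'bendin' is effectively just returning the MSB half of this, which is why its centre value is 64.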

Note. I need to point out that Pitch Bend messages, by design, should include 14-bit 'extra precision' values - as noted by the MIDI Specification - because pitch bend is a 'special purpose' controller. Max and MaxForLive provide access to the 7-bit lower resolution value only because that value can then be used for other things, anywhere in MIDI or Ableton Live, or even externally if you convert it to a Control Voltage. For the control of pitch, then 14-bits are a much better idea, because this will give you nice smooth changes of pitch.

Ok. All sorted.

Not quite. There's a problem. 

Pitch Bend is bipolar: it can be positive or negative. In MIDI, the 'no bend', middle, detented position is defined as being a value of 8,192 (0x2000 as a 14-bit number, sent as the two data bytes 00H, 40H), which would be output as a value of 64 from Max's 'bendin' object and as a value of 8,192 from Max's 'xbendin' object. Max does provide another special object, called 'xbendin2', and this outputs the two 7-bit bytes separately, so you can see the actual MSB and LSB if you want to.

So negative Pitch Bend covers the 64 steps from the centre (64) down to 0 when we are talking 7-bit resolution, and the 8,192 steps from 8,192 down to 0 for 14-bit values. All perfectly fine and reasonable. But positive Pitch Bend is slightly different. It can only go from 64 up to 127, which is 63 steps, because the highest 7-bit value MIDI allows is 127. Yes, there are 128 possible values in 7 bits, but if you start counting at 0, then you end up at 127. Max's 'bendin' object only provides 7-bit values for controlling other 7-bit parameters, and you would not use it for actually bending the pitch of a note - you would hear the steps! But the smaller numbers do make it very clear what is happening...

In 14 bits, it goes from 8,192 up to 16,383, which is only 8,191 steps, because 16,384 is ever so slightly larger than the biggest number you can represent with a 14-bit value.

The MIDI Specification 1.0 doesn't hide this. That final sentence of the first paragraph says:  

The maximum positive swing is achieved with data byte values of 127, 127 (7FH, 7FH).

The previous two sentences in the paragraph define the centre position and the maximum negative swing - but most people don't notice that 0->64->127 and 0->8,192->16,383 aren't symmetric. There is one fewer positive step than negative, and it is not hidden, it is in plain sight, printed in the specification. Unfortunately, the big numbers (16,383 and 8,191) tend to obscure what is actually happening...

In other words:

If no pitch bend at all has a value of zero, then the most negative pitch bend value is -8,192. But the most positive pitch bend is only +8,191. (14-bit values are used here because these are what pitch bend applies to!)

Yep, The Bears just got us. 

The MIDI Pitch Bend Message doesn't allow us to bend up by the full amount. We can bend down through 64 7-bit steps or 8,192 14-bit steps (assuming our MIDI Controller outputs every value as a message, but that's another story). But when we bend up, there are only 63 7-bit or 8,191 14-bit steps available. That final step (the one that would take us to +8,192, or 16,384 in raw terms) is just outside of what MIDI allows.

This means that if you set your Pitch Bend sensitivity to be 1 octave, then you can bend down by exactly one octave, but you will only be able to bend up by slightly less than one octave. The Owner's Manuals for MIDI Controllers, synthesizers and any other devices that output MIDI Pitch Bend messages generally say it exactly like it is - they say what the maximum positive output is. What they tend not to mention is that this is slightly less than what you need to do a pitch bend up that has the same range as a pitch bend down. And with 14 bits of pitch resolution, the difference is very tiny. Minuscule.
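Here's a quick back-of-envelope check of just how small the shortfall is with a 1-octave sensitivity (plain illustrative Python; the 'bend_semitones' helper is my own, not part of any MIDI library):

```python
# How far short does the maximum positive bend fall, with a pitch bend
# range (sensitivity) of 12 semitones (one octave)?

BEND_RANGE = 12  # semitones, i.e. sensitivity set to one octave

def bend_semitones(value: int) -> float:
    """Map a 14-bit Pitch Bend value (0..16383) to a pitch offset."""
    return (value - 8192) / 8192 * BEND_RANGE

print(bend_semitones(0))      # -12.0 : a full octave down
print(bend_semitones(16383))  # 11.99853515625 : just short of an octave up
print((12 - bend_semitones(16383)) * 100)  # shortfall in cents: ~0.146
```

A shortfall of about 0.15 cents - which is why you have (probably) never noticed it.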

In fact, I would guess that you've never noticed it...

So if you want exact pitch bend that utilises the end-stop of the Pitch Bend wheel or lever, then you should only bend downwards. This applies to any device that uses MIDI 1.0, regardless of age, firmware, operating system or manufacturer. Oh, and MIDI 2.0 is... different, because it has even higher resolution available.

Actually, there's another solution: don't use the limits of the Pitch Bend wheel or lever (or push pad, or however it is implemented on your device). Instead, set the range to one note more than you require, and then only move the wheel, lever, etc. by the amount required to get the bend you actually want. So for an octave, you might set the range to 13 notes up and down, and then only ever bend up or down by 12 notes. This gives pitch bending that is as near perfect as makes no difference, albeit with slightly less than the resolution of 16,384 values that was intended by the MIDI specifiers (but only very slightly less!). It does mean that you can't use the end stops of the wheel, lever, etc, but that's a minor inconvenience, and Pitch Bend by ear is so much better than relying on mechanics...

Oh, yes, and if you are thinking that this is a tiny difference in the pitch bend, and that it doesn't matter, then re-read that section in the MIDI Specification 1.0. It says that MIDI Pitch Bend messages always use 14-bit resolution BECAUSE '...human hearing... is particularly sensitive to pitch changes.' I will gloss over the fact that it then goes on to define positive MIDI Pitch Bend so that it isn't perfect in precisely the place where human hearing is particularly sensitive.

Not an Error

Actually, there is no error at all here. Nothing to see. This isn't a mistake by the people who wrote the MIDI Specification 1.0. It is nothing more than a consequence of the way that numbers work in these particular circumstances. Imagine the simplest pitch bend controller: three positions, No Pitch Bend (in the middle), Full negative (at one end of the travel of the wheel, lever...), and Full positive (at the other end of the travel). So these could be represented by -1, 0, and +1. But this gives a jerky pitch change, of course!

If we increase the resolution by 5 times, something interesting happens. The range is now -5 to 0 to +5, and there are 11 values instead of the 3 values that we had for -1, 0, and +1. So if we number the pitch wheel positions from 1 and assign the most negative value (-5) to position 1, then the middle (zero) value will be at position 6, and the most positive value will be at position 11. Aha!: We have a Marshall Amplifier 'goes up to 11' situation. Unfortunately, no matter how the range and the resolution are set, there will always be an odd number of values, consisting of the negative numbers, plus the positive numbers, plus that zero in the middle. So we need a controller with an odd number of values (which would also be very useful for that Marshall amp!)...

The binary world of computers is even. The basic counting system (binary) is based on two values: 0 and 1. So if you have just one bit to represent a number then there are two possible values: 0 and 1. Two bits gives four values: 00, 01, 10, and 11, which are 0, 1, 2, and 3 in decimal number form. Any number of bits used will always give an even number of possible values. MIDI's 7-bit numbers have 128 different values, which are normally shown from zero: 0 to 127. MIDI's 14-bit numbers have 16,384 different values, and if we show them from zero they go from 0 to 16,383.

(Decimal numbers are even as well! So are pairs, dozens...) 

When we take these even numbers of possible values and try to map them on to a Pitch Bend wheel, lever, etc. then there's a problem, because we now know that the total number of pitch bend values is always odd, since there has to be a zero in the middle. The positive and negative values are symmetric and have the same range, but the need to have a zero position in the middle adds an extra number and we get an odd number of values. No matter how hard you try, if you have an even number of drawers and an odd number of things to put in those drawers, there will always be at least one empty drawer or one thing left over (each drawer will hold only one thing, of course, in this scenario - which matches the way that numbers work very nicely!). 5 drawers and 4 things? One drawer will be empty. 4 drawers and 5 things? One thing will be left over, because all the drawers will be full.
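The drawers-and-things argument can be written down in a few lines of illustrative Python (my own sketch of the counting, nothing more):

```python
# An n-bit number has an even count of values (2**n), but a symmetric
# bipolar scale (k negative steps, a zero, k positive steps) always
# needs an odd count (2*k + 1) - so the two can never match exactly.

def bit_values(n_bits: int) -> int:
    return 2 ** n_bits          # always even for n_bits >= 1

def bipolar_values(k: int) -> int:
    return 2 * k + 1            # k down, k up, plus the zero in the middle

for n_bits, k in ((7, 64), (14, 8192)):
    print(n_bits, bit_values(n_bits), bipolar_values(k))
# 7-bit: 128 available vs 129 needed; 14-bit: 16384 vs 16385.
# Either way, we are exactly one value short of perfect symmetry.
```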

One possible solution is to have two zeroes! If you assign two of the values in the middle to zero, then you can have perfect matching! 

In the case of MIDI Pitch Bend, the design puts a single 'no pitch change' zero value at 8,192, full negative at 0, and full positive at 16,383. So the negative range covers 8,192 steps (from 8,192 down to 0), whilst the positive range covers only 8,191 steps (from 8,192 up to 16,383). There is no value of 16,384 because that would require a 15-bit number, and we only have 14 bits.


So the biggest negative pitch bend message value is -8,192, and the biggest positive pitch bend message is +8,191. The biggest possible positive pitch bend is always going to be 1/8,192th smaller than the biggest negative pitch bend message value. And it is a teeny, tiny value! Nothing to worry about. It is a minute pitch difference.

But it is an inconsistency!  

---


If you find my writing helpful, informative or entertaining, then please consider visiting this link:

Synthesizerwriter's Store (New 'Modular thinking' designs now available!)

Buy me a coffee (Encourage me to write more posts like this one!)

Sunday, 7 March 2021

Emulating Turning a Circuit On and Off...

On the Discord channel of the Music Hackspace, @MeLlamanHokage asked about simulating power failure in a circuit using Max. I replied:

For audio circuitry then you would need to implement things like: a low-pass filter with a falling cut-off frequency, an increasing noise floor, rising (or falling) gain, falling clipping levels (eventually to zero volts), rising intermodulation distortion, a changing dc offset...

(Here's a sneak peek of what was in my head - scroll down for more details):


And so I idly wondered if it would be possible to do this in Max - actually MaxForLive, because I realised that this might be a perfect addition to the Synthesizerwriter M4L Tape (etc.) Suite. In much the same way as you record release 'tails' or 'triggers' on a piano, so that you capture the sound made when a key is released and returns to its 'resting' position, I realised that an M4L emulation of a circuit turning off (and on again) could be used to add in the sound of Guitar Effects Pedals being switched in and out of circuit, or having their power removed.

This was hugely reinforced when Christian Henson @chensonmusic (of Pianobook and Spitfire Audio fame) did a performance using just a piano sample loop and his pedal board with 23-ish pedals in a recent YouTube video (there are three Moog pedals off to the right of the screenshot, btw):

https://www.youtube.com/watch?v=yF6VKqpC-nA

Imagine @chensonmusic Christian Henson's head and arm hovering over a massed array of guitar pedal goodness...
Christian Henson @chensonmusic 'plays' his pedal-board...

That's two nudges in the same direction, so I stopped dithering, and started coding...

Now I know that the Effect/Bypass switches in guitar effect pedals are designed to not introduce any glitches or noise into the audio stream, and I know that they don't turn the power on and off to the pedal (well, not usually)! But not all of them will function perfectly, and power supplies do fail! I have had several synths, groove boxes and drum machines that made interesting sounds as you turned them off, and I would venture that this may be an obscure and under-utilised area of sampling. See the 'Reminder...' section near the end of this blog post for more thoughts on this...

Are the sounds made by foot-switches on guitar pedals part of a performance? That's an interesting question... So how can I make it easier for everyone to explore this aspect as well?

Free Samples...

Whilst there may be little actual sonic effect on audio signals from a modern foot-switch, there's also the very characteristic sound that guitar pedal foot-switches make in the real world: definitely something to sample and use as a drum sound. (I notice that the BBC overdubbed the sound of a gun being cocked to replace the sound of a seat belt clicking in a recent 'edgy' trail...) So I did some sampling of switches from a variety of sources*, and this turned into an editing session, and you can get the results here for free:


These samples were created by me, and are released into the public domain with a CC0 licence. There are the sounds of foot-switches, and lots more. 'Footswitch14' and 'Footswitch Toggle 14' are what I think of as the classic guitar pedal foot-switch sound, but you may have your own preference...

As always, let me know if you want more of this type of content.

*Sources: Eventide H9 Dark, EHX Oceans 12, EHX SuperEgo+, Poly Digit, Donner DT-1, MIDI Fighter Twister, Rebel Technology OWL, and some other obscure stuff...

OnOff Emulation

But back to the main topic! I hasten to say that my original reply wasn't based on having already seen something that emulated a circuit being turned on and off. Instead it was me quickly thinking about what happens when you remove power from a circuit. So now that Christian Henson had inspired me to actually design something, I jotted down some more concrete ideas about what 'turning something on and off' actually meant, as well as what sorts of things would happen to the circuits.

The first thing I thought about was a state diagram. At first sight, this is obvious. There are only two states: On and Off. Duh! (And there's the trap, sprung...)

But, with a little more analytical thinking, it is slightly more complex than this. There are actually four states: Off, Turning On, On, and Turning Off. The 'Turning On' and 'Turning Off' states normally happen so quickly that we don't notice them, but the longer they are, the more you notice them. One example where I've done this type of thing before was in a few MaxForLive devices that I programmed where the volume can be set to rise or fall slowly. The 'generic' version is called '3rd Hand' because it almost gives you a third (and very steady) hand to control rising or falling volume...
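For the curious, the four-state idea can be sketched in a few lines of code. This is my own little Python sketch (not the actual M4L patch - the names and the linear ramp are my assumptions), where the two 'Turning' states ramp a level between 0 and 100 over a settable time:

```python
# A minimal four-state model: Off -> Turning On -> On -> Turning Off.
# The 'level' is like the grey bar in the device: 0 = Off, 100 = On.

class OnOffState:
    def __init__(self, on_time=1.0, off_time=1.0):
        self.state = "Off"
        self.level = 0.0
        self.on_time = on_time      # seconds for the 'Turning On' ramp
        self.off_time = off_time    # seconds for the 'Turning Off' ramp

    def press(self):
        """Toggle: start ramping towards the opposite steady state."""
        heading_on = self.state in ("On", "TurningOn")
        self.state = "TurningOff" if heading_on else "TurningOn"

    def update(self, dt):
        """Advance the ramp by dt seconds; settle when an end is reached."""
        if self.state == "TurningOn":
            self.level = min(100.0, self.level + 100.0 * dt / self.on_time)
            if self.level >= 100.0:
                self.state = "On"
        elif self.state == "TurningOff":
            self.level = max(0.0, self.level - 100.0 * dt / self.off_time)
            if self.level <= 0.0:
                self.state = "Off"

s = OnOffState(on_time=2.0)
s.press()              # Off -> TurningOn
s.update(1.0)
print(s.state, s.level)  # TurningOn 50.0 : halfway up the ramp
s.update(1.0)
print(s.state, s.level)  # On 100.0 : settled
```

Make the ramp times long enough and the 'Turning' states stop being invisible and start being the interesting part.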

http://blog.synthesizerwriter.com/2018/07/slow-fades-in-live-performance.html - 3rd Hand

What was really interesting was that when I included an automatic volume control in a device (https://maxforlive.com/library/device/5258/instsineatmosphere), the only feedback comment that I received was that it didn't make any sound. Exactly the same thing happened with an earlier 'Generator' device, where you had to click on a button to get a sound, and so since then I have not included this feature in many devices. Please let me know if you would like it to be included!

But back to 'states'...

This '4 states' approach is very useful for live performance where you want something to evolve slowly: like an audio drone that gradually evolves over minutes, or tens of minutes, or longer. If you've ever gone to a 'Drone' performance by William Basinski (there are other audio drone specialists) then you will know the power of a slow rising, changing sound. I saw/heard him when he performed at Ableton Loop 2017...there's something about live performance!

In a drone performance, there's a lot of slow evolution of sounds, timbres, volume... (the 'Turning On' and 'Turning Off' states) and very little time when the soundscape is static (the 'On' or 'Off' states). To revisit the old saying about music typically being made up of 'sound and silence', a better recipe might be that music is effective when it contains interesting/evolving/changing sounds seasoned with sprinkles of silence.

In this case, the change was all about what happens when circuits get turned on or off. And so I jotted down as my first proper approximations:

- Mains breakthrough from PSU

- Clicks, Crackle and Crunches

- Noise (Rising then falling?)

- Clipping of audio

- Loss of high frequencies in audio

And then I programmed a custom 'synthesizer' in MaxForLive, that used a 4-state slow On and Off generator to drive 5 sections producing each of the 5 features that I had noted. I'm sure there are more things happening, but this was a first attempt, and I'd never seen anything like this before... Now when I say 'I programmed' then that can make it sound like something trivial and rapid, but that isn't quite how the process works... Anyway, some hours later...

OnOff Emulation mr 0v01


There are five processing sections, plus a sixth 'control etc.' section on the right hand side. The layout is much the same as the Ironic Distortion and the Ferrous Modulation M4L devices:

Ironic Distortion - blog post.                   Ironic Distortion - M4L.com

Ferrous Modulation - blog post.              Ferrous Modulation - M4L.com

Each section has a 'Mix' slider, which sets the level of the output of that section, with a big 'Mute' button.


But because this device is all about controlling the 'Turning On' and 'Turning Off' states, there are extra controls in each section devoted to setting how time affects it. The big grey bar is the current On/Off state: 100 (all of the bar is grey) is On, whilst 0 (all of the bar is black) is Off. The movement of the bar (up or down) shows how things change during the 'Turning On' or 'Turning Off' states. To the right of the bar are four rotary controls, plus an 8-button switch selector. The screenshot above is for the I (Impulse) section, and so each control name starts with that letter. So I-On is the time for the bar to move from Off to On (the 'Turning On' time), whilst I-Off is the time for the bar to move from On to Off (the 'Turning Off' time).

The 8 position switch in the middle controls how the bar moves. L is for Linear, and so the bar just moves steadily from top to bottom (a straight line graph). S is for Sine, so the bar slows down as it gets to On. Z is Sine-squared, so the slow-down is sharper. P is for Power, so the bar slows down as it gets to Off. Q is for Power-squared, so the slow-down is more abrupt. B is for Bipolar, so the bar slows down as it gets to On and Off. D is for Bipolar-squared, so the slow-down is more abrupt. Finally, the light purple S that is linked to the two rotary controls on the right is for 'Smooth' (or Log), and this provides two additional controls that let you change the time of a logarithmic slow-down as the bar gets near to the On or the Off.
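If you are wondering what those shapes might look like as maths, here is a sketch of some of them as functions of the ramp position (plain Python; these formulas are my guesses at plausible curves for each letter - the actual ones inside the device may well differ):

```python
import math

# t is the ramp position, from 0.0 (Off) to 1.0 (On). Each curve maps
# that steady progress onto a differently-shaped movement of the bar.

curves = {
    "L": lambda t: t,                                # Linear: steady movement
    "S": lambda t: math.sin(t * math.pi / 2),        # Sine: slows approaching On
    "P": lambda t: t ** 2,                           # Power: slows approaching Off
    "Q": lambda t: t ** 4,                           # Power-squared: more abrupt
    "B": lambda t: (1 - math.cos(t * math.pi)) / 2,  # Bipolar: slow at both ends
}

for name, curve in curves.items():
    print(name, [round(curve(t), 3) for t in (0.0, 0.25, 0.5, 0.75, 1.0)])
```

The 'squared' variants (Z and D) would follow the same pattern, with sharper knees.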


The best thing to do is to listen to the effect that the controls have! It is a bit like the Flutter waveforms in Ferrous Modulation: sine sounds boring, whilst the narrow spikes sound jerky... You may find that the Smooth setting gives good control, but remember that the S, Z, P and B positions don't have the slow-down at one extreme, and so have a very different effect. If you find yourself over-using the Smooth setting, then deliberately choose one of the more abrupt options. 

Separate controls are provided for each section because the final sound works best when each of the sections has different timing! If you have the same timing for all of the sections then it will sound boring... One of the settings that I like is to have the Clip section happen last, so that the last thing you hear is the distorted audio...

The Sections

From the left hand side, the sections are in two parts. The first two sections process the incoming audio signal: 

Freq. 

This section just applies a low-pass filter to the audio input, and lowers the cut-off frequency as the bar moves from On to Off (or raises it as the bar moves from Off to On). This emulates the way that some audio circuitry loses high frequencies as the power supply is reduced.
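The idea can be sketched as a simple one-pole low-pass filter whose cut-off tracks the bar (an illustrative Python sketch of the general technique, not the actual M4L filter - the frequency range is my assumption):

```python
import math

# A one-pole low-pass filter where the cut-off frequency follows the
# On/Off bar, so high frequencies disappear as the 'power' fades.

SR = 44100.0  # sample rate, Hz

def one_pole_lowpass(samples, bar_levels, min_hz=40.0, max_hz=18000.0):
    """samples: audio; bar_levels: per-sample position, 0.0 (Off) to 1.0 (On)."""
    out, y = [], 0.0
    for x, bar in zip(samples, bar_levels):
        cutoff = min_hz + bar * (max_hz - min_hz)      # cut-off follows the bar
        a = 1.0 - math.exp(-2.0 * math.pi * cutoff / SR)
        y += a * (x - y)                               # one-pole smoothing
        out.append(y)
    return out
```

With the bar at 1.0 the filter is nearly transparent; as the bar falls, the audio gets progressively duller.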

Clip.

This section has optional Compression and Filtering, but the main effect is to apply clipping to the audio input, where the clip limits reduce with the bar, so there is no Clipping when the device state is On, but more and more clipping as the state approaches Off. Note that the volume drops to zero when the state is Off... so you don't hear the effect of extreme clipping when the limits are zero! 
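The core of the Clip idea is just a clipper whose limits scale with the bar (again, a little illustrative Python sketch of my own, rather than the real device):

```python
# The clipping threshold scales with the bar: full 'On' leaves the audio
# untouched, and the limits close in as the state heads towards Off
# (where the level also reaches zero).

def fading_clip(x: float, bar: float) -> float:
    """x: audio sample in -1..+1; bar: 0.0 (Off) to 1.0 (On)."""
    limit = bar              # full-scale limits when On, zero when Off
    return max(-limit, min(limit, x))

print(fading_clip(0.8, 1.0))   # 0.8  : no clipping when On
print(fading_clip(0.8, 0.5))   # 0.5  : limits closing in
print(fading_clip(-0.8, 0.1))  # -0.1 : heavily clipped near Off
```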

The next three sections are generators, and so do not process the incoming audio:

Noise

White noise filtered to give it various colours. The 'Gain' control and the mixer slider are effectively the same control...

Impulse

Crackles, pops, crunches, and other 'Impulse'-type sounds are generated in this section. Because the source is random noise, then these will not repeat - every time a new and different sound will be produced. 

Mains

This is slightly strange - it introduces the sound of mains hum, which is weird if the power supply has been removed. But, curiously, as with many sound effects, nonsensical often seems to work. It seems that having mains hum fade in and out implies something happening with the power - a bit like the way that sparks always seem to jump out of control panels on spaceships and submarines in the movies whenever there's an explosion or a collision. 

None of these sections is particularly complicated. Each does just one generation or processing function. Feel free to look into the code to see how each works - there's no magic used. 

The Big Button

On the far right hand side is the Control section. This has the big 'On/Off' button, text that shows the current state (1 of 4 possible states), and a generous set of memories for your own creations - just shift-click to store, click to recall.

Reminder...

It is probably worth noting that although this might appear to be just a generator of On/Off sounds, it is also a processor - the Freq and Clip sections process audio, so if you don't input anything then they won't produce any output. The ideal setup is to have a sound that you want as the basis of the final result, and then use OnOff Emulation as a way of 'bracketing' it with an On and Off emulation.

This is slightly different to what the Release Trigger or Release Tail samples are doing in a piano sample - they deliberately do not include the piano note itself. (You press down on a key, and then release it, so you get the sound made by the key itself and the associated 'action' mechanics, but without the sound of a vibrating string.) OnOff Emulation processes any audio that is sent to it, and then layers the mains, impulse and noise sections on top of it.

And this set me thinking... What about a release trigger/tail generator? I've never seen a dedicated one, because all of the release triggers/tails that I've ever used have been just samples that are part of the sample set of a sampled piano... The only exception that I can think of is for Harpsichord-type sounds on a DX7 (or other FM synths, etc.), where the sound of the jack hitting the string when the note is released is synthesized separately, using an envelope where the attack and decay are short, the sustain level is zero, and the release rises to the final/initial level to give an envelope that only affects the release segment of the note. Oh, and acoustic guitar sounds, where string buzz is also synthesized this way... There's probably more now that I'm thinking about this topic...

So a release trigger/tail generator would have different sections, and actually might have some aspects in common with a 'Riser' synthesizer, although they don't seem to be as 'in vogue' as they were a few years ago. I remember suggesting that a custom riser could be made by processing a 'reverb'ed section of a track in an Ableton Loop Studio Session back in 2015, and got a tumbleweed reaction from the room, and the 'Name' Producer then made one using filtered noise and everyone else nodded their heads. I have always swum against the tide, by the way...

I have added this task to my 'ever-expanding' (as Loopop says on YouTube) list of 'things to do'.  Don't hold your breath, though: it's a long list and I'm very busy. 

And Finally...

I haven't really seen a device like this before. Well, actually, that's not exactly right, because I have - this is just a synthesizer, but not your common variety. This is a purpose-built 'custom' synthesizer made to produce just one type of sound. What I haven't seen before is a synthesizer that is dedicated to making the sound of a circuit being turned on or off. But now I have. And now, so do you! I think it shows the power of Max that you can use it to make arbitrary sounds and sounds that haven't been made before (or maybe sounds that I think might not have been made before...).

My grateful thanks to @MeLlamanHokage for the original question about turning circuits on and off, and to Christian Henson for using a pedal board as a performance instrument at exactly the right moment to get me to turn speculation into reality. Thank you, sirs!

---

Getting OnOff Emulation

You can get OnOff Emulation here:

    https://maxforlive.com/library/device/7074/onoff-emulation

And yes, I realise that it should be called AUDonOFFemulation if I was to use my own naming scheme, but that just reads crazy!

Here are the instructions for what to do with the .amxd file that you download from MaxforLive.com:

     https://synthesizerwriter.blogspot.co.uk/2017/12/where-do-i-put-downloaded-amxd.html

(In Live 10, you can also just double-click on the .amxd file, but this puts the device in the same folder as all of the factory devices...)

Oh, yes, and sometimes last-minute fixes do get added, which is why sometimes a blog post is behind the version number of MaxForLive.com...

And no, I haven't had a chance to test it in Live 11 yet... Too much to do, so little time...

Modular Equivalents

In terms of basic modular equivalents, then implementing OnOff Emulation is a mixer plus a VCF plus a Clipper plus a noise source, an envelope follower and a State-Variable Filter, plus a trigger and 10 AR envelopes. Nothing is complex, but there's quite a lot of separate bits to deal with, which can be tricky on a modular... 

Overall, I reckon that OnOff Emulation would require an ME of about 20. Alternatively, you could just take any modular patch and see what happens when you power it down and up. (Caution: Turn your amplifier volume down, and use a limiter on the input. Increase the volume slowly and carefully - with caution. Not recommended with headphones! Do not turn modulars on and off repeatedly and quickly!) 

---

If you find my writing helpful, informative or entertaining, then please consider visiting this link:

Buy me a coffee (Encourage me to write more posts like this one!)


Synthesizerwriter's Store
 (New 'Modular thinking' designs now available!)

Sunday, 28 February 2021

The Synthesizerwriter M4L Tape (etc.) Suite - crafted for Christian Henson of Spitfire Audio

Every so often, Christian Henson, one of the two founders of Spitfire Audio, publishes a YouTube video where he talks about guitar pedals. In the past, he has talked about pedals like the Strymon Blue Sky, the Gamechanger Audio Plus pedal, and lots of tape emulation pedals, including this recent one inspired by 70s/80s videotape:

A Video Stomp Box...Really?

Christian's most recent video built on all of his back catalogue of using pedals to make music:

Making Cinematic Music with Guitar Pedals

Which got me thinking - why not put together a suite of my Max For Live devices for Ableton Live, specifically targeted at the distortions and modulations that are found in tape machines, digital echoes and other audio storage/processing/playback devices? So not just tape, but ANY analogue or digital processor. And hey, I could dedicate it to Christian Henson!

The Synthesizerwriter M4L Tape (etc.) Suite

The obvious starting point was my Ironic Distortion M4L device, which produces distortions and perturbations like aliasing, intermodulation, and quantisation noise, as well as mains power modulation - that can all be used to degrade audio in a variety of ways that can emulate analogue or digital processors. 

Ironic Distortion - blog post

Ironic Distortion - M4L.com

There are plenty of Saturation devices in M4L and VST formats, so I leave that to your own preference, but there was one glaring hole in my plan. I was lacking something to do Wow and Flutter, essential for tape emulation, plus I didn't have anything that simulated a broken power supply driving a digital processor... So I created one, called Ferrous Modulation.

Ferrous Modulation


If a layout works, then re-use it. This rule works for guitar pedals, so I'm quite happy to re-use the legendarily crazy user interface from the Ironic Distortion M4L plug-in in Ferrous Modulation. So from left to right, you have Wow, Flutter, Mains Modulation, and Input sections. In each section, there is a slider/meter that sets the output level for that section, complete with a huge Mute button. 

So the Wow section has a mute button with 'W' on it, for 'Wow'. Above it is a control strip, with controls for the Frequency of the wow and how much smoothing is applied to it, a display of the smoothed wow waveform, then a stereo skew switch and rotary control to emulate tape not being guided accurately or pulled inconsistently by the capstan and pinch roller. Then there are two switches that take us above and beyond normal tape systems: a phase switch that lets you put the wow in phase or out of phase (extreme skewing and tape stretching), and a 'Sideband' switch which lets you choose single or double sideband outputs (tape machines will normally be single). Finally, there's a Gain control which sets the amount of wow that is applied, from subtle to overkill. Underneath the control strip are two real-time displays: the spectrum of the processed audio signal, and the sonogram (where time is horizontal, frequency is vertical, and spectral amplitude is colour). 

Next up is the Flutter section, this time with an 'F' on the mute button. I've categorised flutter as being more cyclic than the band-limited noise that I've used for wow - there isn't any really definitive classification that I could find (most modern approaches to measuring wow and flutter treat them as just two different aspects of the same frequency modulation). So the first rotary control is for the Frequency of the flutter waveform, then a Smooth control (which makes no sense for a sine wave, but there you go), then a waveform selector which provides 10 waveforms, plus smoothed variations, followed by a waveform display. Then there are the same Skew, Phase and Sideband controls as before, plus the Gain control. Oh, and of course, the slider/meter sets the amount of processed signal that goes to the output.
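For anyone who wants to poke at the idea outside of Max For Live, here's a rough Python/NumPy sketch of the wow-and-flutter principle as I've described it: smoothed noise for wow, a cyclic LFO for flutter, both driving a slowly varying fractional delay line. This is purely illustrative - the function and parameter names are my own inventions, and it is not the code inside Ferrous Modulation.

```python
import numpy as np

def wow_flutter(audio, sr, wow_hz=0.8, wow_depth_ms=3.0,
                flutter_hz=8.0, flutter_depth_ms=0.3, seed=0):
    """Tape-style wow (smoothed random) and flutter (cyclic sine),
    applied as a slowly varying fractional delay."""
    n = len(audio)
    t = np.arange(n) / sr
    # Wow: band-limited noise, made by crudely low-passing white noise
    rng = np.random.default_rng(seed)
    win = max(1, min(n, int(sr / wow_hz)))  # moving-average window
    wow = np.convolve(rng.standard_normal(n), np.ones(win) / win, mode='same')
    wow /= np.max(np.abs(wow)) + 1e-12
    # Flutter: a plain sine LFO
    flutter = np.sin(2.0 * np.pi * flutter_hz * t)
    # Combine into a delay trajectory, in samples
    delay = (wow * wow_depth_ms + flutter * flutter_depth_ms) * sr / 1000.0
    delay -= delay.min()  # keep the read position causal
    # Read the input at the fractionally delayed positions
    pos = np.clip(np.arange(n) - delay, 0.0, n - 1.0)
    return np.interp(pos, np.arange(n), audio)
```

Feeding a steady sine through this gives the familiar slow pitch drift plus fast warble; push the depth parameters up and it falls apart in a very satisfying way.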

The third section is the Mains Modulation section, which mis-labels the slider/meter as 'Level' instead of 'Mains' (which I will fix in the next update), but still has 'M' in the mute button. The Control strip this time has a selector switch for 50 or 60 Hz mains frequency, and the Single/Double Sideband toggle underneath. Then there's a Frequency rotary control, for those people whose mains power is not 50 or 60 Hz, then a 'Drive' control to control how much mains frequency modulation is applied to the audio, and a band-pass filter with a Q control to fine-tune the mains waveform (so you can over-drive it, and then tune high to just get harmonics of the mains). Underneath are the same spectrum and sonogram displays as the other sections.

The final section is the 'Input' section, and this allows you to mix in the original unsullied audio signal - the dry signal. I didn't want to confuse users with a normal wet/dry control because there are three wet signals, so I re-used this unusual scheme from the Ironic Distortion. Above the slider/meter are 15 storage boxes, where you can shift-click to save your favourite settings. I like to encourage people to develop their own presets, so I don't provide any at the moment. But I have made one device which has presets: Octave Remapper (blog post) / Octave Remapper (M4L.com)

Audio Chain

My recommended chain of devices in your track strip in Ableton Live is:

[Ferrous Modulation] -> [Saturator, etc.] -> [Ironic Distortion]

Remember that there are many saturation and distortion devices that can be used to introduce your own preferred amounts of harmonic distortion, compression, saturation, waveshaping, etc. 

Not a Pedal!

The Suite isn't a hardware pedal, and it isn't available via Pianobook.co.uk (now there's an idea!), but it is free and it is capable of some horrendously bad 'tape'-influenced sounds, plus lots of other 'processed' sounds, many of which are not from equipment as we know it, and some subtle tones as well. 

I'm sure Christian (and you) will have a great time with it!

Getting Ferrous Modulation

You can get Ferrous Modulation here:

     https://maxforlive.com/library/device/7045/ferrous-modulation-ch

Here are the instructions for what to do with the .amxd file that you download from MaxforLive.com:

     https://synthesizerwriter.blogspot.co.uk/2017/12/where-do-i-put-downloaded-amxd.html

(In Live 10, you can also just double-click on the .amxd file, but this puts the device in the same folder as all of the factory devices...)

Oh, yes, and sometimes last-minute fixes do get added, which is why sometimes a blog post is behind the version number of MaxForLive.com...

Modular Equivalents

In terms of basic modular equivalents, then implementing Ferrous Modulation just requires three sections of frequency shifting, with appropriate modulation waveforms: band-pass filtered noise, a VCO or LFO, and an LFO for the mains. 
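The frequency-shifting approach can also be sketched in a few lines of NumPy, if you don't mind doing it offline: build the analytic (single-sideband) version of the signal with an FFT-based Hilbert transform, multiply by a complex exponential, and keep the real part. Again, this is a toy illustration under my own naming, not the device's actual DSP.

```python
import numpy as np

def analytic(x):
    """Analytic signal via the FFT (a Hilbert transform):
    zero the negative frequencies, double the positive ones."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def freq_shift(x, shift_hz, sr):
    """Single-sideband frequency shift: every component moves by
    shift_hz, so harmonic relationships are NOT preserved."""
    t = np.arange(len(x)) / sr
    return np.real(analytic(x) * np.exp(2j * np.pi * shift_hz * t))
```

Because every partial moves by the same number of Hz (not the same ratio), even small shifts turn harmonic sounds inharmonic - which is exactly why frequency shifters sound so unlike pitch shifters.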

Overall, I reckon that Ferrous Modulation would require an ME of about 7 minimum. You may be able to find a frequency shifter that has built in modulation sources, in which case it might drop to 3 or 4 ME.

Links

Ironic Distortion - blog post

Ironic Distortion - M4L.com

Octave Remapper - blog post

Octave Remapper - M4L.com

Ferrous Modulation - M4L.com

I would like to thank Christian Henson for his ongoing inspiration, enthusiasm, and for founding Spitfire Audio, who make wonderful samples!

---

If you find my writing helpful, informative or entertaining, then please consider visiting this link:

Buy me a coffee (Encourage me to write more posts like this one!)


Synthesizerwriter's Store
 (New 'Modular thinking' designs now available!)

Saturday, 27 February 2021

Not AI, and real AI - in music...

The people who post fake comments onto blogs annoy me. Except that many of them aren't even people - they are automated algorithms running on computers. An interesting example arrived in my Blog Comments Inbox recently, and it got me thinking about Artificial Intelligence (AI), and its more common predecessor: Artificial Stupidity.

The comment looked almost innocuous at first. It was reasonably well targeted to this blog because it talked about electronics, and the name was actually a link to a website that claimed to reveal places where you could learn about electronics. The first giveaway was that URL instead of a name, but the web-site itself was a dead giveaway. At first glance it seemed to be a couple of A4 pages-worth of text, talking about how to learn electronics via resources on the InterWeb. 

Not AI

But then I read the first paragraph, and then the second. Each seemed like a generic introduction to the topic, but they didn't actually get to anything like any recommendations for sites, or URLs... And the second paragraph didn't follow on from the first. In fact, they read like two different authors writing on two slightly different topics. The next two paragraphs were worse in their lack of linking, and in the divergence of styles. As I continued reading, I realised that each paragraph was just text extracted from a search term something like 'Where can I study electronics online?' and then assembled together on a web page, with lots of associated adverts. Nowhere did it actually get to anything useful like a real URL linking to online resources, nor were there any summary tables of good resources, and in fact, there wasn't any useful content anywhere on the page. The whole thing was designed to look good enough to fool someone into thinking it might be a useful thing for blog readers to know about, and to let the comment appear in the blog. Then the web-page would generate money for its owners whenever anyone clicked on the ads. In other words: a nasty parasite.

Well, I wasn't fooled, and I deleted the comment, and I would advise you to be cautious when you search for phrases like: 'How can I learn about electronics online?', because there are lots of leech sites like the one that I rejected. Alternatively, try these web-sites for proper learning relevant to this blog:

MIT circuits and electronics

MIT Practical Electronics

OU Intro to Electronics

Coursera EM Production

There, that's infinitely more genuine information than there was on that entire web-page. And there are lots more resources for you to find out there! Note that some of these are free, and some are not. The quality of some of the free ones (MIT, for example) is very high!

I reckon that the web-page that I rejected was probably not created by Artificial Intelligence (AI), it felt much more like a simple algorithm (Artificial Stupidity) with maybe some high level editing by a human being. So 'Not AI' rather than AI. But there are some interesting applications of real AI that are starting to appear that could affect how you make music in the future...or don't make music...

Real AI

The last couple of years have seen two big trends in electronic music: Cloud Sample Libraries, and AI Assistance. 

Subscription-based sample libraries like Splice, Noiiz, LoopCloud, and Roland Cloud provide access to huge numbers of ready-to-use samples, and mean that you don't need to fill a room with hardware synths, or even fill your computer's SSD or Hard Disk with VSTs. They aren't connected with AI, other than they use simple background algorithms to learn what you like and try to sell you more of that. But I'm not a fan of them, because they typically require you to give them root-level access to your computer, which they justify by saying they have to protect all of the valuable content which you can download. I'm not happy with anything where you are giving them permission to do whatever they like on your computer. After all, the news isn't full of repeated computer breaches where millions of User IDs, Passwords and Credit Card details are stolen by hackers, so there's no problem with giving deep, unfettered access to your computer, is there? 

AI Assistance is more subtle, and I don't know of a generic word for it yet - there aren't enough similar instances of it for people to need a word for it, but this doesn't mean that there aren't lots of examples of it out there. It appears as drum machines, or melody generators, or chord suggestions, and it often provides easy access to generated patterns, melodies, chord progressions, etc. These are several steps up from the Randomisation generators that you got back in the 20th Century. 

AI Drum Machine - Algonaut Atlas

AI Beat Assistant - Rhythmic

VST Patterns - Sonic Charge Microtonic

One thing to be aware of is that a lot of the cheaper examples of 'AI Music' are actually just Machine Learning (ML), which has become very accessible recently to programmers, and allows a network of connected nodes (a 'neural network') to learn from pre-prepared training materials and then to output lots of variations of it - give me more 'like this'... ML is kind of 'entry-level' AI...

Unless you make movies, then you might not have seen another application for AI that has been gradually increasing the amount of advertising that they do, and that is AI-generated music for movies. In other words, if you have made a film or movie and you don't want to pay a human composer to write music for it, then you can get AI to do it for you... 

AI Music for Movies - Ecrett

Creative Assistant - Aiva

AI Music Generator - SoundRaw

Broad Application AI

To really appreciate where AI is going, then you need to look beyond the 'specific applications' and the often very obvious ML experiments, and go for something more generic. One very good example is OpenAI. When you go to their web-site, they don't try and sell you a solution. Instead, they show you a selection of overviews of things that you might be able to use their AI software to do. This isn't just one or two possible applications - you scroll on and on through lots of things they can do. It's a bit like going into a DIY store to buy a screwdriver, and discovering that they sell just a few other things as well... No, scrub that: imagine arriving at an out-of-town retail park where they have a car park surrounded on all sides by huge DIY, carpet, furniture, homeware, clothes, electricals, electronics, video gaming, computers and media stores... when you just wanted a screwdriver. OpenAI is an overwhelming place, and I think that it shows a glimpse not of the future, but of how lots of people will be making the future of what you do/buy/watch/play (etc.) next online.  

OpenAI

DeepMind

Nnaisence

AGI Innovations

Semantic Machines

I've tried to curate these so that the AI Robot-oriented companies aren't included, but if you want to see the state-of-the-art in robots, then Boston Dynamics are a good starting point...

And finally...

I'm always intrigued by companies whose advertising is based around a competitor. It doesn't seem to be a very 'Intelligent' thing to do. When I did a Google search for 'openai' then the top two paid slots, right at the top of the search, were from two other companies also selling AI 'solutions'. I have not included them in the list above. If I'm searching for 'Moog', then do I want the top result to be from another synthesizer manufacturer? (Oh, and when I did this search, then the top answers were all 'Moog'!)

(Oh, and why did I use those notes in the graphic near the start of this blog?)

---

If you find my writing helpful, informative or entertaining, then please consider visiting this link:

Buy me a coffee (Encourage me to write more posts like this one!)


Synthesizerwriter's Store
 (New 'Modular thinking' designs now available!)


Friday, 26 February 2021

Should I look at the Spectrum, or the Waveform? - [Single Cycle Part 4]

One of the 'useful things to remember' that I have always had in my mind is something that I learned reading through a pile of old 'Wireless World' magazines from a cupboard at the back of the Physics Lab at my school:

Spectra can be better diagnostics than waveforms 

(I'm using 'Spectra' here as the plural for 'Spectrum'. You can replace it with 'Spectrums' if you prefer... I won't tell anyone.)

It was from an article where they described how a project to recreate the sound of a church organ by reproducing the waveform failed because the result sounded totally different. From the first part of this series, you may now be suspecting that they probably only matched up the 'top' 30 to 40 dB of the sound (the visible bit on a 'scope) - the '40 dB Rule' as I call it. When I've experimented with A/S (Analysis/Synthesis) - the iterative synthesis technique where you analyse the target sound/timbre, get a reasonably close synthesised version of it, subtract the two to get a 'residual', and then synthesise that, and so on - I wondered if you could use this to keep removing layers of 40 dB or so of visibility, getting a better approximation each time...
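That iterative A/S idea is easy to try in code. The toy sketch below (my own, not any real A/S system) approximates a signal by its strongest FFT partials, subtracts to get the residual, then approximates the residual, and so on:

```python
import numpy as np

def top_partials(x, n):
    """Resynthesise x from only its n largest-magnitude FFT bins."""
    X = np.fft.rfft(x)
    keep = np.argsort(np.abs(X))[-n:]
    Y = np.zeros_like(X)
    Y[keep] = X[keep]
    return np.fft.irfft(Y, n=len(x))

def analysis_synthesis(target, n_partials=8, passes=3):
    """Iterative A/S: approximate, subtract, approximate the residual..."""
    approx = np.zeros_like(target)
    for _ in range(passes):
        residual = target - approx
        approx = approx + top_partials(residual, n_partials)
    return approx, target - approx
```

Each pass peels off another layer of the strongest remaining partials - exactly the 'keep removing layers of visibility' idea, in miniature.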

Anyway, a reasonably good spectrum analyser is going to show you a lot about the spectrum of a sound - and the harmonics that it shows will give you detail well below 40 dB down. But the spectrum isn't perfect either, because it shows the magnitude of the harmonics in a sound, but generally, not the phase relationships. As was shown in part two of this series (Single Cycle 2), then phase relates to the tiny timing difference between the same point on two waveforms - it could be zero crossings, or positive peaks: anywhere that is easy to compare. Although the horizontal axis is the 'time' axis, many people think of the phase more in terms of the shape of the waveform 'sliding' horizontally - which kind of removes the link that is implicit in a 'time waveform'! But this 'sliding' approach does explain how it is possible to have phase differences that are not directly related to time - if you take a waveform and invert it, the two waveforms are then 'out of phase' even though neither of them has moved in time (although it might take a finite amount of time for the inversion to happen, of course!)
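Measuring that 'tiny timing difference' from zero crossings is fiddly in practice; a more robust way to illustrate it (my own sketch, not any standard tool) is to compare the phases of two waveforms at the dominant FFT bin:

```python
import numpy as np

def phase_difference_deg(a, b):
    """Phase of b relative to a, measured at the dominant
    frequency of a (DC ignored), wrapped to -180..+180 degrees."""
    A = np.fft.rfft(a)
    B = np.fft.rfft(b)
    k = np.argmax(np.abs(A[1:])) + 1  # dominant bin, skipping DC
    d = np.angle(B[k]) - np.angle(A[k])
    return np.degrees((d + np.pi) % (2.0 * np.pi) - np.pi)
```

An inverted waveform comes out at (plus or minus) 180 degrees, whether or not it ever 'moved in time' - which matches the 'sliding' view of phase described above.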

Where this gets interesting is when the waveform is not symmetric. If you invert a sawtooth, then what does 'phase' mean? The zero crossing position gives a reasonably neat alignment of the sawtooth waves, but using the positive peak is confusing, and it would be better to use the fast 'edge' between the positive and negative peaks - but is this then ignoring the time for that fast edge? So should the zero crossing in the middle of the fast edge be used? 

When the waveform is even less symmetric, then neither peaks nor zero crossings may be a viable choice for a reference point. In the example above, inverting the waveform means that the positive peaks are different, and there are two candidate zero crossings. When waveforms are this different, then phase starts to lose any meaning or value for me... Of course, you could use the fundamental of the two waveforms, in which case the inverted waveform would be seen as out-of-phase or inverted.

Phase is important in filter design (like in loudspeaker crossovers, for example), in noise cancellation (two anti-phase signals will cancel out to give silence, although getting two precisely out-of-phase signals is not very easy in a large volume in the real world), and in creating waveforms (in additive synthesis, for example). It turns out that phase can be very important as a diagnostic tool: a visually smooth filter cut-off might well be hiding a phase response that goes all over the place. 

Why is Phase Important?

The standard example to show why 'phase is important' is to take a 'square'-ish waveform made from a few odd harmonics, and to change the phase of one of them. Suddenly the square wave isn't square any longer... 

What has always fascinated me is the number of harmonics that are required to get waveforms that are close to the mathematically perfect, sharp, linear wave shapes that you see in text books. In the example above, 23 harmonics are used to make a 'wobbly' square wave - although, of course, because a square wave is made up of odd harmonics only, there are not actually 23 sine waves present, since just under half of them have zero amplitude. 

So when the phase of the third harmonic (three times the frequency of the fundamental) changes, then two things happen. Most text books will show the changed waveform, and will note that it still sounds like a square wave (the harmonics are the same...). But it is more unusual for there to be any mention of the change in the peak amplitude - the 'F3 out of phase' waveform on the right hand side is about 50% bigger, peak-to-peak, than the 'conventional' square wave approximation on the left hand side. It turns out that changes in the phase of harmonics can affect the shape and the peak-to-peak amplitude, and more: the phase of the harmonics can be used to optimise a waveshape for some types of processing, although this is normally used in applications like high power, high voltage electricity distribution rather than audio.
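You can verify the peak-to-peak claim numerically. The NumPy sketch below (my own illustration) builds the usual odd-harmonic square wave approximation, then flips the phase of the third harmonic; the harmonic magnitudes are identical, but the peak-to-peak amplitude grows noticeably:

```python
import numpy as np

def odd_harmonic_square(highest_harmonic=23, flip_third=False, samples=4096):
    """Sum sin(k*theta)/k over odd k; optionally invert the 3rd harmonic."""
    theta = 2 * np.pi * np.arange(samples) / samples
    x = np.zeros(samples)
    for k in range(1, highest_harmonic + 1, 2):
        phase = np.pi if (flip_third and k == 3) else 0.0
        x += np.sin(k * theta + phase) / k
    return x

normal = odd_harmonic_square()
flipped = odd_harmonic_square(flip_third=True)
# Same harmonic content, but the flipped version comes out roughly
# 50% bigger peak-to-peak, matching the observation in the text.
```

Playing both back confirms the other half of the claim: they sound the same, because the harmonic magnitudes have not changed.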

But this 'phase is important to the shape of the waveform' principle applies to any waveform, and this can give surprising results. Take a triangle wave: it has only odd harmonics, and they drop off rapidly with increasing frequency, so the triangle really is what it sounds like: a sine wave with a few harmonics on top. Now you are probably intrigued by this, and ready to explore it yourself, so there's a very useful online resource at: http://www.mjtruiz.com/ped/fourier/ It is an additive synthesizer that lets you explore the amplitude (volume/size/value) of harmonics, as well as their phase! (This is called a Fourier Synthesizer, after the Fourier series, which is the mathematics behind adding different sine waves together to give waveforms...)

Here's a screenshot of a triangle wave produced using the Fourier Synthesizer from M J Ruiz:

I have edited the colours of the sliders to emphasize the harmonics which have zero amplitude (black), the harmonics which are 'in phase' with the fundamental (blue), and the harmonics which are 'out of phase' with the fundamental (orange). In-phase is shown as a value of 0 in the screenshot - meaning zero degrees of phase, where a complete cycle would be 360 degrees. Out-of-phase is shown as 180 degrees - half way round a cycle of 360 degrees.


 The screenshot above shows an unedited view of the same triangle wave, but with the phases changed so that all of the harmonics are in-phase. The result is more like a slightly altered sine wave than a triangle wave - but it sounds like a triangle wave...
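The same experiment is easy to reproduce away from the web page. Here's a sketch (my own, using the textbook triangle series) that builds a triangle with the correct alternating phases, plus an 'all in phase' variant:

```python
import numpy as np

def additive_triangle(highest_harmonic=15, alternate=True, samples=2048):
    """Odd harmonics at 1/k^2; a true triangle inverts every other
    odd harmonic (the 3rd, 7th, 11th...), i.e. 180-degree phase."""
    theta = 2 * np.pi * np.arange(samples) / samples
    x = np.zeros(samples)
    sign = 1.0
    for k in range(1, highest_harmonic + 1, 2):
        x += (sign if alternate else 1.0) * np.sin(k * theta) / k ** 2
        sign = -sign
    return x * 8.0 / np.pi ** 2

tri = additive_triangle()                    # looks like a triangle
warped = additive_triangle(alternate=False)  # same spectrum, different shape
```

Both have identical harmonic magnitudes, so they sound the same - but only one of them looks like a triangle, just as the screenshots show.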

Earlier I pointed out that the sound of a square wave with the third harmonic changed in phase was the same as a square wave with no phase change on the third harmonic. It turns out that your ears are not sensitive to phase relationships of this type, and so the square waves, and the triangle waves, all sound the same regardless of the phase relationships of the harmonics. BUT if you change the phase of a harmonic in real-time, then your ear WILL hear it. Static phase relationships between harmonics are not heard, but changes in phase are...

If you think about it, then this is not as surprising as it might at first sound. Your ears are very good at detecting changes of phase, because that's how they know what frequency they are hearing! But fixed differences in phase just change the shape, and your ears don't pick that up. One possible explanation for this is that your ears evolved as they did because the harmonic content of sounds was important for survival (maybe locating sources of food, or danger!), but the shape of the waveform was not. Discovering that the human hearing system is not optimised for sound synthesis may be a disappointment for some readers...

One other thing that you may have noticed in the Fourier Synthesizer screen-shots is the small amplitudes of the harmonics for the triangle wave. The sliders used to control the amplitudes are linear, whereas the way that harmonics are typically shown in a spectrum analyser is on a log scale: as dBs. 


The spectrum above shows this quite nicely (plus some other interesting things as well, most of which are because this isn't a 'real' triangle wave, but one that I constructed inside Audacity). The fundamental frequency of the waveform is 50 Hz and goes higher than the 0 dB reference level (maybe +5 dB), and the 3rd harmonic at 150 Hz is at about -24 dB, which translates to -19 dB when you add that +5 dB. But the Fourier Synthesizer showed this as 0.11 on the linear scale. It turns out that -19 dB is a voltage ratio of about 0.11. The 5th harmonic is at -33 dB, which is -28 dB when you add the +5 dB, and this is a voltage ratio of 0.04, which matches the Fourier Synthesizer value of 0.04. The 7th harmonic is -41 dB, which becomes -36 dB, which is 0.016, and the Fourier Synthesizer has 0.02. 

So the spectrum analyser harmonic levels are nice numbers, whereas the Fourier Synthesizer harmonics amplitudes are small numbers. I much prefer the spectrum analyser log scale, and here's a chart that shows how the spectrum analyser dBs down relates to the Fourier Synthesizer slider value:


Note that 0.5, which means that you set the Fourier Synthesizer slider to the half way point, corresponds to -6 dB. Anything below -40 dB is a slider value of below 0.01, which means moving that slider by 1 hundredth of its travel, which is a small distance. This table kind of reinforces why the 40 dB Rule mentioned in part 1 of this series exists - the slider values are just tiny, and this means that the harmonics are going to be tiny too. Probably too small to be seen on a screen!
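The conversion behind that chart is just the standard voltage-ratio dB formula. Here's a tiny Python helper (my own utility, nothing to do with the Fourier Synthesizer page) that reproduces the values quoted above:

```python
import math

def db_to_slider(db):
    """Voltage ratio for a dB value: -6 dB -> ~0.5, -19 dB -> ~0.11."""
    return 10.0 ** (db / 20.0)

def slider_to_db(value):
    """Inverse: slider position (linear amplitude) to dB."""
    return 20.0 * math.log10(value)
```

For example, db_to_slider(-40) is exactly 0.01 - which is why anything below -40 dB is almost invisible on a linear slider.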

So if I was going to be designing a Fourier Synthesizer, or an Additive Synthesizer, then I wouldn't use the slider values, because they are typically small and are going to be hard to set easily. Instead I would use the dB values, which are simple numbers and are going to be much easier to set correctly.

Conclusions (so far)

From this series so far, there are quite a few things that we now know about waveforms and spectrums:

- Anything below -40dB is going to be difficult to see on a screen

- A single cycle waveform might have unusual harmonics in it

- The waveform does not always tell you what harmonics are present

- The spectrum always tells you what harmonics are present

- Spectra can be better diagnostics than waveforms 

- Phase is important for waveshapes, but not what they sound like

- Your ears can only hear changes of phase

- Controlling the level of harmonics should use a log (dB) scale

In the next part, I'm going to talk about noise in single cycle waveforms, and why it doesn't do what you might expect.

---

If you find my writing helpful, informative or entertaining, then please consider visiting this link:

Buy me a coffee (Encourage me to write more posts like this one!)


Synthesizerwriter's Store
 (New 'Modular thinking' designs now available!)

Sunday, 31 January 2021

Scope 'Thru' Box for audio waveform monitoring...

Yes, I'm late with my 'Waveform or Spectrum' blog post: part 4 of the 'Single Cycle' series. In the meantime, to illustrate the preparation that goes into my posts, here's how I made a custom 'Thru' box for my low-cost 'build it yourself' digital oscilloscope, to make it easier to look at audio waveforms on quarter inch jack plugs (or 3.5mm mono jacks for Eurorack...).

'Build It Yourself...'


You may have seen them on Smile.Amazon... - there are quite a few low-cost, single channel, digital oscilloscopes with enough bandwidth for audio but not much more. I bought a kit for a DSO 138mini produced by JYE Tech (www.jyetech.com) that came with a clear plastic case, built it and it has served me well whilst I dither trying to decide which real scope to buy! Maybe I should do a blog post on how doing research aimed at finding the 'right' product can slow down GAS (Gear Acquisition Syndrome) to a crawl...

Anyway, the kit comes with a 'probe' cable which has a BNC connector on one end, and two small crocodile clips with rubber covers on the other. The BNC connector is THE standard input connector for 'scopes', and I don't think I've ever seen any other connector used for this... However, those croc clips aren't very good with audio connectors. It is just about possible to grab hold of a 3.5 mm stereo jack...


...but the grip on a quarter inch jack is precarious, and feels like it is going to spring off at any moment. 


What is needed is some sort of 'Thru' box - a way to connect to a quarter inch jack, whilst not interrupting the audio. So that's what I made...


Unlike previous 'mods' posts, this time I'm going to give a bit more detail about the construction. The circuit is simple: two jack sockets connected in parallel, and a BNC connected to the tip and sleeve. 

You may have noticed that I have used stereo jack sockets. My thought was to make a mono and stereo jack compatible 'Thru' box, but when I started figuring out the circuit, I realised that there was a problem with having a mono and a stereo jack plugged into the box if I used stereo sockets, so I reverted to using the jack sockets as if they were mono.

The Circuit

The circuit is very simple: the sleeve and tip of the two sockets are connected together, and also to the BNC socket that connects to the oscilloscope. On the sockets, these are the two outer connections, which makes them easy to solder.

When a mono jack plug is inserted into a stereo jack socket, then the tip, ring and sleeve make connections to the metal connectors. But because the ring connections are not connected to anything else, then they can be ignored. 


I did contemplate going a bit 'British' with my design, by adding in a toggle switch to allow the selection of various different sockets to be connected to the BNC 'output' socket, but I eventually decided that the vast majority of sockets used in synthesizers and pedals (and my studio, which had a big influence) were quarter inch jacks.

For use with most modular synths, then there isn't a problem - Eurorack, Moog, Roland (and others) all use mono jack plugs and sockets (1/4 inch or 3.5mm - so just replace the 1/4 inch jacks with 3.5mm!), whilst Buchla (and my own first modular synth) use Banana plugs (or 4mm, as I knew them), and a Thru connection panel for 4mm plugs and sockets is trivial. And yes, I know that there are other connection systems used in modular synths: Wiard use Bantam plugs, ARP uses either 3.5mm mono jacks (2600, Odyssey, Avatar, Little Brother, etc.) or matrix slide switch panels (2500, etc.), and EMS use patch pins (which are actually tiny jack-like connectors - you can put resistors between the tip and sleeve!) for the Ghielmetti matrix patch panels, and there are others. I'm sure there are ways to connect an oscilloscope to these other modulars, but it is out of 'scope' for this blog post. (Did you see that pun?)

If you want to make a 3.5mm Jack version for use with Eurorack, then most of what follows is probably going to be useful, only the size of the sockets is different!

Drilling

Drilling holes for jack sockets is easy, but when they are going inside a small cast metal box, advance planning is needed to ensure that the jacks will actually fit, so I pre-arranged the two jack sockets inside the (50x50x31mm) box and determined that they would fit, as well as the BNC socket. There are two sets of horizontal marks because I initially centred the sockets, and then realised that I also needed to leave room for the BNC socket, so I moved the two jack sockets lower to one side.


You can get a smaller box (52x38x31mm) than this one, and getting all three sockets into that would be more of a challenge - this would probably be worth looking into for a commercial design, because the smaller box is cheaper. I have always had a soft spot for these die-cast metal boxes, and it is interesting to see that some guitar pedals (at the 'boutique' end of the market, in many cases) make a point of deliberately using that bare metal look. 


Having decided that I could fit the two sockets inside the diecast box, I marked the positions for the holes and drilled small pilot holes first, then a larger hole, and finally the 10mm hole. I used a 10mm drill because it was the largest drill bit that I had immediately to hand, and so I needed to increase the size a little so that the 10.5mm diameter jack sockets would fit. For this, I mis-used my trusty de-burring tool - going slightly beyond just taking the rough edges off the drilled holes quickly opened the holes up enough for the sockets to pass through.

I then used the box and holes as a jig to check that I would be able to solder the connections. It is always a good idea to pre-assemble things before committing to the final assembly. 


 Here are the two sockets, using the drilled box as a jig, and you can see that it is easy to bend the legs (these are PCB-mount jack sockets, which I'm mis-using!). This version was my first attempt and so I bent all three pins of the stereo sockets. For the circuit as shown above, you only need to bend and solder the tip and sleeve.

When I marked up the holes inside the box before drilling, I had noticed that the space inside the box was quite cramped, and I worried that the other pins (the NC (normally closed) pins) would short against the case. This shouldn't be a problem, because the contacts open when you insert a jack, but it seemed like a good point to mention heat-shrink insulation, which is the preferred way of stopping metal from touching other bits of metal, and of preventing fingers and other objects from touching metal. The thin plastic adhesive tape known as 'insulation tape' or 'electrical tape' is not really the ideal stuff for doing this, despite the name: the stickiness fades with time and the tape unravels. In more than 40 years working in electronics, I have never, ever seen insulation tape used to insulate anything!


In the above photo, you can see the un-shrunk heat-shrink sleeving on the top right (just cut it to length), the heat gun at the left-hand side, and the sockets in the middle, with the NC pins shrouded in heat shrink. A more conventional use for heat shrink would be inside a jack plug, where you would use it to cover the solder connection to the tip terminal, so that if the braid shielding should come loose with use, it can't touch any of the metal associated with the tip, where the signal is...


I then used the box as a jig again, and soldered the corresponding pins of the two jack sockets together. As I've already mentioned, you only need to solder the tip and sleeve pins together. A perfectionist might put heat-shrink sleeving on these joints too...

Next, the BNC socket. This is a little more complex, because the required hole is not round. BNC connectors lock by twisting the plug onto the socket, so there's a lot of twisting force on the socket, and if it isn't securely held in place, then it will gradually loosen over time. Here's the hole that is needed for the BNC socket that I got, plus how I made it:


Stage 3 shows the final hole. It has two flat parts that would not be present if you just drilled a 9mm hole. So, starting at stage 1, I drilled a pilot hole, then a 5mm hole, then an 8mm hole. I then used a rectangular cross-section needle file to remove the metal shown in yellow - the top and bottom of this cut become the flat parts of the final hole. In stage 2, I used a half-round needle file to remove the metal shown in yellow - avoiding the two flat parts that were produced in stage 1.


This resulted in the final hole, as shown in stage 3 (and above!). When the BNC is mounted in this hole, the two flat parts of the hole prevent the BNC socket from twisting, and so ensure that it stays secure in the box.  


Most PCB-mount sockets (like the jack sockets) don't have flat areas on the holes they require, because the PCB holds them in one orientation. 

The strangely shaped pliers that I used to tighten up the jack sockets are called Wilkinson Quickgrips. They were originally produced for British Telecom engineers as an easy way to tighten nuts of a variety of sizes. These days they sometimes turn up at collectors' fairs - I got this pair at a fair in Rufford, Lancashire.


For the BNC socket, I soldered a wire to the centre terminal...


and another wire to the washer tag. I use Blu-Tack as a way of holding things down when I am soldering them. It helps prevent burnt fingers!


I then stripped the ends of the wires with a wire-stripper tool... and twisted the strands...


...and 'tinned' the end of the wires by melting a little bit of solder onto them using a soldering iron. Tinning wires makes it easier to solder them to other bits of metal. In this case, I needed to connect these wires to the pair of jack sockets, so the tinning would make this easier.


Here's the inside of the box once all the soldering was done. 


I then tested the thru box with my Studiospares UCT2 cable testing box to make sure that I didn't have any short-circuits, bad solder joints or other problems. 


And here's the Thru Box connected to the BNC cable, ready for use.


And here the box is, being used to check a resonant single cycle waveform... In this case, I haven't used the 'thru' - I have just taken the output from a sample player and connected it to the Thru Box. If I wanted to hear it, then I would just use a jack cable to connect the other jack socket to an amplifier. 

Links




---

If you find my writing helpful, informative or entertaining, then please consider visiting this link:

Buy me a coffee (Encourage me to write more posts like this one!)


Synthesizerwriter's Store (New 'Modular thinking' designs now available!)