Sunday, 28 February 2021

The Synthesizerwriter M4L Tape (etc.) Suite - crafted for Christian Henson of Spitfire Audio

Every so often, Christian Henson, one of the two founders of Spitfire Audio, publishes a YouTube video where he talks about guitar pedals. In the past, he has talked about pedals like the Strymon Blue Sky, the Gamechanger Audio Plus pedal, and lots of tape emulation pedals, including this recent one inspired by 70s/80s videotape:

A Video Stomp Box...Really?

Christian's most recent video built on all of his back catalogue of using pedals to make music:

Making Cinematic Music with Guitar Pedals

Which got me thinking - why not put together a suite of my Max For Live devices for Ableton Live, specifically targeted at the distortions and modulations that are found in tape machines, digital echoes and other audio storage/processing/playback devices? So not just tape, but ANY analogue or digital processor. And hey, I could dedicate it to Christian Henson!

The Synthesizerwriter M4L Tape (etc.) Suite

The obvious starting point was my Ironic Distortion M4L device, which produces distortions and perturbations like aliasing, intermodulation, and quantisation noise, as well as mains power modulation - all of which can be used to degrade audio in a variety of ways that emulate analogue or digital processors.

Ironic Distortion - blog post

Ironic Distortion - M4L.com

There are plenty of Saturation devices in M4L and VST formats, so I leave that choice to your own preference, but there was one glaring hole in my plan. I was lacking something to do Wow and Flutter, essential for tape emulation, plus I didn't have anything that simulated a broken power supply driving a digital processor... So I created one, called Ferrous Modulation.

Ferrous Modulation


If a layout works, then re-use it. This rule works for guitar pedals, so I'm quite happy to re-use, in Ferrous Modulation, the legendarily crazy user interface from the Ironic Distortion M4L plug-in. From left to right, you have Wow, Flutter, Mains Modulation, and Input sections. In each section, there is a slider/meter that sets the output level for that section, complete with a huge Mute button.

So the Wow section has a mute button with 'W' on it, for 'Wow'. Above it is a control strip, with controls for the Frequency of the Wow and for how much smoothing is applied to it, plus a display of the smoothed wow waveform. Then there is a stereo skew switch and rotary control, to emulate tape not being guided accurately, or being pulled inconsistently by the capstan and pinch wheel roller. Two further switches take us above and beyond normal tape systems: a Phase switch that lets you put the wow in phase or out of phase (extreme skewing and tape stretching), and a 'Sideband' switch which lets you choose single or double sideband outputs (tape machines will normally be single). Finally, there's a Gain control which sets the amount of wow that is applied, from subtle to overkill. Underneath the control strip are two real-time displays: the spectrum of the processed audio signal, and the sonogram (where time is horizontal, frequency is vertical, and spectral amplitude is colour).

Next up is the Flutter section, this time with an 'F' on the mute button. I've categorised flutter as being more cyclic than the band-limited noise that I've used for wow - there isn't any really definitive classification that I could find (most modern approaches to measuring wow and flutter treat them as just two different aspects of the same frequency modulation). So the first rotary control is for the Frequency of the flutter waveform, then there's a Smooth control (which makes no sense for a sine wave, but there you go), then a waveform selector which provides 10 waveforms, plus smoothed variations, followed by a waveform display. Then there are the same Skew, Phase and Sideband controls as before, plus the Gain control. Oh, and of course, the slider/meter sets the amount of processed signal that goes to the output.
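If you want a feel for what is going on under the hood, here's a minimal sketch in Python of the general technique - pitch modulation via a wobbled delay line - where the wow is smoothed noise and the flutter is a cyclic LFO. To be clear: this is my illustrative assumption about the approach, not the actual Ferrous Modulation patch (which is built around frequency shifting - see 'Modular Equivalents' below), and all the constants are made up for the example:

```python
# Illustrative sketch only - NOT the Ferrous Modulation patch.
# Wow and flutter as pitch modulation: a fractional delay line whose
# delay time is wobbled by smoothed noise (wow) and a sine LFO (flutter).
import numpy as np

SR = 44100
t = np.arange(SR * 2) / SR                      # two seconds
audio = np.sin(2 * np.pi * 440 * t)             # test tone to degrade

# Wow: slow band-limited noise, made by heavily smoothing white noise
rng = np.random.default_rng(0)
wow = np.convolve(rng.standard_normal(len(t)),
                  np.ones(SR // 2) / (SR // 2), mode="same")
wow /= np.max(np.abs(wow))                      # normalise to +/-1

# Flutter: a faster cyclic modulation, here an 8 Hz sine (made-up value)
flutter = np.sin(2 * np.pi * 8 * t)

# Delay-time modulation in samples, then read the audio back through a
# linearly-interpolated fractional delay line.
delay = 200 + 60.0 * wow + 5.0 * flutter        # base + wow + flutter depths
read_pos = np.clip(np.arange(len(t)) - delay, 0, len(t) - 1)
i = read_pos.astype(int)
frac = read_pos - i
out = audio[i] * (1 - frac) + audio[np.minimum(i + 1, len(t) - 1)] * frac
```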

The third section is the Mains Modulation section, which mis-labels the slider/meter as 'Level' instead of 'Mains' (I will fix this in the next update), but still has 'M' in the mute button. The control strip this time has a selector switch for 50 or 60 Hz mains frequency, with the Single/Double Sideband toggle underneath. Then there's a Frequency rotary control, for those people whose mains power is not 50 or 60 Hz, then a 'Drive' control to set how much mains frequency modulation is applied to the audio, and a band-pass filter with a Q control to fine-tune the mains waveform (so you can over-drive it, and then tune high to get just the harmonics of the mains). Underneath are the same spectrum and sonogram displays as in the other sections.
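Again, purely as an illustration of the idea rather than a description of the patch (scipy's stock filters stand in for whatever the device really does, and every number is an assumption), over-driving a mains sine grows harmonics, a band-pass filter picks out the part you want, and modulating the audio with it adds hum sidebands:

```python
# Illustrative sketch only - NOT the Ferrous Modulation patch.
# Over-drive a 50 Hz 'mains' sine, band-pass filter it, then use it to
# amplitude-modulate the audio, which adds mains-related sidebands.
import numpy as np
from scipy.signal import butter, sosfilt

SR = 44100
t = np.arange(SR * 2) / SR
audio = np.sin(2 * np.pi * 440 * t)

mains = np.sin(2 * np.pi * 50 * t)       # 50 Hz mains; use 60 for the US
driven = np.tanh(8.0 * mains)            # the 'Drive': clipping adds harmonics

# Band-pass tuned high, to the 3rd harmonic (150 Hz), to get 'just the
# harmonics of the mains' - centre frequency and width are made-up values.
sos = butter(2, [140, 160], btype="bandpass", fs=SR, output="sos")
hum = sosfilt(sos, driven)

out = audio * (1 + 0.5 * hum)            # modulation depth of 0.5
```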

The final section is the 'Input' section, and this allows you to mix in the original unsullied audio signal - the dry signal. I didn't want to confuse users with a normal wet/dry control because there are three wet signals, so I re-used this unusual scheme from the Ironic Distortion. Above the slider/meter are 15 storage boxes, where you can shift-click to save your favourite settings. I like to encourage people to develop their own presets, so I don't provide any at the moment. But I have made one device which does have presets: Octave Remapper.

Octave Remapper - blog post

Octave Remapper - M4L.com

Audio Chain

My recommended chain of devices in your track strip in Ableton Live is:

[Ferrous Modulation] -> [Saturator, etc.] -> [Ironic Distortion]

Remember that there are many saturation and distortion devices that can be used to introduce your own preferred amounts of harmonic distortion, compression, saturation, waveshaping, etc. 

Not a Pedal!

The Suite isn't a hardware pedal, and it isn't available via Pianobook.co.uk (now there's an idea!), but it is free and it is capable of some horrendously bad 'tape'-influenced sounds, plus lots of other 'processed' sounds, many of which are not from equipment as we know it, and some subtle tones as well. 

I'm sure Christian (and you) will have a great time with it!

Getting Ferrous Modulation

You can get Ferrous Modulation here:

     https://maxforlive.com/library/device/7045/ferrous-modulation-ch

Here are the instructions for what to do with the .amxd file that you download from MaxforLive.com:

     https://synthesizerwriter.blogspot.co.uk/2017/12/where-do-i-put-downloaded-amxd.html

(In Live 10, you can also just double-click on the .amxd file, but this puts the device in the same folder as all of the factory devices...)

Oh, yes, and sometimes last-minute fixes do get added, which is why a blog post can sometimes be behind the version number on MaxForLive.com...

Modular Equivalents

In terms of basic modular equivalents, implementing Ferrous Modulation just requires three sections of frequency shifting, with appropriate modulation waveforms: band-pass filtered noise for the wow, a VCO or LFO for the flutter, and an LFO for the mains.

Overall, I reckon that Ferrous Modulation would require an ME of about 7 minimum. You may be able to find a frequency shifter that has built-in modulation sources, in which case it might drop to 3 or 4 ME.
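For the DSP-curious: one common way to code a frequency shifter (my assumption about the general technique, rather than a description of the actual patch) is to build the analytic signal with a Hilbert transform and multiply by a complex exponential - the real part then contains a single-sideband, shifted copy of every component:

```python
# A single-sideband frequency shifter via the analytic signal - a sketch
# of the general technique, not the Ferrous Modulation implementation.
import numpy as np
from scipy.signal import hilbert

SR = 44100
t = np.arange(SR) / SR
audio = np.sin(2 * np.pi * 440 * t)

analytic = hilbert(audio)                 # audio + j * (Hilbert transform)
shift_hz = 3.0                            # a slow, wow-like shift
shifted = np.real(analytic * np.exp(2j * np.pi * shift_hz * t))

# For time-varying wow/flutter/mains modulation, replace (shift_hz * t)
# with the running phase: np.cumsum(modulation_waveform) / SR.
```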

Links

Ironic Distortion - blog post

Ironic Distortion - M4L.com

Octave Remapper - blog post

Octave Remapper - M4L.com

Ferrous Modulation - M4L.com

I would like to thank Christian Henson for his ongoing inspiration, enthusiasm, and for founding Spitfire Audio, who make wonderful samples!

---

If you find my writing helpful, informative or entertaining, then please consider visiting this link:

Buy me a coffee (Encourage me to write more posts like this one!)


Synthesizerwriter's Store
 (New 'Modular thinking' designs now available!)


Saturday, 27 February 2021

Not AI, and real AI - in music...

The people who post fake comments onto blogs annoy me. Except that many of them aren't even people - they are automated algorithms running on computers. An interesting example arrived in my Blog Comments Inbox recently, and it got me thinking about Artificial Intelligence (AI), and its more common predecessor: Artificial Stupidity.

The comment looked almost innocuous at first. It was reasonably well targeted at this blog, because it talked about electronics, and the name was actually a link to a website that claimed to reveal places where you could learn about electronics. The first giveaway was the URL instead of a name, but the web-site itself was a dead giveaway. At first glance it seemed to be a couple of A4 pages-worth of text, talking about how to learn electronics via resources on the InterWeb.

Not AI

But then I read the first paragraph, and then the second. Each seemed like a generic introduction to the topic, but they didn't actually get to anything like recommendations for sites, or URLs... And the second paragraph didn't follow on from the first. In fact, they read like two different authors writing on two slightly different topics. The next two paragraphs were worse in their lack of linking, and in the divergence of styles. As I continued reading, I realised that each paragraph was just text extracted from a search term something like: 'Where can I study electronics online?' and then assembled together on a web page, with lots of associated adverts. Nowhere did it actually get to anything useful like a real URL linking to online resources, nor were there any summary tables of good resources; in fact, there wasn't any useful content anywhere on the page. The whole thing was designed to look good enough to fool someone into thinking it might be a useful thing for blog readers to know about, and so to let the comment appear in the blog. Then the web-page would generate money for its owners whenever anyone clicked on the ads. In other words: a nasty parasite.

Well, I wasn't fooled, and I deleted the comment, and I would advise you to be cautious when you search for phrases like: 'How can I learn about electronics online?', because there are lots of leech sites like the one that I rejected. Alternatively, try these web-sites for proper learning relevant to this blog:

MIT circuits and electronics

MIT Practical Electronics

OU Intro to Electronics

Coursera EM Production

There, that's infinitely more genuine information than there was on that entire web-page. And there are lots more resources for you to find out there! Note that some of these are free, and some are not. The quality of some of the free ones (MIT, for example) is very high!

I reckon that the web-page that I rejected was probably not created by Artificial Intelligence (AI); it felt much more like a simple algorithm (Artificial Stupidity), with maybe some high-level editing by a human being. So 'Not AI' rather than AI. But there are some interesting applications of real AI starting to appear that could affect how you make music in the future...or don't make music...

Real AI

The last couple of years have seen two big trends in electronic music: Cloud Sample Libraries, and AI Assistance. 

Subscription-based sample libraries like Splice, Noiiz, LoopCloud, and Roland Cloud provide access to huge numbers of ready-to-use samples, and mean that you don't need to fill a room with hardware synths, or even fill your computer's SSD or Hard Disk with VSTs. They aren't connected with AI, other than using simple background algorithms to learn what you like and try to sell you more of it. But I'm not a fan, because they typically require you to give them root-level access to your computer, which they justify by saying that they have to protect all of the valuable content which you can download. I'm not happy with giving anything permission to do whatever it likes on my computer. After all, the news isn't full of repeated computer breaches where millions of User IDs, Passwords and Credit Card details are stolen by hackers, so there's no problem with giving deep, unfettered access to your computer, is there?

AI Assistance is more subtle, and I don't know of a generic word for it yet - there aren't enough similar instances for people to need a word - but this doesn't mean that there aren't lots of examples of it out there. It appears as drum machines, or melody generators, or chord suggestions, and it often provides easy access to generated patterns, melodies, chord progressions, etc. These are several steps up from the Randomisation generators that you got back in the 20th Century.

AI Drum Machine - Algonaut Atlas

AI Beat Assistant - Rhythmic

VST Patterns - Sonic Charge Microtonic

One thing to be aware of is that a lot of the cheaper examples of 'AI Music' are actually just Machine Learning (ML), which has become very accessible to programmers recently, and which allows a network of connected nodes (a 'neural network') to learn from pre-prepared training materials and then to output lots of variations of it - give me more 'like this'... ML is kind of 'entry-level' AI...

Unless you make movies, you might not have seen another application of AI, one that has been gradually increasing its advertising: AI-generated music for movies. In other words, if you have made a film or movie and you don't want to pay a human composer to write music for it, then you can get AI to do it for you...

AI Music for Movies - Ecrett

Creative Assistant - Aiva

AI Music Generator - SoundRaw

Broad Application AI

To really appreciate where AI is going, you need to look beyond the 'specific applications' and the often very obvious ML experiments, and go for something more generic. One very good example is OpenAI. When you go to their web-site, they don't try to sell you a solution. Instead they show you a selection of overviews of things that you might be able to use their AI software to do. This isn't just one or two possible applications - you scroll on and on through lots of things they can do. It's a bit like going into a DIY store to buy a screwdriver, and discovering that they sell just a few other things as well... No, scrub that: imagine arriving at an out-of-town retail park where the car park is surrounded on all sides by huge DIY, carpet, furniture, homeware, clothes, electricals, electronics, video gaming, computers and media stores... when you just wanted a screwdriver. OpenAI is an overwhelming place, and I think that it shows a glimpse not of the future, but of how lots of people will be making the future of what you do/buy/watch/play (etc.) next online.

OpenAI

DeepMind

Nnaisence

AGI Innovations

Semantic Machines

I've tried to curate these so that the AI Robot-oriented companies aren't included, but if you want to see the state-of-the-art in robots, then Boston Dynamics are a good starting point...

And finally...

I'm always intrigued by companies whose advertising is based around a competitor. It doesn't seem to be a very 'Intelligent' thing to do. When I did a Google search for 'openai', the top two paid slots, right at the top of the search, were from two other companies also selling AI 'solutions'. I have not included them in the list above. If I'm searching for 'Moog', do I want the top result to be from another synthesizer manufacturer? (Oh, and when I did this search, the top answers were all 'Moog'!)

(Oh, and why did I use those notes in the graphic near the start of this blog?)

---

If you find my writing helpful, informative or entertaining, then please consider visiting this link:

Buy me a coffee (Encourage me to write more posts like this one!)


Synthesizerwriter's Store
 (New 'Modular thinking' designs now available!)


Friday, 26 February 2021

Should I look at the Spectrum, or the Waveform? - [Single Cycle Part 4]

One of the 'useful things to remember' that I have always had in my mind is something that I learned reading through a pile of old 'Wireless World' magazines from a cupboard at the back of the Physics Lab at my school:

Spectra can be better diagnostics than waveforms 

(I'm using 'Spectra' here as the plural for 'Spectrum'. You can replace it with 'Spectrums' if you prefer... I won't tell anyone.)

It was from an article where they described how a project to recreate the sound of a church organ by reproducing the waveform failed, because the result sounded totally different. From the first part of this series, you may now be suspecting that they probably only matched up the 'top' 30 to 40 dB of the sound (the visible bit on a 'scope) - the '40 dB Rule' as I call it. When I've experimented with A/S (Analysis/Synthesis) - the iterative synthesis technique where you analyse the target sound/timbre, get a reasonably close synthesised version of it, subtract the two to get a 'residual', and then synthesize that, and so on - I wondered if you could use this to keep removing layers of 40 dB or so of visibility, getting a better approximation each time...
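Here's a minimal sketch of one A/S iteration as I understand the idea (illustrative only - the test frequencies and the 'keep the strongest partials' criterion are my assumptions, not the Wireless World method): analyse, resynthesize the strongest partials, subtract, and the residual is what you attack on the next pass:

```python
# One Analysis/Synthesis iteration - an illustrative sketch, not the
# original Wireless World project's method.
import numpy as np

SR = 44100
t = np.arange(SR) / SR
# A made-up 'target' with partials spanning more than 40 dB of level:
target = (np.sin(2 * np.pi * 220 * t)
          + 0.1 * np.sin(2 * np.pi * 660 * t)      # -20 dB
          + 0.01 * np.sin(2 * np.pi * 1100 * t))   # -40 dB

spectrum = np.fft.rfft(target)

# Keep only the strongest partials (the 'visible' top of the spectrum),
# zero everything else, and resynthesize.
keep = np.argsort(np.abs(spectrum))[-2:]
approx_spec = np.zeros_like(spectrum)
approx_spec[keep] = spectrum[keep]
approx = np.fft.irfft(approx_spec, len(target))

# The residual contains everything the first pass missed - synthesize
# this next time, and repeat.
residual = target - approx
```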

Anyway, a reasonably good spectrum analyser is going to show you a lot about the spectrum of a sound - and the harmonics that it shows will give you detail well below 40 dB down. But the spectrum isn't perfect either, because it shows the magnitude of the harmonics in a sound, but generally not the phase relationships. As was shown in part two of this series (Single Cycle 2), phase relates to the tiny timing difference between the same point on two waveforms - it could be zero crossings, or positive peaks: anywhere that is easy to compare. Although the horizontal axis is the 'time' axis, many people think of phase more in terms of the shape of the waveform 'sliding' horizontally - which kind of removes the link that is implicit in a 'time waveform'! But this 'sliding' approach does explain how it is possible to have phase differences that are not directly related to time - if you take a waveform and invert it, the two waveforms are then 'out of phase' even though neither of them has moved in time (although it might take a finite amount of time for the inversion to happen, of course!)

Where this gets interesting is when the waveform is not symmetric. If you invert a sawtooth, then what does 'phase' mean? The zero crossing position gives a reasonably neat alignment of the sawtooth waves, but using the positive peak is confusing, and it would be better to use the fast 'edge' between the positive and negative peaks - but is this then ignoring the time for that fast edge? So should the zero crossing in the middle of the fast edge be used?

When the waveform is even less symmetric, neither peaks nor zero crossings may be a viable choice for a reference point. In the example above, inverting the waveform means that the positive peaks are different, and there are two candidate zero crossings. When waveforms are this different, phase starts to lose any meaning or value for me... Of course, you could use the fundamentals of the two waveforms, in which case the inverted waveform would be seen as out-of-phase or inverted.

Phase is important in filter design (in loudspeaker crossovers, for example), in noise cancellation (two anti-phase signals will cancel out to give silence, although getting two precisely out-of-phase signals is not very easy in a large volume in the real world), and in creating waveforms (in additive synthesis, for example). It turns out that phase can be very important as a diagnostic tool: a visually smooth filter cut-off might well be hiding a phase response that goes all over the place.
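The noise-cancellation point is easy to demonstrate in a few lines of numpy (a toy demonstration, of course - in a real acoustic space, getting the alignment is the hard part):

```python
# Anti-phase cancellation: a signal plus its inverted copy is silence.
import numpy as np

t = np.arange(44100) / 44100
x = np.sin(2 * np.pi * 440 * t)
anti = -x                               # 180 degrees out of phase
print(np.max(np.abs(x + anti)))         # 0.0 - perfect cancellation
```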

Why is Phase Important?

The standard example to show why 'phase is important' is to take a 'square'-ish waveform made from a few odd harmonics, and to change the phase of one of them. Suddenly the square wave isn't square any longer... 

What has always fascinated me is the number of harmonics that are required to get waveforms that are close to the mathematically perfect, sharp, linear wave shapes that you see in text books. In the example above, 23 harmonics are used to make a 'wobbly' square wave - although, of course, because a square wave is made up of odd harmonics only, there are not 23 actual sine waves used, since just under half of them have zero amplitude.

So when the phase of the third harmonic (three times the frequency of the fundamental) changes, two things happen. Most text books will show the changed waveform, and will note that it still sounds like a square wave (the harmonics are the same...). But it is more unusual for there to be any mention of the change in the peak amplitude - the 'F3 out of phase' waveform on the right hand side is about 50% bigger, peak-to-peak, than the 'conventional' square wave approximation on the left hand side. It turns out that changes in the phase of harmonics can affect the shape and the peak-to-peak amplitude, and more: the phase of the harmonics can be used to optimise a waveshape for some types of processing, although this is normally used in applications like high power, high voltage electricity distribution rather than audio.
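You can check that 'about 50% bigger' claim yourself in a few lines of Python (my own quick sketch - the 23-harmonic construction matches the example above, and the exact numbers depend on the harmonic count):

```python
# Build a 23-harmonic square wave approximation, flip the phase of the
# third harmonic, and compare peak-to-peak levels. The magnitude spectrum
# is identical in both cases - only the phase (and so the shape) changes.
import numpy as np

t = np.linspace(0, 1, 44100, endpoint=False)
odd = range(1, 24, 2)                   # odd harmonics 1, 3, ... 23

square = sum(np.sin(2 * np.pi * k * t) / k for k in odd)
flipped = sum((-1 if k == 3 else 1) * np.sin(2 * np.pi * k * t) / k
              for k in odd)

print(np.ptp(square))    # about 1.85
print(np.ptp(flipped))   # about 2.9 - roughly 50% bigger peak-to-peak
```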

But this 'phase is important to the shape of the waveform' principle applies to any waveform, and this can give surprising results. Take a triangle wave: it has only odd harmonics, and they drop off rapidly with increasing frequency, so the triangle really is what it sounds like: a sine wave with a few harmonics on top. Now you are probably intrigued by this, and ready to explore it yourself, so there's a very useful online resource at: http://www.mjtruiz.com/ped/fourier/ It is an additive synthesizer that lets you explore the amplitude (volume/size/value) of harmonics, as well as their phase! (This is called a Fourier Synthesizer, after the Fourier series, which is the mathematics behind adding different sine waves together to give waveforms...)

Here's a screenshot of a triangle wave produced using the Fourier Synthesizer from M J Ruiz:

I have edited the colours of the sliders to emphasize the harmonics which have zero amplitude (black), the harmonics which are 'in phase' with the fundamental (blue), and the harmonics which are 'out of phase' with the fundamental (orange). In-phase is shown as a value of 0 in the screenshot - meaning zero degrees of phase, where a complete cycle would be 360 degrees. Out-of-phase is shown as 180 degrees - half way round a cycle of 360 degrees.


 The screenshot above shows an unedited view of the same triangle wave, but with the phases changed so that all of the harmonics are in-phase. The result is more like a slightly altered sine wave than a triangle wave - but it sounds like a triangle wave...

Earlier I pointed out that the sound of a square wave with the third harmonic changed in phase was the same as a square wave with no phase change on the third harmonic. It turns out that your ears are not sensitive to phase relationships of this type, and so the square waves, and the triangle waves, all sound the same regardless of the phase relationships of the harmonics. BUT if you change the phase of a harmonic in real-time, then your ear WILL hear it. Static phase relationships between harmonics are not heard, but changes in phase are...

If you think about it, then this is not as surprising as it might at first sound. Your ears are very good at detecting changes of phase, because that's how they know what frequency they are hearing! But fixed differences in phase just change the shape, and your ears don't pick that up. One possible explanation for this is that your ears evolved as they did because the harmonic content of sounds was important for survival (maybe locating sources of food, or danger!), but the shape of the waveform was not. Discovering that the human hearing system is not optimised for sound synthesis may be a disappointment for some readers...

One other thing that you may have noticed in the Fourier Synthesizer screen-shots is the small amplitudes of the harmonics for the triangle wave. The sliders used to control the amplitudes are linear, whereas the way that harmonics are typically shown in a spectrum analyser is on a log scale: as dBs. 


The spectrum above shows this quite nicely (plus some other interesting things as well, most of which are because this isn't a 'real' triangle wave, but one that I constructed inside Audacity). The fundamental frequency of the waveform is 50 Hz and goes higher than the 0 dB reference level (maybe +5 dB), and the 3rd harmonic at 150 Hz is at about -24 dB, which translates to -19 dB when you add that +5 dB. But the Fourier Synthesizer showed this as 0.11 on the linear scale. It turns out that -19 dB is a voltage ratio of about 0.11. The 5th harmonic is at -33 dB, which is -28 dB when you add the +5 dB, and this is a voltage ratio of 0.04, which matches the Fourier Synthesizer value of 0.04. The 7th harmonic is -41 dB, which becomes -36 dB, which is 0.016, and the Fourier Synthesizer has 0.02.
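If you want to check that arithmetic, the conversion is just ratio = 10^(dB/20). And for an ideal triangle wave the odd harmonics fall off as 1/k², which gets you very close to the Fourier Synthesizer slider values above (my Audacity triangle isn't quite ideal, which is why the measured 7th harmonic comes out a little lower than the ideal value):

```python
# dB <-> voltage ratio, applied to the triangle wave's harmonics.
import numpy as np

print(10 ** (-19 / 20))              # 0.112 - matches the 0.11 slider value
print(10 ** (-28 / 20))              # 0.040
print(10 ** (-36 / 20))              # 0.016

for k in [3, 5, 7]:                  # ideal triangle: amplitudes fall as 1/k^2
    amp = 1.0 / k ** 2
    print(f"harmonic {k}: {amp:.3f} linear = {20 * np.log10(amp):+.1f} dB")
# harmonic 3: 0.111 linear = -19.1 dB
# harmonic 5: 0.040 linear = -28.0 dB
# harmonic 7: 0.020 linear = -33.8 dB
```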

So the spectrum analyser harmonic levels are nice numbers, whereas the Fourier Synthesizer harmonic amplitudes are small numbers. I much prefer the spectrum analyser's log scale, and here's a chart that shows how the spectrum analyser's 'dBs down' relate to the Fourier Synthesizer slider values:


Note that 0.5, which means that you set the Fourier Synthesizer slider to the half way point, corresponds to -6 dB. Anything below -40 dB is a slider value of below 0.01, which means moving that slider by one hundredth of its travel - a very small distance. This table kind of reinforces why the 40 dB Rule mentioned in part 1 of this series exists - the slider values are just tiny, and this means that the harmonics are going to be tiny too. Probably too small to be seen on a screen!

So if I was going to be designing a Fourier Synthesizer, or an Additive Synthesizer, then I wouldn't use the slider values, because they are typically small and are going to be hard to set easily. Instead I would use the dB values, which are simple numbers and are going to be much easier to set correctly.

Conclusions (so far)

From this series so far, there are quite a lot of things that we now know about waveforms and spectrums:

- Anything below -40dB is going to be difficult to see on a screen

- A single cycle waveform might have unusual harmonics in it

- The waveform does not always tell you what harmonics are present

- The spectrum always tells you what harmonics are present

- Spectra can be better diagnostics than waveforms

- Phase is important for waveshapes, but not what they sound like

- Your ears can only hear changes of phase

- Controlling the level of harmonics should use a log (dB) scale

In the next part, I'm going to talk about noise in single cycle waveforms, and why it doesn't do what you might expect.

---

If you find my writing helpful, informative or entertaining, then please consider visiting this link:

Buy me a coffee (Encourage me to write more posts like this one!)


Synthesizerwriter's Store
 (New 'Modular thinking' designs now available!)