Sunday, 23 February 2020

DAWless or Ableton Live Only - Inspired by a Ricky Tinez YouTube Video

There are a lot of electronic music resources available on YouTube, and some of the best, in my humble opinion, are the videos published by Ricky Tinez on his channel:

Ricky Tinez channel screenshot

It was a video from Ricky published on the 4th of January 2020 that caught my attention recently, and it inspired a series of exploratory experiments. The video starts out by looking at the 1010 Music Blackbox sample player (and more - almost any short description is going to be a total understatement of what that box can do...), but it segues into a live jamming session using Ricky's live-performance-oriented modular synthesizer, where he syncopates a melody line, and uses rhythmic timbre variation to give a compelling groove with lots of easily accessible variations.

Ricky Tinez Video screenshot

Ricky's video uses the 'live performance, hardware only: no computer' buzz-word 'DAWless', which got me wondering if you could do something similar in Ableton Live 10 Suite, because I'm a contrariwise sort of person. As even more of a challenge, I wondered what would be possible without using Max For Live, because my immediate thought was that I could program up a Max For Live plug-in to do a lot of the rhythmic control. (I'm using the 'Suite' edition of Live because I like and use the software instrument plug-ins a lot, but you could easily replace them with VSTs or samples via Simpler, the sample player included in the Standard and Lite editions of Ableton Live.)

Limitations are the spur of creativity. That's what they always tell you in moments of desperation - like when the only sound source you have is a referee's whistle and you have to produce a TV soundtrack. So by deliberately limiting myself to the factory presets in Ableton Live 10 Suite and with no Max For Live, I was removing all of my usual 'Get Out of Jail Free' cards. Well, prop-less and some time later, I'm pleased to report that a DAW can produce some interesting emulations of some of the workflows that you would normally associate with a small modular synth. That strange whooshing sound that you can hear is me swimming against the tide, because this is the opposite of 'DAWless' - more like 'DAW Only'.

There's a YouTube video that shows a screen capture of me showing the results here (and I have made the Ableton Live .als file available for download as well). I'm going to follow approximately the same flow as the video in this blog post...

DAW Only

I started with a basic drum underpinning using the factory 'Core 808' drum kit that comes with Ableton Live 10, followed by a factory reverb and, just in case, a factory Auto Filter that I might use later to give some variation to the drum sound. (As it turned out, I didn't need it, but the effect is there, ready and waiting if I ever do.)

Drum synth screenshot

The drum pattern itself is not sophisticated: a single bar, repeated. For tech demos, I think the detail should only be where you need it.

Drum notes screenshot

The bass line is more underpinning: just four notes, single bar, repeated. Just for fun, there's a little bit of MIDI velocity variation, but that's something I do automatically without thinking, and once I'd done the clicking and dragging, I left it in place.

The bass is just a modified factory preset for Ableton's 'Analog' VA 'Virtual Analogue' synth from AAS (Applied Acoustics Systems), followed by a little bit of factory reverb. I know that there's a specific factory 'Bass' synth plug-in, but I just like Analog... In front of the synth, there's a factory Velocity MIDI plug-in that is turned off, but it is there so I can rapidly add a bit of extra velocity variation if I feel that it is needed. Again, this is just an insurance policy against boredom setting in. Confidence in live performance is all about knowing that you have done the preparation, and you are ready for any eventuality. Foreseen difficulties mean that you can have pre-prepared mitigations waiting to be activated, and everything can proceed calmly and smoothly.

Bass notes screenshot

The bass line isn't going to win any prizes, but it is just background supporting material, and so the more it lurks in the background, the better.

The final part of the backing tracks is a couple of chords, so that there's more character to the background than just drums and bass. I used a factory preset for the AAS Collision physical modelling synth that you get in Ableton Live 10 Suite - because I love marimbas and their ilk. (Let me know when you get sick and tired of me putting 'factory' in front of everything!)

Marimba synth screenshot

There's a MIDI 'Random' plug-in in front of the Collision PM synth, set up so that I can add random octave transpositions if I think that the chords are too boring.

Marimba synth effects screenshot

After the Collision PM synth, there are three effects, a delay to give the chords a bit of rhythmic interest, then a factory chorus to round the marimba sound out a bit, and finally a factory reverb to put the chords back in the mix.

Marimba synth notes screenshot

I so wanted to use my Max For Live chord utility here! But that would have been total overkill for two simple chords. Once again, there's nothing complex or clever here, just plain C major and an inverted C sus 4 (I've always had a weak spot for suspended chords!). That completes the backing tracks.


Which brings us to the start of what I hope is going to be the interesting bit, where I try to get the same sort of syncopated notes and rhythmic timbre variations as Ricky got using his modular synth. My starting point is a factory AAS Analog VA synth again, using a factory preset, preceded by an 'if I need it later' factory Random plug-in for octave variation, an 'if I need it later' factory Velocity plug-in for MIDI velocity variations, and another factory Velocity plug-in for doing the gating for the syncopation:

Melody synth screenshot

After the synth plug-in, since I'm only going to be using monophonic sounds, we can go West-Coast and have a factory Saturator plug-in to do 'wave-shaping' timbral changes, then a limiter to keep the volume under control, followed by a factory Ping-Pong Delay and a factory Reverb. Once again, just in case, there's an LFO in there, but I'm not going to use it now, since that would break the 'No Max For Live' limitation.

Melody synth effects screenshot
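If you haven't met 'wave-shaping' before, the idea is just a transfer function applied to each sample. Here's my own minimal sketch of the general idea, using a tanh soft clipper - this is an illustration only, not the actual transfer curve that Ableton's Saturator uses:

```python
import math

# Minimal waveshaping sketch (my illustration, NOT Saturator's actual
# curve): a tanh shaper adds harmonics as 'drive' increases.
def waveshape(sample, drive):
    """Soft-clip a sample in [-1, 1]; higher drive = more saturation."""
    return math.tanh(drive * sample)

# At low drive the curve is nearly linear (little timbre change);
# at high drive it flattens, which is where the extra harmonics come from.
quiet = waveshape(0.5, 1.0)   # nearly linear
hot = waveshape(0.5, 10.0)    # heavily saturated, close to 1.0
```

Driving the 'drive' amount rhythmically, as I do later with a clip envelope, is what turns this static curve into a timbral groove.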

The 'Melody' line, and I hesitate to promote it to such dizzy heights, is more of the 'straight-forward' mind-set - just eight notes with an interval of an octave:

Melody synth notes screenshot

For the syncopation function, we need to be able to turn these notes on and off easily, using a simple control mechanism. One easy approach is to use the factory MIDI Velocity plug-in and to control the 'Out Hi' parameter. The 'Out Hi' rotary control sets the maximum MIDI velocity for a note that passes through the Velocity plug-in - but if you set it lower, then the maximum velocity goes lower, and if you set it to zero, then no notes pass through at all, because a MIDI velocity of zero is the equivalent of a Note Off message. So if the 'Out Hi' parameter is set to 127, then all notes will pass through unchanged, whereas if it is set to zero, then no notes pass through.
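In code terms, the gating behaviour can be sketched like this. This is a deliberate simplification (my own illustration, not Ableton's implementation - the real Velocity device rescales a whole range, whereas here I just treat 'Out Hi' as a ceiling):

```python
# Rough sketch of the 'Out Hi' note gate. A velocity of 0 is equivalent
# to a MIDI Note Off, so the note is gated out entirely.
def gate_note(velocity, out_hi):
    """Limit an incoming MIDI velocity to out_hi; None means 'no note'."""
    limited = min(velocity, out_hi)       # clamp to the 'Out Hi' ceiling
    return limited if limited > 0 else None

# 'Out Hi' at 127 passes everything; at zero, nothing gets through.
passed = gate_note(100, 127)   # note passes with velocity 100
gated = gate_note(100, 0)      # note never reaches the synth
```

The clip envelope described next simply moves that ceiling between 127 and zero in time with the music.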

The clip envelope is one of the newest features of Ableton Live, and is the key to doing the syncopation. All you do is create an up-and-down rectangular set of steps, going between zero and 127, and set this to control the 'Out Hi' parameter in the Velocity plug-in:

Clip envelope note gating diagram 1

The diagram shows the clip envelope that is controlling the 'Out Hi' parameter in the Velocity plug-in, and it just jumps up and down between zero and 127. The Velocity plug-in then processes the MIDI notes going to the Analog synth. If I annotate the clip envelope with ticks for when the value is 127, and with crosses for when the value is zero, then we get a clearer picture of what the clip envelope is doing:

Clip envelope note gating diagram 2

So the clip envelope drives the Velocity plug-in, which 'gates' the notes before they are received by the synth - in the example shown, the fourth and eighth notes in the melody never get to the synth. If the clip envelope shape changes, then the gating will change - and editing clip envelopes is easy!

One really easy way to change a clip envelope is to 'Unlink' it and then change the length:

Clip envelope example screenshot

In the screenshot above, the clip envelope is set to a length of slightly less than a bar: 3 beats and 3 'ticks' (sixteenths). So each time the bar repeats, the clip envelope will be a tick early. After four bars, the clip envelope will be a beat early, and after 15 bars (16 repeats of the envelope), the clip envelope will be back in sync with the bar. As the clip envelope moves around in the bar, the notes that are allowed through the 'gating' function will change, so there will be 15 patterns of notes gated from the melody: one per bar.

If we set the clip envelope length to 3 beats, then the cycle is much shorter - just three bars (four repeats of the envelope) - and so on. Just setting the clip envelope length, or moving one or two of those '127' high values, will change the notes that will be produced by the 'Melody' synth. And if you hover the mouse just underneath the top of one of those high clip envelope values, it will change colour (to blue) and you can then move it vertically as if it were a slider, so moving it between zero and 127 is easy. You can hear this note gating happening live in the video, as various lengths of clip envelope give changing patterns of notes played from the melody.
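The arithmetic behind all of this is just lowest common multiples. Here is my own sketch of the calculation, assuming a 4/4 bar of 16 sixteenth-note 'ticks' (the counts come out slightly differently depending on whether you count bars or envelope repeats, which is easy to trip over):

```python
from math import lcm

# How long until an unlinked clip envelope realigns with the bar?
# Returns (bars, envelope repeats) until both cycles coincide again.
def resync(envelope_ticks, bar_ticks=16):
    total = lcm(envelope_ticks, bar_ticks)
    return total // bar_ticks, total // envelope_ticks

short_env = resync(15)   # 3 beats + 3 ticks: (15 bars, 16 envelope repeats)
three_beats = resync(12) # 3 beats: (3 bars, 4 envelope repeats)
linked = resync(16)      # exactly one bar: (1, 1) - no drift at all
```

Each bar in that cycle starts with the envelope at a different phase, which is why every bar gates a different set of melody notes.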

Timbral changes

For timbral variation, a second clip envelope can be assigned to the 'Drive' parameter in the factory Saturator plug-in audio effect. This time, the clip envelope looks much more conventional - just a sloping line or two:

Clip envelope example screenshot

Note that this clip envelope is also unlinked, and it is set to 1 bar and 1 tick, so it is going to be later by one tick for each repeat of the bar, which gives 16 different times during the bar when that 'rise and dip' is going to happen. Once again, changing the length of the clip envelope is going to give different timings for when the Saturator Drive hits the maximum value from the clip envelope.

Clip envelope example screenshot

The screenshot above shows a clip envelope with a length of one tick less than a bar, so this time the peak is going to happen earlier and earlier in the bar. The two clip envelopes (gating and saturator) are separate, so you can set them to different lengths and they will just repeat away, giving complex rhythmic changes in timbres and syncopated melody notes gated from the melody.
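The direction of the drift is just modular arithmetic. Here's my own sketch of it, again assuming 16 ticks per bar: an envelope one tick longer than the bar pushes the peak one tick later each bar, and one tick shorter pulls it one tick earlier:

```python
# Track where an unlinked envelope's peak lands within each bar.
# envelope_ticks: length of the envelope; peak_offset: peak position
# within the envelope; bars: how many peaks to compute. (Illustrative
# sketch only - Live does all of this for you.)
def peak_positions(envelope_ticks, peak_offset, bars, bar_ticks=16):
    positions = []
    for peak_index in range(bars):
        absolute = peak_index * envelope_ticks + peak_offset
        positions.append(absolute % bar_ticks)  # position within the bar
    return positions

later = peak_positions(17, 0, 4)    # 1 bar + 1 tick: drifts later each bar
earlier = peak_positions(15, 0, 4)  # 1 bar - 1 tick: drifts earlier each bar
```

Because the gating envelope and the Saturator envelope have independent lengths, their drifts never have to line up, which is where the complexity comes from.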

Controlling these variations is easy: change the length of one of the clip envelopes, or edit the clip envelope. All of this can be done, glitch-free (unless you over-drive the Saturator!), live during performance - which can be seen and heard in the video.

Marimba delay...

The Marimba chords have a little 'busy' or 'pickup' motif added by using a clip envelope to control the Wet/Dry mix of the Delay plug-in:

Clip envelope example screenshot

The blue highlighted section can be moved up and down as if it was a 'slider'-type control:

Clip envelope example screenshot

So turning it on and off is easy! But what is it doing?

Delay screenshot

The clip envelope is controlling the Wet/Dry mix of the factory Ping-Pong Delay, and this is the first time that a clip envelope has been 'linked' so that it runs at the same rate as the main bar timing! So for most of the bar, the wet/dry mix is set low, and you only get faint ping-pong echoes, but for the last beat, the wet/dry mix goes very high and there are lots of echoes, and then at the end of the bar they all go away again. It adds an interesting variation to the playing of the chords. Can you figure out what would happen if this clip envelope was unlinked?
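As a sketch, the linked 'pickup' envelope is just a function of the beat position within the bar. The 0.1 and 0.9 values here are illustrative numbers of my own, not the exact levels drawn in the .als file:

```python
# Linked clip envelope sketch: mostly dry, then the delay mix jumps up
# on the last beat of a 4-beat bar (beats numbered 0..3).
def wet_dry_mix(beat_in_bar):
    return 0.9 if beat_in_bar >= 3 else 0.1

bar_of_mixes = [wet_dry_mix(b) for b in range(4)]  # low, low, low, high
```

Because the envelope is linked, this pattern repeats identically every bar - unlink it and the 'pickup' would start drifting around the bar, just like the gating and Saturator envelopes.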

(In the .als file, there's a double 'pickup' example!)

Getting the YouTube video and the Ableton Live .als file

You can see the video here.

You can download the .als file here.

screenshot of ALS file contents

Modular Equivalents

Ricky's video is a good source for seeing one way to do similar transformations using a modular synthesiser, but there are many ways to achieve similar results in a modular or a DAW, and so assigning ME values is not a good indicator in this case.

Links in this blog post

1010 Music Blackbox

AAS (Applied Acoustics Systems)

My YouTube video

Ricky's video


I must thank Enrique Martinez for his help in making the video and this blog post possible. His reply to my initial email happened amazingly quickly!

If you find my writing helpful, informative or entertaining, then please consider visiting this link:

Friday, 21 February 2020

128 oscillators in a Max For Live plug-in...

Every so often, I produce a sound generator. When I was a kid it was neon relaxation oscillators or cascaded unijunction oscillators, but these days they tend to be implemented in Max For Live. Since BankOSC, I've been exploring some ideas to see if I could come up with another simple but powerful user interface to Max's 'ioscbank~' object, and I'm going to share the result with you now.


INSTframeOSCmr builds on INSTbankOSC, but quadruples the number of oscillators, has two independent sound generating channels, and, as with many of my generators, there is a ring modulator in the output stage. There are now 64 oscillators in the oscillator banks, and a simpler user interface - BankOSC was cool, but it wasn't straight-forward. FrameOSC has two channels (A and B), each of which contains two 'Frames'. The frames contain frequency and amplitude lists for the 64 oscillators, and you can control the rate at which the frames are loaded into the oscillator banks - as well as how long it takes for the oscillators to adapt to the new parameter lists. But unlike BankOSC, where the frequency lists were fixed to MIDI notes, the frequency range in FrameOSC can be varied - so you can have each of the pairs of 64 oscillators playing frequencies over a range from a few hertz to a few thousand hertz, and you can tune all the oscillators with an offset rotary control. This means that you can make a lot of very rich, thick, swarmy textural sounds which can nicely cover the spectrum from ethereal to scary.

Oh, and the oscillators are in stereo, so there are actually 256 oscillators in total...
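At its heart, a bank of oscillators is just a sum of sines, each with its own frequency and amplitude. Here's my own minimal sketch of the idea - it is not the Max patch, and it omits the interpolation that 'ioscbank~' does when parameters change:

```python
import math

# Minimal additive oscillator bank: sum 64 sine oscillators, each with
# its own frequency (Hz) and amplitude, at one instant in time t (s).
def bank_sample(freqs, amps, t):
    return sum(a * math.sin(2 * math.pi * f * t)
               for f, a in zip(freqs, amps))

freqs = [110.0 * (i + 1) for i in range(64)]  # illustrative harmonic series
amps = [1.0 / (i + 1) for i in range(64)]     # gentle spectral roll-off
sample = bank_sample(freqs, amps, 0.001)      # one output sample, 1 ms in
```

Swap those neat harmonic lists for 64 hand-drawn frequencies and you start to hear why the output gets 'swarmy' so quickly.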

The two numbers in orange between the frequency and amplitude grids are NOT related to the horizontal indexing of the 64 oscillators - this space was just a convenient place to put the 'Span' limits of the vertical frequency axis (anywhere else gets in the way of other UI elements). The left number is the lowest frequency set by the frequency grid and the 'Span' rotary control. The right hand number is the highest frequency set by the frequency grid and the 'Span' rotary control. It all makes sense once you are used to the way the grids and the span control work!

When you first see FrameOSC, you will notice the two pairs of frames: channels A and B. Each pair of frames has an underlying left-to-right scrolling cursor that shows the position of the LFO as it scans across the frames. The rate of the LFO is controlled by the 'Rate' rotary control, and the border of the left frame of the A pair will light up in white as the cursor scans across from left to right. When it gets to the right hand edge of the left frame, the right frame will get a white border and the cursor will then scan across that. When the cursor reaches the right hand edge, it jumps back to the left and repeats. So, at a glance, you can see which frame is currently active, which one will be active next, and how long it is going to be before the cursor reaches the edge and triggers the change. Remember that the cursor scanning across from left to right is just a metaphor for the LFO timing - it does not mean that the oscillators at that position are affected by the cursor.

When the border jumps from one frame to another, the frequency and amplitude parameter lists are loaded into the oscillators. The 'Smooth' rotary control sets how long it takes for the parameter changes to happen. The range is from more or less immediate, through to multiple tens of seconds. If you set the LFO rate to be fast, and have a long smooth time, then you won't hear much happening, because the oscillators can't track the changes, but if you reduce the smooth time, then you will gradually hear the changes happening with more and more effect, until at very short smooth times, the oscillators will change from one set of parameters to another very quickly. For slower LFO rates, the smooth control has much the same effect - if it is set too long, then the changes occur very slowly (or maybe never reach the limits), whilst faster smooth times will give more abrupt changes.
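This kind of smoothing can be sketched as repeatedly moving each parameter a fixed fraction of the way towards its new target - a one-pole style 'glide'. This is my own illustration of the behaviour described above, not the actual implementation inside the device:

```python
# One update of a smoothed parameter: move a fraction of the remaining
# distance towards the target. amount near 1.0 = near-instant jumps;
# amount near 0.0 = very slow drift that may never quite arrive.
def smooth_step(current, target, amount):
    return current + amount * (target - current)

value = 100.0    # current oscillator frequency (Hz), illustrative
target = 440.0   # frequency loaded from the next frame
for _ in range(5):
    value = smooth_step(value, target, 0.5)
# After five updates, value has closed most (but not all) of the gap.
```

You can see from the sketch why a fast LFO plus a long smooth time sounds static: the target keeps changing before the value has moved far.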

The frames contain editable lists of frequency and amplitude for the 64 oscillators - you just click and drag in the rectangles. For the frequency grid, vertical is frequency, whilst for the amplitude grid, vertical is amplitude. Left to right selects one of the 64 oscillators. There are two rotary controls in each frame: 'Span' adjusts the frequency span of the frequency list, whilst 'Tune' adjusts the pitch up and down - just an offset control, really. At the far right hand side, there are volume controls for the A and B pairs of frames, plus a third volume control for the ring modulator (RM).

To encourage experimentation, and to avoid too much of a 'preset' mind-set, I have deliberately not included memories in INSTframeOSCmr - think of it as being like a modular synthesizer that forces you to unlearn any reliance you may have developed on the instant recall of sounds. I await the first modded versions with memories added!


If you don't have any parameters set in the frequency and amplitude grids, then you aren't going to get very interesting output sounds. Zero Hertz at zero amplitude is not very useful. 

The LFO isn't quite what you might expect - it actually runs 16 times faster than the actual rate of crossing the frames, but this is so that it can drive the cursor. There is also a large amount of latency in the way it responds to slow rate settings - you may find that it appears to have stopped. You can press the 'Reset' button to reset the LFO cycle if this happens.

Ring modulators accentuate the difference between two sounds, so if you set very similar sounds in the A and B pairs of frames, then the RM output may not be interesting.
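A ring modulator is nothing more than a multiplier: the output is the product of the two inputs, which contains the sum and difference frequencies of the two signals. A one-line sketch (mine, for illustration):

```python
import math

# Ring modulation: multiply the two inputs. For two sines this yields
# sum and difference frequencies: sin(a)*sin(b) = (cos(a-b)-cos(a+b))/2.
def ring_mod(a, b):
    return a * b

t = 0.125
carrier = math.sin(2 * math.pi * 3.0 * t)    # illustrative 3 Hz sine
modulator = math.sin(2 * math.pi * 2.0 * t)  # illustrative 2 Hz sine
out = ring_mod(carrier, modulator)           # contains 1 Hz and 5 Hz
```

If the A and B frames are nearly identical, the difference frequencies are all close to zero and the sums just double the originals, which is why the RM output of two similar sounds is dull.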

Remember that left to right is selecting oscillators in the grids, whilst vertical is frequency or amplitude. These are not waveforms or spectra.

If you like a set of parameters, use the 'Save' icon in the top right hand corner of the device in Ableton Live to save it as a 'preset', so that you can get it back later on!

INSTframeOSCmr does not respond to MIDI notes. It produces an output all the time, and this is not pitchable or triggerable using a keyboard. The frequencies are not on a scale, or a temperament - they are just raw frequencies.

Adding echo, delay or reverb after INSTframeOSCmr will thicken the output even more.

Remember that the cursor scanning across from left to right is just a metaphor for the LFO timing - it does not mean that the oscillators at that position are affected by the cursor. I'm repeating this because this was the second thing that people said in the beta testing.

Yes, you can make it sound like Louis and Bebe Barron's soundtrack to the movie 'Forbidden Planet'. (This is the first thing that people said in the beta testing.) But you can do lots more than that. (Oh, and the reason it sounds like 'Forbidden Planet' is because recording lots of manually tuned oscillators tends to produce a distinctive sound...) And yes, that is a young Frank Drebin played by Leslie Nielsen, or is it the other way round, or something else entirely?

As always, INSTframeOSCmr is free!

Getting INSTframeOSCmr

You can get INSTframeOSCmr here:

Here are the instructions for what to do with the .amxd file that you download from

(In Live 10, you can also just double-click on the .amxd file, but this puts the device in the same folder as all of the factory devices...)

Oh, yes, and sometimes last-minute fixes do get added, which is why sometimes a blog post is behind the version number of

Modular Equivalents

In terms of basic modular equivalents, then INSTframeOSCmr would require two LFOs, and 256 VCOs, a lot of stored parameters (blocks of 16?) to do the frame loads, a mixer and a Ring Modulator, giving a total of well over 260 ME, which is the biggest count so far, I think. You might want to use some specialist additive oscillators instead of brute-force VCOs!

If you find my writing helpful, informative or entertaining, then please consider visiting this link:


Tuesday, 18 February 2020

How to do a screen recording of Ableton Live (and other DAWs)

Progress is a strange thing. Sometimes you think you have moved forwards, but actually you have taken a step backwards. After using a 12-year old MacBook Pro as my Ableton Live and Max workhorse, I moved to a slightly more modern MacBook Pro that has USB C sockets, and only has a headphone output. I didn't think any more about the significance of that single audio output until I needed to do a screen recording of Ableton Live (which will be the subject of another blog post...).

On my 2008 MacBook Pro all I did was use a Y-shaped headphone sharing widget (from the days when two people could often be seen sharing a pair of earphones - each with one earphone) and connect a 3.5mm stereo jack (grey, on the left) into the headphone output and the line input. The headphone sharing widget was also connected to my amp so that you could hear the audio on the monitor speakers. This little bit of hardware magic allowed me to have Ableton Live running, start up QuickTime Player, select 'New Screen Recording...', and capture a video of the screen and the audio output of Ableton Live (or Max, or any other audio-emitting application...). Quick, easy and simple - and all hardware!

But on my much newer MacBook Pro (with USB C sockets!) there wasn't any line input socket - just a headphone output. Okay, so all I needed to do was use my audio interface... Which is where things became slower, harder and more complex. Ah, the joys of unfamiliarity!

I have a Focusrite Scarlett 2i4 2nd Generation (the nearest current equivalent is the 2i2 3rd Generation), and it has served me very well - it just works. A simplified front panel is shown above, with functions shown underneath for reference. Most simple audio interfaces have similar layouts and functions. So I looked in the User Guide for 'Screen Recording' and didn't find anything. Searching on the Focusrite web-site didn't locate any relevant tutorial either, although there was a lot of help there - just not what I needed at that moment. So I Googled it, and, as usual, I found lots of YouTube videos and web pages that were nothing more than click-bait, suggesting that they would show me what to do, but not actually delivering anything other than adverts that they could monetise. It may well be that amongst all of the eager 'Click me!' results there was some useful information on screen recording, but I didn't find it. It was at this point that I realised that I was going about it all wrong, and you are reading the result. I may even do a YouTube video that isn't just click-bait and contains actual real information at some stage...

Screen Recording should be easy

It seems like it should be easy. You just record what is happening on your computer's screen and the audio that it is producing. Unfortunately, recording the digital audio output of applications running on computers is not as straight-forward as you might hope. The Operating System audio input and output control panels only show devices that are connected to the external interfaces of the computer, not what is happening inside the computer. So you only see the built-in 'internal' microphone and speaker, and any audio interfaces - in this case, a Scarlett 2i4 connected via USB. Applications like DAWs or Screen Recorders do not appear in the audio control panels. If you delve deeper using the 'Audio MIDI Setup' utility program (in the 'Utilities' folder) then that gives more detail of the built-in microphone and speaker, plus audio interfaces connected to the computer via other interfaces (USB in this case, but older computers might have FireWire...), but nothing 'internal' at the application level is shown. There's no 'patch bay' or 'routing matrix' where you can set the connections between applications that have audio inputs and outputs.

There are a number of third party utilities that allow you to route digital audio 'internally' (inside the computer) from one application to another, and I have spent quite a lot of time trying to use them. It seems that a reliable and easy-to-use digital audio routing utility is not what I downloaded, several times, with several different operating systems, versions, fixes, forum visits, and lots of promises that each of them really was the solution. It is almost as if routing and making perfect digital copies of audio is deliberately made difficult... Hmmm...

Now I know that there are lots of these utilities out there, and I know that there are lots of YouTube videos, forum posts and tutorials on how to use them, but I just didn't manage to find one that worked. Maybe this is the Interweb that we deserve? Anyway, I went back to first principles.

DAWs and Audio Interfaces

An audio interface is a handy combination of a few different utility functions all put together into one box. There are audio inputs, which can amplify microphones or guitars, as well as accept line inputs from synthesisers, drum machines, etc., and digitise and send that audio over a USB cable to a DAW running on the computer, where it can be recorded. There are audio outputs, so that the digital audio produced by the DAW can be sent over a USB cable to the audio interface, where it is converted back into analogue audio, and listened to on speakers or headphones. There's also monitor switching which lets you listen to the inputs before they are digitised, or the outputs of the DAW, or even mix the two. And there's all sorts of things borrowed from mixing desks: pads, phantom power, selection switches etc. But the basic functionality is: sockets for audio input to the DAW, and sockets for audio output from the DAW.

On my Focusrite Scarlett 2i4 audio interface, there are 2 input sockets and 4 output sockets. Actually, the inputs can be balanced (on XLR) or unbalanced (on quarter inch jacks), and there are two balanced outputs on quarter inch stereo jacks and four unbalanced outputs on RCA/Phono sockets, but that's just detail. As advertisers say: there are other audio interfaces, with different numbers of inputs and outputs.

I need diagrams - my mind thinks in pictures. The user guide for the Scarlett has words, and a few pictures - but nothing like the diagram above. Audio (microphone, guitar, synth, etc.) comes from the left hand side, gets converted to digital, goes through the USB cable and appears at the input of the DAW. Audio from the DAW goes through the USB cable, is converted back to analogue audio, and is then connected to speakers or headphones so it can be heard. Now you know too.

To configure an audio interface so that it replaces the default internal microphone and speakers, you need to set some preferences. There are several ways to do this, so I will cover them all - you will find that changes in one may well affect others. System Preferences (the gear icon) is the first place to visit. The Audio control panel has buttons for Input and Output:

Notice that the sound input shows only the internal microphone and the Scarlett 2i4 audio interface. I'm running Ableton Live and the Screen Recorder (QuickTime Player) on the computer at this point, and they do not appear in the list of sound inputs (even though they both have audio inputs and outputs!), so this reinforces the 'you can only control audio inputs and outputs that go outside the computer' viewpoint of the Operating System. On my previous MacBook Pro, the input control panel had that very useful line input as well, and there was also a hidden digital optical output interface (S/PDIF / TOSLINK) inside the 3.5mm output jack - a Mini-TOSLINK connector (different to the TOSLINK connector that you might find on a CD player's rear panel).

Inside Apple's 'Audio MIDI Setup' utility application, then you get more detail and more control:

Notice that the System Preferences Audio control panel calls the audio devices the 'internal microphone' and 'internal speakers', whereas in the Audio MIDI Setup utility they are the 'built-in microphone' and 'built-in output'. I would prefer it if there was a little more consistency in the naming conventions...

Inside a DAW, then there are much the same options, but presented in the UI style of the DAW. So, for Ableton Live you go to the Preferences control panel and select the 'Audio' tab:

The audio input selection has the Scarlett audio interface and the built-in microphone as the available audio input devices. (Note that the terminology used by Ableton reflects the Audio MIDI Setup naming convention...)

The audio output selection has the Scarlett audio interface and the built-in output as the available audio output devices, again using the Audio MIDI Setup naming convention.

Most of the time, you will just confirm that the audio interface is chosen...

Screen Recording

Screen recording is very different to just using an audio interface with a DAW to record and playback audio. Screen recording puts two applications in that 'Computer' box in the middle of the diagram: the DAW and the Screen Recorder. I use Ableton Live as my DAW of choice for most purposes, but you can use your own DAW of preference. I use QuickTime Player (which can also record!) as the screen recorder because it came free with the computer, but you can use your own choice if you prefer. Note that although I did this on a Mac, the way that audio interfaces work is much the same on Macs and PCs, and other than needing to install a driver on a PC, and needing a screen recorder other than QuickTime Player, you should be able to do screen recording as I describe below on a PC as well.

In all the diagrams that follow, I only show audio connections (both analogue and digital audio, as well as digital audio carried over a USB connection). The video connection between the screen of the computer (displaying the DAW!) and the Screen Recorder is not shown - but you know it is there really! Adding it to the diagrams just makes them way too complex!

The digital audio output of the DAW needs to be sent to the input of the Screen Recorder (QuickTime Player, using 'New Screen Recording...') and it would be nice if it also went to the audio interface over the USB cable so that it can be heard. This is where those utility applications come in - they allow you to route the digital audio output of the DAW to the Screen Recorder and to the audio interface. If you try to do this with some Screen Recorders then they just show you the audio inputs and outputs that are available externally to the computer - which in this case would be the Scarlett 2i4 audio interface. So the 'internal' 'DAW output' digital audio may not appear on the list of audio sources for the Screen Recorder.

Here's the Screen Recording window that QuickTime Player pops up when you select 'New Screen Recording...' from the File menu. The slider at the bottom is the audio output volume control, and notice that it does not allow you to choose which audio device it sends audio to - it uses the device set in one of the previous control panels. But remember that this control is only used when QuickTime Player is playing a video file that you have recorded - note that there is no output from the Screen Recorder inside the 'Computer' box on the diagram that shows how things are connected together when a screen recording is happening:

The red spot button in the middle is the 'Record' button, and the little down arrow gives a drop-down menu that allows you to select the audio input source:

As with all of the previous control panels, this one is set to the Scarlett audio interface over USB - which, as you can see from the diagram of the inside of the 'Computer' box, is completely wrong. The input of the Screen Recorder should be the audio output of the DAW! (Which isn't an option...) The utility routing applications that I mentioned would be used here to do the internal routing of the digital audio instead of this selector. I'm not going to show any of the utility routing applications here to keep things simple.

Of course, you might want to do more than just hear the output of the DAW in the screen recording. A spoken commentary is a popular way of explaining what is happening, and so the audio inputs of the audio interface can be used to send that microphone output to the screen recorder, where it can be mixed with the output of the DAW. One way to do this might look like this:

The required utility application that routes the audio, mixes and sends the DAW output is now more complex. A lot of these utility functions could probably be done by the Screen Recorder - although QuickTime Player only allows you to choose one audio input to record at a time, so you would need an external mixer to mix the microphone output with the DAW output and then send that back to the Screen Recorder. It turns out that this external mixer is a much simpler way to achieve the same result as the complex utility routing application:

In this configuration, an external mixer is used to mix together the audio output of the DAW with the commentary from the microphone. Levels and panning can be adjusted by the mixer, and the setup of the DAW and Screen Recorder software is easy and requires no complex utility application - the DAW output is sent to the audio interface, as usual, whilst the Screen Recorder just records the output of the mixer via the audio interface. The DAW output can be monitored by the audio interface, which may mean that you need to select the source of the monitoring - on my Scarlett 2i4, there's a rotary control that lets you choose between monitoring the inputs or the outputs. I set it to monitor the outputs so that I hear the DAW audio, and not the microphone. (I hate hearing my own voice.)

In terms of software configuration, the DAW output is sent to the audio interface (the Scarlett 2i4 in my case: this is set in Ableton Live's Preferences), whilst the Screen Recorder input is set to be from the audio interface (again the Scarlett 2i4 in my case: this is set in QuickTime Player, via the little drop-down menu next to the red record button):

So by not using a complex routing utility application to interconnect audio applications inside the computer, we have a simple solution that we can control using native control panels.

Here's a diagram showing how the mixer connects everything together. The microphone is probably going to be panned in the centre, and is on channel 1 of this example mixer. Channels 3 and 4 are used for the DAW left and right outputs (from the unbalanced outputs of the audio interface). The left and right outputs of the mixer go to the inputs of the audio interface.

The downside of this approach is that the Screen Recorder is not getting the direct digital output of the DAW - instead it gets the DAW audio converted to analogue audio, then mixed with the microphone, and then redigitised. So the audio will not have been digital from beginning to end, but the gain in simplicity is considerable.
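To get a feel for how little that extra D/A-then-A/D pass is likely to cost, here is a back-of-envelope sketch using the standard textbook approximation for the idealised signal-to-noise ratio of an N-bit converter (real converters, cables and the mixer will all be somewhat worse than this idealised figure, so treat the numbers as upper bounds, not measurements):

```python
# Idealised SNR of an N-bit converter, using the standard textbook
# approximation SNR ≈ 6.02 * N + 1.76 dB. Real-world converters are
# a few dB worse, and analogue stages add their own noise on top.
def ideal_snr_db(bits: int) -> float:
    return 6.02 * bits + 1.76

# Even with one extra D/A -> A/D round trip, a 24-bit interface has
# far more theoretical headroom than a 16-bit delivery format needs.
print(f"16-bit: {ideal_snr_db(16):.1f} dB")  # ≈ 98.1 dB
print(f"24-bit: {ideal_snr_db(24):.1f} dB")  # ≈ 146.2 dB
```

In other words, on a modern 24-bit interface the quantisation noise added by the extra conversion is likely to sit well below the noise floor of the microphone and mixer anyway.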

Note that the mixer on the input of the audio interface can be very simple - it could even be passive. (A microphone that requires phantom power is going to need either a mixer that can provide that power, or one of those phantom power boxes, if you want to use a passive mixer.)

Here's a screen recording about to happen. Ableton Live is running (literally) and QuickTime Player is waiting for the red 'Record' button to be pressed to start the screen recording. Since everything so far has only talked about the audio, this seems like a good point to mention that the whole point of this type of screen recording is to capture both the video and the audio output of the DAW!

A quick tip: When you have the DAW full-screen, then the screen recorder window (QuickTime Player in this case) is going to be covered up. To get it back on top, you just go to the Dock at the bottom of the screen and click on the QuickTime Player icon. (There's an equivalent way to do this in Windows...)

One thing that does need to be mentioned is that the Screen Recorder output should not be connected to the audio interface - if you do this, then a feedback loop can sometimes be created. So when making a screen recording in QuickTime Player, the output slider is always left at zero:

There is a way to simplify things even more, and that is when no microphone commentary is used. Taking the original 'headphone sharing' approach, this can be implemented by using two cables to connect the output of the audio interface to the input - very much like the mixer setup above.

The DAW and Screen Recorder have been moved around to make the diagram simpler, and if we continue with the simplification, by splitting the computer and the audio interface into two separate parts, then we get this extreme clarification:

The digital output from the DAW goes through the USB cable to the audio interface, where it is converted to analogue audio; this is then fed into the audio interface's input section, digitised, sent back along the USB cable, and ends up being recorded by the Screen Recorder. So only two quarter-inch jack to RCA/phono cables are required:

This shows the back of my 2i4 audio interface, but the two outputs on the rear panel are just the two main unbalanced outputs, and so any 2-in, 2-out audio interface will have these outputs.

This really is the headphone adapter all over again, but translated so that it works with the audio interface - and the monitor function of the audio interface allows the output of the DAW to be heard on speakers or headphones by monitoring the input.

Simpler still

If you don't need stereo, then you could have only one analogue cable from the output to the input of the audio interface, and use the other input for the microphone! This will give you DAW audio in one channel, and the microphone in the other channel. Extreme and minimalistic - and useful in emergencies.
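If you do record this way, you will probably want to separate the two channels again afterwards, so that the DAW audio and the commentary can be processed independently. Assuming you have exported the recording's audio as a 16-bit stereo WAV file (QuickTime itself records to a movie file, so an export step is assumed here), a sketch of a splitter using only the Python standard library might look like this - the function name and file paths are my own invention for illustration:

```python
import array
import wave

def split_stereo_wav(src_path, left_path, right_path):
    """Split a 16-bit stereo WAV into two mono WAVs - e.g. the DAW
    audio on the left channel and the commentary mic on the right.
    Assumes a little-endian machine (as WAV data is little-endian)."""
    with wave.open(src_path, "rb") as src:
        assert src.getnchannels() == 2, "expected a stereo file"
        assert src.getsampwidth() == 2, "expected 16-bit samples"
        framerate = src.getframerate()
        # Interleaved samples: L, R, L, R, ... as signed 16-bit ints.
        samples = array.array("h", src.readframes(src.getnframes()))

    # Even indices are the left channel, odd indices the right.
    for out_path, channel in ((left_path, samples[0::2]),
                              (right_path, samples[1::2])):
        with wave.open(out_path, "wb") as dst:
            dst.setnchannels(1)
            dst.setsampwidth(2)
            dst.setframerate(framerate)
            dst.writeframes(channel.tobytes())
```

Calling `split_stereo_wav("recording.wav", "daw.wav", "mic.wav")` would then give you one mono file per source, ready to be cleaned up or re-balanced separately.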


I mentioned my Focusrite Scarlett 2i4 2nd Generation audio interface because that is what I used for these experiments - but the nearest current equivalent is the 2i2 3rd Generation, which is a 2-in, 2-out audio interface that will work fine for screen recording. It also only has two unbalanced outputs, so it is easier to get the connections correct! Full marks to Focusrite for making an audio interface that works perfectly for an unusual use case!

There are many mixers that can do the 'simple' mixer role - even a passive mixer. Here are some to research further:

Maker Hart Loop Mixer

Behringer MX400

Bastl Dude (monophonic...)

Rakit Rackimix 5 Channel Mixer

If you find my writing helpful, informative or entertaining, then please consider visiting this link: