Ku Ku Final Bounce

 


Editing Ku Ku

You may recall from a previous post that I had made a mistake in Barry Cockcroft’s Ku Ku whilst Tara recorded me in the corridor with a Zoom.

It was time to take action. I emailed Odilon to book the studio out for an evening, and visited him in his office to borrow the Zoom H2N. Unfortunately someone had the Zoom out already, so I instead borrowed the R09, safe in the knowledge that it too could record sound at 44.1 kHz/24-bit quality.

In the daytime before I went into the studio, the inimitable Dan Soley contacted me to ask if he could please use the studio from 7.30PM – 8.15PM for the composers' concert in the Weston Gallery, and then we could both use it for our blogs. He would also help me with my editing. Not being one to turn down raw blogging material, I accepted his offer.

Come Friday prevening at 5.30, I was ready to meet Pete so he could help me record another take in the corridor. Pete didn’t show up on time, so I instead enlisted the help of Tara, who happened to walk past at the right moment. I showed her how to use the R09, and we were good to go. This time my playing was completely devoid of technical errors, so after only one take I thanked Tara and mentally prepared myself for the editing I was about to undertake.

Dan had started setting up at 7PM, but I really wanted to get a head start on my editing, and Tara was there in the studio to help.

Just before half 7, Dan asked me to open a new ProTools session, and I was only too happy to comply. Entering the WG a few minutes later, I discovered that Odilon was overseeing recording on an XY pair plus his own personal Zoom H6.

You can see there on the Zoom a little tiny XY pair for portable stereo recording.

One of the composers, Jason, was doing his own recording using a portable Yamaha EMX312cs mixer:

These are for recording smaller live gigs like jazz.

After the concert, I offered to help Dan pack away, but he said it would be handier if I went and stopped the recording, so this I did before continuing with my own editing.

In a new ProTools session, I imported my two recordings of Ku Ku into separate tracks. After multiple listens to different parts of the extract, I determined that my playing in the newer recording was better than the older one in every conceivable way, so I decided to work on that alone.

My main issue with this new recording was the breath I had taken in the middle of a phrase – it completely broke the flow of the music. Fortunately the breath came after a bar that was the same as the two or three before it, so I decided to copy one of those previous bars and splice it in with the breath cut out, in an attempt to make the phrase whole again. I zoomed in very very closely on the track and was still having trouble deciding where I should cut it when Dan waltzed back in and told me I could use markers on different parts of the track. This made my task a lot easier. After a long while we finally found the two bits of fast transient sound that we could cut. I deleted the bar before the breath and replaced it with a copy of the previous bar. I then cut out the breath and moved the remainder of the track to the left, and put in cross fades on either side:

My amazing edit
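For anyone curious what a crossfaded splice actually does to the audio, here's a rough Python sketch of the arithmetic – purely an illustration of the idea, not anything ProTools exposes, and the segment names and fade length are made up:

import numpy as np

def crossfade_splice(before, patch, after, fade_len=512):
    # Equal-power crossfade curves: gains follow cos/sin so the combined
    # power stays roughly constant across the overlap and the join doesn't dip.
    t = np.linspace(0.0, np.pi / 2, fade_len)
    fade_out, fade_in = np.cos(t), np.sin(t)
    def join(a, b):
        # Overlap the tail of 'a' with the head of 'b' over fade_len samples.
        mixed = a[-fade_len:] * fade_out + b[:fade_len] * fade_in
        return np.concatenate([a[:-fade_len], mixed, b[fade_len:]])
    # Splice the replacement bar in, smoothing both joins.
    return join(join(before, patch), after)

Cutting on fast transients matters for exactly this reason – the fades have something sharp to hide behind.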

Although to my ears it sounded seamless, Dan said he could hear a very slight difference in the space of the sound, so we closed our eyes and listened again. As was the case when we edited the sax and trumpet duet as a group, you would only hear the difference if you knew it was there, and only if you listened extremely closely. Again, I’m not even entirely sure that I could hear it.

By this point we were rapidly running out of time, so we compressed and EQ’d the sound. I hadn’t fully understood EQing until this point. What it does is let you take those frequencies in the sound that stick out that little bit more than the others, and bring them down a bit so that certain notes which contain that frequency blend into the piece better:

EQ

EQ only touches certain frequencies in the sound, but because we applied it across the whole track it counts as a process rather than an effect (it's in the processes list back in Session 5).
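To make the idea concrete, here's a minimal Python sketch of a single peaking EQ band, using the standard 'audio EQ cookbook' biquad formulas – the frequency, gain and Q values below are just example numbers, not the settings we actually used:

import numpy as np
from scipy.signal import lfilter

def peaking_eq(x, fs, f0, gain_db, q=2.0):
    # A negative gain_db pulls the band around f0 down a little,
    # which is the "bring that frequency down a bit" move described above.
    a_gain = 10 ** (gain_db / 40.0)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a_gain, -2 * np.cos(w0), 1 - alpha * a_gain])
    a = np.array([1 + alpha / a_gain, -2 * np.cos(w0), 1 - alpha / a_gain])
    return lfilter(b / a[0], a / a[0], x)

# e.g. take 3 dB out of a resonance around 1.2 kHz in a 44.1 kHz recording:
# smoother = peaking_eq(audio, fs=44100, f0=1200, gain_db=-3.0)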

With just 15 minutes left in the studio, we decided to quickly normalise the track in Audacity:

Audacity

(Normalising takes the loudest peak in the recording and scales everything up so that peak sits at the highest level the sound can go.)

I really wanted to upload the final bounce to Soundcloud, but that’s when the Internet decided to become frustratingly snail-like, so I’ll have to wait until a later date to be able to upload it. That will be my final blog post – it’s goodbye from me for now. Good bye.

Pete’s Percussion Recording

One evening I got a message from Pete saying that he and Lauren were going to record some percussionists playing a piece together, and did I want to help? So I did. I didn’t really have much input in this recording session, I was mainly helping set up and taking pictures and videos. And here they are!

 

Were we to do this again, I might suggest that we either use fewer microphones or arrange our time better – we spent half an hour setting up, which left only half an hour in which to record and pack up again. This meant that there was only time for one take, although this did have its advantages, as Jemma and Luke were largely improvising.

(As is recording protocol, I asked Jemma and Luke in advance if they minded being in my blog and they were okay with it).

Sound Recording Session 10

When I found out that this was to be our last Sound Recording session I had to fight back a tear or two. To soften the blow Odilon had brought us biscuits:

He had even taken into account Caroline's coeliac disease and my veganism, and bought vegan, gluten-free chocolate flapjacks! What a great day. Thanks Odilon.

To finish off we had a Q&A session. I don't remember what everyone asked, but I do remember asking if I could borrow a Zoom at some point to re-record Ku Ku and edit the mistake out of my previous session's recording, and also asking for advice on recording an audition video next term for a competition I'm entering.

At some point someone asked about auxiliary outputs, which resulted in a highly detailed explanation and demonstration.

 

 

I didn’t really understand what was going on, but from what I could eventually gather an auxiliary output is something you send to whoever’s being recorded without actually recording that sound itself. E.g. when Tara did her overdubbing we sent her the click track over her headphones, but that click track didn’t actually make it into the recording itself.

Another practical use could be if two people were being recorded together in separate booths, one of them might want to hear more of the other person than themselves, so you’d send them more of the other person’s playing over headphones/a speaker. Of course if you sent them audio over a speaker you’d have to close the door on them to ensure there was no (or minimal) feedback.

As is seen in these demonstrations, you can add effects etc. to the auxiliary as you would a regular audio track, e.g. flanging, or using a pre-fader:

Well, that’s it for the sessions/lectures part of the blog, but there’s still more to come. Stay tuned.

Sound Recording Session 9

Today was all about portable recording, starting with…

BOOMS!

Rode is a company that makes booms – those big furry mics you're probably picturing as you read this. A boom without the furry shield is called a blimp:

Boom

The furry shield is known as a dead wombat.

Dead Cat
You can also get a little one, called a dead kitten (or troll, which I as a cat-lover prefer).
The blimp houses the mic itself.

What you see above is a hacked mic – two stuck together. It runs on phantom power. It can also run on batteries but you have to switch it on, which is a faff – it’s better to use an extension box.
As it comprises two mics (in an XY set-up), it produces stereo (coincident) sound.

Any sound will hit the capsules at more or less the same time, but it is true that one mic will pick up sound from one side better. Here it is important to remember that the 'left/right' perspective is the listener's and not the performer's, i.e. the opposite of stage 'left and right'.
The hoops operate as a shock mount, isolating the mic from rumble etc. from your hand so you can walk around with it.

Most stereo mics are absolute rubbish, but this one is good because Odilon just jammed two mics together (as in, it's not really a stereo mic). What also makes it great is that it's still small enough to slot into a blimp, which you can then cover with a dead wombat.
The plastic frame is essentially a fancy lightweight wind shield. It's not completely windproof but it's a big help, and leaves a lot of room around the mic. The wombat absorbs a bit of the high frequencies, but is incredibly light because of the translucent material on the inside and the fake fur on the outside (it even comes with its own hairbrush!)

After learning all about booms, we were shown some other types of portable recorders. The first was the dictaphone with horrendous sound quality that doesn’t even come with the option to set your gain:

Dictaphone plus mic
One step up is the R09, which is ok if you know what you're doing. I didn't manage to get a picture of this so here's a lovely stock photo from the Internet instead:

R-09HR
The more modern one is the Zoom H2N, which can be covered by the dead troll. This one is really nice.

Zoom 2
The Zoom has a pair of mics at the front and an XY pair at the back, which means it can produce two surround-sound formats. It has a switch and a flashing light to help set your gains, and multiple audio format options, including the coveted 44.1 kHz/24-bit. But I'm getting ahead of myself…

WAV (or AIFF, the Mac equivalent) is the best format to use as it is uncompressed and very detailed, especially at a 44.1 kHz sampling rate. The sampling rate refers to how many times per second the sound is measured.
16-bit is CD-quality audio, i.e. it's really precise, so 44.1 kHz/16-bit is a completely acceptable resolution to record at. What's even better, though, is 44.1 kHz/24-bit – the sampling rate already covers roughly the range of human hearing, and the extra bits give a much larger dynamic range. The reason it's better to use higher-resolution audio is that you can capture quiet sounds cleanly, so you can record at a lower level and still get good quality. It's important not to record at too high a level, so that you don't clip the sound.
Higher sampling rates do exist, but they're mainly for film, so we didn't get into them too much. 192 kHz is the highest – it's well beyond the human hearing range.
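The back-of-envelope numbers behind all this are easy to check – here's a quick Python sketch (these are the standard theoretical figures, not measurements from the session):

def dynamic_range_db(bits):
    # Quantisation gives roughly 6.02 dB of dynamic range per bit
    # (plus ~1.76 dB for an ideal full-scale sine wave).
    return 6.02 * bits + 1.76

print(round(dynamic_range_db(16)))   # ~98 dB  - CD quality
print(round(dynamic_range_db(24)))   # ~146 dB - lots of headroom for quiet recording
# A 44.1 kHz sample rate captures frequencies up to the Nyquist limit:
print(44100 / 2)                     # 22050 Hz - just above the limit of human hearing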

At this point in the explanation someone asked ‘what’s the point then?’ and Odilon’s response was ‘there is a point, but don’t get me started or we’ll be here all day’.

We left it at:

Always record at 44.1 kHz/24-bit, then after normalising etc. you can reduce it to 44.1 kHz/16-bit for uploading to SoundCloud, putting on CD and so on.

Oh, and also NEVER RECORD IN MP3 – it throws detail away and relies on your brain to fill in the gaps, which gets incredibly tiring after a while and is just generally not nice to listen to. It got popular back in the early 2000s when MP3 players were all the rage and audio needed to be heavily compressed just to fit on them. We were advised to buy a memory card rather than resort to MP3 if we're ever running short on space.

Nada MP3

After making sure that we knew that MP3 is the ENEMY, and that we should set the gains ourselves rather than being lazy and using autogain, Odilon paired us off and gave each pair a recorder. I was with Tara, using a Zoom H2N. We decided to keep it simple and record ourselves playing. We only had twenty minutes, so we found a relatively empty corridor and each played a short extract. I recorded Tara first. To set the gain on the Zoom I asked her to 'play loud', and when it clipped I turned it down a bit. She did the same for me, and after we had checked that our recordings were saved on the device, we returned to the studio.

Ours were the first recordings to be played. My recording of Tara was a bit quiet, and I realise that I should have asked her to play the loudest bit of her piece rather than to just play loud. Other than that our recordings were pretty good, as they were recorded in a quiet place with a fair bit of natural reverb. It’s hard to get quiet places to record in – a general rule is that there’s always more background noise than you think. That’s what studios are for!

It was interesting listening to the other people's recordings – Lauren and Caroline had tried to interview someone outside, then gone to the top floor of the other building and recorded the layered noise of twenty different people practising separately, and Dan had even got a nice sound out of the awful-sounding dictaphone by flicking stones into the river and flipping through a book in the library.

When uploading our recordings to the Mac, we learnt that audio doesn’t take up a lot of space – you can get a lot of audio on an SD card. After locating mine and Tara’s recordings, Odilon saved them to the audio drive for us so we could edit them at a later date if we so wished. I had made a mistake in the piece I played, so I did so wish. So much.

Sound Recording Session 8

Due to unavoidable circumstances, I got to this session half an hour late, and when I arrived the rest of my group were in the Weston Gallery getting ready to record Stan playing 'Niggun' by Philippe Hersant:

It was a pretty similar set-up to the day before when we recorded Dan talking Blunt, only we were using an XY stereo pair as well as a condenser (Odilon’s favourite Sony mic this time), so that we could pick up both the transient sounds of the bassoon and the resonant acoustic of the room.

Whilst Stan stayed in the WG with Odilon, the rest of us trekked back upstairs to the studio to do his soundcheck:

And then we got on with overseeing the audio recording:

We used JPEG-based video compression, which reduces file size by throwing away unnecessary detail (mostly in the background), then synchronised the audio with the video in the same way as before.

There’s not much new to say for this session really, as it was basically a full-out execution of what we had learned the day before, in a situation that we as musicians are more likely to find ourselves in.

Sound Recording Session 7

In Session 7 we made a video recording of Dan reciting the lyrics of James Blunt’s Wise Men like a poem…

Well, anyway…

The first thing to consider when making a video is light. As we were making the video in the dark and dingy recording studio, we used two LED panel lights to create a daylight setting.

LED panel light

The light from these panels is made from RGB (red, green and blue) channels. Red, green and blue light combine to make white light, and this is what made it so bright when we put them on in the studio. The colour of the light is measured in kelvin – a 'colour temperature' that describes how warm or cool the light looks, rather than how physically hot the panel gets.

We wanted around 5400 K (daylight) for our video, which makes it look like north-facing light on a cloudy day. This is known as painter's light, or indirect light. To get there you can select different levels of R, G and B:

The next step was to set up the camera on a tripod.

Cameras generate a lot of the data that makes up the video and audio. When recording home videos, they often make up for low light by increasing the film speed (ISO), which results in a fuzzy, noisy picture. Home videos also usually have abysmal sound because of the built-in mic.
Photo cameras are good for recording film because of their higher-quality lenses, but as long as you have a decent camera with interchangeable lenses, you should be fine. As you can see in the pictures above, we used a Canon EOS 600D, which is a DSLR. This stands for digital single-lens reflex, meaning it has a mirror inside that reflects the image up into the viewfinder. It usually uses manual rather than auto focus, and can record about 7 minutes of continuous video. This has nothing to do with the memory card – the optical sensor overheats after too long. It is possible to hack the camera's software to run for longer, but you risk damaging it. The Canon EOS 600D is a very good video camera but has awful audio quality, so what we did was run mics into the camera.

We used a 414 because it was safe (i.e. it's pretty much a good mic for any given situation) and a good speech mic – it picks up different things in the voice. Also, set to a figure-of-8 polar pattern, it has good side-on rejection, i.e. it's only going to pick up Dan's voice plus whoever's interviewing him (which was nobody, but I digress).
We also used a shotgun microphone, or a boom, which I will discuss more in a later blog entry.

We used the rule of thirds (the grid on the camera) to line up Dan – we wanted him centre-ish but not exactly in the middle, because our eyes don't see that as natural. So that anything that wasn't Dan would be slightly out of focus, we shot at a wide aperture, which reduces depth of field.

Then it was time to record. There isn't much to explain here, other than that Odilon asked Dan to clap twice loudly before he started speaking, which we later found out was to help synchronise the audio.

After we finished recording, we connected the camera to the Mac and saw that the video took up 434 MB – not very much. To transfer the video to the computer we used Image Capture, and switched the twin monitors out of mirroring so we had more space to work with. This meant that we could compare the audio and video more easily.

Having recorded the audio in ProTools, we did a quick bit of editing – compression using RVox, and EQing out a ring that we found on the sound. This was created by the speakers being on in the same studio we were recording in, and we couldn't completely eliminate it from the recording. Usually in this situation you'd re-record the whole video, but as it was just an exercise for us to learn about recording video and audio to go with video, and we were running out of time, we left it. Finally we normalised the audio to bring the level up.

The final step was to synchronise the audio with the video.

Some programs which are good for this activity are:

  • Final Cut Pro
  • iMovie
  • Windows Movie Maker
  • Sony Vegas
  • ProTools or Logic

We went for ProTools, as we have been using it all along.
So! The first step was to import the video, which then appeared at the start of the session. In production studios, there are banks of screens showing different angles.

To go through footage and mark points you can press enter. This is handy for jumping between points, e.g. when having a ‘spotting session’ to spot errors etc., and also for composers writing a score. They use hit-points to extrapolate the tempo track, and they then know when the big events should happen in the music.

At this point in the session Odilon told us something which blew my mind – Most music on TV is from electronic versions of orchestras – they only use orchestras for big budget films (Hollywood) or for things like Doctor Who!

We used Dan’s loud claps to sync the audio, zooming in really closely to get it exactly right (or as near as any mortal being could tell).

It was pretty simple for us, as we only had the one recording, but it is not so easy with multiple clips – unless you're using Final Cut Pro, which looks for similarities between clips and syncs them automatically. This works 99 times out of 100, and allegedly the 100th time it's fantastically and hilariously wrong.
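For the curious, this is roughly how automatic syncing can work – find the lag that best lines up the two audio tracks, which a loud double clap makes very obvious. This is just a hypothetical sketch of the idea in Python, not what Final Cut Pro actually does internally:

import numpy as np

def sync_offset(reference, other):
    # Cross-correlate the two mono signals; a sharp clap produces a
    # clear peak at the lag where they line up best.
    corr = np.correlate(other, reference, mode="full")
    lag = int(np.argmax(corr)) - (len(reference) - 1)
    return lag  # positive: 'other' starts later, so nudge it earlier by this many samples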

We now had a decent clip with good lighting, focus, and separately recorded beautifully edited sound, so we were ready to bounce the file out as a movie using QuickTime movie format. Then we quickly used QuickTime trim mode to cut the video from after Odilon said ‘Action’ and we were good to go!

 

EditTING

We’re approaching the end of a two week gap between Sound Recording sessions 6 and 7, so yesterday I got the inimitable Dan Soley to give me a one-to-one ProTools tutorial, during which we made 2 edits of his composition Ting (the sax and trumpet duet that I played previously with Lauren).

First we booted up the Mac and opened ProTools, and he taught me the shortcut ‘cmd’ + ‘=’ to switch between windows.

Then we went to File>Open and found a folder named ‘Dan Trumpet Sax Duet’ on the audio drive, and opened Odilon’s edit from the week before last. Dan showed me the different channels on the screen that relate to the mixer, and how to add effects and processes. Then we messed around with adding extreme pitch bending and delay, which resulted in this:

Dan wanted a ‘proper’ edit for his blog though, so we went back to Odilon’s original edit, figuring out that to immediately return to the beginning of the tracks we needed to press Return. The edit was nicely done so we didn’t really add anything, but Dan used it to show me close up how the reverb plug-ins work (including how to bypass them), and also how to add fades and cross fades. The shortcut for fades is ‘cmd’ + ‘f’.

Then we needed to bounce the file. As far as I can understand this is another way of exporting the file, but only the tracks that you select, and including any mutes of tracks or bypasses of effects. We selected WAV format because it’s the most detailed, rather than MP3 or something similar, which compresses files down to lower quality. There was another drop down box with the options ‘multiple mono’, ‘mono summed’ and ‘interleaved’. We went for multiple mono first, but when it created two separate audio files Dan realised that it was stereo sound but with one file for the right speaker and another for the left. So then we tried mono (summed), which created a single mono file with absolutely no space in the sound. Finally we selected Interleaved, which was exactly what we were looking for, and bounced the file in realtime.

After bouncing the file we opened Audacity to normalise it. Normalising is a process that finds the highest peak in the recording and scales the whole thing up so that peak sits at the maximum level – everything gets louder by the same amount, so the dynamics themselves don't change.
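In code terms it's more or less a one-liner – a rough Python sketch of peak normalisation, not Audacity's actual implementation:

import numpy as np

def normalise(x, target_peak=0.99):
    # Scale the whole signal so its loudest sample just touches target_peak.
    # Every sample gets the same factor, so the dynamics are untouched -
    # the track simply gets louder overall.
    peak = np.max(np.abs(x))
    return x * (target_peak / peak) if peak > 0 else x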


We saved the normalised file and were finally ready to upload to Soundcloud:

After that we were ready to go and Dan showed me how to reset the mixer, using the reset mixer icon on the Desktop. This was incredibly exciting:

All in all this session was incredibly helpful; having had some hands-on experience, I feel much more confident about using ProTools alone when I want to make more recordings in the future.

Sound Recording Session 6

OVERDUBBING

Tara's set up

I was late to this session so when I got there the microphone had already been set up for Tara to record her violin duet with herself. The mic was a Sony C48, which is a large diaphragm condenser. Sony has stopped producing microphones now, which is a shame because according to Odilon the C48 is ‘the best microphone ever’:

SONY C48

Tara had the mic above her so she could hear a similar sound to what she hears when actually playing the violin:

Tara playing with mic above

First we selected the channel for the first part of Tara’s duet so she could be connected with her mic and headphones.
The clunky but safe way to do this is to select View – Fader – F2 on the mixer.

THE VIEW BUTTON (PAINTED RED)

view button

The elegant but dangerous way is to put the fader on aux. If you use this method it is vital to remember to put it back on fader after you've finished. Another elegant way is to do it in ProTools.

When we tried sending sound through the speaker, it transpired that the sound from the talkback was also coming through, so we adjusted the settings. The positive point about this was that Tara could decide how much of herself she wanted to hear. Because we were going to be recording two tracks, Tara needed to be perfectly in time from the first take, so we used a click track (essentially a metronome) in ProTools.
We created a separate channel for this so it wouldn't be part of the final recording. To send the click track to the headphones, you can change to output 5, 8, 2, etc.

When we asked Tara if she could hear the click track, she said it was only coming out of one side of the headphones. This was because we were using TRS cables, which are mono auxiliaries. If you want stereo sound, you can use two auxiliary cables.

Then Odilon decided it was a good idea to use a headphone amplifier:

headphone amplifier

This was for Tara to be able to control her own volume.

Before we could start recording, we did a final check. There was spill from the click track, so we had to turn it down.
We could have closed the door on Tara, but it feels more human and improves the acoustic of the room to leave it open.

We were very clear with Tara over the talkback about what we wanted to do, and checked that she was okay with it. We decided to record three takes on different tracks. This was so that we could choose the best take, and then, if there were any parts where Tara felt she sounded better on a different take, we could just splice them in afterwards. For this reason we used the same input for each take.

Then we tried an experiment where we sent Tara the ProTools feed while she was still listening to herself – monitoring through ProTools rather than straight from the mic. This didn't work because there was a monitoring delay through ProTools.
There was not so much delay with playback, so she could hear what she had previously played. We used this to record the second part – sending her the first part + click track + what she was currently playing. (It's polite to send the current feed through the recording musician's headphones.)

When we had recorded three takes of both parts of the duet, it was time to edit in ProTools. The huge advantage here was the click track. It kept the recording perfectly in time throughout so it was incredibly easy to splice in bits of different recordings – without even having to fade it in!

crotchet beat

The first thing we did was to put some space between the different tracks. We put most of one track in one speaker and most of the other track in the other speaker.
At this point in recording, some people may decide to use the master fader to wobble the whole thing up and down, but it's pretty sloppy to put reverb on the master fader, so instead we set up an aux feed.

Then we realised that one note sounded louder than the others because of the acoustic of the room so we put an EQ on it.

The last thing to do was compress the track using RVox.

It was pretty speedy to be able to record all the different takes and edit them in the space of an hour and a half. In a commercial recording it would usually take about an hour and a half just to record the track. Then you would go in and examine the final take on a note by note level, to make sure everything is absolutely pristine.


Anyway. Until next time!

Sound Recording Session 5

Today we edited the sax and trumpet duet from the last session. After realising we couldn't hear sound over ProTools with the faders up, it was quickly brought to light that faders are operated with your fingertips and not your nails – they're touch-sensitive like a smartphone, and you need to press down a bit to get anything to happen. We also learnt that at the BBC faders are up not down, which feels like a more natural way to do things.

The first edit we made on the video was to cut out a note that Lauren had split on the trumpet, and replace it with a cut from a previous take:

After the events of the above video we zoomed in and cut out the extra attack on the note.

zoomed in 2

We then put in a small fade on either side of the edit so it fit more smoothly into the different take. A good thing to do is to close your eyes or look away and listen really closely to check whether or not you can hear the edit. Personally I could only barely, barely hear it but it moved on slightly quicker due to a slight tempo difference. However this was only minor and most likely I wouldn’t have heard it if I hadn’t already known there had been an edit.

To prevent this from happening, if you need to redo a little bit of a take you can get the sound engineer to play the bit from the previous take down the talkback speaker so you can get the tempo. You can also use a click track (but more on this in my next post!)

A few key tips on editing:

  • Edit on fast transients – the starts of notes rather than the middles
  • Start hard on the speech and not on the breath
  • Save the edited session separately so you can always go back to an earlier take (ProTools uses non-destructive editing – the original audio files are always still there)

When we had tried editing the track in the last session, we had put the reverb on bypass – i.e. it wasn't there. This was because the duet was recorded in the Weston Gallery, which, as I've mentioned before, already has an overly resonant, mushy acoustic.

We realised that there was quite a distance between the room pair and the warm pair.
As the speed of sound is roughly 1,100 ft/s (about 343 m/s), the two metres (around six and a half feet) between the mics created a delay of roughly 6 milliseconds in the attack of the waveform.
To fix this we moved the tracks to match up – a slightly risky method, but it might work so we went for it. The reason it's risky is that the waveforms might cancel stuff out in a weird way:
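As a rough sanity check on those numbers (assuming a 44.1 kHz session – back-of-envelope figures, not measured from our files):

distance_m = 2.0
speed_of_sound = 343.0                    # metres per second at room temperature
delay_s = distance_m / speed_of_sound
print(round(delay_s * 1000, 1))           # ~5.8 ms later at the room pair
print(round(delay_s * 44100))             # ~257 samples to nudge the track by at 44.1 kHz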

When we played the recording back after re-aligning the tracks, we discovered that we had created artificial harmonics, which sounded ever so subtly wrong. It was only a minimal difference – some people could hear it and some (like me) couldn't – so it wasn't too huge a problem, but to prevent it happening in the future we would decide on the optimal delay when spacing the mics. You also wouldn't do this to a spaced stereo pair, because removing the time difference between the two just gives you mono sound – which defeats the whole point of a stereo pair.

Where it becomes a huge problem is when there are loads of mics, e.g. in an orchestral recording. The tuned percussion can be 30 feet away from the conductor – a very audible delay – so what you do is anchor the mics for delay compensation. In live sound the speakers could be 40 ft away on one side and 2,000 ft away on another, creating a delay of around two seconds, so you have to find a compromise. This is what makes live sound so complicated and messy. It can be dealt with, but it's a huge faff.

Another way to edit the track is to use a plug-in to create a delay of 4 ms on the close mics.

The next step was to apply delay to the sound. Delay can be used as an effect, but here we were using it as a process, as we were applying it to the whole sound, rather than just part of it. Effects aren’t something you hear a lot in classical music, but processing is used a lot – especially compression, which I will come back to later. Here are some examples of processes and effects:

PROCESSES

  • EQ
  • Compressor
  • Limiter
  • Normalising
  • Expanding

EFFECTS

  • Reverb
  • Chorus
  • Phasing
  • Delay

Pitch shifting – can be either a process or an effect, depending on how it's applied.

Delay is an artificial version of echo. Here we were using it as a process because we were applying it to the whole recording. If you set the feedback to 100%, it becomes a loop that never stops.
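A tiny Python sketch of why that happens – each echo is fed back in at the feedback level, so below 100% the repeats die away and at 100% they never decay (illustrative only, not a real plug-in):

import numpy as np

def feedback_delay(x, delay_samples, feedback=0.5):
    # Each output sample adds in the delayed output scaled by 'feedback'.
    # feedback < 1.0: echoes fade out; feedback = 1.0: they repeat forever.
    y = x.astype(float).copy()
    for n in range(delay_samples, len(y)):
        y[n] += feedback * y[n - delay_samples]
    return y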

Next we decided to compress the recording. This is when you make the dynamic range smaller by turning down everything louder than what is known as the threshold, then making the whole sound LOUDER.

The two main aspects of compression are the ratio and the threshold.

A ratio of 2:1 means that if the volume exceeds the threshold by 10 dB, it comes out only 5 dB above it.
A ratio of more than about 10:1 generally means it's acting as a limiter, i.e. the volume effectively can't exceed the threshold.

compression chart
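Here's the ratio-and-threshold arithmetic as a little Python sketch – a static gain curve only (real compressors also have attack and release times, and the threshold here is just an example value):

import numpy as np

def compressed_level_db(level_db, threshold_db=-20.0, ratio=2.0):
    # Below the threshold the level passes through untouched; above it,
    # every 'ratio' dB of input gives only 1 dB of output.
    over = np.maximum(level_db - threshold_db, 0.0)
    return level_db - over * (1.0 - 1.0 / ratio)

print(compressed_level_db(-10.0))              # -15.0: 10 dB over comes out 5 dB over (2:1)
print(compressed_level_db(-10.0, ratio=10.0))  # -19.0: at 10:1 it behaves like a limiter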

Home recordings always sound quiet because they never have any compression.

Compression is viewed by some sound engineers as an art form, because it can also be done in a way that sounds really horrendous. E.g., as is too often heard, the dynamic range can be squashed down to its smallest degree and everything made loud, even when it's supposed to be pp. This is largely done so the music can be heard over the car engine when people are listening to the radio while driving. But it's gross.

Good compression is when you find the golden mean between dynamic range and loudness. There is also such a thing as parallel compression, which is when you blend two versions of a track (one compressed and one not) together. It's not used much in classical music – it's more for blues and soul. Classical compression is done at the end of the mixing process.

Here you can see that by sacrificing a tiny bit of dynamic range we gained a lot of loudness.

In the above video we were using the compressor that comes with ProTools, but just as with reverb, there is a whole host of third-party plug-ins. An excellent compression plug-in is RVox, made by a company called Waves. It uses one slider, called compression, and is staggeringly easy to use:

The opposite of compression is called expansion. It increases the dynamic range by making sounds below a threshold even quieter; the extreme version, which silences them altogether, is known as a gate.
A use for this would be on the sax mic, to get rid of any trumpet sound that bleeds through.
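A very crude sketch of a gate in Python – real gates track the signal's envelope and open and close smoothly, but the basic idea is just this (the threshold value is arbitrary):

import numpy as np

def gate(x, threshold=0.02):
    # Anything below the threshold (quiet bleed, like trumpet spill on the
    # sax mic) is silenced; the louder wanted signal passes through untouched.
    y = x.copy()
    y[np.abs(y) < threshold] = 0.0
    return y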

After this video we realised that we had cut out the sax as well as the trumpet, so it wasn't going to work.
ProTools has a feature called Strip Silence, which takes every silence and turns it into a gap. This is good for splicing different takes together.

By this point we had got to the end of the two hours we had for the session so Odilon recommended we each visit the studio in our own time and complete our own separate mixes of the track. I’m hoping to book the recording studio at some point within the next couple of weeks to do just this, which I think will also be useful so I can get some hands-on experience with ProTools and the mixer.

The next session is overdubbing!