Ask The Musician: “How To Record All Instruments of a Multi-track Song Separately (and still have it come out right in the end)”

By Jim Fusco:

Welcome to another edition of “Ask the Musician” with me, Jim Fusco!

In lieu of recording another video tonight (I’m anxiously waiting to record my first HD video, hopefully next week), I decided to finally respond to an inquiry I got on YouTube about how to record a multi-track song separately and still have it come out right in the end.  The YouTube user writes:

I have one big problem.  When we record, we obviously record them in different parts (by that, I mean we record the instruments separately).  But, we can’t record them at the same time and we have problems recording them apart.  When we try to mix them, something gets messed-up and we have to record over again and again.  Have any tips?

Why yes, I do!

People like Brian Wilson (of the Beach Boys) had many musicians at their fingertips.  So, it was easy to get all those professional musicians in the same room to record a track.  And those studio musicians rarely messed up- they were the cream of the crop, so it was easy to say, “Play this,” and watch it get done.

For those of us who record alone, sometimes it’s hard to keep the beat constant through an entire song.  Actually, George Harrison was known for having a great built-in clock when recording- he could play a song in time with no percussion behind him.  That’s one of the reasons why it was easy to finish his last album, “Brainwashed”, posthumously.

And that brings me to my first tip: the most important thing about a recording is to stay on-time and on-beat.  So, if you’re by yourself, make sure you lay down the drums first!  Of course, you have to have a drummer that won’t speed up or slow down on you, so that’s an important step, too.

Now, knowing that everyone’s human, you should also consider keeping even your drummer in-time by using a metronome.  Just lay down a track of a metronome in the right tempo first (you can always delete it or silence it later) and then have your drummer go to work.  Actually, at that point, you can lay down any instrument you want.  The only time this gets tricky is when the song changes tempo.  One thing you can do is program a very simple beat as a MIDI track (I used to use a program called Noteworthy Composer way back in the day- wonder if it’s still around?).  Then, you can map the song out, put in your tempo changes, and then just play it into your recorder as a track.
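Actually, if you’re at all comfortable with a little scripting, you can generate that guide track yourself instead of hunting down a MIDI program.  Here’s a minimal sketch in Python- purely my own illustration, assuming you have numpy and scipy installed- that writes a click-track WAV with a tempo change mapped out, which you can then import into your recorder as a track:

import numpy as np
from scipy.io import wavfile

SAMPLE_RATE = 44100  # samples per second

def click(duration=0.02, freq=1000.0):
    # A short 1 kHz blip to mark each beat.
    t = np.arange(int(SAMPLE_RATE * duration)) / SAMPLE_RATE
    return 0.5 * np.sin(2 * np.pi * freq * t)

def click_track(tempo_map):
    # tempo_map is a list of (number_of_beats, bpm) sections, in song order.
    blip = click()
    sections = []
    for beats, bpm in tempo_map:
        beat_len = int(SAMPLE_RATE * 60.0 / bpm)  # samples per beat at this tempo
        for _ in range(beats):
            beat = np.zeros(beat_len)
            beat[:len(blip)] = blip               # blip at the top of each beat
            sections.append(beat)
    return np.concatenate(sections)

# Example tempo map: 64 beats at 120 bpm, then a 32-beat section at 96 bpm.
audio = click_track([(64, 120), (32, 96)])
wavfile.write("click_track.wav", SAMPLE_RATE, (audio * 32767).astype(np.int16))

The tempo values and file name above are just placeholders- map out your own song section by section and you’ve got a tempo-change-friendly metronome for free.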

Another thing to keep in mind is software and hardware latency.  If you’re recording on a computer, you’ll run into this.  Even the fastest computers fall victim to it.  Have you ever recorded a video on a webcam and seen the audio/video sync go off?  Well, your computer is having trouble recording everything at the same time and it’s not making up for the latency (time lag) in either the software or the hardware you’re using.  And, like I said, even on the best computers, you can run into this.  I have a top-of-the-line Mac Pro here and I get hiccups in my videos sometimes because my backup machine will kick in or a popup box will interrupt.  It’s that little blip in the continuous stream of processing power that can really screw things up.

Now, you may have good luck for one or two tracks, but consider this- each time you record another track on the computer, you’re also playing back every track you’ve already recorded.  So, you could be playing back 24 tracks and recording another one at the same time- a recipe for audio latency.
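For what it’s worth, the fix most recording software applies is simple in principle: measure the round-trip delay and slide each newly recorded track earlier by that amount so it lines up with what you heard while playing.  Here’s a rough sketch of the idea in Python (the 12-millisecond figure is purely an example- you’d have to measure your own setup’s delay):

import numpy as np

SAMPLE_RATE = 44100

def compensate_latency(recorded, latency_ms):
    # Slide a freshly recorded track earlier by the measured latency so it
    # lines up with the tracks it was recorded against.
    offset = int(SAMPLE_RATE * latency_ms / 1000.0)
    return np.concatenate([recorded[offset:], np.zeros(offset)])

new_take = np.random.randn(SAMPLE_RATE * 4)           # stand-in for 4 seconds of audio
aligned = compensate_latency(new_take, latency_ms=12)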

That’s one of the reasons why I’m still recording on a standalone DAW (digital audio workstation).  I never run into those problems because recording 24 tracks is the machine’s sole purpose.  There’s no internet, no downloads, no popups- just pure recording power.  I’ve never had a problem with it, unless it’s my own bad timing that screwed things up.

So, I hope that gives you something to think about.  It’s so difficult to record a whole song alone- only the best can really be great at it (Paul McCartney comes to mind).  If anyone else has any suggestions, I’d love to hear them- leave a comment below!  I’ll see you all next week- hopefully in full high definition- for another Laptop Sessions acoustic cover song music video.

How To EQ Your Recordings – Tips on Equalization from a Music Producer

By Jim Fusco:

Welcome new and longtime fans of the Laptop Sessions to this very special article that I believe will help a lot of aspiring musicians and recording artists make their recordings sound professional while recording them at home.  This article isn’t just in lieu of recording my usual Tuesday night Laptop Sessions acoustic cover song music video- my list of covers to do is actually longer than ever- I just had the urge to write an informative article that many people will find interesting and useful.  Before getting down to business, let me note that I’m hoping to record an extra-special cover song music video this week for inclusion on the music blog next Tuesday night, so stay tuned!

I’ve always battled with trying to make my home recordings sound professional.  I went out and spent hundreds of dollars on acoustic foam that I hung on the wall (and by “hung”, I mean attached to the wall by spray glue, permanent wall tape, and Gorilla glue), invested in some computer processing plugins for my music, and bought great microphones and amplifiers.  But, no matter how hard I tried, I couldn’t get that “home recording” sound out of my songs!  I do have a few tricks now (one secret way of getting the most volume out of my recordings and another to clear everything up), but that’s after the mixing occurs.

This article is meant to focus on the tweaking that should be done while mixing a song down to a 2-track stereo pre-master.  After you’re done recording, go into the EQ (or equalization) settings on your workstation, which I’m assuming is digital nowadays.  I use a DAW (or Digital Audio Workstation)- a Tascam 2488 24-track recorder.  I love it and I really can’t see myself upgrading for any reason for a very long time.  My brother prefers to use the computer and uses Sony’s ACID.  Some use Pro Tools, but I never got into it, even though I’m a huge Mac fan.  To be honest, I use Final Cut Pro for video editing, but I’m not really a fan of that, either.  I used to LOVE Sonic Foundry’s Vegas (before Sony bought them out) for video editing.

One more point before we get to the EQ settings- back when I recorded using analog equipment, I never had to deal with equalizing the multitracks of my songs.  Truth be told, I actually still love analog recording, even though I always worked towards removing that hiss that goes along with recording on old-fashioned cassette tapes.  You see, with digital audio recording, you hit a hard ceiling at “0”- if your volume goes above zero, it’ll “clip”, and instead of a louder sound all you get is harsh digital distortion.  But, with analog recording, you could allow the levels to go into the red a couple of decibels and still get clear recordings (to a certain degree).  Thus, all my old analog masters are much louder and fuller.  Plus, since there was more room for me to boost levels, individual instruments stood out in the mix more.  Of course, I realize now that what I was doing really wasn’t the “proper” way to record and mix, but honestly, the results were there, so “proper” isn’t really a good argument for me.
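If you want to see that difference in a nutshell, here’s a tiny Python sketch- digital recording chops the peaks flat at full scale, while analog gear rounds them off gradually.  (The tanh() curve below is just a common stand-in for tape-style saturation, not a model of any particular deck.)

import numpy as np

t = np.linspace(0, 1, 44100)
hot_signal = 1.5 * np.sin(2 * np.pi * 440 * t)  # a tone peaking about 3.5 dB over full scale

digital = np.clip(hot_signal, -1.0, 1.0)  # peaks chopped flat: harsh digital distortion
analog_ish = np.tanh(hot_signal)          # peaks rounded off: the "warmer" analog overload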

Onto EQ- basically, I’m going to give you some pointers on how to EQ certain tracks so that your audio doesn’t sound muddy when you mix it down to a 2-track stereo pre-master.  The theory behind cutting some of these “bands” of equalization (say, all the sound below 50Hz) is this: say you have 24 tracks like I have on my DAW.  Well, when you record, you’re recording ALL of the sound spectrum that your microphone or pickup can handle.  The DAW then captures every bit of sound information it hears, because it’s recording uncompressed PCM (like .wav or .aiff files on your computer).  The theory here is much like the file-size savings you get when you convert something from a .wav or .aiff file to an .mp3 file.  You get essentially the same sound quality, but at a tenth (or less) of the file size!  How is that accomplished?

Well, with mp3s, it’s a combination of a couple things- first, the data is compressed into a format that’s smaller in file size than a standard uncompressed .wav or .aiff.  That part doesn’t matter to us here.  What matters is the all-important second piece of mp3 compression- mp3s don’t carry ALL of the sound information that uncompressed files do.  So, for instance, a high-quality mp3 file will have all the sound frequencies, minus the very, very high and very, very low ones.  The vast majority of humans can’t hear those frequencies anyway, so shaving them off the sound file doesn’t alter the sound we hear that much.  But, since there are fewer frequencies (and thus, less information in the file), the file size gets smaller.  Now, if you have a lower-quality mp3, one of the ways it gets the file size down is to limit the sound frequencies in the file even further.  That’s why a low-quality mp3 sounds like it’s coming through a phone- there aren’t as many frequencies in the file, so the size is smaller, but the sound is affected more.  Once you start cutting into frequencies that humans can actually hear, you start altering the sound of the music file.
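To make that concrete, here’s a quick Python sketch of the “shave off the extremes” idea- keep roughly 30Hz to 16kHz and throw away the rest.  The corner frequencies are just for illustration, and a real mp3 encoder does far more than this, but it shows how frequency content can be trimmed without changing much of what you hear:

import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfiltfilt

rate, audio = wavfile.read("full_mix.wav")  # hypothetical input file
audio = audio.astype(np.float64)

# Keep roughly 30 Hz - 16 kHz; discard the extremes most listeners can't hear.
sos = butter(4, [30, 16000], btype="bandpass", fs=rate, output="sos")
trimmed = sosfiltfilt(sos, audio, axis=0)   # axis=0 handles mono or stereo files

trimmed = np.clip(trimmed, -32768, 32767)
wavfile.write("full_mix_trimmed.wav", rate, trimmed.astype(np.int16))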

So, how does all that relate to EQ-ing your music?

Well, we’re essentially trying to do that same second piece of the mp3 process, but track-by-track.  And, we’re not trying to save file size- we’re trying to avoid that “muddy” sound that home recordings get.  So, why does that “muddy” sound happen when you mix down your recordings?

Well, I’ve always noticed that the sound coming out of my 24-track is pristine during regular playback.  But, when I mix it all down to a pre-master, I notice the difference.  That’s when the recorder tries to blend all your tracks together and fit 24 tracks’ worth of sound into just two tracks- a left channel and a right channel for stereo.  As you can imagine, that’s not an easy task and there are a lot of “assumptions” your recorder makes when mixing down.  For instance, if two tracks have a sound playing at the same frequency and at the same volume, your recorder may give one precedence over the other during the mixdown process, which brings one sound out and drowns the other one out.

Also, have you ever noticed that live acoustic recordings, such as one person singing with an acoustic guitar and nothing else, always sound so much clearer and louder, especially when it comes from a home recording?  Well, that’s because you only have two tracks competing for their share of the sound space.  And since a guitar and one vocal track don’t compete for as much sound space as, say, a guitar and bass would (without EQ, that is), you get a much clearer recording.

So, the idea is this: we have 24 tracks of sound that use every single possible frequency.  That means the bass guitar track still contains a recording of ALL frequencies, from low to high, even though the part you really want to hear (most of the time, unless you’re Brian Wilson) is in the low frequencies.  Now, say you have a vocal track.  Vocals take up a very specific range of EQ frequencies, as the human voice can only go so high or low- most of the time, we’re right in the middle.  Well, the recorder records ALL possible frequencies on this track as well, including ones that conflict with your bass track.  Now, add two acoustic guitars, electric guitar, piano, drums, etc., and you have every single one of these tracks carrying sound information in every single possible EQ band.

But, the point is- every instrument or vocal track only needs certain frequencies!  So, why would you let all 24 tracks carry hum in the 80Hz range (say, from a furnace next door that your microphone happened to faintly pick up) and drown out your bass drum, which thrives in that frequency?  (Just a note- that furnace hum at 80Hz may sound very faint on one track, but multiply it 24 times over and you’ve got a major problem that you wouldn’t have been able to fix without EQ.)  So, every instrument needs its own sound space to live in.  If you reduce the number of tracks competing for a certain EQ frequency band, you’ll give every instrument its own “pocket” of sound space in the mix and nothing will get drowned out.
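Just to put a number on that warning, here’s a quick back-of-the-envelope calculation (a sketch in Python, nothing more):

import math

tracks = 24
# If the same faint 80Hz hum leaks onto every track, the copies add in phase:
coherent_buildup = 20 * math.log10(tracks)    # about 27.6 dB louder than one track
# Even if the hum on each track were completely uncorrelated, it still builds up:
incoherent_buildup = 10 * math.log10(tracks)  # about 13.8 dB louder than one track

print(round(coherent_buildup, 1), round(incoherent_buildup, 1))

Either way, a hum you can barely hear on one track becomes very audible across 24 of them- which is exactly why a small cut on every track pays off.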

I will also point out that this is my least favorite part of the recording process- it’s tedious, there are SO many options (do I cut by 3 db or 4 db?), and since you have to go track-by-track, it takes forever.  But, this process is the single biggest reason why my recordings don’t sound “homemade” anymore, so it’s definitely worth the effort.  I just have to remember to go back and read that sentence the next time I go to mix a song…

I figure the best way to go is by instrument:

Vocals: Ah, a very important part.  For vocals, especially ones recorded at home, you’re definitely going to want to make them brighter and remove those bassy undertones that show up in the recordings.  For each vocal track (and for me, there are plenty), I reduce sound at the 225Hz mark (most EQ setups will let you pick a frequency, and when you boost or reduce that frequency, it’ll boost or reduce the frequencies immediately around it, too).  I reduce at 225Hz a lot, up to -10 dB, but make sure to listen back in case you’re altering the sound too much.

Then, I boost at 4kHz (that’s kilohertz- 1 kHz is 1,000 Hz, so the kHz figures are the higher frequencies) to bring out the main range of the vocals, as that’s where most of the sound information in a vocal track lies.  I’d give a boost of about 3 dB.

Finally, if you don’t have a great condenser microphone, don’t worry!  You can breathe some life into your vocal tracks by giving a 1 or 2 dB boost at the very high 10kHz frequency.  This will help brighten your vocal tracks.
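If you’d like to see those vocal settings written out as code, here’s a rough sketch in Python using the standard “peaking EQ” biquad recipe (the one from the well-known Audio EQ Cookbook).  The function names, the Q value, and the exact +2 dB figure for the top boost are my own choices for illustration- they’re not settings pulled from any particular recorder:

import numpy as np
from scipy.signal import lfilter

def peaking_eq(audio, fs, freq, gain_db, q=1.0):
    # Boost (positive gain_db) or cut (negative gain_db) a band centered on freq.
    a_gain = 10 ** (gain_db / 40.0)
    w0 = 2 * np.pi * freq / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a_gain, -2 * np.cos(w0), 1 - alpha * a_gain])
    a = np.array([1 + alpha / a_gain, -2 * np.cos(w0), 1 - alpha / a_gain])
    return lfilter(b / a[0], a / a[0], audio)

def eq_vocal(audio, fs=44100):
    audio = peaking_eq(audio, fs, 225, -10)   # tame the bassy undertones
    audio = peaking_eq(audio, fs, 4000, +3)   # bring out the main vocal range
    audio = peaking_eq(audio, fs, 10000, +2)  # a little "air" up top
    return audio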

Guitar: For guitars, especially acoustic guitars, I cut everything below 100Hz, as anything down there will just interfere with our bass drum sound- something that should be avoided at all costs.  I cut by -10 dB here.  Then, you can boost about 3 dB anywhere between 150Hz and 5kHz, depending on your guitar and the sound you want.  If I have two acoustic guitar tracks, I’ll EQ one with a boost in the lower frequencies and the other with a boost towards the high frequencies to give each a balanced but distinct sound.  I like bright acoustics most of the time, so I’ll go towards 3kHz, but for some mean electric guitar, you may want to keep it around 1kHz.

Bass: Again, you’d think this would be the “lowest” EQ space in your mix, but it’s not- you need that space for the bass drum or your song won’t have a beat!  So, give a cut of about 3 dB at 250Hz and below.  If you’re like me and like a “crunchy” bass (listen to the bass on “Sloop John B” by the Beach Boys and you’ll hear what I’m talking about), you can brighten the string noise of the bass by adding a couple of decibels around the 3.5kHz range.

Bass Drum: The all-important bass drum lives in the “bottom” of your EQ mix.  Boost the 80Hz frequency (by as much as you want, but start at 3 dB), or you can go up to the 100Hz mark if you think it sounds better.  Between 150Hz and 600Hz, though, you’ll want to cut the EQ so it doesn’t interfere with your bass or possibly your guitars, depending on how you EQ’d them.  So, here, cut quite a bit: up to -10 dB.  You can also add a bit of “bite” at the 3.5kHz range.

Snare Drum: Also important- you’ll want to get rid of that “boxy” sound at around 900Hz and maybe give a boost (we’re talking a couple of decibels here) all the way up at 9kHz for some brightness.

Cymbals: Cut anything below 200Hz on these almost completely- why would you EVER need those low frequencies from a cymbal?  This is the perfect example of useless sound information that would muddy up the mix and get in the way of your bass drum.  Give another slight cut (maybe 1 or 2 dB) at 1.5kHz to take away some of the annoying ring and loudness that would otherwise cut through your mix too much.  You can also apply these changes to a tambourine track.
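And for completeness, here’s how the rest of the settings above might look, reusing the peaking_eq helper from the vocal sketch.  A simple Butterworth high-pass stands in for the “cut everything below X” cases, and the Q values, filter orders, and the single 400Hz cut standing in for the bass drum’s 150-600Hz scoop are all my own approximations:

from scipy.signal import butter, sosfilt

def high_pass(audio, fs, cutoff):
    # Roll off everything below the cutoff frequency.
    sos = butter(2, cutoff, btype="highpass", fs=fs, output="sos")
    return sosfilt(sos, audio)

def eq_acoustic_guitar(audio, fs=44100):
    audio = high_pass(audio, fs, 100)        # stay out of the bass drum's way
    return peaking_eq(audio, fs, 3000, +3)   # bright acoustic sound

def eq_bass(audio, fs=44100):
    audio = peaking_eq(audio, fs, 250, -3)   # leave the very bottom for the kick
    return peaking_eq(audio, fs, 3500, +2)   # "crunchy" string noise

def eq_bass_drum(audio, fs=44100):
    audio = peaking_eq(audio, fs, 80, +3)    # the thump
    audio = peaking_eq(audio, fs, 400, -10)  # scoop the 150-600Hz mud
    return peaking_eq(audio, fs, 3500, +2)   # a bit of "bite"

def eq_snare(audio, fs=44100):
    audio = peaking_eq(audio, fs, 900, -3)   # remove the "boxy" sound
    return peaking_eq(audio, fs, 9000, +2)   # brightness

def eq_cymbals(audio, fs=44100):
    audio = high_pass(audio, fs, 200)        # cymbals don't need those lows
    return peaking_eq(audio, fs, 1500, -2)   # take the edge off the ring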

Some other tips:

– Cut at 50Hz to reduce microphone “pops” on your audio tracks- I hate when a great take is ruined by a popped “P”, so this should help.

– Piano is a tough one because it actually uses many of the frequencies in the sound spectrum.  But, to make it sound more “aggressive” (Jerry Lee Lewis, anyone?), boost your EQ at around 2kHz.

– To give some “sparkle” to your guitars, especially acoustics, you can give a 1 or 2 dB boost in the 10kHz region, as well.
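Tying it all together for a single (mono) vocal track might look something like this- it reuses the high_pass and eq_vocal functions from the sketches above, the file names are placeholders, and the 50Hz high-pass is the “pop” cut from the first tip:

import numpy as np
from scipy.io import wavfile

fs, vocal = wavfile.read("vocal_take.wav")  # hypothetical mono vocal take
vocal = vocal.astype(np.float64)

vocal = high_pass(vocal, fs, 50)  # reduce popped "P"s
vocal = eq_vocal(vocal, fs)       # the vocal chain from the earlier sketch

vocal = np.clip(vocal, -32768, 32767)
wavfile.write("vocal_take_eq.wav", fs, vocal.astype(np.int16))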

I hope these tips help you out while mixing your recordings.  I know they certainly helped me!  But, just like the old “leading a horse to water” adage, I figured it was best to first educate you on why this process is so important and why it works before giving you the info you’ll need to get great-sounding recordings, even if you’re rocking out in your home studio.

If you have any questions/comments, arguments/beefs, let me know by leaving a comment below!

Insight on acoustic video covers for the Laptop Sessions

By Jeff Copperthite:

When the idea came up to do The Laptop Sessions as a free video series, Jim started it all off after watching a cover of “Let It Be” that was played highly incorrectly, so he recorded himself playing it correctly.  He wanted to put a good name to music covers on YouTube- especially covers of one of his favorite bands, The Beatles.  Also, in the spirit of “The Bathroom Sessions”, a free music video series by two members of Barenaked Ladies, I helped coin the term “Laptop Sessions”, since Jim was using the camera on his laptop to record the videos.

After the positive response to the sessions Jim put out, Chris and I decided to help Jim with the “Session a day” project starting in 2008.  Initially, Jim enlisted Chris so the two of them could alternate videos and the site would have a new music video each day.  I had tried ten sessions in late 2007, but I stopped doing them due to the low-quality web camera I owned.  In general, I was still new to the realm of video as well.

After nearly 60 new video sessions recorded for The Laptop Sessions this year, I have recording videos down to a science. Here is a typical rundown of what it takes for me to make each session.

1) Practice the song

This is the obvious one.  As a songwriter, I know the more comfortable I am with the song, the better the video will come out.  I grew up hearing a lot of music in the ’90s, so I tend to be most comfortable when I decide to do a song by bands such as Pearl Jam, Stone Temple Pilots, and Radiohead.  However, we do try to diversify our recordings across the years, and I know I can’t limit myself to alternative bands.  Therefore, some songs require up to a week of practice.  Others I can learn and play comfortably in an hour or two.  I’ll usually begin practicing the song regularly two to three days before I record it.

2) Set up the video recording station

Lately, I have four common locations for my videos.  The biggest problem I have is that I don’t own a tripod for my camera (yet – I do plan to buy one), so I have to rest the camera on a makeshift stand.  I also have to make sure there is enough light- even during the day, I have to have at least one lamp on so the video doesn’t come out dark.  After that, I position the camera, set the zoom, tune the guitar, and do a practice run of the song on the acoustic.  The music tends to be easy for me- it’s singing and remembering lyrics that’s the most difficult.  For this reason, I put up a small sheet of “notes” that reminds me what verse or line to sing next.  Sometimes, I have to include the entire lyric sheet, but that’s rare- “Round Here” comes to mind as a song where I needed the whole lyric sheet by the camera.

3) Record the video

When I’m satisfied that I can record the song, I psych myself up for the performance.  Lately, I’ve been able to record the song in about three or four takes.  I don’t worry about what happens before or after the performance, since I can edit that out in the next step.  As you’ve seen on our site, recording acoustic guitar video covers is really easy some days; other days, you want to throw your guitar against the wall because something minor keeps messing up takes.  “Jane” comes to mind here (despite it being a piano cover).  I played it on the first take and was very happy to have made it through the song satisfactorily- that is, until I discovered the battery had died in the middle of the recording.  I charged the battery, and then it took me another 20 or so takes to get it again.  Other times, the performance comes so naturally you wonder why you practiced the song so much beforehand.

4) Edit the Video

This is probably the easiest step, despite taking up to 30 minutes.  I transfer the video to my laptop (as you can tell, I don’t own a laptop with a built-in camera, so technically I should be doing “The Powershot Sessions”).  Once the video is transferred, I trim the clip down to the parts I want (usually that means cutting off the beginning and end), then add two title screens and a credit roll.  Then, depending on the length of the performance, I render the video, which can take up to 12 minutes for long songs.

5) Write the description, and upload the video.

While the video renders, I write the YouTube description and tags.  Usually I’ll comment on the song I chose, why I chose it, the album it’s from, and the performance itself.  I’ll also usually throw in some current news and other tidbits of info.  My descriptions tend to be at least 100 words long.  I can usually get the descriptions onto both the blog and YouTube before the video finishes rendering.  Then I upload the video, copy the embed code onto the blog, and publish!

What keeps me fresh for the sessions is listening to new music that I might like to cover.  I count on Chris and Jim to introduce me to bands and songs I’d otherwise not know, but some people I know also help me out in that department.  It’s also fun to use this as a springboard to get people to hear our independent music.  That’s why we do “Original Wednesday”, and slowly we’re building up some excitement from our subscribers when that day rolls around.  At least we know everyone watching will be listening to something they’ve never heard before.

I hope you enjoyed getting some insight on the process on my end. As always, if you have questions please email admin@fusco-moore.com, and direct your questions to me, Jeff Copperthite. Have a great evening!

Video Blog: How Songwriter Jim Fusco Records A Song in His Home Studio

By Jim Fusco:

Originally done in four parts, this remastered video (originally recorded in 2007 and remastered in 2020) is about the triumphs and pitfalls of recording a song in my home studio. You’ll see techniques, advice, and most of all, a bunch of bad luck. The song I’m recording is called “Go Back To Him,” written by Jim Fusco and Alberto Distefano.  The song was featured on my album “Halfway There”, available at http://jimfusco.com and on iTunes!  Make sure to stick around for the end of the video and check out the music video for “Go Back To Him”, complete with remastered stereo audio for the first time!  For those who saw the original video, you’ll recall how dark the video looked, as the overhead lighting in my home studio at the time was not conducive to great video quality.  Thankfully, through the magic of Final Cut Pro X, I’m able to breathe new life into this documentary, which captures what it was like to record a song at 4am (which was easily done at 23 years of age…).