Ask The Musician: “How To Record All Instruments of a Multi-track Song Separately (and still have it come out right in the end)”

By Jim Fusco:

Welcome to another edition of “Ask the Musician” with me, Jim Fusco!

In lieu of recording another video tonight (I’m anxiously waiting to record my first HD video, hopefully next week), I decided to finally respond to an inquiry I got on YouTube about how to record a multi-track song separately and still have it come out right in the end.  The YouTube user writes:

I have one big problem.  When we record, we obviously record them in different parts (by that, I mean we record the instruments separately).  But, we can’t record them at the same time and we have problems recording them apart.  When we try to mix them, something gets messed-up and we have to record over again and again.  Have any tips?

Why yes, I do!

People like Brian Wilson (of the Beach Boys) had many musicians at their fingertips.  So, it was easy to get all these professional musicians in the same room to record a track.  And those studio musicians never messed up.  They were the cream of the crop, so it was easy to say, “Play this,” and watch it get done.

For those of us who record alone, sometimes it’s hard to keep a beat constant through an entire song.  George Harrison, for instance, was known for having a great built-in clock when recording.  He could play a song in time with no percussion behind him.  That’s one of the reasons why it was easy to finish his last album, “Brainwashed”, posthumously.

And that brings me to my first tip: the most important thing about a recording is to stay on-time and on-beat.  So, if you’re by yourself, make sure you lay down the drums first!  Of course, you have to have a drummer that won’t speed up or slow down on you, so that’s an important step, too.

Now, knowing that everyone’s human, you should also consider keeping even your drummer in-time by using a metronome.  Just lay down a track of a metronome in the right tempo first (you can always delete it or silence it later) and then have your drummer go to work.  Actually, at that point, you can lay down any instrument you want.  The only time this gets tricky is when the song changes tempo.  One thing you can do is program a very simple beat as a MIDI track (I used to use a program called Noteworthy Composer way back in the day- wonder if it’s still around?).  Then, you can map the song out, put in your tempo changes, and then just play it into your recorder as a track.
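By the way, if you’re comfortable with a little code, the tempo-map idea is easy to sketch out.  Here’s a minimal example in Python (the function names and the segment format are just my own illustration, not from any particular program) that turns a list of (BPM, beat-count) segments into click times and sample positions you could render as a metronome track:

```python
# Turn a tempo map (a list of (bpm, number_of_beats) segments) into
# absolute beat times, then into sample positions for a click track.

def beat_times(tempo_map):
    """Return the start time (in seconds) of every beat in the map."""
    times = []
    t = 0.0
    for bpm, n_beats in tempo_map:
        interval = 60.0 / bpm          # seconds per beat at this tempo
        for _ in range(n_beats):
            times.append(t)
            t += interval
    return times

def click_sample_positions(tempo_map, sample_rate=44100):
    """Convert beat times to sample indices (where each click would start)."""
    return [round(t * sample_rate) for t in beat_times(tempo_map)]

# Four beats at 120 BPM, then the song slows to 60 BPM for two beats.
song = [(120, 4), (60, 2)]
print(beat_times(song))              # [0.0, 0.5, 1.0, 1.5, 2.0, 3.0]
```

Map the whole song out this way once, render a click at each position, and you’ve got a guide track with the tempo changes built in- then delete or mute it when you mix.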

Another thing to keep in mind is software and hardware latency.  If you’re recording on a computer, you’ll run into this.  Even the fastest computers fall victim to it.  Have you ever recorded a video on a webcam and seen the audio/video sync go off?  Well, your computer is having trouble recording everything at the same time and it’s not making up for the latency (time lag) in either the software or the hardware you’re using.  And, like I said, even on the best computers, you can run into this.  I have a top of the line Mac Pro here and I get hiccups in my videos sometimes because my backup machine will kick in or a popup box will interrupt.  It’s that little blip in the continuous stream of processing power that can really screw things up.

Now, you may have good luck for one or two tracks, but consider this- each time you record another track on the computer, you’re playing back each additional track.  So, you can be playing back 24 tracks and recording another one at the same time- a recipe for audio latency.

That’s one of the reasons why I’m still recording on a DAW (digital audio workstation).  I never run into those problems because recording 24 tracks is the machine’s sole purpose.  There’s no internet, no downloads, no popups- just pure recording power.  I’ve never had a problem with it, unless it’s my own bad timing that screwed it up.

So, I hope that gives you something to think about.  It’s so difficult to record a whole song alone- only the best can really be great at it (Paul McCartney comes to mind).  If anyone else has any suggestions, I’d love to hear them- leave a comment below!  I’ll see you all next week- hopefully in full high definition- for another Laptop Sessions acoustic cover song music video.

How To EQ Your Recordings – Tips on Equalization from a Music Producer

By Jim Fusco:

Welcome new and longtime fans of the Laptop Sessions to this very special article that I believe will help a lot of aspiring musicians and recording artists make their recordings sound professional while recording them at home.  This article isn’t just in lieu of recording my usual Tuesday night Laptop Sessions acoustic cover song music video- my list of covers to do is actually longer than ever- I just had the urge to write an informative article that many people will find interesting and useful.  Before getting down to business, let me note that I’m hoping to record an extra-special cover song music video this week for inclusion on the music blog next Tuesday night, so stay tuned!

I’ve always battled with trying to make my home recordings sound professional.  I went out and spent hundreds of dollars on acoustic foam that I hung on the wall (and by “hung”, I mean attached to the wall by spray glue, permanent wall tape, and Gorilla glue), invested in some computer processing plugins for my music, and bought great microphones and amplifiers.  But, no matter how hard I tried, I couldn’t get that “home recording” sound out of my songs!  I do have a few tricks now (one secret way of getting the most volume out of my recordings and another to clear everything up), but that’s after the mixing occurs.

This article is meant to focus on the tweaking that should be done while mixing a song down to a 2-track stereo pre-master.  After you’re done recording, go into the EQ (or equalization) settings on your workstation, which I’m assuming is digital nowadays.  I use a DAW (or Digital Audio Workstation)- a Tascam 2488 24-track recorder.  I love it and I really can’t see myself upgrading for any reason for a very long time.  My brother prefers to use the computer and uses Sony’s ACID.  Some use Pro Tools, but I never got into it, even though I’m a huge Mac fan.  To be honest, I use Final Cut Pro for video editing, but I’m not really a fan of that, either.  I used to LOVE Sonic Foundry’s Vegas (before Sony bought them out) for video editing.

One more point before we get to the EQ settings- back when I recorded using analog equipment, I never had to deal with equalizing the multitracks of my songs.  Truth be told, I actually still love analog recording, even though I always worked towards removing that hiss that goes along with recording on old-fashioned cassette tapes.  You see, with digital audio recording, you reach a hard ceiling at “0”- if your volume goes above zero, it’ll “clip”, flattening the peaks of the waveform into harsh digital distortion.  But, with analog recording, you could allow the levels to go into the red a couple of decibels and still get clear recordings (to a certain degree).  Thus, all my old analog masters are much louder and fuller.  Plus, since there was more room for me to boost levels, individual instruments stood out in the mix more.  Of course, I realize now that what I was doing really wasn’t the “proper” way to record and mix, but honestly, the results were there, so “proper” isn’t really a good argument for me.
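If you like to see things concretely, here’s a tiny sketch (Python, purely illustrative- the function names are my own) of what that digital ceiling does: any sample over full scale just gets flattened, and that flattening is the clipping.

```python
from math import log10

# Digital full scale is 0 dBFS, i.e. sample values between -1.0 and 1.0.
# Anything hotter simply gets flattened to the ceiling ("clipping").

def clip(sample, ceiling=1.0):
    return max(-ceiling, min(ceiling, sample))

def db_over_full_scale(peak):
    """How many dB a peak sits above (positive) or below (negative) 0 dBFS."""
    return 20 * log10(peak)

hot_take = [0.5, 0.9, 1.3, -1.6, 0.7]      # a take that went "into the red"
print([clip(s) for s in hot_take])          # [0.5, 0.9, 1.0, -1.0, 0.7]
print(round(db_over_full_scale(1.3), 2))    # about +2.28 dB over: clipped on
                                            # digital, often survivable on tape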

Onto EQ- basically, I’m going to give you some pointers on how to EQ certain tracks so that your audio doesn’t sound muddy when you mix it down to a 2-track stereo pre-master.  The theory behind cutting some of these “bands” of equalization (say all the sound below 50Hz) is this: Say you have 24 tracks like I have on my DAW.  Well, when you record, you’re recording ALL the possible sound spectrum that your microphone or pickup can handle.  Then, the DAW records every possible piece of sound information it hears, because DAWs are digital and can pick up any sound, especially when recording uncompressed PCM files (like .wav or .aiff files on your computer).  The theory here is much like the file-size savings you get when you convert something from a .wav or .aiff file to an .mp3 file.  You get essentially the same sound quality, but at a tenth (or less) of the file size!  How is that accomplished?

Well, with mp3s, it’s a combination of a couple things- first, it compresses the data in a special format that’s smaller in file size than a standard uncompressed .wav or .aiff.  That part doesn’t matter to us here.  What matters is that all-important second piece to the mp3 compression- mp3s don’t carry ALL of the sound information that uncompressed files do.  So, for instance, a high quality mp3 file will have all the sound frequencies, minus the very, very high and very, very low frequencies.  The vast majority of humans don’t hear these sound frequencies anyway, so shaving them off the sound file doesn’t alter the sound we hear that much.  But, since there are fewer frequencies (and thus, less information in the file), the file size gets smaller.  Now, if you have a lower quality mp3, one of the ways it gets the file size down is to limit the sound frequencies in the file further.  That’s why you get a low quality mp3 that sounds like it’s coming through a phone- there aren’t as many frequencies in the file, so the size is smaller, but the sound is affected more.  Once you start cutting into sound frequencies that humans can actually hear, you start altering the sound of the music file.
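The file-size math here is easy to check yourself.  Here’s the back-of-the-envelope version in Python (the figures are the standard CD sample rate and a common mp3 bitrate- nothing specific to any one encoder):

```python
# Uncompressed CD-quality audio vs. a typical mp3, per minute of stereo sound.

SAMPLE_RATE = 44100      # samples per second (CD standard)
BIT_DEPTH = 16           # bits per sample
CHANNELS = 2             # stereo

# Uncompressed PCM (.wav / .aiff): every sample is stored in full.
pcm_bits_per_sec = SAMPLE_RATE * BIT_DEPTH * CHANNELS        # 1,411,200 bits/s
pcm_mb_per_min = pcm_bits_per_sec * 60 / 8 / 1_000_000       # ~10.6 MB/min

# A common "high quality" mp3 bitrate.
mp3_bits_per_sec = 128_000
mp3_mb_per_min = mp3_bits_per_sec * 60 / 8 / 1_000_000       # ~0.96 MB/min

print(round(pcm_mb_per_min, 2), round(mp3_mb_per_min, 2))
print(round(pcm_bits_per_sec / mp3_bits_per_sec, 1))          # about 11x smaller
```

That’s the “tenth (or less) of the file size” I mentioned- and part of how the encoder gets there is by throwing away frequencies most of us can’t hear anyway.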

So, how does all that relate to EQ-ing your music?

Well, we’re essentially trying to do that same second-piece of the mp3 process, but track-by-track.  And, we’re not trying to save file size, we’re trying to save from that “muddy” sound that home recordings get.  So, why does that “muddy” sound happen when you mixdown your recordings?

Well, I’ve always noticed that the sound coming out of my 24-track is pristine during regular playback.  But, when I mix it all down to a pre-master, I notice the difference.  That’s when the recorder tries to blend all your tracks together and fit 24 tracks’ worth of sound into just two tracks- a left channel and a right channel for stereo.  As you can imagine, that’s not an easy task and there are a lot of “assumptions” your recorder makes when mixing down.  For instance, if two tracks have a sound playing at the same frequency and at the same volume, your recorder may decide to give one precedence over the other during the mixdown process, which brings one sound out and drowns the other one out.

Also, have you ever noticed that live acoustic recordings, such as one person singing with an acoustic guitar and nothing else, always sound so much clearer and louder, especially when it comes from a home recording?  Well, that’s because you only have two tracks competing for their share of the sound space.  And since a guitar and one vocal track don’t compete for as much sound space as, say, a guitar and bass would (without EQ, that is), you get a much clearer recording.

So, the idea is this: we have 24 tracks of sound that use every single possible frequency.   That means that even though the part of the bass guitar track you really want to hear (most of the time, unless you’re Brian Wilson) is in the low frequencies, the track still contains a recording of ALL frequencies, from low to high.  Now, say you had a vocal track.  Vocals take up a very specific range of EQ frequencies, as the human voice can only go so high or low- most of the time, we’re right in the middle.  Well, the recorder records ALL possible frequencies on this track, too, including ones that would conflict with your bass track.  Now, add two acoustic guitars, electric guitar, piano, drums, etc. and you have every single one of these tracks with sound information in every single possible EQ band.

But, the point is- Every instrument or vocal track only needs certain frequencies! So, why would you have 24 tracks all have hum in the 80Hz range (say from a furnace that was on next door that your microphone happened to faintly pick up) and drown out your bass drum, which thrives in that frequency?  (Just a note- that furnace sound at 80Hz may sound very faint on one track, but multiply it 24 times over and you’ve got a major problem that you wouldn’t have been able to fix without EQ)  So, every instrument needs its own sound space to live in.  If you reduce the number of tracks competing for a certain EQ frequency band, you’ll give every instrument its own “pocket” of sound space in the mix and nothing will get drowned-out.
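For the curious, an EQ cut like that is usually implemented under the hood as a “peaking” filter.  Here’s a sketch in Python using the standard biquad formulas from Robert Bristow-Johnson’s well-known Audio EQ Cookbook (the code itself is my own illustration, not what any particular DAW runs).  It builds a -10db cut at 80Hz- the furnace-hum example- and checks the gain at the center frequency:

```python
import cmath
import math

def peaking_eq(sample_rate, center_hz, gain_db, q=1.0):
    """Biquad peaking-EQ coefficients (Audio EQ Cookbook formulas).

    Returns (b, a): feedforward and feedback coefficients, with a[0] == 1.
    """
    A = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * center_hz / sample_rate
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * A, -2 * math.cos(w0), 1 - alpha * A]
    a = [1 + alpha / A, -2 * math.cos(w0), 1 - alpha / A]
    return [bi / a[0] for bi in b], [ai / a[0] for ai in a]

def gain_at(b, a, freq_hz, sample_rate):
    """Filter gain in dB at one frequency (evaluate H(z) on the unit circle)."""
    z = cmath.exp(-2j * math.pi * freq_hz / sample_rate)
    num = b[0] + b[1] * z + b[2] * z * z
    den = a[0] + a[1] * z + a[2] * z * z
    return 20 * math.log10(abs(num / den))

# A -10 dB cut at 80 Hz, like notching out that furnace hum.
b, a = peaking_eq(44100, 80, -10)
print(round(gain_at(b, a, 80, 44100), 2))     # -10.0 right at the center
print(round(gain_at(b, a, 4000, 44100), 2))   # close to 0: far-away
                                              # frequencies are untouched
```

That’s exactly the behavior you want: the cut lands where you aim it, and the rest of the track stays put- which is why carving each track’s pocket this way doesn’t wreck the instrument’s overall sound.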

I will also point out that this is my least favorite part of the recording process- it’s tedious, there are SO many options (do I cut by 3 db or 4 db?), and since you have to go track-by-track, it takes forever.  But, this process is the single biggest reason why my recordings don’t sound “homemade” anymore, so it’s definitely worth the effort.  I just have to remember to go back and read that sentence the next time I go to mix a song…

I figure the best way to go is by instrument:

Vocals: Ah, a very important part.  For vocals, especially recorded at home, you’re definitely going to want to make them brighter and to remove those bassy undertones that appear in the recordings.  For each vocal track (and for me, there are plenty), I reduce sound at the 225Hz mark (most EQ setups will let you pick a frequency, and when you boost or reduce that frequency, it’ll boost or reduce the frequencies immediately around it, too).  I reduce at 225Hz a lot, up to -10 db, but make sure to listen back in case you’re altering the sound too much.

Then, I boost at 4kHz (that’s kilohertz- 1kHz is 1,000Hz, so the kHz marks sit higher up the frequency spectrum than the Hz marks) to bring out the main range of the vocals, as that’s where most of the sound information in a vocal track lies.  I’d give a boost of about 3db.

Finally, if you don’t have a great condenser microphone, don’t worry!  You can breathe some life into your vocal tracks by giving a 1 or 2 db boost at the very high 10kHz frequency.  This will help brighten your vocal tracks.

Guitar: For guitars, especially acoustic guitars, I cut everything below 100Hz, as this will interfere with our bass drum sound- something that should be avoided at all costs.  I cut to -10db here.  Then, you can boost about 3db anywhere between 150Hz and 5kHz, depending on your guitar and the sound you want.  If I have two acoustic guitar tracks, I’ll EQ one with a boost in the lower frequencies and the other with a boost towards the high frequencies to give a balanced, different sound to each.  I like bright acoustics most of the time, so I’ll go towards 3kHz, but for some mean electric guitar, you may want to keep it around 1000Hz.

Bass: Again, you’d think this would be the “lowest” EQ space in your mix, but it’s not- you need that space for the bass drum or your song won’t have a beat!  So, give a cut at 250Hz and below of about 3db.  If you’re like me and like a “crunchy” bass (listen to the bass on “Sloop John B” by the Beach Boys and you’ll hear what I’m talking about), you can brighten the string noise of the bass by adding a couple decibels to about the 3.5kHz range.

Bass Drum: The all-important bass drum lives in the “bottom” of your EQ mix.  Increase the 80Hz frequency (by as much as you want, but start at 3db) or you can go up to the 100Hz mark, if you think it sounds better.  Between 150Hz and 600Hz, though, you’ll want to cut the EQ so it doesn’t interfere with your bass or possibly your guitars, depending on your decision.  So, here, cut quite a bit: up to -10db.  For this, you can also add a bit of “bite” at the 3.5kHz range.

Snare Drum: Also important, you’ll want to get rid of that “boxiness” sound at around 900Hz and maybe give a boost (we’re talking a couple of decibels here) all the way up at 9kHz for some brightness.

Cymbals: Cut anything below 200Hz on these almost completely- why would you EVER need those low frequencies from a cymbal?  This is the perfect example of useless sound information that would muddy-up and get in the way of your bass drum.  Give another cut (slight- maybe 1 or 2db) at 1.5kHz to take away some of the annoying ring and loudness from the cymbals that will cut through your mix too much.  You can also apply these changes to a tambourine track.

Some other tips:

– Cut at 50Hz to reduce microphone “pops” on your audio tracks- I hate when a great take is ruined by a popped “P”, so this should help.

– Piano is a tough one because it actually uses many of the frequencies in the sound spectrum.  But, to make it sound more “aggressive” (Jerry Lee Lewis, anyone?), boost your EQ at around 2kHz.

– To give some “sparkle” to your guitars, especially acoustic, you can give a 1 or 2 db boost to the 10kHz region, as well.
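To pull all of the numbers above into one place, here’s the whole cheat sheet collected as a little data structure (Python- and note that where I gave a range or didn’t name an exact amount above, I’ve picked one representative value myself, so treat every entry as a starting point, not gospel):

```python
# Starting-point EQ moves from this article, per track type.
# Each entry: (frequency in Hz, boost/cut in dB). Negative = cut.
EQ_PRESETS = {
    "vocals":     [(225, -10), (4000, 3), (10000, 2)],    # de-mud, presence, air
    "acoustic":   [(100, -10), (3000, 3), (10000, 2)],    # clear the kick's space
    "bass":       [(250, -3), (3500, 2)],                 # leave room for the kick
    "bass_drum":  [(80, 3), (400, -10), (3500, 2)],       # thump, cut the box, bite
    "snare":      [(900, -3), (9000, 2)],                 # less boxiness, brightness
    "cymbals":    [(200, -12), (1500, -2)],               # kill useless lows, tame ring
    "piano":      [(2000, 3)],                            # more aggressive
    "all_tracks": [(50, -6)],                             # tame popped "P"s
}

def describe(track):
    """Print the suggested moves for one track type."""
    for hz, db in EQ_PRESETS[track]:
        unit = f"{hz / 1000:g}kHz" if hz >= 1000 else f"{hz}Hz"
        print(f"{track}: {'+' if db > 0 else ''}{db} dB at {unit}")

describe("vocals")
```

Having the whole table in one spot also makes the tedious part go faster- you can walk down the list track-by-track instead of second-guessing each number from memory.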

I hope these tips help you out while mixing your recordings.  I know they certainly helped me!  But, just like the old “leading a horse to water” adage, I figured it was best to first educate you on why this process is so important and why it works before giving you the info you’ll need to get great-sounding recordings, even if you’re rocking out in your home studio.

If you have any questions/comments, arguments/beefs, let me know by leaving a comment below!

“Shine” by Trey Anastasio Chords, Lyrics, and How to Play: Ask the Musician

By Jim Fusco:

Welcome back again for another edition of “Ask the Musician” with me, Jim Fusco!  Tonight, I answer Meaghan’s question, which she posted on my cover song music video of “Shine” by Trey Anastasio.  Meaghan left a cool message here on the blog, so I had to type out the chords to this great tune for her.

“Shine”

Trey Anastasio

Intro: C         Dsus2        A

A                            Em
You know, all of you know
D                        A
To grow, what to feel
A                     Em
And so, follow me low
D                      A
You are what you lean on
A                            Em
Come out of the cold
D                A
And drift, into water

E               D
Ooooooohh
A                                   G
And the light shines on
D           A
While we all ride on
A                         G
When the days come and gone
D                A
You know we all ride on

Post-chorus:        C              Dsus2            A

Lines thicker than ground
You surf and it’s real
To soar over and down
To bend and to breathe on

CHORUS

C              Dsus2
Through water when
C            Dsus2
We are falling
C                                Dsus2
The sounds of bells are ringing out
A
We’ll ride on

C             Dsus2
Slipped over
C              Dsus2
The blue lighting
C                            Dsus2
Springs alive to circle down

E               D
Ooooooohh

CHORUS (2X)

Ask the Musician: “Do You Record Video & Audio Together on Your Acoustic Music Videos?”

By Jim Fusco:

Well, back earlier than expected for another edition of “Ask the Musician” with me, Jim Fusco!  If you’ve never seen this column before, let me give you a brief introduction.  I’m Jim Fusco of the Laptop Sessions acoustic cover songs and original music video series on YouTube and on our blog here at guitarbucketlist.com.  Every day, I get thousands of views on my cover songs and original music videos on YouTube.  With that popularity come some great questions from people all around the world.  They ask about technical topics, music theory, and other great general music tidbits.  I’ve completely immersed myself in making music these past few years and I’m happy to share what I’ve learned- hundreds and hundreds of hours of reading, researching, playing, writing, singing, and recording.  I’m glad to answer these questions here on “Ask the Musician”!

Tonight, we have an email from christianamagic on YouTube.  She writes:

Hi.  Just wanna ask. So when you record song, do you only record the audio? or record both audio and video??
and what did you use to edit it all together? Thanks

Well, that’s a good question.  She’s asking about the acoustic music videos I perform on YouTube.  You know, sometimes I’ve considered going back and recording the audio over the video to make a better take, but I can never bring myself to do it.  To answer your question: I record the audio and video at the same time!

That’s not to say that I don’t have to tinker with it sometimes.  I remember on a Ben Folds song, “Time”, I recorded the video with the microphone facing backwards!  So, you could hear a LOT of piano and very little of my voice.  Well, I had recorded this in a remote location, so I couldn’t go back and re-record the video.  My solution was to work some EQ magic.  I brought out the vocals in the mix.  It’s not perfect, but it made do for the video.  I guess I could’ve gone back and re-done the audio, but believe me, that would be much more of a burden than it seems.  Imagine trying to sync the vocals up perfectly to a live performance like that!

For hardware, I use a ZOOM H2 microphone that plugs in via USB.  This thing is great.  I used to use the built-in microphone on my Macbook laptop.  It’s actually a pretty decent condenser microphone, but my computer’s fan would run so much (and SO loudly) that I had to think of a better solution.  Sometimes, you’d hear more fan than song!  So now, I keep the microphone close to me (and only turn on the front mics so as not to pick up the fan noise behind it) and record on the laptop.  For ease of production, I use iMovie to edit the video and audio.  If I didn’t do so many videos, I might use Final Cut Pro, but even still, it’s nice to have a simple solution to get the song out there in a nice, neat package.

I hope this helps and answers your question.  Just like Peter Griffin’s advice on Family Guy is “To grow a beard,” my advice to you here is, “To get a Mac!”  It’ll make your life a lot easier when putting music videos on YouTube.

Submit your question to admin@guitarbucketlist.com and comment below to tell us how you record your videos!