Bandcamp
Spotify, Apple et al
I’ve got about 15 tunes to record and about 4 hours per week to work on them. I’ve paid for a session drummer so my plan is to record all the instruments over the next 6-12 months while at the same time practicing the vocals.
Then record the vocals when I’m ready. At that point I may well just book a studio and sing.
Depending on how it all goes I might even pay someone to do the mix. Certainly for the mastering process.
https://speakerimpedance.co.uk/?act=two_parallel&page=calculator
Same goes for reverb and delay busses. It's common to have three vocal reverb busses with long, medium and short reverbs, and blend them. I recently listened to an interview with a top engineer (I don't remember which one) who mixed seven different reverbs on a voice to get the sound he was looking for. For some artists/productions an almost completely dry sound works best; for others it's the opposite. The layered reverb spaces on Lana Del Rey's voice are a work of art in themselves.
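To make the bus idea concrete, here's a minimal sketch (toy signals and toy impulse responses, not real reverbs) of a dry vocal sent to three parallel reverb busses whose wet returns are blended back at different levels:

```python
# Hypothetical sketch of parallel reverb busses: the dry vocal is sent to
# three reverbs (short, medium, long) and the wet returns are blended in.
# convolve() and the impulse responses are stand-ins for real reverb plugins.

def convolve(signal, impulse):
    """Naive convolution: applies an impulse response to a signal."""
    out = [0.0] * (len(signal) + len(impulse) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(impulse):
            out[i + j] += s * h
    return out

def mix(buses):
    """Sum several (signal, level) pairs sample by sample."""
    length = max(len(sig) for sig, _ in buses)
    out = [0.0] * length
    for sig, level in buses:
        for i, s in enumerate(sig):
            out[i] += s * level
    return out

dry = [1.0, 0.5, 0.25, 0.0]             # toy vocal signal
short_ir = [1.0, 0.3]                    # short decay
mid_ir   = [1.0, 0.5, 0.25]              # medium decay
long_ir  = [1.0, 0.7, 0.5, 0.35, 0.25]   # long decay

vocal = mix([
    (dry, 1.0),                          # dry signal at unity
    (convolve(dry, short_ir), 0.3),      # short reverb return
    (convolve(dry, mid_ir), 0.2),        # medium reverb return
    (convolve(dry, long_ir), 0.1),       # long reverb return
])
```

The point is only that each bus gets its own send level, so the long tail can sit much lower than the short ambience while all three stay in the mix.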
If you have a singer with a quiet voice, being close to the mic may actually be better than a foot away, to get some additional warmth from the proximity effect.
Nothing's too much or too little as long as you know why you're doing it. It's easy to be dogmatic about the process.
If people here are trying to LEARN how to mix vocals, then approaching it from the basics is the way to do it.
When I teach audio mixing, as I sometimes do, the proportion of people who want to bypass the basics and go straight into Brauerizing, side-chain compression and all the advanced stuff is nearly 100%.
No one wants to learn how to do the basics.
None of the pro guys who developed and use these advanced techniques did so without learning the basics.
Walk before you run.
Studio: https://www.voltperoctave.com
Music: https://www.euclideancircuits.com
Me: https://www.jamesrichmond.com
I haven't advocated over-complexity as the correct way to do things; I just offered my experience as an alternative to an all-in-one vocal chain plugin. A lot of what's being offered as a counter to that feels like an argument against things I didn't say. I'm not the one who told someone else their way was a sign that something was wrong, and taking a moral stance on the number of plugins someone once used in a chain to get the sound they wanted doesn't help anyone.
My advice to anyone starting out is to dive in and make mistakes. Loads of mistakes. Over-compress things, under-compress things, make things too bright, muddy, boomy, scooped, boosted. Drown vocals in reverb. Try everything.
Nobody will die. The mistakes will teach you in a way production tips on a guitar forum never could. And maybe in a year you'll start to understand when to use the 1176 emulation, or the LA-2A, or the built-in compressor on the vocal channel strip.
With that, I will bow out and look forward to hearing what @fretmeister puts together!
In fact I discovered one that I really should not have made.
I got my drum track and stuck it in Reaper. Recorded guitars and bass to it, not as a final cut or anything but as part of guitar tone testing. Was very happy with the results as a first pass. But as I started to play with it more...
Now.... did I, or did I not set the DAW BPM to match the drum track? Did I or did I not get increasingly frustrated because I couldn't work out why the waveforms didn't appear to be landing on the right beat...
FWIW ... I absolutely agree with this and it's how I work now.
I think there is a place for all-in-ones from a learning point of view, however. Having everything there so you can see SORT of what you'll need, and turn things on and off at will to judge what happens, is really helpful, I think. When I started I had no clue! So opening my old favourite, the Butch Vig vocal plugin, I could see "OK, de-esser... some EQs, compression, oooh, saturation". It gave me a feel for what I liked, and I knew what to go for to reach that later.
But yeah ... once I understood what was needed at a "standard" level I could then explore other options to more accurately control the sound to my taste. And then add crazy shit no one else wanted, ever.
Do still occasionally use these tools though as they do have a sound of their own, and that's no bad thing if it's the sound you're after.
Channel strips are just another tool in the box, not inherently better or worse than separate EQ/compression/saturation plug-ins. You might even mix and match: use a surgical, neutral, subtractive EQ, followed by only the compressor from an SSL channel strip, followed by a Fairchild emulation compressor, followed by tape emulation from Goodhertz Wow Control.
That's just an example, not a 'real' suggestion. I find a good approach is to listen to the unprocessed track and imagine what I'd like it to sound like, then work towards that sonic image.
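A rough way to picture that mix-and-match idea: each processor is just a function on a block of samples, and the "chain" is their composition in order. The stages below are simplistic stand-ins I've made up, not emulations of the plugins named above:

```python
import math

def subtractive_eq(samples, cut_db=-3.0):
    """Stand-in for a surgical EQ: here just a broadband trim in dB."""
    gain = 10 ** (cut_db / 20.0)
    return [s * gain for s in samples]

def compressor(samples, threshold=0.5, ratio=4.0):
    """Simple static compressor: reduces level above the threshold."""
    out = []
    for s in samples:
        mag = abs(s)
        if mag > threshold:
            mag = threshold + (mag - threshold) / ratio
        out.append(mag if s >= 0 else -mag)
    return out

def tape_saturation(samples, drive=1.5):
    """Soft clipping via tanh, a common tape-style saturation shape."""
    return [math.tanh(s * drive) / math.tanh(drive) for s in samples]

def chain(samples, *stages):
    """Run the signal through each stage in order."""
    for stage in stages:
        samples = stage(samples)
    return samples

processed = chain([0.9, -0.9, 0.2, 0.0], subtractive_eq, compressor, tape_saturation)
```

The order matters in exactly the way it does in a real chain: compressing before saturating behaves differently from saturating first, and swapping the stages here changes the output the same way reordering plugins does.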
It also depends on the source material. If you have a great sounding track, and just want to sprinkle some gold dust on it, minimal processing may be the best approach. If the source track sounds like crap, excessive, creative processing may turn it into something salvageable and interesting.
As Andrew Scheps says - the only thing that matters is what's coming out of the speakers...
Some people say "just use your ears", but be aware that a lot of the time you need to cut stuff your ears can't hear because your speakers can't reproduce it; it's still there, eating up headroom in the mix. In a live scenario this is even more important.
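As an illustration of that headroom point, here's a sketch (illustrative signal levels, and a plain first-order filter rather than anything you'd actually mix with) where a subsonic component raises the peak level even though small speakers wouldn't reproduce it, and high-passing it out recovers that headroom:

```python
import math

SR = 48000

def one_pole_highpass(samples, cutoff_hz):
    """First-order high-pass filter in difference-equation form."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / SR
    alpha = rc / (rc + dt)
    out = [samples[0]]
    for i in range(1, len(samples)):
        out.append(alpha * (out[-1] + samples[i] - samples[i - 1]))
    return out

n = SR  # one second of audio
audible = [0.7 * math.sin(2 * math.pi * 440 * i / SR) for i in range(n)]
subsonic = [0.3 * math.sin(2 * math.pi * 15 * i / SR) for i in range(n)]
signal = [a + s for a, s in zip(audible, subsonic)]

filtered = one_pole_highpass(signal, 40.0)  # 40 Hz high-pass

# The 15 Hz content is inaudible on most speakers, but it pushes the
# peak level up; after the high-pass the peak drops noticeably.
print(f"peak before: {max(abs(x) for x in signal):.2f}")
print(f"peak after:  {max(abs(x) for x in filtered):.2f}")
```

The 440 Hz tone passes through almost untouched, but the combined peak drops by roughly the amplitude of the subsonic component, which is headroom you get back for the rest of the mix.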
That looks right up my street!
I’ll have to buy it.