
Saturday, February 26, 2022

How loud should a master be in 2021 (with free VST plugin)

 



Hello and welcome to this week's article!

Today we're going to expand on our article about mastering levels for streaming, CD and club play (click here to read it), updating it to the latest requirements and tools.

Levels in mastering are an evolving thing: they have kept changing over time with the evolution of playback formats, and any mastering engineer needs to stay up to date with the industry-standard requirements if he/she wants to nail the perfect levels.

Let's start by saying that times have changed since the days of the Loudness War (click here for a dedicated article), a time in which everyone was competing to have the loudest (and often distorted) master. The answer to that has been loudness normalization: the main streaming platforms (Youtube, Spotify, Apple Music and so on) adjust each track so that everything plays back at a coherent level, with the result that oversquashed masters get a big penalty in terms of dB.

In short: the louder (and less dynamic) your master is, the more dB the platform will subtract to bring it back down; you can see how many dB your master will be reduced by uploading it on this page:

Loudness Penalty

The page will analyze your master for free and tell you, with good approximation, how many dB it will be turned down on each of the main streaming platforms.
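As a rough mental model (my simplification for illustration, not any platform's exact algorithm), you can think of the penalty as the difference between your integrated loudness and the platform's target, something like this Python sketch:

```python
# Simplified loudness-normalization model (an assumption for illustration,
# not Spotify's or Youtube's actual algorithm).
def loudness_penalty_db(measured_lufs: float, target_lufs: float = -14.0) -> float:
    """Gain (in dB) a platform would apply to bring a hot master down to target."""
    return min(0.0, target_lufs - measured_lufs)

print(loudness_penalty_db(-8.0))   # -6.0 -> a -8 LUFS master gets turned down by 6 dB
print(loudness_penalty_db(-16.0))  #  0.0 -> quieter masters are not boosted in this model
```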

In order to reach the correct levels for modern releases, the best approach is to check the streaming platforms' help sections and read the suggested levels; in this case we are going to check Spotify's:
they suggest -14 LUFS integrated (integrated means average, not peak) and a true peak of -1 dBTP (-2 dBTP in case of a master louder than -14 LUFS integrated).
You can of course upload hotter masters; they will just be turned down so that they sit at the same volume as the other songs, only more compressed and distorted.

LUFS stands for Loudness Units relative to Full Scale, a modern measurement of perceived loudness used mainly by streaming platforms, while true peak is the maximum level of the reconstructed signal waveform, measured between the samples. Both values need to be monitored after the limiter, because even if you set a ceiling of -1 dB, the true peak can still end up louder, for example at -0.3 dB.
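If you want to double-check these numbers outside of your DAW, here is a rough Python sketch of how they can be measured offline; it assumes the pyloudnorm, soundfile, scipy and numpy packages are installed, and "master.wav" is just a placeholder file name. The true peak here is only approximated by 4x oversampling, so treat it as an estimate.

```python
import numpy as np
import soundfile as sf
import pyloudnorm as pyln
from scipy.signal import resample_poly

data, rate = sf.read("master.wav")        # float samples, shape (frames, channels)

meter = pyln.Meter(rate)                  # K-weighted meter (ITU-R BS.1770)
integrated_lufs = meter.integrated_loudness(data)

# Rough true-peak estimate: oversample 4x, then take the absolute maximum in dB.
oversampled = resample_poly(data, up=4, down=1, axis=0)
true_peak_db = 20 * np.log10(np.max(np.abs(oversampled)))

print(f"Integrated loudness: {integrated_lufs:.1f} LUFS")
print(f"True peak (approx.): {true_peak_db:.1f} dBTP")
```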

How can you measure those values for free? With the Youlean Loudness Meter plugin, a free VST that does exactly that (and also monitors the dynamic range!); based on its readings you can export your songs so that they sound the best they can on every platform.

Happy holidays from Guitar Nerding Blog!! 


Become fan of this blog on Facebook! Share it and contact us to collaborate!!

Saturday, February 19, 2022

Volume inconsistency: solve it with the clip gain/event gain tool

 



Hello and welcome to this week's article!

Today we are going to talk about an ongoing topic, gain staging, and the problem we're going to tackle is volume inconsistency.

When recording an instrument with a wide dynamic range (vocals above all), professional recording engineers usually run the signal through a minimal amount of processing so that it arrives in the DAW in good enough shape to already sound decent and require less mixing/digital processing. Usually we're talking about a little bit of compression (just to shave off a couple of dB and reduce the difference between the loudest and the quietest parts), a little bit of de-essing and a high pass filter to roll off the unnecessary subsonics; this is typically done in hardware preamps or good mixing boards.

Unfortunately, in the world of home recording a good hardware compressor is not always at hand, and the vocals are recorded straight into the input of the audio interface. The input level is usually set just so it doesn't clip: the engineer asks the singer to scream as loud as they can, adjusts the gain until it stops clipping, and leaves it like that for the whole recording session.

This leads to a HUGE dynamic excursion, to the point that in the same track some parts will be inaudible (and graphically invisible) while others eat up all the available headroom, and we will have to compress the s*** out of the track to obtain a little consistency in volume, with the downside that such heavy compression will unavoidably end up coloring/deteriorating the track a lot.

How do we solve this problem? It's easy, with a tool called Clip Gain (in some DAWs, for example Pro Tools) or Event Gain (in Studio One): you cut the track into sections of similar level and adjust each one, raising the parts that are too quiet and lowering the ones that are too loud, so that when the track hits the compressor it can do its job in a clean and pleasant way.

In order to change the event gain you need to click and hold the little square in the upper middle part of the event and drag it up or down to add or remove gain.
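For those who like to see the idea in code, here is a minimal offline sketch of the same concept (hypothetical names, assuming the take is a mono numpy array of samples): each region of the take gets its own gain before any compressor sees it.

```python
import numpy as np

def apply_clip_gains(audio, regions):
    """regions: list of (start_sample, end_sample, gain_db) tuples, one per section."""
    out = audio.copy()
    for start, end, gain_db in regions:
        out[start:end] *= 10 ** (gain_db / 20.0)  # convert dB to a linear gain factor
    return out

# e.g. lift a quiet verse by 6 dB and pull a screamed chorus down by 4 dB
# leveled = apply_clip_gains(vocal_take, [(0, 200_000, 6.0), (200_000, 450_000, -4.0)])
```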

This operation belongs more to the editing phase than to the mixing one, and it's admittedly a bit boring and long, but believe me: arriving at the mixing phase with a level that is consistent throughout the track, before it ever reaches the compressor, makes a world of difference. Riding the volume, which is what engineers did to keep levels stable before compressors existed, is actually a cleaner way to set the right gain than compressing, and clip gain is simply a way to do it outside of real time.

I hope this was helpful!


Become fan of this blog on Facebook! Share it and contact us to collaborate!!

Saturday, February 12, 2022

Review: Zoom GFX 707

 


Hello everyone and welcome to this week's article!

Today we are reviewing another legacy product, which was on the market around the year 2000: the Zoom Gfx 707 (often called just Zoom 707)!

At a time when most beginner guitar players were starting out with an ultra cheap guitar and a Zoom 505 (click here for the review), the 707 was already a step forward: the multieffect with the expression pedal, the one that let you play wah solos like Jimi Hendrix or Kirk Hammett, and whoever owned an expression pedal was unavoidably destined to be the lead guitarist of any high school band.

While researching information about this unit, I found out (here) a detail I didn't expect at all: this unit is not fully digital. It has an analog section (comprising the Compression/Limiting/Noise Gate, Gain, Distortion, Sustain, Fuzz, EQ and Amp simulator) and a digital one, which contains all the effects (Reverb, Modulation, Pitch shift, Harmonizer etc.).

The unit can run on batteries or with a DC adaptor and, unlike the 505, offers both mono and stereo outputs, to play with stereo effects.

So, how does it sound? Well, by today's standards, it doesn't sound good.

Zoom's trademark back then was to cram these ultra cheap devices with as many amps and effects as possible (and a drum machine too, which was actually quite useful for practicing!), clearly aiming to sell quantity over quality; on top of that, this unit came out just one year before the revolutionary POD 2.0, and it had an interface that was anything but intuitive.

Is it possible to get good sounds out of it? Probably, for some genres. The problem is that you have to work through an interface that is not easy at all and really know your way around it, and it's probably better to start from scratch than from one of the factory patches, since they are just very, very caricatural, with the high gain ones so scooped and compressed that they are probably usable only for some industrial rock riff.

Is it worth buying today? Only if you find a very good deal and you are only after effects and maybe some clean tones; in that case the unit can be useful if found for 30/50 bucks. But if you need overdrive or distortion, there are many other units on the market that can get you a much better tone, even on the cheap.

Thumbs down!


Specs:

- 74 guitar effects  (up to 10 can be used simultaneously)

- 30 Drive effects with VAMS amp modeling 

- 3 acoustic guitar simulators 

- 120 patches (60 preset/60 user) 

- Fully programmable on-board expression pedal 

- Onboard drum machine for practice or recording (60 preset patterns with tempo control)

- 6-second onboard sampler (you can slow down sampler re-play by up to 25% without changing the pitch for learning intricate guitar parts) 

- Smart Media card slot ready for expanded phrase sampler and patch memory

- Analog control knobs for fast editing

- Large LED display

- Dual footswitches for live performance 

- Onboard chromatic tuner.

Saturday, February 5, 2022

ADSR on synths? Envelope explained!



Hello everyone and welcome to this week's article!

Today we continue our exploration of the synth world talking about the concept of Envelope, also known as ADSR.

ADSR stands for Attack, Decay, Sustain and Release, and it is something we have already touched on when talking about compressors: basically it describes the way a sound evolves from its beginning to its end, and this shape is called the envelope.

We split a single sound, for example a snare, a vocal part or, in this case, a synth note, into these 4 stages because each one lets us change the result radically, and it's fundamental when shaping our tone: especially on a synth, changing the ADSR can make the difference between a quick snap that disappears immediately and a super slow, long, violin-like note.

Attack: sets the time it takes for our sound to go from zero to the maximum level.

Decay: the time needed for the sound to drop from the maximum level down to the sustain level we set.

Sustain: regulates the level at which the sound keeps playing (the level where the decay ends) for as long as we hold the key down on the keyboard; unlike the other three parameters, it is a level rather than a time.

Release: decides how long the sound takes to fall back to zero once we release the key on the keyboard (a small code sketch of the whole envelope follows below).
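To make the four stages concrete, here is a minimal Python sketch of a linear ADSR envelope built as a numpy array (the parameter values are hypothetical, and real synths often use exponential rather than linear segments):

```python
import numpy as np

def adsr_envelope(attack, decay, sustain_level, release, hold_time, rate=44100):
    """attack/decay/release/hold_time in seconds; sustain_level is a 0..1 amplitude."""
    a = np.linspace(0.0, 1.0, int(attack * rate), endpoint=False)           # zero -> max
    d = np.linspace(1.0, sustain_level, int(decay * rate), endpoint=False)  # max -> sustain
    s = np.full(int(hold_time * rate), sustain_level)                       # key held down
    r = np.linspace(sustain_level, 0.0, int(release * rate))                # key released
    return np.concatenate([a, d, s, r])

# A plucky note: fast attack, short decay, moderate sustain, short release
env = adsr_envelope(attack=0.01, decay=0.1, sustain_level=0.6, release=0.2, hold_time=0.5)
```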

By choosing the right type of wave and fiddling with the envelope we can give our tone a shape that assigns it (very crudely, at this stage) a role, so that we can create music the same way it was done in 70s and 80s videogames, a genre that still exists today called "chiptune", in which drums, bass and all the other instruments are created from scratch this way; this was a very popular way of making music before the invention of samplers.

Drums were created by using sounds with a very fast attack and decay, a very quick release and zero (or almost zero) sustain, and the main difference between the drum parts was in the initial wave and the pitch.

Same thing for the bass, but with different notes and a little more sustain (the sound doesn't have to be percussive anymore); from there we can move on to higher pitched tones with longer sustain and start playing with effects to create the harmonic content of the song, trying to imitate real instruments or going full synth with leads and pads (pads are synth sounds used for the background harmonies, like orchestral parts, and leads are what we can consider the synth version of a lead guitar part, or a vocal one).
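As a small illustration of this approach (again a sketch with made-up values, reusing the adsr_envelope function from the sketch above): the same envelope shapes a noise burst into a "drum" hit and a low square wave into a "bass" note.

```python
import numpy as np

RATE = 44100

def square_wave(freq, duration, rate=RATE):
    t = np.arange(int(duration * rate)) / rate
    return np.sign(np.sin(2 * np.pi * freq * t))

# Drum: noise with a near-instant attack, fast decay and no sustain
drum_env = adsr_envelope(attack=0.001, decay=0.08, sustain_level=0.0,
                         release=0.02, hold_time=0.0)
drum = np.random.uniform(-1, 1, len(drum_env)) * drum_env

# Bass: a low square wave with a bit more sustain, so it stops being a percussion
bass_env = adsr_envelope(attack=0.005, decay=0.05, sustain_level=0.5,
                         release=0.1, hold_time=0.3)
bass = square_wave(55.0, len(bass_env) / RATE)[:len(bass_env)] * bass_env
```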


Become fan of this blog on Facebook! Share it and contact us to collaborate!!