by Fletch | Oct 24, 2015 | Uncategorized
It wasn’t that long ago that music tracks were recorded to analog tape. Although hard disk recording has superseded tape for most projects, many engineers still use analog equipment when tracking and mixing. Why continue to use analog equipment when digital is so much more convenient? Simply put, analog devices color sound in ways that we like. Analog transforms the recording signal by saturating and distorting the sound in pleasing ways. Digital recording technology doesn’t impart the same kind of sonic footprint; it captures sound in a colorless way. But we like color. We like the imperfections that shade what we are hearing.
Fortunately, for those of us who don’t have access to a vintage Neve console and a wide array of classic gear, there are many plugins that can approximate this in the digital realm. We have plugins that saturate, distort, compress and amplify our signal paths in an attempt to recapture the magic of analog audio, many of which may be included with your DAW. Analog emulators can bring a bit of grit, grime and dirt to sounds that otherwise sound too pristine. Guitars, keyboards, synths, drums and even vocals often benefit from a bit of analog smear, and these plugins can degrade sounds in ways that give them character, allowing them to sit more harmoniously in the mix and create a more organic sound. Strange as it may seem, to get a better overall sound it is sometimes necessary to make individual sounds more lo-fi.
Many times when I load up a virtual instrument I find there is something too perfect about it. It may be beautifully sampled, but it sounds too clean, too flawless. When I attempt to put it next to an electric guitar (a grungy instrument if there ever was one) in an arrangement, it doesn’t sit well because the virtual instrument lacks the flaws and imperfections inherent in the guitar. That is where distortion comes in. I love distortion. I love the big, in-your-face, brain-melting distortion that you hear on hard-edged tracks, but more often I use it in a subtle way. I will add a plugin on the channel strip that allows me to futz up the sound ever so slightly. You wouldn’t describe the result as distorted; in fact, you might not even notice the distortion unless you listened very closely, because I’m not trying to make the part sound distorted. I’m simply trying to give it a little edge and add some harmonic excitement.
There are many plugins that can be used to add distortion. Some have the word “distortion” in the name, while others use words like “saturation,” “overdrive” or “lo-fi.” Generally, they are found in the harmonic section of your DAW’s plugin menu because they add harmonic content to the sound. Your DAW probably has some good built-in distortion effects available to you. For example, Logic has a whole section of distortion plugins. For subtle effects, I prefer ones that emulate analog distortion, such as Distortion II, over the extreme digital destruction of Bitcrusher. And a little goes a long way. I will often add just enough that the sound is warmed up but not so much that the distortion calls attention to itself.
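To make “adding harmonic content” a little more concrete, here is a minimal Python sketch of the general idea behind subtle saturation: a soft-clipping waveshaper blended back in at a low mix level. It is a generic illustration, not the algorithm inside Distortion II or any other plugin.

```python
import numpy as np

def saturate(signal, drive=3.0, mix=0.15):
    """Blend a tanh-soft-clipped copy of the signal back in at a low level.

    drive: how hard the signal pushes into the tanh curve
           (more drive = more added harmonics).
    mix:   0.0 = fully dry, 1.0 = fully saturated.
    """
    wet = np.tanh(drive * signal) / np.tanh(drive)  # keep peaks near unity
    return (1.0 - mix) * signal + mix * wet

# A pure 440 Hz sine picks up gentle odd harmonics -- "warmed up,"
# not obviously distorted, at a low mix setting.
sr = 44100
t = np.arange(sr) / sr
dry = 0.8 * np.sin(2 * np.pi * 440 * t)
warmed = saturate(dry)
```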
Another way to add a bit of grit is to run a sound through an amp simulator. You may have done this with a guitar or bass, but virtual amps sound great on keyboards, percussion, and drums. Even vocals can sound cool running through an amp. Most DAWs now come with amp simulators, and there are plenty of good third-party plugins, such as Native Instruments’ Guitar Rig and IK Multimedia’s AmpliTube. A little knowledge of what different types of amps sound like is helpful. Typically the plugin’s interface will closely resemble the amp it is supposed to sound like, and a Google search on “guitar amplifier” can give you an idea of what the real amp is. Unsure of what those amps sound like? Look up which guitar players use them and then give their recordings a listen. You will soon discover that an amplifier like a Marshall is well suited for harder-edged sounds, whereas Fenders tend to give a cleaner, less distorted tone. Amplifier plugins may also include guitar stompbox emulators that can modulate, delay, filter, compress and further mangle the sound in ways that are gloriously lo-fi. If you want just a little bit of the amp or stompbox effect in the sound, try loading it on an aux channel and using a send on your channel strip so you can easily blend in as much of the amplified sound as you wish.
Creative use of compression can also add character to otherwise lifeless sounds. Although compressors are primarily used to tame the amplitude envelope of an instrument, they can also be used to impart some analog warmth. Mixers sometimes run sounds through analog compressors without compressing the signal, simply to add the compressor’s character to the sound. This approach can be used with compressor plugins that emulate classic analog compressors. For example, Logic’s new compressor has buttons at the top of the interface which allow you to apply models of compressors with tube, optical, VCA and FET circuitry. These models all have different sonic characteristics that will change the timbre of the sound in subtle ways, in addition to compressing the signal. There are good articles on the Internet which explain the differences between the compressor types and the kinds of instruments they tend to be used on.

Filters are used to limit the frequency range of a sound. Equalizers typically have high-pass and low-pass filters built into them, but when I want to give a sound more character I prefer to use filters that function like those found on synthesizers. Synth filters have an additional parameter that is not generally found on EQ filters: resonance. Resonance emphasizes the frequency at the filter’s cutoff point. Adding resonance gives some nice harmonic emphasis to a sound while the filter helps to carve out its own frequency space within a mix. This works great with low-pass filters, but band-pass and high-pass filters can also yield interesting results. You will find that different manufacturers’ filters have their own unique character, some sounding more analog than others. One of my favorites is FilterFreak by SoundToys, which is incredibly flexible and sounds fantastic.
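For the curious, here is a rough sketch of what a synth-style resonant low-pass filter is doing, using the classic Chamberlin state-variable design found in DSP textbooks. FilterFreak’s actual algorithm is proprietary; this only demonstrates the cutoff and resonance behavior described above.

```python
import numpy as np

def resonant_lowpass(x, sr, cutoff=800.0, resonance=0.7):
    """Chamberlin state-variable low-pass filter.

    resonance: 0.0 = no peak; approaching 1.0 = strong emphasis
    of the frequencies right at the cutoff point.
    """
    f = 2.0 * np.sin(np.pi * cutoff / sr)  # cutoff coefficient
    q = 2.0 * (1.0 - resonance)            # damping: smaller = more resonance
    low = band = 0.0
    out = np.empty(len(x))
    for i, sample in enumerate(x):
        high = sample - low - q * band
        band = band + f * high
        low = low + f * band
        out[i] = low
    return out

# Filtering white noise: the result is darker overall, with a whistle-like
# emphasis around 800 Hz that grows as resonance is raised.
noise = np.random.uniform(-1.0, 1.0, 44100)
filtered = resonant_lowpass(noise, 44100, cutoff=800.0, resonance=0.9)
```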
Reverb can also be used to add some dirt to the sound. I particularly like using convolution reverbs like Audio Ease’s Altiverb and Logic’s Space Designer for this purpose. Rather than load an impulse response of a beautiful-sounding space, I will run a keyboard or a Mellotron sample through an IR of a dusty plate reverb or a spring reverb of the type found in old guitar amps. These typically have some grittiness to them and add a lot of character to the sound being reverberated. Altiverb even has a whole section of odd spaces such as dustbins, cheap toys and nuclear cooling towers that can be used as echo chambers for your sounds. It is even possible to load wave files of any sound into convolution reverbs to use as resonators. The results are often interesting, but turn down your outputs when testing these out, as the levels can be unpredictable.
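Under the hood, a convolution reverb does exactly what the name says: it convolves your signal with the impulse response. Here is a minimal Python sketch with scipy, assuming hypothetical mono WAV files; note the normalization step, included for the same reason you should turn down your outputs.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

# Hypothetical mono files -- substitute your own dry track and IR.
sr, dry = wavfile.read("mellotron_take.wav")
sr_ir, ir = wavfile.read("spring_reverb_ir.wav")
assert sr == sr_ir, "resample one file so the sample rates match"

# The reverberated signal is the dry signal convolved with the IR.
wet = fftconvolve(dry.astype(np.float64), ir.astype(np.float64))

# Levels after convolution are unpredictable, so normalize before writing.
wet /= np.max(np.abs(wet))
wavfile.write("mellotron_spring.wav", sr, (wet * 32767).astype(np.int16))
```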
Adding imperfections to tracks with distortion, filters, amps, compressors and funky reverbs can breathe a lot of life into the sound. Virtual instruments and sounds we track in digital studios can sound boring if you don’t inject some anomalies into the channel strip. The thing we love about analog is the coloration and smear that analog technology imparts to sound, and you can use plugins that simulate analog gear as tools to color and shade your mixes. With clever approximations of analog techniques and a good ear, you can inject a lot of that character into your arrangements and create a more organic sound.
Originally published in THE SCORE magazine Vol. XXX No. TWO
by Fletch | Oct 12, 2015 | Uncategorized
Every composer knows that adding a real musician or seventy to a track really brings it to life. Samples just don’t have the nuance and expression of real instruments. But if you aren’t trained as an engineer or haven’t done a lot of recording in your studio it may be daunting to bring in a player to record in your project studio. Let’s look at some of the factors that contribute to getting a good result.
The sound of the recording space
When recording an acoustic instrument, the sound of your room plays a role in the way the instrument will sound in the final recording. Treating your space with acoustical material will not only make it sound better for accurate monitoring but will also make instruments sound better in the recording space. Applying acoustical treatment does not require a PhD in acoustics. There are lots of good articles available on the internet about treating rooms for sound, and Focal Press recently published a fantastic book, The Studio SOS Book, that demystifies the process.
Most rooms in which we set up project studios are box shaped. This presents acoustic problems because sound waves bounce back and forth between parallel surfaces and cause certain frequencies to be either too loud or too quiet. We need acoustic treatment to combat the standing waves and reflections that create an unflattering acoustic environment for listening and recording. Companies such as Auralex sell complete room treatment packages for taming problematic acoustic spaces.
Microphone preamps
A microphone preamp amplifies a microphone signal so that it is loud enough to be audible in a recording. Your audio interface has a microphone preamp built into it, but most studio owners will buy a dedicated mic preamp so that they can control the level of the signal being recorded, as these preamps have a gain knob while many audio interfaces do not. High-end mic preamps also color the mic signal in a pleasing way. I’m sure you have heard of the much-sought-after preamps made by companies such as Neve and API.
Now I am going to suggest something a bit radical here. In the context of project studio recording, the preamp you use doesn’t matter that much, and the one you already have (provided it has level controls to change the incoming signal level) is probably fine for your purposes. Don’t get me wrong: at some point you will want to invest in a high-quality mic preamp, but the differences between less expensive mic pres made by companies such as ART and the expensive ones made by Grace Design and Rupert Neve are subtle. You can hear the differences when A/B-ing them, but when mixed into a track with other sounds, the mic preamp will play less of a role than the microphone you use or the way the instrument was miked.
The microphone
The microphone you use to record the instrument you are tracking plays a big role in the sound you get. One of the benefits of going to a professional studio is being able to choose from the large collection of mics that most studios own. For the project studio owner, purchasing a good quality, versatile microphone is an important investment. This is a great time to buy a microphone, as there are many good microphones that can be found for less than $1000.
If you are buying your first microphone, you should look for a large-diaphragm condenser (also known as capacitor) microphone, ideally one that lets you switch between multiple polar patterns. The polar pattern refers to the directions from which the microphone picks up sound. Most mics will have a cardioid pattern, which means the microphone picks up signal from one direction and rejects it from others. It is the most common microphone pattern because it allows relative isolation of the sound source.
Some microphones have a switch that allows you to choose different polar patterns. The most useful pattern after cardioid is omnidirectional, which means that the mic will pick up signal from the back and sides in addition to the front. The omni pattern is very useful if you want more of the sound of the instrument in the context of the space you are recording in. For example, I have gotten good results using an omni pattern on violin, because that is an instrument we typically hear with more room sound in recordings.
I recommend investing in at least one good-quality condenser microphone. If you are planning on recording a number of different instruments, look for one, such as the AKG C414, that is versatile and gets good results on a variety of sources. On the other hand, if you are a singer and primarily planning on recording your own voice, then find a microphone that sounds best with your voice. The microphone that sounds best varies widely from singer to singer, and some singers have found that an inexpensive microphone works best with their instrument.
The recording process
When you are recording in a space where the instrument is in the same room that the speakers are located, it is important to turn off the speakers and monitor on headphones while recording. You don’t want the recorded signal to come through the speakers and get rerecorded onto the track.
Make sure you adjust the input level on your microphone preamp so that it doesn’t exceed 0 dB in your DAW, as this will cause the signal to distort, and there is no way to rectify that after it has been recorded. I recommend setting the level to peak between -18 and -12 dB, as this will give you enough headroom for most instruments. Look at the meter on the track’s channel strip in your DAW to see what the incoming level is. Once you have set the mic pre so you have a good level, don’t change it between takes; otherwise it will be difficult to comp between different takes.
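Those meter readings are decibels relative to full scale (dBFS), where 0 dB is the loudest value your converter can represent. If you are curious about the math behind the meter, here is a quick Python sketch; note that -12 dB corresponds to peaks at only a quarter of full scale, which is plenty.

```python
import numpy as np

def peak_dbfs(samples):
    """Peak level in decibels relative to full scale (full scale = 1.0)."""
    peak = np.max(np.abs(samples))
    return -np.inf if peak == 0 else 20.0 * np.log10(peak)

print(peak_dbfs(np.array([1.0])))   # 0.0 dBFS: clipping territory
print(peak_dbfs(np.array([0.5])))   # about -6 dBFS
print(peak_dbfs(np.array([0.25])))  # about -12 dBFS: a healthy peak level
```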
If you haven’t recorded a particular instrument before, doing a little research is very helpful. An internet search can yield a lot of great ideas for miking the instrument. It isn’t always obvious where the best mic position will be. For example, you might think that sticking a microphone in the bell of a clarinet would produce a good result when, in fact, a clarinet generally sounds best when miked about a foot above the player’s hands.
Players can be good resources for miking their instrument. They have been on a lot of sessions and have seen where engineers will place the microphone. Many players also do their own recording and have experimented with what sounds best with their instrument. I find this to be a good starting point from which I can make adjustments as needed.
Ideally, I will adjust the microphone position by moving it while the player is warming up and listening to how the change in position is affecting the tonal quality. In a small studio in which you are tracking and recording in the same space it can be difficult to make adjustments while the player is playing, since you are listening to the signal through headphones and hearing it acoustically at the same time. In that case, making a recording, then listening and making adjustments is the way to go.
I like to think in terms of adjectives to describe the sound I am looking for, as it helps focus my attention on what I am hearing. Does the instrument sound natural? Does it sound warm? Is it thin or brittle? Is there enough of the room sound in the recording, or is there too much? These assessments will inform where you move the microphone. Moving the microphone closer to the instrument will typically give you a warmer sound with less room reverberance, because a cardioid microphone emphasizes lower frequencies the closer it is to the source (the proximity effect) and picks up less of the room.
I like to record most instruments in my studio using two microphones. Though I sometimes do this to create a stereo recording, I often use the two mics to take advantage of the tonal differences between mic positions and mic types. On violin, I have used a cardioid mic close to the player and an omnidirectional microphone placed about ten feet away. I can get a great balance between the focus of the close mic and the more ambient sound of the omni. This technique also works well with woodwinds. I found a great flute sound using one mic near the player’s mouth and another a foot or so from the foot joint of the flute.
Another benefit of recording with two mics is that I am able to get an idea of what microphone sounds best on each instrument. Sometimes I record two signals and end up using just one of the mics. It all depends on what sounds best in the context of the music.
I always take photographs and notes at each session so that I learn from my experiments. I use a note taking application called Evernote for this purpose and have a folder called “Recording” where I keep all my notes. This gives me a self-generated resource for future sessions.
In conclusion
I am not formally trained as an engineer and wouldn’t be comfortable setting up the microphones for an orchestral session. I leave that to the talented and experienced engineers that we have here in Hollywood. But there are many projects in which I am bringing in players to my studio to work their musical magic and replace my sterile samples. With a little knowledge and a willingness to adjust and experiment, I have been able to get great results. You can use these basic principles as building blocks for enhancing your tracks with live players. For further reading, I highly recommend The Studio SOS Book from Focal Press.
by Fletch | Oct 2, 2015 | Uncategorized
There are a lot of great virtual instruments (VIs) available for composers these days. Most of these include an impressive array of presets that sound great and provide good sounding results with very little work. Many of us don’t have the time or desire to design sounds from scratch, even though most VIs offer tremendous sound sculpting capability. But using presets all the time can lead to homogeneous sounding music. With a little understanding of what the knobs, sliders and drop down menus on the interface control, it is easy to tweak the sounds to make them your own.
Many of the parameters and terminology that VIs use for sound modification have their roots in analog synthesis. Understanding the terminology and the parameters they control will help you better understand how to get the sonic results that you want. Let’s break down some of the terminology and parameters you will encounter on many VIs.
Oscillators
An oscillator, abbreviated as OSC, generates the electronic waveform that serves as the sound source for a synthesized sound. No oscillator, no sound. These waveforms are usually sawtooth, square, sine or triangle waves and are distinguished from each other by their harmonic content. A sampled sound source is not considered an oscillator, but it functions in the same basic way: it is the genesis of the preset’s sound and can be manipulated in a variety of ways.
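If you want to see those differences in harmonic content for yourself, the basic shapes are easy to generate. Here is a quick Python sketch of the naive versions:

```python
import numpy as np

sr = 44100
freq = 220.0
t = np.arange(sr) / sr
phase = (freq * t) % 1.0  # normalized phase, 0..1 per cycle

sine = np.sin(2 * np.pi * phase)    # fundamental only, no harmonics
saw = 2.0 * phase - 1.0             # every harmonic, falling off as 1/n
square = np.sign(sine)              # odd harmonics only, falling as 1/n
triangle = 2.0 * np.abs(saw) - 1.0  # odd harmonics, falling fast (1/n^2)

# Note: these are "naive" shapes; real synth oscillators band-limit
# them to avoid aliasing at high frequencies.
```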
Sound sources can be swapped or changed quite easily in most VIs. They are generally changed via a dial or a drop down menu in the VI’s interface. In Omnisphere, for example, clicking on the sample window in any layer will allow you to choose from thousands of soundsource samples, greatly expanding the sonic possibilities of the instrument with a single click. Changing the oscillator or sampled waveform is the most basic way of changing the sound.
Filters
A filter is a tone control. Most VIs include a low pass filter (LPF) and a high pass filter (HPF) to allow you to shape the frequency content of your sound. They may also include a band pass filter (BPF), which allows you to target a specific frequency band. If your sound has too much high frequency content, lowering the cutoff of the low pass filter will make it sound less bright. A sound that is too bassy can be thinned out by raising the cutoff of the high pass filter. The treble and bass controls on your stereo are everyday examples of tone-shaping filters you are probably already familiar with.
Filters also include a resonance control, abbreviated as RES. The resonance control increases the intensity or volume of the frequency at the point of the filter cutoff. A high amount of resonance applied to a low pass filter is used to achieve the barking or squelchy sound heard on many synth sounds.
Envelopes
Envelopes are used to control the volume (amplitude envelope) or frequency (filter envelope) contour of a sound. They change the sound over time. Envelopes have stages that control the sound at various points in time, expressed as attack, decay, sustain and release and abbreviated as ADSR. Whenever you see the letters ADSR, you can be sure you are looking at an envelope.
On an amplitude envelope, the attack value determines how quickly the sound rises from silence to full volume. A percussive sound has a very fast attack, whereas a legato sound has a slower attack. Decay controls how fast the sound falls to its sustain level, and the release value determines how long the sound takes to fade out once the key is released. The envelope is often the first aspect of the sound you will wish to tweak, so knowing where to find it is very useful.
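As a quick illustration of how the four stages fit together, here is a bare-bones ADSR generator in Python; the two presets at the bottom mirror the fast-attack percussive and slow-attack legato examples above.

```python
import numpy as np

def adsr(attack, decay, sustain, release, note_len, sr=44100):
    """Build an ADSR amplitude envelope as an array of gain values.

    attack, decay, release: times in seconds.
    sustain:  a level from 0.0 to 1.0 (not a time).
    note_len: seconds from note-on to note-off.
    """
    a = np.linspace(0.0, 1.0, int(attack * sr), endpoint=False)
    d = np.linspace(1.0, sustain, int(decay * sr), endpoint=False)
    s = np.full(max(0, int(note_len * sr) - len(a) - len(d)), sustain)
    r = np.linspace(sustain, 0.0, int(release * sr))
    return np.concatenate([a, d, s, r])

percussive = adsr(0.002, 0.15, 0.0, 0.05, note_len=0.2)  # fast attack, no sustain
pad = adsr(0.8, 0.4, 0.7, 1.5, note_len=2.0)             # slow swell, long tail
```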
Modulation and LFOs
LFO stands for low frequency oscillator. LFOs are used to change aspects of the sound in a repetitive or cyclical fashion. An LFO uses a waveform such as a sine or square wave to modulate a parameter. It is called a low frequency oscillator because it oscillates, or cycles, below the range of human hearing. Human hearing extends down to 20 Hz, so any waveform below that is inaudible and can be used as a modulator.
The speed of the LFO determines how fast the parameter is being modulated. On most VIs, the LFO can be synced to the tempo of the DAW’s session. Common parameters to modulate are volume, panning, pitch, filter cutoff and resonance. If I use an LFO to modulate the volume of a sound, I create a tremolo effect; if I use an LFO to modulate pitch, I create vibrato.
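Here is a small Python sketch of both of those classic LFO uses: a 5 Hz sine modulating amplitude for tremolo, and the same LFO modulating pitch for vibrato.

```python
import numpy as np

sr = 44100
t = np.arange(2 * sr) / sr
carrier_freq = 440.0
lfo = np.sin(2 * np.pi * 5.0 * t)  # 5 Hz LFO, well below audibility

# Tremolo: the LFO modulates amplitude.
depth = 0.3
tremolo = np.sin(2 * np.pi * carrier_freq * t) * (1.0 - depth * (0.5 + 0.5 * lfo))

# Vibrato: the LFO modulates pitch (about +/- 10 cents here),
# implemented by integrating the instantaneous frequency into phase.
cents = 10.0
inst_freq = carrier_freq * 2.0 ** (cents * lfo / 1200.0)
vibrato = np.sin(2 * np.pi * np.cumsum(inst_freq) / sr)
```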
LFOs are great for generating sonic variety, because static sounds become boring to the ear. When a cellist plays, he makes slight variations in the timbre, pitch and volume of a note, even when playing held notes. Synthesizers and samples don’t do this unless you use a modifier such as an LFO to give the note some variety.
Many VIs have tremendous modulation capabilities, allowing the user to manipulate virtually any parameter with an LFO or envelope. These modulators are the basis for many of the complex, evolving sounds we hear in VIs like Omnisphere, Massive or Absynth. In fact, a great way to learn more about your VI, and synthesis in general, is to deconstruct one of these complex sounds.
All the functions I have mentioned so far are found in the VI itself, so they require bringing up the VI’s interface and manipulating a virtual knob or slider to change the sound. But we can also change the sound of a VI from the DAW itself. The best way to do this is with a MIDI continuous controller.
Continuous Controllers
Most electronic musicians are aware that you can change the volume of a sound in real time by using a MIDI volume message. The MIDI parameter that changes volume is a continuous controller, in this case MIDI CC7. Your MIDI controller might have faders built into it that are assigned to send MIDI continuous controller messages. Continuous controllers can be used to automate many parameters in VIs.
VI manufacturers usually program their sounds to respond to specific MIDI continuous controllers to give additional expressive capabilities, so make sure you peruse your manual to find out what MIDI CCs your synth or sample libraries have already been programmed to respond to. String libraries are often programmed so that if you move your modulation wheel, which sends out MIDI CC1, the sound will change in dynamic intensity.
Most VIs have a MIDI learn capability, which allows you to assign a MIDI CC of your choosing to any parameter on the instrument. A common way of implementing this is to control-click on the parameter you wish to automate, select MIDI learn, and move the knob or fader on your MIDI keyboard or controller that you wish to use to automate the parameter. I recommend setting the faders or knobs on your keyboard’s interface to send the MIDI CCs of your choosing. Just make sure that the one you are learning on your VI is not already being used to automate something else. For this reason, you should avoid MIDI continuous controllers such as 1 (modulation), 2 (breath), 7 (volume), 10 (panning) and 11 (expression) for your custom assignments, as these have usually already been assigned.
The parameter that is now automated via MIDI will be affected by moving the assigned fader. You can record those movements as you play a part or draw them into the sequence with your mouse. Assigning and using continuous controllers is one of the best ways of getting greater expressiveness out of your VIs.
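To demystify what a CC message actually is, here is a small sketch that sweeps CC 11 from a script, using the Python mido library (my choice purely for illustration; your DAW and controller do the equivalent whenever you move a fader).

```python
import time
import mido  # a common Python MIDI library; needs a MIDI backend installed

out = mido.open_output()  # opens the default MIDI output port

# Sweep CC 11 (expression) from silent to full over two seconds,
# like slowly pushing a fader on a control surface.
for value in range(0, 128, 4):
    out.send(mido.Message('control_change', channel=0, control=11, value=value))
    time.sleep(2.0 / 32)
out.close()
```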
A little knowledge of the common parameters used on VIs goes a long way toward helping you shape sounds more to your liking. You may never wish to program VIs from scratch, but at least you can tweak them so they represent more of what you wish to hear. I always recommend reading the manual, viewing the manufacturer’s tutorial videos and checking online forums to further your knowledge of an instrument’s specifics. However, VIs share more similarities than differences, so a firm grasp of the commonalities can take you pretty far.
Originally published in THE SCORE magazine Vol. XXIX No. ONE