Fletcher Beasley is a composer for film, television and other media. He released his first album of cinematic electronic music called Fictional Radio in 2015.
Updates, News and Musings
New track of cinematic music – Precipitous Sublimation – now on Soundcloud.
Fletcher has been named one of 2015’s Outstanding Instructors in UCLA Extension’s Department of the Arts.
Hive by u-he

U-he is a small developer from Berlin run by Urs Heckmann that makes software synths and effects processors. Among the software synths u-he has developed are Zebra2, a wireless modular synth capable of an impressive array of sounds and enhanced by the Dark Zebra upgrade, which features patches used in Hans Zimmer’s Batman scores; Bazille, a modular system with patch cables that allows for extremely flexible patching; and Diva, a virtual analog synth with a high degree of sonic authenticity.
U-he’s latest release is a software synth called Hive. Hive is built around a relatively simple concept – a dual oscillator engine that allows the user to layer two voices in the tradition of synthesizers like the Yamaha CS80. But don’t let the sound of that fool you. Hive is capable of a wide variety of sounds ranging from lush pads to fat basses to electronic rhythmic sequences. In fact, the simplicity may be welcome, as Hive is significantly easier to program than Zebra2 or Bazille and has a smaller CPU footprint. Nevertheless, the sound quality is great and the synth comes with many presets that cover a wide range of synthetic sounds.
Media composers will probably find Zebra2 and Bazille the most intriguing of u-he’s synths due to their flexibility in programming and wide sonic range; however, you really can’t go wrong with any plug-in in the u-he product line, as they all sound phenomenal. Hive is a great choice for someone without much experience in synth programming who wants to get good-sounding results quickly. For those with a bit more background in synthesis, Zebra2 and Bazille offer endless possibilities for new sounds.
Signal by Output
LA-based Output has recently released a unique sample library in Kontakt format called Signal. Signal was conceived with the idea that rhythmic pulses are an important facet of music making and that a library focusing on pulsing sounds was lacking in the world of sample libraries.
Signal comes with over 700 presets ranging from aggressive distorted sounds to ambient ethereal ones. Patch selection can be filtered by keywords, so if you need an epic organic sound with a triplet feel you can click the appropriate adjectives to pull up matching patches, a great time saver for those working under fast deadlines. The sounds tend to be electronic in timbre, though there are some acoustic sources such as felt piano, harp and marimba, and if you find a patch whose feel you like, it is easy to change the sound source.
Each preset features four macro sliders to quickly change the basic characteristics of the patch, and the sliders can be easily assigned to MIDI faders for real-time morphing. For those looking for more control, all parameters and pulses can be adjusted by clicking on the “pulse engines” tab. This brings up an intuitively laid out interface that is easy to navigate, which is a good thing, as Signal doesn’t come with any documentation. The lack of documentation can make some of the more advanced editing a bit frustrating.
Signal is a great library for any media composer who has a need for pulses, arpeggiators and morphable rhythmic sounds in their music, which probably covers most of us. There is no danger in trying it out and risking disappointment, since Output offers a 14-day money back guarantee for anyone who is unhappy with Signal.
Omnisphere 2 by Spectrasonics
If I could have only one desert island synth plugin, it would be Omnisphere. Omnisphere has everything you could ask for in a synthesizer: a huge library with over 8000 presets covering a wide variety of genres, incredible programming power, and a beautifully laid-out user interface that makes programming and editing a joy. Now Spectrasonics has made it even better.
Omnisphere 2 adds 4500 new patches and soundsources, over 400 new DSP waveforms, new arpeggiator features, an enhanced interface and the ability to use your own audio files as a soundsource, to name but a few. The new patches and soundsources alone would be worth it, but with all the other features this upgrade is a must.
For those who don’t own the original Omnisphere, this plugin is an essential part of a media composer’s arsenal. The sounds are stellar and cover a huge range of styles from hardcore electronic to very organic. The user interface is easy to navigate and it is simple to make the presets unique with programming features such as the “orb”, which allows a user with no programming background to drag a circle around to change the sound in interesting ways. For those who want to dive in deeper, the synth engine in Omnisphere is extremely powerful and customizable. Omnisphere 2 just makes one of the best synths on the market that much better.
Nugen MasterCheck and ISL2
I don’t really like to master my own music given the choice, but sometimes it is necessary to do some DIY mastering when sending tracks out for demos, submitting to music libraries or releasing music online. For a long time, the general mastering principle has been to make a track sound as loud as possible through compression because listeners will perceive a louder track as sounding better when compared to other pieces within the same genre. That paradigm appears to be changing with the advent of streaming music services.
Streaming services such as Spotify, iTunes Radio and YouTube are all using loudness normalization to match volume levels between different tracks in their catalogs. The basic idea is that their software analyzes each track and matches the volume level so that if a listener is playing a Beethoven symphony and then switches to a dance track, they won’t have to adjust the volume control on their playback device, the software will do it for them. This is a big deal because it means there is no longer any value in overcompressing tracks to make them sound louder. In fact, overcompressed tracks will sound lifeless in comparison to tracks with a wider dynamic range. I suspect this approach will be adopted across all music streaming services in the not too distant future.
Nugen Audio’s MasterCheck is a plugin you insert on the master buss of your DAW to monitor your mix levels; it allows you to determine how your mix will sound when played back through various streaming formats. MasterCheck has several displays which provide useful information when mixing for targeted delivery platforms. The LKFS (Loudness, K-weighted, relative to Full Scale) meter displays the integrated loudness of the track, allowing one to easily monitor mix levels, while the PLR (Peak to Loudness Ratio) meter shows the true-peak level of a track relative to its loudness, which helps to give a sense of the overall dynamic content of the mix.
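To make the PLR reading concrete, here is a minimal sketch of the arithmetic behind the meter (the meter itself does the measuring; the function name and numbers below are illustrative, not Nugen’s API):

```python
# Sketch: peak-to-loudness ratio (PLR), assuming you already have a
# true-peak reading (dBTP) and an integrated loudness reading (LUFS)
# from a meter such as MasterCheck. Values are made up for illustration.

def plr(true_peak_dbtp: float, integrated_lufs: float) -> float:
    """PLR is simply true peak minus integrated loudness."""
    return true_peak_dbtp - integrated_lufs

# An overcompressed master: peaks at -0.1 dBTP, loudness of -8 LUFS.
hot_master = plr(-0.1, -8.0)       # low PLR = little dynamic range

# A more dynamic master: peaks at -1.0 dBTP, loudness of -16 LUFS.
dynamic_master = plr(-1.0, -16.0)  # higher PLR = more dynamic range

print(f"hot master PLR: {hot_master:.1f} dB")
print(f"dynamic master PLR: {dynamic_master:.1f} dB")
```

The higher the PLR, the more dynamic range survives once a streaming service normalizes the loudness of both tracks to the same level.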
MasterCheck includes a number of presets so you can hear what your mix will sound like played back on various streaming services. By clicking the “offset to match” button, you can hear the results of Spotify, iTunes Radio and others’ loudness normalization on your mix in real time. You can even compare your mix to other tracks using the external reference feature.
For those in need of a high quality meter to assist with the mastering process, it is hard to beat MasterCheck.
Originally published in The Score, Volume XXX
Sustained sounds need subtle changes and movement to remain interesting to our ear. When an instrument like a violin sustains a note, there are subtle pitch and timbral variations imparted by the player that keep the note from remaining static. Electronic sound sources, on the other hand, don’t change or evolve without some help from parameters within the synth. Textures that are static sound lifeless and flat, particularly when mixed against dialogue and sound effects and played back on a medium like television under less than ideal listening conditions. Giving your synth sounds movement and evolving timbres will make them sound richer and add more depth to your mixes.
Let’s take a look at some techniques for making a sound evolve.
1. Use multiple LFOs set at different rates to modulate instrument parameters.
Low frequency oscillators are key to creating timbral movement because they can provide cyclical change to a parameter. LFOs are often used to simulate vibrato and tremolo by modulating pitch and volume but can modulate virtually any parameter in many plugins. A classic way to provide sonic evolution is to route the LFO to the filter cutoff, set the LFO shape to a sine wave for a smooth transition and the LFO frequency to a low number (1 Hz or lower) so the change happens slowly. This makes the sound get brighter and duller over time as the LFO moves through its cycle.
To make complex evolving sounds you need multiple LFOs modulating discrete parameters at different rates. Many synths only have a single LFO, but instruments such as Omnisphere, Massive, Zebra2 and Logic’s Alchemy (brand new in version 10.2) feature up to six LFOs that can modulate almost any parameter on the instrument. By setting a different rate for each LFO, you create a sound that is constantly changing and non-repetitive, since each parameter cycles through its changes at a different speed. It isn’t necessary to apply a lot of modulation (usually represented as depth or amount), as subtle changes are often most effective on pads and sustaining tones such as drones. Good candidates for modulation are filter cutoff, resonance, panning, timbre shift, and volume.
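As a rough illustration of the idea, here is a sketch of two sine LFOs at different rates modulating a single filter cutoff. The rates, depths and names are hypothetical, not taken from any particular synth:

```python
import math

# Sketch: two LFOs at different, non-integer-ratio rates modulating a
# filter cutoff. Because the rates never line up, the combined motion
# takes a very long time to repeat. All values are illustrative.

def lfo(rate_hz: float, depth: float, t: float) -> float:
    """A sine-shaped LFO: returns an offset in [-depth, +depth]."""
    return depth * math.sin(2 * math.pi * rate_hz * t)

BASE_CUTOFF = 1200.0  # Hz

def cutoff_at(t: float) -> float:
    # A slow LFO (0.1 Hz) provides the broad sweep; a slightly faster
    # one (0.37 Hz) adds motion that never locks in step with it.
    return BASE_CUTOFF + lfo(0.1, 400.0, t) + lfo(0.37, 150.0, t)

for t in (0.0, 2.5, 5.0, 10.0):
    print(f"t={t:5.1f}s  cutoff={cutoff_at(t):7.1f} Hz")
```

The same pattern applies to any pair of parameters: pick rates that aren’t simple multiples of each other and the sound will keep drifting rather than looping.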
2. Use envelopes in addition to LFOs.
Envelopes modify the attack, decay, sustain and release of a parameter. The limitation of an envelope is that, unlike an LFO, it doesn’t repeat, so when an envelope reaches its sustain level, it no longer modifies the parameter until the note ends and the envelope goes through its release portion.
Synth envelopes have traditionally been used to modify the amplitude or filter. In many plugins, envelopes, like LFOs, can be routed to modulate virtually any parameter. For pad sounds, envelopes are useful for movement as their shapes are more complex than the cyclical LFO. Omnisphere and Alchemy are examples of synths that feature very long envelope times. These plugins can have attacks, decays and releases of up to 20 seconds each, which translates to 60 seconds for an envelope to go through its entire cycle. For a drone that needs to sustain for minutes on end this may not be enough for a constantly evolving sound, but for many sustained tones those lengths are more than sufficient.
3. Automate parameters to precisely control the way the sound changes over time.
A plugin’s parameters can be automated with MIDI continuous controllers and via track automation. Most plugins allow you to assign a MIDI continuous controller by right clicking on the parameter you wish to automate, clicking learn MIDI CC and moving a knob or fader that sends that MIDI CC on your MIDI controller. You can then record or draw in any fader moves you wish to make. Most plugins’ parameters can also be automated using your DAW’s track automation if you simply want to draw the automation in with your mouse. Automation is a great choice for sonic variation if your instrument doesn’t provide many modulation options or if you want very precise control over the way the parameters change over the course of the cue.
4. Use more than one sound source to create evolving textures.
Synths that use more than one sound source are great for evolving textures. By changing their relative levels over time, you can create an ever-shifting soundscape. A cool effect is to apply independent LFOs to pan the sound sources at different speeds. I generally like to use subtle panning, as a small amount gives gentle movement without calling too much attention to itself.
Orbit is an example of a great sounding Kontakt instrument built around the idea of using multiple sound sources to create shifting textures. Orbit uses four sound sources that it cycles between. Each source can be filtered, panned and tuned independently and the movement between sources creates a constantly evolving timbre.
Alchemy presents another method for switching between its sound sources and parameters. In Alchemy’s performance section there are eight boxes representing different parameter settings for a given patch. The boxes are bounded by a blue rectangle that can be dragged smoothly from one box to another, incrementally changing preset parameters and morphing the sound. You can add these movements to a recorded MIDI track by setting your sequencer to overdub MIDI and recording the mouse movements.
5. Use sampled sound sources rather than electronic oscillators.
Synths that can play samples are useful for creating rich textures since samples of acoustic sounds are more timbrally complex than electronic oscillators. Omnisphere, Alchemy and Izotope’s Iris 2 are examples of plugins that can load samples as sources. You can quickly make a sound your own by finding a preset you like and swapping out the sound source. I often use this method in Omnisphere when I want to create a unique sustain sound. I create a basic sustaining sound, use LFOs and other modifiers to morph the sound as described above, then replace the sound sources to create a number of variations of my original sound.
6. Use insert effects to create variation.
If you have a patch you like on a synth that doesn’t feature many modulation options, insert effects can be used to create variation. I often will give my pads a subtle pulsing effect by using tremolo. My favorite plugin for this is Tremolator by SoundToys. Tremolator has a number of shapes you can use and it syncs to tempo. I add a small amount of depth to give the sound some movement. A similar effect can be achieved with Logic’s Tremolo plugin, which can create either mono or stereo tremolo effects depending on where you set the phase settings.
Autofilter plugins can be used to filter a sound with an envelope or LFO that syncs to tempo. Your DAW probably has one as part of its stock arsenal of effects. I like to use FilterFreak2 by SoundToys. FilterFreak2 has two filters that can run independently of one another. The filters sound amazing and can be used to create long slow changes by unsyncing the plugin tempo from the DAW’s tempo, setting it to its lowest setting of 30 bpm and letting it cycle through 16 bars. At this setting it takes the filter over two minutes to complete its cycle.
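For those curious about the arithmetic behind that “over two minutes” figure, a quick sketch (assuming a 4/4 meter, with the tempo and bar count taken from the example above):

```python
# Sketch: how long one filter cycle takes at a given tempo and length.
# 16 bars at 30 bpm in 4/4, as in the FilterFreak2 example.

bpm = 30           # the plugin's lowest tempo setting
bars = 16          # the cycle length
beats_per_bar = 4  # assuming 4/4

total_beats = bars * beats_per_bar        # 64 beats
cycle_seconds = total_beats * 60.0 / bpm  # seconds per cycle

print(f"{cycle_seconds:.0f} seconds = {cycle_seconds / 60:.2f} minutes")
```

The same formula lets you dial in any sweep length you want: pick a target duration, then solve for the bar count at your session tempo.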
As media composers, our work is deadline driven. While I love getting lost in the sonic possibilities available to me through plugins, I rarely have the time to program a unique sound from scratch in the middle of a project. A lot of presets sound great, but may not evolve in the way that you need them to in the context of your track. These techniques will help give your sounds movement and keep them from sounding dull and lifeless. Small changes can go a long way to keeping a sound interesting over time and add richness and depth to the electronic textures you use in your music.
Originally published in THE SCORE magazine, Volume XXX Number FOUR.
Reverb is a beautiful thing. It creates a spatial context for sound and serves as a unifying force for tracks recorded in disparate spaces. While pop mixing allows for great variety and choice in reverb settings, orchestral musicians play together in a large concert hall, and mimicking this musical setting with samples can present some challenges. When used well, reverb acts as a unifier, getting all the samples to sit together in their own virtual space, even when using libraries created by different sample manufacturers.
Like many composers, I own a number of orchestral sample libraries.
I have my favorites for different sections of the orchestra and I find that some libraries are more suited for certain styles of music and orchestration. The libraries are most flexible when they are recorded with a minimal or adjustable amount of reverb. When I apply my favorite reverb setting, it can serve as a sonic “glue” for my virtual orchestra and make it sound as if all the instruments are playing in the same hall together.
The prototypical way of using reverb in a digital audio workstation is to create an auxiliary track, insert a reverb plugin and use post-fader sending to send level from the track that you want reverberated. Post-fader means that the signal gets sent to the bus after it hits the fader. If you lower the volume of the track, the level of reverb diminishes by the same amount. To adjust the amount of reverb you want on your track, you change the send level that is bussed to the reverb. Post-fader sending works great in musical situations where we want to hear a dry, unaffected signal mixed with the reverb. In most mix scenarios, we hear more of the dry than the reverberated signal.
Post-fader sending is how we hear reverb applied in most pop situations. The instruments are generally recorded with a microphone close to the source, and reverb is added to give a sense of spatial depth using the technique described above. If one moves a microphone further from a source, more of the reverberation of the recording space will be captured on the recording. Close miking gives an engineer a lot of mixing options: reverb can always be added to a dry sound to give it depth, but reverb captured on the recording can’t be removed.
This approach of adding reverb to a dry recording works quite well in mixes where we are used to hearing a lot of close-miked instruments. Guitars, drums, keyboards and vocals are typically miked close to the source. But this approach doesn’t sound natural on orchestral samples. The reason is that, as listeners, we don’t hear an orchestral section from a close vantage point. We listen from a distance and, as a result, the sound of the room plays a big role in the sound of the sections. Simply applying more send level using post-fader sending doesn’t really work because we still hear too much dry, close-miked sound.
A workaround for this is to use a type of sending that is generally reserved for headphone mixes. Pre-fader sending sends the signal down the bus to the reverb before it hits the volume fader. This allows you to create a track where the reverb is more prevalent than the dry signal, because the fader now controls only the level of the dry signal. As you lower the fader, you decrease the level of the dry signal but leave the reverberated signal intact, since the bus level is unaffected by the change to the fader. This, in effect, reverses the function of the send and the fader from the post-fader scenario. The send now sets the level of reverberated signal and the fader sets the dry/wet ratio.
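The difference between the two send types boils down to simple gain math, sketched below (linear gains where 1.0 is unity; the values are illustrative, not tied to any DAW):

```python
# Sketch: where the send taps the signal in post- vs pre-fader routing.
# "signal" is a sample value; "fader" and "send" are linear gains.

def post_fader(signal: float, fader: float, send: float):
    dry = signal * fader
    wet_input = signal * fader * send  # send taps AFTER the fader
    return dry, wet_input

def pre_fader(signal: float, fader: float, send: float):
    dry = signal * fader
    wet_input = signal * send          # send taps BEFORE the fader
    return dry, wet_input

# Pull the fader down to 25% of unity with the send at 0.8:
print(post_fader(1.0, 0.25, 0.8))  # reverb input drops with the fader
print(pre_fader(1.0, 0.25, 0.8))   # reverb input stays at full send level
```

Run both with the fader at unity and the outputs match; lower the fader and only the pre-fader version keeps feeding the reverb at full strength, which is exactly the “placed in the hall” effect described above.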
This approach works well for orchestral instruments because if an orchestral sample is recorded with minimal room sound I can essentially put it in the hall by removing a greater amount of dry signal than would be possible with post-fader sending. If I want one section to sound further away from the listener than another, I just lower the fader for the more distant section. The result is that the tracks sound as though they are placed in the hall rather than the hall being added to the sound.
The downside to pre-fader sending is that since the fader no longer controls the overall volume, you cannot use the automation lane of the track to automate volume. With virtual instruments (VIs) there is a simple workaround – you simply use a MIDI continuous controller (either cc7 or cc11) to change the volume level of the VI. Audio tracks don’t respond to MIDI controllers, but you can insert a plugin that changes the gain of a track as a substitute for your missing volume control. Most DAWs have a built-in plugin designed for this purpose. Logic’s “Gain” and Pro Tools’ “Trim” are examples of plugins that perform this function.
I find that the best reverb plugins to use for orchestral samples are convolution reverbs. Convolution reverbs use impulse responses (IRs) to “sample” acoustic spaces and, as such, are more natural sounding than algorithmic reverbs. My favorite convolution reverb is Altiverb by Audioease, but there are convolution reverbs made by other manufacturers and the principle is the same. Many DAWs include their own convolution reverbs such as “Space Designer” which is found in Logic. These plugins come with IRs of different spaces and give you the ability to load impulse responses of your own.
An interesting result of using a convolution reverb with pre-fader sending is that it has a noticeable effect on the timbral quality of a sampled instrument. Pre-fader sending makes the choice of reverb instrumental to the sound of the VI itself. I’ve been amazed at how running string samples through a good sounding IR can warm up the sound and take off the harsh edges that I often hear in string libraries. It can improve the sound of the samples without the need for corrective EQ.
Many of the newer orchestral libraries allow one to adjust the level of recorded reverb through the control of different microphone positions. The library manufacturers provide samples recorded with microphones at different distances from the section. These are often named something like close, stage and room, and you can mix in different amounts of each mic position within the sampler’s interface. If you have a library that is recorded in this manner (Spitfire Audio and Cinesamples are sample manufacturers that take this approach), you don’t need to apply additional reverb, as the VI has all the reverb control one might need. However, each mic position requires additional samples, so loading all mic positions for a preset can take a considerable amount of RAM. For this reason, I usually just load the stage mic setting and add additional reverb using pre or post-fader sending, depending on the sound I am trying to get.
If you are mixing libraries that have reverb in their recording with libraries that are recorded dry, it is important that your reverb matches the one on the recording so that the instruments sound like they are in the same space. Choosing a similar type of room and matching the reverb decay works quite effectively in most cases. You can figure out the decay time by listening to a percussive sound and timing how long it takes for the reverb tail to die away.
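If you want to turn that listening test into a number, here is a rough sketch that extrapolates a 60 dB decay time (the usual RT60 measure) from two level readings taken on the tail. The times and levels below are made up for illustration:

```python
# Sketch: estimate how long a reverb tail takes to fall 60 dB from two
# (time, level) readings, assuming a roughly exponential decay (which
# is linear when the level is expressed in dB).

def decay_time_60db(t1: float, level1_db: float,
                    t2: float, level2_db: float) -> float:
    """Extrapolate the measured dB-per-second slope to a 60 dB drop."""
    db_per_second = (level1_db - level2_db) / (t2 - t1)
    return 60.0 / db_per_second

# Example: the tail reads -10 dB at 0.2 s and -30 dB at 1.2 s after the
# hit, i.e. it is falling at 20 dB per second.
print(f"RT60 is roughly {decay_time_60db(0.2, -10.0, 1.2, -30.0):.1f} s")
```

Match that figure with your reverb plugin’s decay-time control and the dry libraries should sit in the same apparent space as the pre-reverberated ones.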
Film music mixers often add some high-end digital reverb when mixing orchestras for film. Lexicons or Bricastis might be used to add some additional depth and sheen to the sound. This can be simulated to great effect on an orchestral mockup by adding a convolution reverb with an IR (IRs can be made from audio equipment like reverbs and delays in addition to acoustic spaces) of one of these units to the master bus. IRs of high-end reverbs can be found for purchase on the internet if your convolution reverb doesn’t have a preset for the unit you like. You can then add a little Lexicon shimmer to your mix (I find a 10% mix ratio is a good starting point) to get that film score sound.
Getting samples to sit together in a virtual acoustic space is a key to creating convincing sounding orchestral music electronically. Pre-fader sending can help to create more depth for the orchestral samples in your mix and integrate them convincingly into your reverb’s hall. Used in conjunction with a good convolution reverb, I have found this approach to yield more convincing results with my mockups than conventional post-fader sending.
Originally published in THE SCORE magazine, Volume XXX Number ONE.
It wasn’t that long ago that music tracks were recorded to analog tape. Although that method has been superseded by recording to hard disks for most recordings, many engineers still use analog equipment when tracking and mixing. Why continue to use analog equipment when digital is so much more convenient? Simply put, analog devices color sound in ways that we like. Analog transforms the recorded signal by saturating and distorting the sound in pleasing ways. Digital recording technology doesn’t impart the same kind of sonic footprint; it captures sound in a colorless way. But we like color. We like the imperfections that shade what we are hearing.
Fortunately, for those of us who don’t have access to a vintage Neve console and a wide array of classic gear, there are many plugins that can approximate this in the digital realm. We have plugins that saturate, distort, compress and amplify our signal paths in an attempt to recapture the magic of analog audio, many of which may be included with your DAW. Analog emulators can bring a bit of grit, grime and dirt to sounds that otherwise sound too pristine. Guitars, keyboards, synths, drums and even vocals often benefit from a bit of analog smear, and using these plugins can degrade the sound in ways that give them character, allowing them to exist more harmoniously in the mix and create a more organic sound. Strange as it may seem, to get a better overall sound it is sometimes necessary to make the sounds more low fidelity.
Many times when I load up a virtual instrument I find there is something too perfect about it. It may be beautifully sampled but it sounds too clean, too flawless. When I attempt to put it next to an electric guitar (a grungy instrument if there ever was one) in an arrangement, it doesn’t sit well because the virtual instrument lacks the flaws and imperfections inherent in the guitar. That is where distortion comes in. I love distortion. I love the big, in your face, brain-melting distortion that you hear on hard edged tracks, but more often I use it in a subtle way. I will add a plugin on the channel strip that allows me to futz up the sound ever so slightly. You wouldn’t describe it as being distorted, in fact, you might not even notice the distortion unless you listened very closely because I’m not trying to make the part sound distorted. I’m simply trying to give it a little edge and add some harmonic excitement.
There are many plugins that can be used to add distortion. Some have the word “distortion” in the name, while others use words like “saturation,” “overdrive” or “lo-fi.” Generally, they are found in the harmonic section of your DAW’s plugins because they add harmonic content to the sound. Your DAW probably has some good built-in distortion effects available to you. For example, Logic has a whole section of distortion plugins. For subtle effects, I prefer ones that emulate analog distortion such as Distortion II over the extreme digital destruction of Bitcrusher. And a little goes a long way. I will often add just enough that the sound is warmed up but not so much that the distortion calls attention to itself.
Another way to add a bit of grit is to run a sound through an amp simulator. You may have done this with a guitar or bass, but virtual amps sound great on keyboards, percussion, and drums. Even vocals can sound cool running through an amp. Most DAWs now come with amp simulators and there are plenty of good third party plugins, such as Native Instruments’ Guitar Rig and IK Multimedia’s Amplitube. A little knowledge of what different types of amps sound like is helpful. Typically the plugin amp’s interface will closely resemble the type of amp it is supposed to sound like, and a Google search on “guitar amplifier” can give you an idea of what the real amp is. Unsure of what those amps sound like? Take a look at which guitar players use them and then give a listen to their sound. You will soon discover that an amplifier like a Marshall is well suited for harder edged sounds, whereas Fenders tend to give a cleaner, less distorted tone. Amplifier plugins also may include guitar stompbox emulators that can modulate, delay, filter, compress and further mangle the sound in ways that are gloriously lo-fi. If you want just a little bit of the amp or stompbox effect in the sound, try loading it on an aux channel and using a send on your channel strip so you can easily blend in as much of the amplified sound as you wish.
Creative use of compression can also add character to otherwise lifeless sounds. Although compressors are primarily used to tame the amplitude envelope of an instrument, they can also be used to impart some analog warmth. Mixers sometimes run sounds through analog compressors without compressing the signal, simply to add the compressor’s character to the sound. This approach can be used with compressor plugins that emulate classic analog compressors. For example, Logic’s new compressor has buttons at the top of the interface which allow you to apply models of compressors with tube, optical, VCA and FET circuitry. These models all have different sonic characteristics that will change the timbre of the sound in subtle ways, in addition to compressing the signal. There are good articles on the Internet which explain the differences between the compressor types and what kinds of instruments they tend to be used on.

Filters are used to limit the frequency range of a sound. Equalizers typically have high-pass and low-pass filters built into them, but when I want to give a sound more character I prefer to use filters that function like those found on synthesizers. Synth filters have an additional parameter that is not generally found on EQ filters – resonance. Resonance emphasizes the frequencies at the filter’s cutoff point. Adding resonance gives some nice harmonic emphasis to a sound while the filter helps to carve out its own frequency space within a mix. This works great with low-pass filters, but band-pass and high-pass filters can also yield interesting results. You will find that different manufacturers’ filters have their own unique qualities and characteristics, some sounding more analog than others. One of my favorites is FilterFreak by SoundToys, which is incredibly flexible and sounds fantastic.
Reverb can also be used to add some dirt to the sound. I particularly like using convolution reverbs like Audioease’s Altiverb and Logic’s Space Designer for this purpose. Rather than load an impulse response of a beautiful sounding space, I will run a keyboard or a mellotron sample through an IR of a dusty plate reverb or a spring reverb of the type found in old guitar amps. These typically have some grittiness to them and add a lot of character to the sound being reverberated. Altiverb even has a whole section of odd spaces such as dustbins, cheap toys and nuclear cooling towers that can be used as echo chambers for your sounds. It is even possible to load wave files of any sound into convolution reverbs to use as resonators. The results are often interesting, but turn down your outputs when testing these out as the levels can be unpredictable.
Adding imperfections to tracks with distortion, filters, amps, compressors and funky reverbs can add a lot of life to the sound. Virtual instruments and sounds we track in digital studios can sound boring if you don’t inject some anomalies into the channel strip. The thing we love about analog is the coloration and smear that analog technology imparts on sound. You can use plugins that simulate the effect of analog gear as tools to color and shade your mixes. Fortunately, with clever approximations of analog techniques and a good ear you can inject a lot of that character into your arrangements to create a more organic sound.
Originally published in THE SCORE magazine Vol. XXX No. TWO
Every composer knows that adding a real musician or seventy to a track really brings it to life. Samples just don’t have the nuance and expression of real instruments. But if you aren’t trained as an engineer or haven’t done a lot of recording in your studio it may be daunting to bring in a player to record in your project studio. Let’s look at some of the factors that contribute to getting a good result.
The sound of the recording space
When recording an acoustic instrument, the sound of your room plays a role in the way that the instrument will sound in the final recording. Treating your space with acoustical material will not only make it sound better for accurate monitoring but will make instruments sound better in the recording space. Applying acoustical treatment does not require a PhD in acoustics. There are lots of good articles available on the internet about treating rooms for sound and Focal Press recently published a fantastic book, The Studio SOS Book, that demystifies the process.
Most rooms in which we set up project studios are box shaped. This presents acoustic problems because sound waves bounce back and forth between parallel surfaces and cause certain frequencies to be either too loud or too quiet. We need acoustic treatment to combat the standing waves and reflections that create an unflattering acoustic environment for listening and recording. Companies such as Auralex sell complete room treatment packages for taming problematic acoustic spaces.
The microphone preamp
A microphone preamp amplifies a microphone signal so that it is loud enough to be audible in a recording. Your audio interface has a microphone preamp built into it, but most studio owners will buy a dedicated mic preamp so that they can control the level of the signal being recorded, as these preamps have a gain knob while many audio interfaces do not. High-end mic preamps also color the mic signal in a pleasing way. I’m sure you have heard of the much-sought-after preamps made by companies such as Neve and API.
Now I am going to suggest something a bit radical here. In the context of project studio recording, the preamp you use doesn’t matter that much, and the one that you already have (provided it has level controls to change the incoming signal level) is probably fine for your purposes. Don’t get me wrong, at some point you will want to invest in a high quality mic preamp, but the differences between less expensive mic pres made by companies such as ART and the expensive ones made by Grace Design and Rupert Neve are subtle. You can hear the differences when A/B-ing them, but when mixed in a track with other sounds, the mic preamp will play less of a role than the microphone you use or the way that the instrument was miked.
The microphone
The microphone you use to record the instrument you are tracking plays a big role in the sound you get. One of the benefits of going to a professional studio is being able to choose from the large collection of mics that most studios own. For the project studio owner, purchasing a good quality, versatile microphone is an important investment. This is a great time to buy a microphone, as there are many good microphones that can be found for less than $1000.
If you are buying your first microphone, you should look for a large-diaphragm condenser (also known as capacitor) microphone, ideally one that has multiple polar patterns to switch between. The polar pattern refers to the directions from which the microphone picks up a signal. Most mics will have a cardioid pattern. A cardioid pattern means the microphone picks up signal from one direction and rejects it from others. It is the most common microphone pattern because it allows relative isolation of the sound source.
Some microphones have a switch that allows you to choose different polar patterns. The most useful polar pattern after cardioid is omnidirectional, which means that the mic will pick up a signal from the back and sides in addition to the front. The omni pattern is very useful if you want more of the sound of the instrument in the context of the space you are recording. For example, I have gotten good results using an omni pattern on violin because it is an instrument on which we typically hear more room sound in recordings.
I recommend investing in at least one good quality condenser microphone. If you are planning on recording a number of different instruments, look for one, such as the AKG C414, which is versatile and gets good results on a variety of sources. On the other hand, if you are a singer and primarily planning on recording your own voice, then find a microphone that sounds best with your voice. The microphone that sounds best on a specific singer can be highly variable and some singers have found that an inexpensive microphone works best with their instrument.
The recording process
When you are recording in a space where the instrument is in the same room that the speakers are located, it is important to turn off the speakers and monitor on headphones while recording. You don’t want the recorded signal to come through the speakers and get rerecorded onto the track.
Make sure you adjust the input level on your microphone preamp so that it doesn’t exceed 0 dB in your DAW, as this will cause the signal to distort and there is no way to rectify that after it has been recorded. I recommend setting the level between -12 and -18 dB, as this will give you enough headroom for most instruments. You need to look at the meter on the track’s channel strip in your DAW to see what the incoming level is. Once you have set the mic pre so you have a good level, don’t change it between takes. Otherwise it will be difficult to comp between different takes.
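The reason -12 to -18 dB leaves comfortable headroom comes straight from the math relating dBFS to linear amplitude (amplitude halves roughly every -6 dB). A small sketch, with function names of my own invention:

```python
import math

def dbfs_to_linear(db):
    """Convert a dBFS level to linear amplitude (1.0 = full scale, 0 dBFS)."""
    return 10 ** (db / 20)

def linear_to_dbfs(amp):
    """Convert linear amplitude back to dBFS."""
    return 20 * math.log10(amp)

# Peaking at -12 dBFS uses only about a quarter of full scale:
peak = dbfs_to_linear(-12)        # roughly 0.25

# Even a surprise transient twice as loud stays safely below clipping:
hot = linear_to_dbfs(peak * 2)    # roughly -6 dBFS, still under 0
```

Doubling the amplitude always adds about 6 dB, which is why a -12 dB target survives an unexpectedly hot take that a 0 dB target would not.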
If you haven’t recorded a particular instrument before, doing a little research is very helpful. An internet search can yield a lot of great ideas for approaching miking the instrument. It isn’t always obvious where the best mic position will be. For example, you might think that sticking a microphone in the bell of a clarinet would produce a good result, when, in fact, a clarinet generally sounds best when miked about a foot above the player’s hands.
Players can be good resources for miking their instrument. They have been on a lot of sessions and have seen where engineers will place the microphone. Many players also do their own recording and have experimented with what sounds best with their instrument. I find this to be a good starting point from which I can make adjustments as needed.
Ideally, I will adjust the microphone position by moving it while the player is warming up and listening to how the change in position is affecting the tonal quality. In a small studio in which you are tracking and recording in the same space it can be difficult to make adjustments while the player is playing, since you are listening to the signal through headphones and hearing it acoustically at the same time. In that case, making a recording, then listening and making adjustments is the way to go.
I like to think in terms of adjectives to describe the sound I am looking for as it helps focus my attention on what I am hearing. Does the instrument sound natural? Does it sound warm? Is it thin or brittle? Is there enough of the room sound in the recording or is there too much? These assessments will inform where you move the microphone. Moving the microphone closer to the instrument will typically give you a warmer sound with less room reverberance, because a microphone with a cardioid polar pattern will emphasize lower frequencies the closer it is to the instrument (the proximity effect) and pick up less of the room.
I like to record most instruments in my studio using two microphones. Though I sometimes do this to create a stereo recording, I often use the two mics to take advantage of tonal differences found from different mic positions and different mics. On violin, I have used a cardioid mic close to the player and an omnidirectional microphone placed about ten feet away. I can get a great balance between the focus of the close mic, and the more ambient sound of the omni. This technique also works well with woodwinds. I found a great flute sound using one mic near the player’s mouth and another a foot or so from the footjoint of the flute.
Another benefit of recording with two mics is that I am able to get an idea of what microphone sounds best on each instrument. Sometimes I record two signals and end up using just one of the mics. It all depends on what sounds best in the context of the music.
I always take photographs and notes at each session so that I learn from my experiments. I use a note taking application called Evernote for this purpose and have a folder called “Recording” where I keep all my notes. This gives me a self-generated resource for future sessions.
I am not formally trained as an engineer and wouldn’t be comfortable setting up the microphones for an orchestral session. I leave that to the talented and experienced engineers that we have here in Hollywood. But there are many projects in which I am bringing in players to my studio to work their musical magic and replace my sterile samples. With a little knowledge and a willingness to adjust and experiment, I have been able to get great results. You can use these basic principles as building blocks for enhancing your tracks with live players. For further reading, I highly recommend The Studio SOS Book from Focal Press.
There are a lot of great virtual instruments (VIs) available for composers these days. Most of these include an impressive array of presets that sound great and provide good sounding results with very little work. Many of us don’t have the time or desire to design sounds from scratch, even though most VIs offer tremendous sound sculpting capability. But using presets all the time can lead to homogeneous sounding music. With a little understanding of what the knobs, sliders and drop down menus on the interface control, it is easy to tweak the sounds to make them your own.
Many of the parameters and terminology that VIs use for sound modification have their roots in analog synthesis. Understanding the terminology and the parameters they control will help you better understand how to get the sonic results that you want. Let’s break down some of the terminology and parameters you will encounter on many VIs.
An oscillator, abbreviated as OSC, is an electronic circuit (or, in a VI, its digital equivalent) that generates the waveform serving as the sound source for a synthesized sound. No oscillator, no sound. These waveforms are usually sawtooth, square, sine or triangle waves and are distinguished from each other by their harmonic content. A sampled sound source is not considered an oscillator but it functions in the same basic way. It is the genesis of the preset’s sound and can be manipulated in a variety of ways.
Sound sources can be swapped or changed quite easily in most VIs. They are generally changed via a dial or a drop down menu in the VI’s interface. In Omnisphere, for example, clicking on the sample window in any layer will allow you to choose from thousands of soundsource samples, greatly expanding the sonic possibilities of the instrument with a single click. Changing the oscillator or sampled waveform is the most basic way of changing the sound.
A filter is a tone control. Most VIs include a low pass filter (LPF) and a high pass filter (HPF) to allow you to shape the frequency content of your sound. They may also include a band pass filter (BPF) which allows you to target a specific frequency band. If your sound has too much high frequency content, lowering the cutoff of the low pass filter will make it sound less bright. A sound that is too bassy can be thinned out by raising the cutoff of the high pass filter. The treble and bass controls on your stereo work much like the low and high pass filters you are probably already familiar with.
Filters also include a resonance control, abbreviated as RES. The resonance control increases the intensity or volume of the frequency at the point of the filter cutoff. A high amount of resonance applied to a low pass filter is used to achieve the barking or squelchy sound heard on many synth sounds.
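A common way digital instruments model this kind of analog-style filter is the Chamberlin state-variable design, where resonance is just reduced damping in the feedback loop. A minimal sketch (the function name and parameter ranges are my own illustration, not any particular VI’s implementation):

```python
import math

def resonant_lowpass(samples, cutoff_hz, resonance, sample_rate=44100):
    """A Chamberlin state-variable low-pass filter, one common digital
    model of an analog synth filter. resonance runs from 0 (none)
    toward 1 (a strong peak at the cutoff frequency)."""
    f = 2 * math.sin(math.pi * cutoff_hz / sample_rate)  # frequency coefficient
    q = 1.0 - resonance              # damping; less damping = more resonance
    low = band = 0.0
    out = []
    for x in samples:
        low += f * band              # integrate band-pass into low-pass
        high = x - low - q * band    # high-pass is what's left over
        band += f * high             # integrate high-pass into band-pass
        out.append(low)
    return out
```

Pushing resonance toward 1 shrinks the damping term, so energy near the cutoff recirculates and rings: that ringing is the barking, squelchy quality the text describes.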
Envelopes are used to control the volume (amplitude envelope) or frequency (filter envelope) contour of a sound. They change the sound over time. Envelopes have stages to control the sound at various points in time. They are expressed as attack, decay, sustain and release and abbreviated as ADSR. Whenever you see the letters ADSR, you can be sure that that is an envelope.
On an amplitude envelope, the attack value determines how quickly the sound increases in volume from silence. A percussive sound has a very fast attack, whereas a legato sound has a slower attack. Decay controls how fast the sound lowers in volume to its sustain level, and the release value determines how long it takes the sound to fade out after the key is released. The envelope is often the first aspect of the sound you wish to tweak, so knowing where to find it is very useful.
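The four ADSR stages can be sketched as a simple per-sample amplitude generator. This is a linear-segment illustration of my own (real synths often use exponential curves), with all names hypothetical:

```python
def adsr_envelope(attack, decay, sustain, release, note_length, sample_rate=1000):
    """Generate per-sample amplitude values for an ADSR envelope.

    attack, decay and release are times in seconds, sustain is a
    level (0..1), and note_length is how long the key is held.
    """
    a, d, r = (int(t * sample_rate) for t in (attack, decay, release))
    held = int(note_length * sample_rate)
    env = []
    for n in range(held):
        if n < a:                                    # attack: silence up to full level
            env.append(n / a)
        elif n < a + d:                              # decay: full level down to sustain
            env.append(1.0 - (1.0 - sustain) * (n - a) / d)
        else:                                        # sustain: hold while the key is down
            env.append(sustain)
    last = env[-1] if env else 0.0
    for n in range(r):                               # release: fade out after key up
        env.append(last * (1.0 - n / r))
    return env

# A percussive shape: 10 ms attack, 20 ms decay to half level,
# 50 ms release after a 100 ms note.
env = adsr_envelope(0.01, 0.02, 0.5, 0.05, 0.1)
```

Shortening the attack value makes the onset more percussive; lengthening it gives the legato swell described above.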
Modulation and LFOs
LFO stands for low frequency oscillator. LFOs are used to change aspects of the sound in a repetitive or cyclical fashion. An LFO uses a waveform such as a sine or square wave to modulate a parameter. It is called a low frequency oscillator because it oscillates, or cycles, below the range of human hearing. Human hearing extends down to about 20 Hz, so any waveform below that is inaudible and can be used as a modulator.
The speed of the LFO determines how fast the parameter is being modulated. On most VIs, the LFO can be synced to the tempo of the DAW’s session. Common parameters to modulate are volume, panning, pitch, filter cutoffs and resonance. If I use an LFO to modulate the volume of a sound I can create a tremolo effect. If I use an LFO to modulate pitch, I can create vibrato.
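The volume example above (tremolo) is easy to sketch: a slow sine wave scales the amplitude of every sample. The function names and depth convention here are my own illustration:

```python
import math

def lfo(rate_hz, length_s, sample_rate=44100):
    """A sine-wave low frequency oscillator: per-sample values in -1..1."""
    n = int(length_s * sample_rate)
    return [math.sin(2 * math.pi * rate_hz * i / sample_rate) for i in range(n)]

def tremolo(samples, rate_hz=5.0, depth=0.5, sample_rate=44100):
    """Modulate volume with an LFO: the classic tremolo effect.

    depth 0 leaves the signal untouched; depth 1 swings the volume
    all the way between 0% and 100%.
    """
    mod = lfo(rate_hz, len(samples) / sample_rate, sample_rate)
    return [x * (1.0 - depth * (0.5 + 0.5 * m)) for x, m in zip(samples, mod)]
```

Routing the same LFO to pitch instead of amplitude (scaling each sample’s playback frequency rather than its level) would give vibrato; routing it to a filter cutoff gives the familiar auto-wah-style sweep.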
LFOs are great for generating sonic variety because static sounds become boring to the ear. When a cellist plays, he makes slight variations in the timbre, pitch and volume of the note, even when playing held notes. Synthesizers and samples don’t do this unless you use a modifier such as an LFO to give the note some variety.
Many VIs have tremendous modulation capabilities, allowing the user to manipulate virtually any parameter with an LFO or envelope. These modulators are the basis for many of the complex changing sounds we hear in VIs like Omnisphere, Massive or Absynth. In fact, a great way to learn more about your VI, and synthesis in general, is to deconstruct one of these complex sounds.
All the functions I have mentioned so far are found in the VI itself, so they require bringing up the VI’s interface and manipulating a virtual knob or slider to change the sound. But we can also change the sound of a VI from the DAW itself. The best way to do this is with a MIDI continuous controller.
Most electronic musicians are aware that you can change the volume of a sound in real time by using a MIDI volume message. The MIDI parameter that changes volume is a continuous controller, in this case, MIDI CC7. Your MIDI controller might have faders built into it that are assigned to send MIDI continuous controller messages. Continuous controllers can be used to automate many parameters in VIs.
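On the wire, a continuous controller message is just three bytes defined by the MIDI 1.0 specification: a Control Change status byte (0xB0 plus the channel), the controller number, and the value. A small sketch with a function name of my own choosing:

```python
def cc_message(channel, controller, value):
    """Build the three raw bytes of a MIDI Control Change message.

    Status byte: 0xB0 ORed with the channel (0-15), followed by the
    controller number and value, each 0-127 per the MIDI 1.0 spec.
    """
    if not (0 <= channel <= 15 and 0 <= controller <= 127 and 0 <= value <= 127):
        raise ValueError("out of range for MIDI")
    return bytes([0xB0 | channel, controller, value])

# CC7 (volume) at full value on channel 1 (numbered 0 in the protocol):
msg = cc_message(0, 7, 127)
```

Moving a hardware fader assigned to CC7 simply streams messages like this one, with the third byte tracking the fader position.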
VI manufacturers usually program their sounds to respond to specific MIDI continuous controllers to give additional expressive capabilities, so make sure you peruse your manual to find out what MIDI CCs your synth or sample libraries have already been programmed to respond to. String libraries are often programmed so that if you move your modulation wheel, which sends out MIDI CC1, the sound will change in dynamic intensity.
Most VIs have a MIDI learn capability which allows you to assign a MIDI CC of your choosing to any parameter on the instrument. A common way of implementing this is control-clicking on the parameter you wish to automate, selecting MIDI learn, and moving the knob or fader on your MIDI keyboard or controller that you wish to use to automate the parameter. I recommend setting the faders or knobs to send the MIDI CC of your choosing on your keyboard’s interface. Just make sure that the one you are learning on your VI is not being used to automate something else. For this reason, you should avoid MIDI continuous controllers such as 1 (modulation), 2 (breath), 7 (volume), 10 (panning) and 11 (expression) to customize your sounds, as these have usually already been assigned.
The parameter that is now automated via MIDI will be affected by moving the assigned fader. You can record those movements as you play a part or draw them into the sequence with your mouse. Assigning and using continuous controllers is one of the best ways of getting greater expressiveness out of your VIs.
A little knowledge of the common parameters used on VIs goes a long way to helping you shape the sounds more to your liking. You may never wish to program VIs from scratch, but at least you can tweak them so they represent more of what you wish to hear. I always recommend reading the manuals, viewing the manufacturer’s tutorial videos and checking online forums to further your knowledge of an instrument’s specifics. However, VIs share more similarities than differences, so a firm grasp of the commonalities can take you pretty far.
Originally published in THE SCORE magazine Vol. XXIX No. ONE