Mar 18, 2013

Hi, I’m Rich from North Carolina, and today I’m going to talk about my project checklist for Propellerhead Software’s Reason 6.5.3, which is my digital audio workstation of choice.

1. File Handling. Name It and Save It Properly. Reason Handles the Rest.

Reason projects are saved as individual files that end with the .reason extension. The .reason file contains all of the sequencing, mixing, and mastering data, as well as any external files you have imported or recorded, as long as they aren’t being used as samples.

If I record some vocals in Reason and lay them down on a track, that vocal track will automatically be saved inside of the .reason file. Also, if I import some vocals I recorded elsewhere and lay them down on a track, that will also be saved inside the .reason file.

This is great if I’m the only person working on these tracks. It’s a really clean setup. There’s only one file to lose.

But if I want to be able to send the project I’m working on to someone else, they may not have all of the same patches and loops that I have. Reason makes it easy to work around this: just open the File menu and select Song Self-Contain Settings.

A dialog box will pop up, showing which patches need to be included in the .reason file. Tick the check boxes next to the patches you want to include, hit OK, and you’re done! (You won’t be able to do this if you only use Factory Sound Bank patches, because everyone has those!)

Unlike other DAWs, Reason doesn’t require any particular folder structure; you only need folders if you want a way to keep your stuff organized. I have a “Reason Projects” folder, and whenever I work in Reason and save something, I create a new subfolder based on the date. So today’s folder is 20130318. I’ll add tags to the folder name if I want to remember what I was working on and find it quickly, but the .reason file name can do the same thing.
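None of that folder housekeeping is a Reason feature, by the way; it’s just a habit. If you like scripting, here’s a minimal Python sketch of the same naming scheme. The base folder and the tag are made up, so adjust them to taste:

```python
from datetime import date
from pathlib import Path

# Hypothetical base folder for all Reason projects.
base = Path.home() / "Reason Projects"

# Build a dated subfolder name like 20130318, with an optional tag.
tag = "vocal-sketch"  # hypothetical tag describing the session
name = date.today().strftime("%Y%m%d") + (f"-{tag}" if tag else "")

folder = base / name
folder.mkdir(parents=True, exist_ok=True)
print(f"Save today's .reason files under: {folder}")
```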

2. Set Digital Audio Preferences.

Setting the sample rate is pretty easy in Reason. In Windows, go to the Edit menu, scroll all the way down to Preferences, and select it. On Mac, just go to the Reason menu and select Preferences. Either way, the Preferences window will open up.

[Screenshot: Reason’s Audio preferences window]

From here, you can select the sample rate from the drop-down box. The available sample rates depend on your hardware: if your machine can handle a rate, it will be offered to you.

Bit depth works a little differently in Reason. Reason automatically selects the highest bit depth your hardware supports. If it can do 24-bit audio, then it will record in 24-bit; if it can only do 16-bit, then it will record in 16-bit.

If you want to check your bit depth, click the Control Panel button, and you can see what’s possible in the resulting dialog box.

Reason isn’t too picky about sample rate or bit depth when it comes to working inside a project. It will automatically convert whatever you bring in to your selected sample rate and your available bit depth. (So if your hardware can only do 16-bit, 24-bit samples will be converted down to 16-bit!)
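To make that conversion concrete, here’s a rough numpy sketch of what a naive 24-bit-to-16-bit conversion looks like. I don’t know exactly how Reason does this internally, and real converters usually add dither to mask the quantization noise, so treat this as an illustration only:

```python
import numpy as np

# Fake 24-bit samples (signed ints in [-2**23, 2**23 - 1]), standing in
# for audio imported into a session whose hardware is 16-bit only.
rng = np.random.default_rng(0)
samples_24 = rng.integers(-2**23, 2**23, size=8, dtype=np.int32)

# Naive conversion: drop the lowest 8 bits so values fit the 16-bit range.
samples_16 = (samples_24 >> 8).astype(np.int16)

print(samples_24)
print(samples_16)
```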

3. Setting the Recording File Type

You don’t have to. Reason does this automatically for you in the background. If you want to export your recorded tracks, you can do so at any time, and you can choose from WAV or AIFF. (No support for Broadcast WAV files yet.) Also, when you export, you can choose whatever sample rate and bit depth you want.
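Reason’s exporter is all dialog boxes, but if you’re curious what “16-bit WAV at 44.1 kHz” actually amounts to on disk, here’s a small sketch using Python’s standard wave module. The file name and the test tone are made up:

```python
import math
import struct
import wave

SAMPLE_RATE = 44100   # export sample rate
SAMPLE_WIDTH = 2      # 2 bytes per sample = 16-bit
SECONDS = 1.0

with wave.open("export_demo.wav", "wb") as wav:
    wav.setnchannels(1)              # mono, to keep the demo short
    wav.setsampwidth(SAMPLE_WIDTH)
    wav.setframerate(SAMPLE_RATE)
    frames = bytearray()
    for n in range(int(SAMPLE_RATE * SECONDS)):
        # One second of a 440 Hz sine at half scale.
        value = int(32767 * 0.5 * math.sin(2 * math.pi * 440 * n / SAMPLE_RATE))
        frames += struct.pack("<h", value)  # little-endian signed 16-bit
    wav.writeframes(bytes(frames))
```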

4. Hardware Settings For Audio

Go back to the Audio preferences dialog box I showed above. This is where you can check to make sure your hardware is playing nicely with Reason. The Audio Card Driver drop-down menu will show you what’s available. A big green check mark means it’s working. (You may still not hear anything if you’re not routed correctly!)

The Control Panel button will show you the available bit depth.

Generally, generic Windows sound cards and devices only offer DirectSound or MME drivers, which are stable but have terrible latency. If your hardware comes with a dedicated ASIO driver, use that instead; ASIO is the low-latency option on Windows.

5. Set the Buffer Size

Once more, go back to the Audio preferences dialog box, and you can see the sample buffer slider bar. The number of samples you can use is totally dependent on your hardware and the driver you’re using. For my setup, the lowest number of samples I can use and still get usable audio is 2,048. That’s because I have a really cheap audio card on this computer. The latency is really high, and it’s distracting. I will fix this soon by getting a new interface with some decent speakers.

Generally, you should aim for 128 samples for recording to reduce latency, and 1,024 or higher for mixing and post-production.
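The math behind those numbers is simple: the latency one buffer adds is the buffer size divided by the sample rate. A quick sketch, assuming a 44.1 kHz session:

```python
SAMPLE_RATE = 44100  # assuming a 44.1 kHz session

for buffer_size in (128, 1024, 2048):
    latency_ms = buffer_size / SAMPLE_RATE * 1000
    print(f"{buffer_size:>5} samples -> {latency_ms:5.1f} ms per buffer")
```

At 128 samples, a buffer costs about 2.9 ms, while my 2,048-sample setting costs about 46 ms per buffer, which is why the delay is so distracting.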

Reflection

This was a bit tricky to write, because Reason isn’t like most conventional DAWs. A lot of the stuff I’m talking about here is stuff I don’t usually think about, because it’s all taken care of in the background. The only thing I usually need to check is the sample rate and the buffer size, so this was a good exercise for me. Now I feel like I know more about my DAW.

Mar 10, 2013

Hi, I’m Rich from North Carolina in the United States, and I’m going to talk a little bit about visualizing sound.

We all have various reasons for wanting to visualize sound at different times. We may want to understand a natural phenomenon, find an annoying harmonic so we can get rid of it, or figure out whether the rhythm guitar is too loud and needs its levels adjusted.

So how do we “see” sounds so we can understand them better?

Oscilloscopes

At first, we had oscilloscopes. A hardware oscilloscope measures changes in a voltage signal over time. The signal strength is measured along the y-axis, and time progresses along the x-axis.

In software, we usually see amplitude along the y-axis over time on the x-axis rather than voltage, since amplitude is more useful for audio editing.

Either way, the oscilloscope is useful for measuring what is going to be sent out to your speakers, whether it’s in terms of voltage or amplitude. In our case, we’ll be using the amplitude variety more often.

Oscilloscopes are useful for finding the starts and ends of sounds, and for determining the average level of a sound, but they don’t give you any frequency or timbre data.
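If you want to see this for yourself without any hardware, here’s a tiny numpy/matplotlib sketch that draws an oscilloscope-style view of a made-up 440 Hz tone, with amplitude on the y-axis and time on the x-axis:

```python
import numpy as np
import matplotlib.pyplot as plt

SAMPLE_RATE = 8000
t = np.arange(0, 0.02, 1 / SAMPLE_RATE)     # 20 ms of time
signal = 0.8 * np.sin(2 * np.pi * 440 * t)  # a 440 Hz test tone

plt.plot(t, signal)
plt.xlabel("Time (s)")
plt.ylabel("Amplitude")
plt.title("Oscilloscope-style view")
plt.show()
```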

Spectrum Analysis

Next came the spectrum analyzer. The spectrum analyzer does a really good job of showing us an instantaneous snapshot of frequency and amplitude information. Generally, audio frequencies are lined up along the x-axis, and the y-axis measures the amplitude.

What the spectrum analyzer lacks is a way to show changes in frequency and amplitude over time.

In a DAW, you can often find spectrum analyzers available as add-on software.
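For a hands-on version, here’s a minimal sketch of a single spectrum snapshot using numpy’s FFT. The two tones are made up so the plot has visible peaks:

```python
import numpy as np
import matplotlib.pyplot as plt

SAMPLE_RATE = 8000
t = np.arange(0, 0.5, 1 / SAMPLE_RATE)
# Two made-up tones: a strong 440 Hz and a quieter 1200 Hz.
signal = np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 1200 * t)

spectrum = np.abs(np.fft.rfft(signal)) / len(signal)  # amplitude per bin
freqs = np.fft.rfftfreq(len(signal), d=1 / SAMPLE_RATE)

plt.plot(freqs, spectrum)
plt.xlabel("Frequency (Hz)")
plt.ylabel("Amplitude")
plt.title("One spectrum snapshot")
plt.show()
```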

Spectrogram/Sonogram

The spectrogram or sonogram has the richest presentation of audio data. It shows us frequency and amplitude information over a fixed period of time. If we were to take the individual snapshots of a spectrum analyzer, stack them one after another really, really fast, and then turn the stack on its side, rotating it 90 degrees so that time runs along the x-axis, that’s what we would get with a spectrogram.

Along the y-axis, we have the same range of frequencies that a spectrum analyzer has. Along the x-axis, we measure the passage of time. We add a z-axis to show amplitude, which is represented by color or brightness changes in a 2D display.
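Here’s the same idea in code: matplotlib’s specgram chops a signal into short windows, takes a spectrum snapshot of each, and stacks them left to right. The sliding tone is made up so the bands visibly move:

```python
import numpy as np
import matplotlib.pyplot as plt

SAMPLE_RATE = 8000
t = np.arange(0, 2.0, 1 / SAMPLE_RATE)
# A made-up tone that slides upward over two seconds.
signal = np.sin(2 * np.pi * (300 + 200 * t) * t)

# Each column is one FFT snapshot; color shows amplitude (the z-axis).
plt.specgram(signal, NFFT=512, Fs=SAMPLE_RATE, noverlap=256)
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.title("Spectrogram: stacked spectrum snapshots")
plt.show()
```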

Because Words Will Only Get You So Far

I took a sound file and ran it through Sonic Visualiser, which is an excellent program for looking at sounds in a lot of different ways. From the snapshot I took of the sound file, I created this graphic:

Note: The waveform in (1) is a stereo waveform, which is why there are two waveforms in the oscilloscope view.

[Image: annotated comparison of the same sound file in oscilloscope, spectrum analyzer, and sonogram views]

What’s interesting about the sonogram is that you can see the changes in the sung notes very clearly as bands, which would be almost impossible to do with the oscilloscope or the spectrum analyzer. The oscilloscope can show you the attack and decay of the notes being sung, but it can’t tell you which notes were just sung. The spectrum analyzer can tell you which notes are being sung and how loud they are, but you have to look at a lot of slices.

I’m Reflecting, I’m Reflecting!

This took some time to put together. I had to set up an old domain I had lying around and hook it up to a new server, then install WordPress, then Suffusion, then customize everything. I even created a favicon.ico file in Photoshop. The favicon didn’t take too long, though.

I dug out Sonic Visualiser from my previous class on digital sound design and used Windows’ screen snapshot tool to get the screen grab from a random audio file I had lying around. I dumped that into Photoshop to create the other visuals and text. I tried to use Sketchbook, but the results were kind of nasty.

Then I went over the lecture, wrote up this post, and spent some time trying to disambiguate the info on oscilloscopes.

Overall, I think I managed to lock the information into my head pretty well. I like the idea of thinking of a sonogram as a series of stacked spectrum analyzer snapshots, turned and rotated.

I put a lot of thought and work into this, but did I get it right? What parts do you like/not like? Let me know through the class! (Comments are disabled to dissuade comment spam-bots.)

Thanks for reading this. I hope you got something out of it!

-Rich.