Guitar Australia



"Hey buddy, how late does the band play?" "Oh, about a half a beat behind the drummer!"

Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - Nick

Pages: [1] 2 3 4
Articles / The power of limits.
« on: April 21, 2017, 02:12:13 PM »
In the 1940s a band was recorded with a couple of mics, direct to wire or maybe wax. Mixing was accomplished by positioning the musicians around the mic(s). Editing didn't exist. Plugins didn't exist. Fast forward to 1966 and Brian Wilson and co are working on a modern masterpiece, Pet Sounds. Once again, a lot of live recording, skilled musicians and not much post-production. There was mixing, but it was done in pre-24-track style.
In the mid 1960s, mixing was pre-planned and pre-production was everything. This was due to limitations. Having a four track tape machine meant that you could either keep things very simple and mostly live, or plan out a bounce-down process. Planning was required because a bounce to another four track machine meant track balances and relative EQ settings were frozen. Once a bounce had been carried out, more tracks could be added to the production. Another reason for careful planning was the quality loss on each bounce: any unwanted signal noise was passed on to the next generation of the recording. The problem compounded with each bounce, so the most important elements would be recorded last for maximum quality.
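The arithmetic behind "record the most important parts last" can be sketched in a few lines. This is a rough back-of-envelope illustration, not a measured tape spec: I'm assuming each bounce adds one fixed, independent generation of hiss, and the figures are invented for the example.

```python
import math

def snr_after_bounces(initial_snr_db, bounces, noise_per_bounce_db=-50.0):
    """Rough estimate of signal-to-noise ratio after repeated tape bounces.

    Assumes each bounce adds an independent noise floor at
    noise_per_bounce_db relative to the signal (an invented figure
    for illustration). Noise powers add linearly per generation.
    """
    signal = 1.0  # reference signal power
    noise = signal / (10 ** (initial_snr_db / 10))        # starting noise power
    per_bounce = signal * (10 ** (noise_per_bounce_db / 10))
    noise += bounces * per_bounce                          # each generation adds hiss
    return 10 * math.log10(signal / noise)

# A part recorded first passes through every bounce;
# a part recorded after the last bounce passes through none.
for n in range(4):
    print(f"after {n} bounces: {snr_after_bounces(60, n):.1f} dB SNR")
```

The exact numbers don't matter; the shape does. Each generation eats a few more dB, which is exactly why the lead vocal went down last.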
Today, none of these issues exist! There is no need to plan a session due to lack of technical resources. There is no need to decide in advance in which order elements will be tracked. There is no need to even decide on which piano sound you want. There's no need to nail the timing of your drum or bass or guitar part. There's no need to record a chorus part more than once! There are no limits.

Back in the old days, how did producers like George Martin, Eddie Kramer and Phil Spector (to name only a few) achieve such brilliant results? They were fraught with limitations! When they were working though, they probably weren't lamenting a lack of technical prowess. They probably thought they had it pretty darn good! Their thought process was different from one we might adopt in the modern realm. One huge difference was the need for a strength of vision. Productions would soon be dead in the water if the vision was lost, because the means of recovery didn't exist. That vision of the final product needed to be firm and CLEAR. The work was towards that vision and it had to survive as the production traversed the limitations of the day.

So, what's the point? What am I getting at? As an exercise, I suggest imposing artificial limits on your next production. Below I'll list a couple of ideas that might get you started:
  • Use one mic for the entire project
  • Use one bussed delay or reverb for the entire project
  • Produce something entirely in mono
  • Do not use EQ at all! Track items to sound how you want them to in the final mix
  • No edits! Every track must be performed in a single pass
  • Record no more than three tracks before bouncing them for use in the final mix

There are many more ideas that you can try. How about trying to simulate life in 1965? What limitations would there have been with regards to signal routing, compression, EQ? Did they even have delays back then??

There is another useful side effect of limiting your options. You will be forced to make the most of the gear you actually use. In the case of only using one mic, you're going to need to milk it for all it's worth so that in different situations, you can still record something usable. You might learn more about that mic. The same for using just one compressor, or EQ. You'll need to put them through their paces and wring all that you can from them. In the process, you might discover something about your gear that you'd previously missed.

Here's one final example of a limitation that you might find really useful for expanding your production thought process. Give yourself a severe time constraint! In four hours, starting from scratch, write, record and mix EIGHT songs. You can have as many or as few recorded tracks for each song as you like, but make sure they are proper songs, not just sounds. Gibberish lyrics are fine. Be brutal and give yourself no breaks! The idea is to avoid all the second guessing and sample selection that we often find ourselves doing these days. You have no time for that. Get the idea out, and get the idea recorded.

See how you go and please post the results in this thread so we can all share thoughts.

BeatBuddy / [SONG] Lotus Flower - Radiohead (with bass)
« on: May 08, 2015, 01:56:49 PM »
Lotus Flower - Radiohead

This Song utilises the fantastic drum set produced by Guitar Stu and I think it'll work fine with any drum set.

Intro Lotus Flower Intro
Main Drum Loop 1 Lotus Flower Verse
Main Drum Loop 2 Lotus Flower Verse

Please let me know of any issues and feel free to ask me anything you wish!



BeatBuddy / [SONG] Fortunate Son (with bass)
« on: May 08, 2015, 01:53:48 PM »
Fortunate Son - Creedence Clearwater Revival

This Song utilises the fantastic drum set produced by Guitar Stu and I think it'll work fine with any drum set.

Intro FortunateIntro: Four bars with just drum and bass and then the guitar riff should come in.
Main Drum Loop 1 FortunateMain: Just let it loop for the verses.
Main Drum Loop 2 FortunateChorus: Just let it loop for the chorus.
Drum Fill 1 FortunateBridge2: This is for the break down.

Please let me know of any issues!



BeatBuddy / [SONG] Berlin Chair - You Am I (with bass)
« on: May 08, 2015, 01:48:49 PM »
Berlin Chair - You Am I

This Song utilises the fantastic drum set produced by Guitar Stu and I think it'll work fine with any drum set.

Intro Berlin C Intro (click start when band kicks in)
Main Drum Loop 1 Berlin Chair Verse ---> Drum Fill Berlin Chair Chorus (this makes the transition between verse/chorus easy)
Main Drum Loop 2 Berlin Chair Chorus (for guitar solo/outro)

Please let me know of any issues and feel free to ask me anything you wish!



Your GIGS / Railway Hotel Bannockburn - Sunday FEB15
« on: February 04, 2015, 12:25:22 PM »
2 High Street Bannockburn

2pm - 5pm

Your GIGS / Keilor Hotel - Friday FEB27
« on: February 04, 2015, 12:11:08 PM »
670 Old Calder HWY, Keilor

7pm - 10pm

Your GIGS / Carters Public House - Saturday FEB28
« on: February 04, 2015, 12:10:09 PM »
300 High St, Northcote

9pm 'til LATE!

Your GIGS / Bridge Road Brewers, Beechworth - Sunday MAR01
« on: February 04, 2015, 12:08:55 PM »
50 Ford Street Beechworth, Victoria

2:30PM - 5:30PM

Your GIGS / Broadford Hotel - Friday MAR06
« on: February 04, 2015, 12:07:17 PM »
100 High St, Broadford

9PM 'til LATE!

Your GIGS / Bull and Mouth Hotel, Horsham - Saturday MARCH 21
« on: February 04, 2015, 12:06:10 PM »
9pm 'til LATE!

Your GIGS / Broadford Hotel - May 22
« on: February 04, 2015, 12:05:04 PM »
100 High St, Broadford

9pm 'til LATE!

Your GIGS / Commercial Hotel, Hamilton FEB 07
« on: February 04, 2015, 12:03:15 PM »
9:30 'til LATE!

Articles / Introduction to MIDI
« on: January 29, 2015, 05:18:35 PM »
Introduction to MIDI

MIDI is an acronym for Musical Instrument Digital Interface. It's an international standard for connecting all manner of musical equipment together and if you have any type of electronic gear, you've no doubt seen the MIDI connections on it somewhere.
If you wish to begin working with MIDI but don't know where to start, hopefully this basic introduction will help you. If you already have a MIDI setup, you probably won't find much in this guide that you don't already know, although you may come across some uses that you hadn't thought of.

The Need For MIDI:
In the early nineteen eighties, there were many disparate electronic musical devices roaming the musical landscape. These devices included analogue synthesizers, drum machines, sequencers and various other keyboards.
It was possible to link certain pieces of gear from the same supplier (usually a requirement) together, if one wished to perform using synchronised machinery. This might include a solo performance using a keyboard and a drum machine. The drum machine might be triggered by a device known as a sequencer (discussed later) and all three units might be connected and set up in such a way that a performance would begin with the touch of a keyboard key. Signals would be sent from the sequencer to the drum machine, keeping everything locked in time.

The above setup would be powerful and versatile, with one catch. You probably could not introduce a piece of equipment from a different manufacturer as more than likely, the communication protocols and physical connections would be incompatible. MIDI eliminates these problems by defining standards that all manufacturers can build to.

A MIDI Setup:
A MIDI device has input, output and sometimes 'thru' connections. The connections on a device are nearly always 'female', with the connecting lead consisting of 'male' connections at both ends.
The output jack (labelled 'out') of a MIDI device sends MIDI information out of the device. Can you guess what the 'in' jack is for? ;-)
Avoid confusion here and take careful note of the following. MIDI data is just that. It is data describing a musical performance. MIDI data does not create any audio signal of its own, like a .wav or .mp3 file would. You use the MIDI data (stored in a .mid file or played 'live' by a MIDI controller device) to drive a sound source. This could be a sound card with a built in synthesizer module or an external synthesizer. I'm going to assume that you will be using a computer to play, edit and organise your MIDI files. Often, your computer will have the ability to play MIDI files without the need to connect anything; this is because your soundcard has a tone generator/synthesizer that can be triggered by MIDI information.
If you wish to connect external MIDI devices to your computer, you'll need to purchase a MIDI interface. This device, in its basic form, will usually have a MIDI IN and MIDI OUT jack, along with a USB connection. The USB will also power the unit. Also, if you have a USB Audio Interface, chances are it'll have MIDI connections enabling it to act as a MIDI interface.
The above describes what is technically known as a MIDI ‘port’.

What can you plug into the INPUT of a MIDI port?
A device that produces MIDI information! This includes the following:
- A keyboard with a MIDI OUT socket.
- Any other instrument with a MIDI OUT socket.
- A hardware sequencer or another computer with a MIDI OUT socket.

The above devices generate MIDI data based on how you played them. On a keyboard, when you press a key, the note number, how fast/hard you hit the key, and how long you held it for are all sent as data.

MIDI has 16 Channels:
Think of a MIDI channel as you would a single track in a multi-track recording. You select an instrument or 'patch' for each channel; the channel then transmits data telling whatever device is providing the audio which instrument to use. You can only have one instrument per channel, but later we'll see how you can work around this limit.

Recording MIDI Data:
Using a method as described above, you plug your MIDI input device (usually a keyboard) into your computer. You select the instrument you want on the channel you want and hit record. Whatever you play will now be recorded as MIDI data. Channel 10 is the default for drums; all this means is that most sound modules have a drum kit set up on their channel 10. On an external sound module, you can set up which sounds are on which channel however you wish.

Editing MIDI Data:
This is where the real power of MIDI comes into play, I feel. Once you have your performance recorded as MIDI information, you can alter and edit it with ease. You can copy, cut and paste it, but you can also alter the key (transpose), change the instrument (try doing that with an audio recording!) and even tidy up the timing (quantisation) and dynamics (velocity) of the performance.
Quantising is based on the idea that for any given tempo and time signature, there are specific times when a note of any given length should fall. For instance, at a tempo of 120 beats per minute, a quarter note falls every half a second. When you recorded the performance, you may not have been this accurate! Your notes might fall slightly before or after the perfect timing. In most cases you'll want these imperfections in your performance, as they add feeling and groove to your work. But it's nice to know that you can pull those notes back to their exact positions (quantise them). And you can do a little or a lot of quantising, specified as a percentage. You can also 'offset' where a note falls between its exact correct position and where the next note would be. This creates a 'swing' effect and can be useful to spice up a dull sounding rhythm part, for example. Think of a shuffle feel and you've got it.
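The quantise-strength and swing ideas above are easy to express in code. Here's a minimal sketch (the function name and parameters are mine, not from any particular sequencer): note times are in beats, strength pulls each note toward the nearest grid line by a fraction, and swing pushes every second grid slot later.

```python
def quantise(times, grid=0.25, strength=1.0, swing=0.0):
    """Pull note start times (in beats) toward a grid.

    strength: 0.0 leaves timing untouched, 1.0 snaps exactly.
    swing:    0.0-1.0, shifts every second grid slot later by that
              fraction of the grid width (a shuffle feel).
    """
    out = []
    for t in times:
        slot = round(t / grid)            # nearest grid line
        target = slot * grid
        if slot % 2 == 1:                 # off-beat slots get swung
            target += swing * grid
        out.append(t + strength * (target - t))
    return out

# A slightly sloppy run of 16th notes, snapped halfway (50%):
played = [0.02, 0.27, 0.49, 0.77]
print(quantise(played, grid=0.25, strength=0.5))
```

At strength 0.5 each note only moves halfway home, which keeps some of the original feel while tightening it up.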
The volume of each note is described using ‘velocity’ data. Double clicking on a channel from within 99% of modern music software will bring up a screen with just the data pertaining to that channel. Once again, this is analogous to an audio track in a conventional multi-track session. Most likely, down the bottom, directly under each note, there’ll be a little bar denoting the velocity of that note. All together, these bars make up a graph. You can select a drawing tool and manually alter the velocities for individual notes, or even draw across the graph to create volume sweeps. Another method would be to select multiple notes, then type in velocity values from a dialog box.

So you can see that MIDI affords powerful manipulation of your music. I find MIDI to be excellent for trying out ideas, as you can simply listen to the same part with any instrument you like!

What Is This Data?
Technically, the information being sent and received is binary data, in 'byte' sized chunks. The actual data being sent will be specific to the task at hand, and I'll outline some of these as the guide progresses, but first...
Music can be broken up, or abstracted in various ways in order to describe it. Obviously, common musical notation is a prime example. The note to be played on a violin can be represented by a dot, on or between the five lines of a musical staff. Additionally, the dot may be solid or un-filled, with various attachments to an optional 'tail' defining the duration the note should be played for. MIDI instruments output this type of data in a serial stream of bytes. This data includes which note (a number) was played, how hard (known as 'velocity') and for how long it was played. Sometimes, information about how the note ended is sent too. This is called 'aftertouch' information. The technical name for all of this type of data is 'performance' data. if you want to have a look at this data, you can select ‘MIDI Listing’ from the appropriate menus on the program you are using. Beware, this listing will look confusing at first!

You can also embed which instrument you were using into the MIDI file. This brings us to the MIDI file itself. If you studied the file under a microscope, you'd see that it is really a big list of byte data. At the start of the file, you'd usually find a set of data describing which instruments the file's author has chosen for each channel.

Sound Modules:
This is where things get interesting and tricky. A sound module is a device that, when sent MIDI data, will produce an audio signal from its outputs. By sound module, I mean any device that can do this: sound cards included, alongside standalone synth modules.

There are two main methods that are used within sound modules. The first method (probably what your soundcard does) is to play samples of real instruments. A real piano, or trumpet etc will be recorded and made into a short digital file. This file will be stored in a ROM module, and when a MIDI message requesting that particular instrument is received, the sound source will play back this file. In order to achieve multiple notes, often many notes are recorded from the real instrument. Cheaper sound source devices will rely on fewer actual samples and just play back the same sample at different speeds, thus altering the pitch. This sounds OK for small pitch ranges of only a few notes, but the speeding up can become obvious for larger variations of more than about four notes. The downside of recording more real notes is the extra memory required to store the extra samples.
Using a sound card is a great, simple way to get into MIDI, but there are limitations. Unless you have a fairly good soundcard, it will be difficult to adjust the default sounds for each channel. Also, the sounds from most sound cards are not extremely realistic. This is where a sampler might come in handy. I’m talking about a software sampler, but the same applies to a hardware device. A sampler can create ‘sound fonts’ of instruments. You feed it source information and place these sounds into a bank of sounds that will be triggered by a MIDI file or device. This way, you can achieve higher quality than you may find from your soundcard. Sampling is a big topic though, and a great source of information on it can be found here:

The second method used by sound source devices is to use a synthesizer. The incoming MIDI information is used to control the synthesizer, as though someone were playing it normally. Many modern synthesizers use the above sampling method for a lot of their sounds, but MIDI can also be used to control analogue synthesizers if they are equipped with a MIDI port. Drum machines and software synthesizers also fit this category. With a software synthesizer, you set up the sounds you'd like within the synthesizer program, then assign this setup to a MIDI channel from within your music editing software (Cakewalk, Sonar etc).

MIDI Connection:
Your computer will be running software that contains a MIDI sequencer. Think of this as the conductor. From the sequencer, you send MIDI signals either to the soundcard, or to external sound sources, or both. You can also 'chain' MIDI sound sources together by connecting the MIDI THRU from one unit into the MIDI IN of the next. MIDI OUT can also be used in this application if it mirrors the input signal (which is exactly what MIDI THRU does). Note that chained devices all share the same 16 channels; to get more than 16 channels of instruments, you need additional MIDI ports, each of which carries its own 16 channels.
When you use external sound source devices, you'll probably need to set up 'performances' on them. This simply means that you assign the instruments you want for each channel. Bear in mind that a MIDI file will usually contain 'Program Change' messages. These will override your 'performance' settings on your sound source device, but you can instruct the device to ignore these messages.

General MIDI:
Also known as GM, this is an arrangement of instruments in a standard way. MIDI instruments are selected based on a number from 0 to 127 (128 variations). If you produce a MIDI file using these GM assignments, others can play back your file and all the instruments will be correct on whichever sound source device they use. This is opposed to producing your own ‘performance’ arrangement.
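Selecting a GM instrument is done with a Program Change message, which is just two bytes. Here's a small sketch (the helper is mine; the message layout and the GM numbers shown are standard, counted 0-127 internally even though most gear displays them as 1-128):

```python
def program_change(channel, program):
    """Build a 2-byte MIDI Program Change message.

    Byte 1: status byte 0xC0 plus the channel number (0-15).
    Byte 2: program (instrument) number, 0-127.
    """
    assert 0 <= channel <= 15 and 0 <= program <= 127
    return bytes([0xC0 | channel, program])

# A few General MIDI program numbers (0-indexed):
GM = {0: "Acoustic Grand Piano", 40: "Violin", 56: "Trumpet"}

msg = program_change(0, 40)       # ask channel 1 for a violin
print([hex(b) for b in msg])
```

Send this before the notes and any GM sound module will pick the same instrument, which is the whole point of the standard.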

Other Uses For MIDI:
Just as MIDI can be used to describe a musical performance, it can also be used to describe settings for a piece of equipment. For example, a guitar FX unit I own stores all the settings as MIDI data. I can plug a MIDI cable into the FX unit and send them to my computer to make a backup of the internal settings.

Another fairly recent (past 15 years or so) employment for MIDI has been in the software controller area. Here, a device with faders, control knobs and transport controls (stop, record etc) can be used to control your music creation software. Often it's much nicer to mix and work with real controls than it is to use a mouse. These devices are known as 'control surfaces'.

I hope I covered the basics of MIDI and how to get started with it, but if there’s some gaping hole I’ve left in your understanding, please ask about it in this thread. Maybe I or someone else can help you, and others will also be able to see the same answer.

Articles / microKORG tips!
« on: January 29, 2015, 05:15:21 PM »
For any owners of this little wonder, here's some things that I've discovered.

Allow one sound to morph into another:
NOTE: The resulting patch will only be duophonic, because the two layered timbres consume the microKORG's four available voices two at a time.

- Initialise a new patch by selecting where you wish to work and pressing SHIFT [3], then press [3] again. This new location will now hold a basic sawtooth patch with only oscillator 1 audible.

- Switch EDIT SELECT 1 to VOICE and select SYNTH with CONTROL 1. With CONTROL 2, select LAYER.

- Set up the patch however you like, bearing in mind that this sound will 'morph' into another sound. (Of course, this could be the sound that is morphed into, but for this guide, I'll make this the initial sound.)

- Press the TIMBRE SELECT button. You will now be able to set up the other layer. This layer should sound different from the first in order to hear the effect. Press SHIFT -> TIMBRE SELECT to hear only the currently selected layer, and press it again to hear both. This will help you to set up each 'patch' without the other getting in the way.

- Once you have both individual layers sounding to your liking, you can begin editing the AMPLIFIER ENVELOPE GENERATOR (AMP EG) of each. This is the trick to getting them to sound at different times.

- Press the TIMBRE SELECT button to switch to the first layer. Press SHIFT -> TIMBRE SELECT to hear only the first layer.

- The AMP EG for the first layer can be set pretty much however you like, but you don't want it to RELEASE too slowly. It needs to 'make room' for the second layer. Turn EDIT SELECT 1 to AMP EG. The ATTACK setting controls how quickly your sound will reach its loudest point. The higher the number, the longer it will take. Set this fairly low, so the first layer sounds immediately and can then 'give way'.

- DECAY is a little bit tricky. It's best to set the SUSTAIN with CONTROL 3 first, then set DECAY with CONTROL 2. DECAY controls how quickly your sound's level falls to the level set with SUSTAIN. We will set SUSTAIN to zero and DECAY to around 100. What this means is, once the note has been struck, the sound's level will fall to zero at a speed set by the DECAY control.

- Press TIMBRE SELECT to adjust the second layer. We will now set up its AMP EG to allow it to fade in just after the first layer has nearly finished fading out. We want to hear both layers together now, so don't engage SHIFT this time.

- Using CONTROL 1, set the ATTACK fairly high. This will delay the onset of the layer two sound. Going by ear, set the ATTACK to make the sound come up in volume at around the same time as the first layer has diminished nearly completely.

- The other parameters should be set to whatever suits the style of layer two best.

If all goes well, you should now have a patch that 'morphs' from one sound to another. I think this opens up some nice possibilities on the MK, as it's not apparent that this type of programming is possible. Most 'larger' synths allow you to do all this automatically in the patch setup.
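The morph can be pictured numerically. Below is a rough model of the two AMP EGs described above; the linear shapes and the half-second timings are invented for illustration, not the microKORG's actual envelope curves. Layer one decays quickly to a zero sustain while layer two's slow attack brings it up, so the combined sound shifts from one timbre to the other.

```python
def layer1_level(t, decay=0.5):
    """Layer 1: fast attack, sustain 0 -- level falls linearly
    to zero over `decay` seconds after the note is struck."""
    return max(0.0, 1.0 - t / decay)

def layer2_level(t, attack=0.5):
    """Layer 2: slow attack -- level rises linearly to full
    over `attack` seconds, taking over as layer 1 dies away."""
    return min(1.0, t / attack)

# Sample both envelopes over the first second of a held note:
for t in [0.0, 0.25, 0.5, 0.75, 1.0]:
    print(f"t={t:.2f}s  layer1={layer1_level(t):.2f}  layer2={layer2_level(t):.2f}")
```

The crossover point (where both layers are at equal level) is what you're tuning by ear when you match layer two's ATTACK to layer one's DECAY.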

Don't forget, you can use this technique with two of the same types of sound, but have one of them set up with a different character or octave range.

Yeah, I reckon there's a lot you can do with this technique! :-)

Articles / Tuning Harmonics
« on: January 28, 2015, 01:31:59 PM »
Does your guitar tune up ok? Does it sound in tune when you're playing down low on the neck? Does it sound out of tune when you play anywhere but down low on the neck? Well, it's a fair bet your harmonics are out of whack!

The scale length of your guitar is measured from where the strings are pulled over the bridge saddles to the nut. This length is then divided up by the frets to allow you to play notes of varying pitch. The placement of the frets is mathematically calculated based on some very important scientific calculus and sub-diatonic vectorization rasters. Short story? The scale length needs to be adjusted in order for your guitar's intonation to be accurate along the whole length of the fretboard. You can't adjust where the frets are, but by adjusting the scale length, that's effectively what you're doing. It makes sense to me…
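The "very important scientific calculus" is really just the twelfth root of two: each fret shortens the vibrating string length by a factor of 2^(1/12), which is why the twelfth fret sits at exactly half the scale length. A quick sketch, using a 648 mm (25.5") Fender-style scale as the example:

```python
def fret_position(scale_length, fret):
    """Distance from the nut to a given fret.

    Each semitone shortens the vibrating length by a factor of
    2**(1/12), so fret n sits at L - L / 2**(n/12) from the nut.
    """
    return scale_length - scale_length / (2 ** (fret / 12))

scale = 648.0  # mm, a common 25.5" scale length
for n in (1, 5, 12):
    print(f"fret {n:2d}: {fret_position(scale, n):.1f} mm from the nut")
```

This is also why intonation is adjusted at the saddle: moving the saddle changes the effective scale length, so the real string (stiffness, gauge and all) agrees with this ideal maths at the twelfth fret.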
It's a quick and easy job. You'll need your trusty electronic tuner, preferably one that can detect notes automagically. Those are also known as chromatic tuners. You will need a screwdriver of the type that will fit the screws in the end of the bridge saddles as well.

Grab your guitar and plug 'er into the tuna. Start on the low E string and play a harmonic at the twelfth fret. To do this, place your finger lightly on the string, directly over the twelfth fret, and pluck the string. Your touch should be light enough for the string to ring, but firm enough for the harmonic to sound. Tune the string until the harmonic reads spot on.
Once you are happy with the tuning of the harmonic, play a fretted note at the twelfth fret, the octave. This note is the same note as the harmonic. Play this note carefully, as you need to get an accurate reading. Checking this on the tuner, it should read the same as the harmonic.

If it's pitched higher, you need an anti-clockwise turn on the saddle screw, and the opposite for a lower pitched note. We're playing the game of give and take here. Once you've adjusted the saddle, the string's pitch will have altered also, so re-tune the harmonic and test again.
Make small adjustments! Get a feel for how much to adjust. Some things to note:
- New strings will give a more accurate tuning!
- Old strings will be harder to tune because wear and gunk alter their gauge.
- Gauge (string thickness) affects harmonics, so putting on strings with a different gauge to your previous ones may put the harmonics out again.
- If you run out of travel in the saddle adjustment, you are doing something wrong, or your guitar needs some professional help.
- Not all guitars have individual saddle adjustments. You will have to make do with what you have.
- If the neck of your guitar is warped (twisted), forget about getting the harmonics in tune. You need a new neck.
- Adjusting the action and/or the truss rod will affect the harmonics.
If the harmonics were out, you'll notice a vast improvement over what you had. Barre chords and scales will sound much better, anywhere on the neck.

Hope this little tip is of some use!
