Guitar Australia


News:

What's the difference between a drummer and a jet plane?
 About three decibels

Pages: 1 [2] 3 4 ... 6
 11 
 on: February 04, 2015, 12:06:10 PM 
Started by Nick - Last post by Nick
9pm 'til LATE!

 12 
 on: February 04, 2015, 12:05:04 PM 
Started by Nick - Last post by Nick
100 High St, Broadford

9pm 'til LATE!

 13 
 on: February 04, 2015, 12:03:15 PM 
Started by Nick - Last post by Nick
9:30 'til LATE!

 14 
 on: January 29, 2015, 05:18:35 PM 
Started by Nick - Last post by Nick
Introduction to MIDI

Intro:
MIDI is an acronym for Musical Instrument Digital Interface. It's an international standard for connecting all manner of musical equipment together, and if you have any type of electronic gear, you've no doubt seen the MIDI connections on it somewhere.
If you wish to begin working with MIDI but don't know where to start, hopefully this basic introduction will help you. If you already have a MIDI setup, you probably won't find much in this guide that you don't already know, although you may come across some uses you hadn't thought of.

The Need For MIDI:
In the early nineteen-eighties, there were many disparate electronic musical devices roaming the musical landscape. These included analogue synthesizers, drum machines, sequencers and various other keyboards.
It was possible to link certain pieces of gear together (usually only gear from the same manufacturer) if one wished to perform using synchronised machinery. This might be a solo performance using a keyboard and a drum machine. The drum machine might be triggered by a device known as a sequencer (discussed later), and all three units might be connected and set up in such a way that a performance would begin with the touch of a keyboard key, with the sequencer sending signals to the drum machine to keep everything in time.

The above setup would be powerful and versatile, with one catch. You probably could not introduce a piece of equipment from a different manufacturer as more than likely, the communication protocols and physical connections would be incompatible. MIDI eliminates these problems by defining standards that all manufacturers can build to.

A MIDI Setup:
A MIDI device has input, output and sometimes 'thru' connections. The connections on a device are nearly always 'female', with the connecting lead consisting of 'male' connections at both ends.
The output jack (labelled 'out') of a MIDI device sends MIDI information out of the device. Can you guess what the 'in' jack is for? ;-)
Avoid confusion here and take careful note of the following. MIDI data is just that: data describing a musical performance. MIDI data does not create any audio signal of its own, like a .wav or .mp3 file would. You use the MIDI data (stored in a .mid file or played 'live' by a MIDI controller device) to drive a sound source. This could be a sound card with a built-in synthesizer module, or an external synthesizer. I'm going to assume that you will be using a computer to play, edit and organise your MIDI files. Often, your computer will be able to play MIDI files without you connecting anything; this is because your soundcard has a tone generator/synthesizer that can be triggered by MIDI information.
If you wish to connect external MIDI devices to your computer, you'll need to purchase a MIDI interface. This device, in its basic form, will usually have a MIDI IN and MIDI OUT jack along with a USB connection, and the USB will also power the unit. Also, if you have a USB audio interface, chances are it'll have MIDI connections, enabling it to act as a MIDI interface.
The above describes what is technically known as a MIDI ‘port’.

What can you plug into the INPUT of a MIDI port?
A device that produces MIDI information! This includes the following:
- A keyboard with a MIDI OUT socket.
- Any other instrument with a MIDI OUT socket.
- A hardware sequencer or another computer with a MIDI OUT socket.

The above devices generate MIDI data based on how you played them. On a keyboard, when you press a key, the note number, how fast/hard you hit the key, and how long you held it down are recorded as data.

MIDI has 16 Channels:
Think of a MIDI channel as you would a single track in a multi-track recording. You select an instrument or 'patch' for a channel, and the data transmitted on that channel tells whatever device is providing the audio which instrument to use. You can only have one instrument per channel, but later we'll see how you can get around this limit.
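As a concrete illustration of channels (a minimal sketch, not from the original post): the channel is carried in the low four bits of a message's status byte, which is why there are exactly 16 of them.

```python
# Channels are numbered 1-16 for humans, but 0-15 on the wire
# (the low nibble of the status byte).

def note_on(channel, note, velocity):
    """Build a 3-byte Note On message for a 1-based channel."""
    assert 1 <= channel <= 16
    status = 0x90 | (channel - 1)   # 0x90-0x9F = Note On, channels 1-16
    return bytes([status, note & 0x7F, velocity & 0x7F])

msg = note_on(10, 36, 100)          # channel 10 (the drum channel)
print(hex(msg[0]))                  # status 0x99 = Note On, channel 10
```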

Recording MIDI Data:
Using a setup as described above, you plug your MIDI input device (usually a keyboard) into your computer, select the instrument you want on the channel you want, and hit record. Whatever you play will now be recorded as MIDI data. Channel 10 is the default for drums; all this means is that most sound modules have a drum kit set up on their channel 10. On an external sound module, you can set up which sounds are on which channel however you wish.

Editing MIDI Data:
This is where the real power of MIDI comes into play, I feel. Once you have your performance recorded as MIDI information, you can alter and edit it with ease. You can copy, cut and paste it, but you can also alter the key (transpose), change the instrument (try doing that with an audio recording!) and even tidy up the timing (quantisation) and dynamics (velocity) of the performance.

Quantising is based on the idea that for any given tempo and time signature, there are specific times when a note of any given length should fall. For instance, at a tempo of 120 beats per minute, a quarter note falls every half a second. When you recorded the performance, you may not have been this accurate! Your notes might fall slightly before or after the perfect timing. In most cases you'll actually want these imperfections, as they add feeling and groove to your work. But it's nice to know that you can pull those notes back to the exact right positions (quantise them), and you can do a little or a lot of quantising, specified as a percentage. You can also 'offset' where a note falls, between its exact correct position and where the next note would be. This creates a 'swing' effect and can be useful to spice up a dull-sounding rhythm part, for example. Think of a shuffle feel and you've got it.
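The quantise/swing idea above can be sketched in a few lines. This is an illustration, not any particular sequencer's algorithm; the function and parameter names are my own.

```python
def quantize(times, grid=0.25, strength=1.0, swing=0.0):
    """Pull note start times (in beats) toward the nearest grid line.

    strength: 0.0 = leave alone, 1.0 = snap exactly (the "percentage").
    swing: delay every second grid position by this fraction of a grid step.
    """
    out = []
    for t in times:
        slot = round(t / grid)
        target = slot * grid
        if swing and slot % 2 == 1:        # offbeat slots get pushed late
            target += swing * grid
        out.append(t + (target - t) * strength)
    return out

# Sloppy sixteenth-note starts, snapped fully to the grid:
print(quantize([0.02, 0.27, 0.49], grid=0.25, strength=1.0))
```

With `strength` below 1.0 the notes only move part of the way home, which is how you keep some of the original feel.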
The volume of each note is described using 'velocity' data. Double-clicking on a channel in most modern music software will bring up a screen with just the data pertaining to that channel. Once again, this is analogous to an audio track in a conventional multi-track session. Most likely, down the bottom, directly under each note, there'll be a little bar denoting the velocity of that note. Together, these bars make up a graph. You can select a drawing tool and manually alter the velocities of individual notes, or even draw across the graph to create volume sweeps. Another method is to select multiple notes, then type in velocity values from a dialog box.

So you can see that MIDI affords powerful manipulation of your music. I find MIDI to be excellent for trying out ideas, as you can simply listen to the same part with any instrument you like!

What Is This Data?
Technically, the information being sent and received is binary data, in 'byte' sized chunks. The actual data being sent will be specific to the task at hand, and I'll outline some of these as the guide progresses, but first...
Music can be broken up, or abstracted, in various ways in order to describe it. Common musical notation is a prime example: the note to be played on a violin can be represented by a dot on or between the five lines of a musical staff, and the dot may be solid or un-filled, with various attachments to an optional 'tail' defining the duration the note should be played for. MIDI instruments output this type of data in a serial stream of bytes. This data includes which note was played (a number), how hard (known as 'velocity') and for how long. Some instruments also send information about pressure applied while the key is held down; this is called 'aftertouch'. The technical name for all of this type of data is 'performance' data. If you want to have a look at this data, you can select 'MIDI Listing' (or similar) from the appropriate menus of the program you are using. Beware, this listing will look confusing at first!
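To make that byte stream concrete, here's a toy parser for the two most common performance messages. It's a deliberate simplification: real streams also use running status, controllers, pitch bend and so on.

```python
def parse(stream):
    """Decode Note On/Off messages from a raw MIDI byte string."""
    events = []
    i = 0
    while i < len(stream):
        status = stream[i]
        kind = status & 0xF0
        channel = (status & 0x0F) + 1        # report channels as 1-16
        if kind == 0x90:                     # Note On: note, velocity
            events.append(("on", channel, stream[i + 1], stream[i + 2]))
            i += 3
        elif kind == 0x80:                   # Note Off
            events.append(("off", channel, stream[i + 1], stream[i + 2]))
            i += 3
        else:
            i += 1                           # skip anything we don't model
    return events

# Middle C (note 60) struck at velocity 100, then released, on channel 1:
print(parse(bytes([0x90, 60, 100, 0x80, 60, 0])))
```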

You can also embed which instrument you were using into the MIDI file. This brings us to the MIDI file itself. If you studied the file under a microscope, you'd see that it is really a big list of byte data. At the start of the file, you'd usually find a set of data describing which instruments the file's author has chosen for each channel.

Sound Modules:
This is where things get interesting and tricky. A sound module is a device that, when sent MIDI data, will produce an audio signal from its outputs. By sound module, I mean any device that can do this: sound cards included, alongside standalone synth modules.

There are two main methods used within sound modules. The first method (probably what your soundcard does) is to play samples of real instruments. A real piano, trumpet, etc. is recorded and made into a short digital file. This file is stored in ROM, and when a MIDI message requesting that particular instrument is received, the sound source plays back this file. To cover multiple notes, often many notes are recorded from the real instrument. Cheaper sound sources rely on fewer actual samples and just play back the same sample at different speeds, thus altering the pitch. This sounds OK for small pitch ranges of only a few notes, but the speeding up can become obvious for variations of more than about four notes. The downside of recording more real notes is the extra memory required to store the extra samples.
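The "one sample, many pitches" trick above boils down to one number: to shift a stored sample by n semitones you replay it at a speed ratio of 2 to the power n/12, which also shortens or lengthens it by the same factor (hence the "chipmunk" effect over larger ranges).

```python
def playback_ratio(semitones):
    """Speed ratio needed to shift a sample by the given interval."""
    return 2.0 ** (semitones / 12.0)

print(round(playback_ratio(12), 3))   # one octave up: exactly 2x speed
print(round(playback_ratio(4), 3))    # ~1.26x: already audibly sped up
```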
Using a sound card is a great, simple way to get into MIDI, but there are limitations. Unless you have a fairly good soundcard, it will be difficult to adjust the default sounds for each channel. Also, the sounds from most sound cards are not extremely realistic. This is where a sampler might come in handy. I’m talking about a software sampler, but the same applies to a hardware device. A sampler can create ‘sound fonts’ of instruments. You feed it source information and place these sounds into a bank of sounds that will be triggered by a MIDI file or device. This way, you can achieve higher quality than you may find from your soundcard. Sampling is a big topic though, and a great source of information on it can be found here:

http://www.samplecraze.com/

The second method used by sound source devices is to use a synthesizer. The incoming MIDI information is used to control the synthesizer, as though someone were playing it normally. Many modern synthesizers use the above sampling method for a lot of their sounds, but MIDI can also control analogue synthesizers if they are equipped with a MIDI port. Drum machines and software synthesizers fit this category too. With a software synthesizer, you set up the sounds you'd like within the synthesizer program, then assign this setup to a MIDI channel from within your music editing software (Cakewalk, Sonar etc).

MIDI Connection:
Your computer will be running software that contains a MIDI sequencer. Think of this as the conductor. From the sequencer, you send MIDI signals either to the soundcard, or to external sound sources, or both. You can also 'chain' MIDI sound sources together by connecting the MIDI THRU from one unit into the MIDI IN of the next. MIDI OUT can also be used in this application if it mirrors the input signal (which is exactly what MIDI THRU does). Chaining lets several devices share the 16 channels; to get more than 16 channels of instruments, you use more than one MIDI port.
When you use external sound source devices, you'll probably need to set up 'performances' on them. This simply means that you assign the instruments you want for each channel. Bear in mind that a MIDI file will usually contain 'Program Change' messages. These will override your 'performance' settings on your sound source device, but you can instruct the device to ignore these messages.

General MIDI:
Also known as GM, this is an arrangement of instruments in a standard way. MIDI instruments are selected based on a number from 0 to 127 (128 variations). If you produce a MIDI file using these GM assignments, others can play back your file and all the instruments will be correct on whichever sound source device they use. This is opposed to producing your own ‘performance’ arrangement.
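A few entries from the GM patch map show how a bare program number selects the same instrument everywhere (numbers here are the 0-based, 0-127 form; a quick sketch, not tied to any particular device):

```python
# A handful of General MIDI program assignments (0-based):
GM_PROGRAMS = {
    0: "Acoustic Grand Piano",
    24: "Acoustic Guitar (nylon)",
    40: "Violin",
    56: "Trumpet",
}

def program_change(channel, program):
    """Build the 2-byte Program Change message (status 0xC0-0xCF)."""
    return bytes([0xC0 | (channel - 1), program & 0x7F])

# Ask whatever GM sound source is listening on channel 1 for a violin:
print(GM_PROGRAMS[40], program_change(1, 40).hex())
```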

Other Uses For MIDI:
Just as MIDI can be used to describe a musical performance, it can also be used to describe settings for a piece of equipment. For example, a guitar FX unit I own stores all the settings as MIDI data. I can plug a MIDI cable into the FX unit and send them to my computer to make a backup of the internal settings.

Another fairly recent (past 15 years or so) use for MIDI has been in the software controller area. Here, a device with faders, control knobs and transport controls (stop, record etc) can be used to control your music creation software. Often it's much nicer to mix and work with real controls than it is to use a mouse. These devices are known as 'control surfaces'.

I hope I covered the basics of MIDI and how to get started with it, but if there’s some gaping hole I’ve left in your understanding, please ask about it in this thread. Maybe I or someone else can help you, and others will also be able to see the same answer.

 15 
 on: January 29, 2015, 05:15:21 PM 
Started by Nick - Last post by Nick
For any owners of this little wonder, here are some things that I've discovered.

Allow one sound to morph into another:
NOTE: This technique only yields duophonic patches, because it layers two voices per note, so the four available voices are used up after just two notes.

- Initialise a new patch by selecting where you wish to work and pressing SHIFT [3], then press [3] again. This new location will now hold a basic sawtooth patch with only oscillator 1 audible.

- Switch EDIT SELECT 1 to VOICE and select SYNTH with CONTROL 1. With CONTROL 2, select LAYER.

- Set up the patch however you like, bearing in mind that this sound will 'morph' into another sound. (Of course, this could be the sound that is morphed into, but for this guide, I'll make this the initial sound.)

- Press the TIMBRE SELECT button. You will now be able to set up the other layer. This layer should sound different to the first in order to hear the effect. Press SHIFT -> TIMBRE SELECT to hear only the currently selected layer; this will help you set up both 'patches' without the other getting in the way.

- Once you have both individual layers sounding to your liking, you can begin editing the AMPLIFIER ENVELOPE GENERATOR (AMP EG) of each. This is the trick to getting them to sound at different times.

- Press the TIMBRE SELECT button to switch to the first layer. Press SHIFT -> TIMBRE SELECT to hear only the first layer.

- The AMP EG for the first layer can be set pretty much however you like, but you don't want it to RELEASE too slowly; it needs to 'make room' for the second layer. Turn EDIT SELECT 1 to AMP EG. The ATTACK setting controls how quickly your sound reaches its loudest point: the higher the number, the longer it takes. Set this fairly low, since this first layer needs to 'give way' at some stage.

- DECAY is a little bit tricky. It's best to set the SUSTAIN with CONTROL 3 first, then set DECAY with CONTROL 2. DECAY controls how quickly your sound's level falls to the level set with SUSTAIN. We will set SUSTAIN to zero and DECAY to around 100. This means that once a note has been struck, the sound's level will fall to zero at a speed set by the DECAY control.

- Press TIMBRE SELECT to adjust the second layer. We will now set up its AMP EG to let it fade in just after the first layer has nearly finished fading out. We want to hear both layers together now, so SHIFT isn't needed.

- Using CONTROL 1, set the ATTACK fairly high. This will delay the onset of the layer two sound. Going by ear, set the ATTACK to make the sound come up in volume at around the same time as the first layer has diminished nearly completely.

- The other parameters should be set to whatever suits the style of layer two best.

If all goes well, you should now have a patch that 'morphs' from one sound to another. I think this opens up some nice possibilities on the MK, as it's not apparent that this type of programming is possible. Most 'larger' synths allow you to do all this automatically in the patch setup.

Don't forget, you can use this technique with two of the same types of sound, but have one of them set up with a different character or octave range.

Yeah, I reckon there's a lot you can do with this technique! :-)

 16 
 on: January 28, 2015, 01:31:59 PM 
Started by Nick - Last post by Nick
Does your guitar tune up ok? Does it sound in tune when you’re playing down low on the neck? Does it
sound out of tune when you play something anywhere but down low on the neck? Well, it’s a fair bet your
harmonics are out of whack!
The scale length of your guitar is measured from where the strings are pulled over the bridge saddles to the nut. This length is then divided up by the frets to allow you to play notes of varying pitch. The placement of the frets is mathematically calculated based on some very important scientific calculus and sub diatonic vectorization rasters. Short story? The scale length needs to be adjusted in order for your guitar's intonation to be accurate along the whole length of the fretboard. You can't adjust where the frets are, but by adjusting the scale length, that's effectively what you're doing. It makes sense to me…
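Jokes about vectorization rasters aside, the real fret maths is standard equal temperament: each fret shortens the vibrating length by a factor of the twelfth root of two. A quick sketch:

```python
def fret_position(scale_length, n):
    """Distance from the nut to fret n, equal temperament."""
    return scale_length * (1 - 2 ** (-n / 12))

# On a 25.5" scale, the twelfth fret sits at exactly half the scale
# length -- which is why intonation is checked there against the
# twelfth-fret harmonic (the string's natural halfway point):
print(round(fret_position(25.5, 12), 3))
```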
It's a quick and easy job. You'll need your trusty electronic tuner, preferably one that can detect notes automagically (those are also known as chromatic tuners). You'll also need a screwdriver of the type that fits the screws in the end of the bridge saddles.
Grab your guitar and plug 'er into the tuna. Start on the low E string and play a harmonic at the twelfth fret. To do this, place your finger lightly on the string, directly over the twelfth fret, and pluck the string. Use just enough pressure to sound the harmonic, but a light enough touch that the string can still ring.
Once you are happy with the tuning of the harmonic, play a fretted note at the twelfth fret, the octave. This is the same note as the harmonic. Play it carefully, as you need an accurate reading. Checked on the tuner, it should read the same as the harmonic.
If the fretted note is pitched higher, you need an anti-clockwise turn on the saddle screw, and the opposite for a lower-pitched note. We're playing a game of give and take here: once you've adjusted the saddle, the string's pitch will have altered too, so re-tune the harmonic and test again.
Make small adjustments! Get a feel for how much to adjust. Some things to note:
- New strings will tune more accurately!
- Old strings will be harder to tune because wear and gunk alter their gauge.
- Gauge (string thickness) affects harmonics, so fitting strings of a different gauge to your previous ones may put the harmonics out again.
- If you run out of travel in the saddle adjustment, you are doing something wrong, or your guitar needs some professional help.
- Not all guitars have individual saddle adjustments. You will have to make do with what you have.
- If the neck of your guitar is warped (twisted), forget about getting the harmonics in tune. You need a new neck.
- Adjusting the action and/or the truss rod will affect the harmonics.
If the harmonics were out, you'll notice a vast improvement over what you had. Barre chords and scales will sound much better anywhere on the neck.
Hope this little tip is of some use!

 17 
 on: January 28, 2015, 12:24:59 PM 
Started by Nick - Last post by Nick
I think maybe this will be a good idea for a YouTube lesson at some stage too. Here we go..

* Most rock bass revolves around four basic structures.



The four things:

THING1
It's easy to find intervals on a stringed instrument like the bass, as
the notes form repeating patterns all over the neck. In other words,
there's a general way of finding intervals that works for every
starting note, anywhere on the neck.
The first useful one for rock bass is the 5th. From your root note,
simply move up two frets and up to the next highest string.

Code:
G|----|----|----|----
D|----|----|---O|----
A|---O|----|----|----
E|----|----|----|----


In the above diagram, a Bb is played on the first fret of the A string
and its 5th, F, is played on the third fret of the D string.
It's also important to note that because we have a lower string,
E, we can also play the 5th of Bb on the E string. (see the '*' below)

Code:
G|----|----|----|----
D|----|----|---O|----
A|---O|----|----|----
E|---*|----|----|----


Moving right along, you'll find that on the G string, directly across
from the 5th we just found, is the octave of the original Bb (see
below!)

Code:
G|----|----|---*|----
D|----|----|---O|----
A|---O|----|----|----
E|----|----|----|----


These notes are quite handy to use for variations in boring or
repetitititititititive bass lines. And using this knowledge, you can
quickly find notes on the fretboard if need be!
Nearly all the riffs in Pink Floyd's 'Time' use this pattern of notes, just in various orders.
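The "two frets up, one string up" rule can be put in numbers. A sketch (my own helper names): on a bass in fourths tuning, moving up one string adds 5 semitones and each fret adds 1, so +1 string and +2 frets is +7 semitones, a perfect 5th. (Sharps stand in for flats here, so Bb shows as A#.)

```python
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
OPEN = {"E": 4, "A": 9, "D": 14, "G": 19}   # semitones above a low C

def note_at(string, fret):
    """Name of the note at a given string and fret on a 4-string bass."""
    return NOTES[(OPEN[string] + fret) % 12]

print(note_at("A", 1))   # A# (i.e. Bb) -- the root from the diagram
print(note_at("D", 3))   # F, its 5th: one string up, two frets up
print(note_at("G", 3))   # A# again -- the octave, straight across
```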

THING2
The Minor Pentatonic scale. I'll leave it to you to find a reference
for this. Find one that has the tonic on the E and one with the tonic
on the A. Then, find a combined one and note how the spaces between
them may be connected.

THING3
As above, but for the Major scale. For one scale, you should
ultimately be able to play all over the neck. It's not as difficult as
it sounds, because you just remember the two main ones (root 5 and 6)
and then you naturally get to know the spaces between them. Slides
work particularly well with the root 5 pattern major scale. For
example:
Play a tonic C on the A string with your index finger, then play a D,
two frets up with your ring finger, but as soon as you play the D,
slide your ring finger up two more frets to the E (fret seven). Then
play a G on the D string with your index finger before sliding back
down from E to D (on the A string again) with your ring finger, to
finish on C once again with your index finger. This should sound
sort of like the lead break in Maggie May.

THING4
The old four semi-tone walk up/down. Mainly used for blues, but highly
adaptable to rock, this bass staple is probably the easiest of all to
utilise. It simply involves walking up the four frets that lead to the
note you wish to change to, in time for that change. For instance, in
a 12 bar blues, in G, the first change will be to the 4th (C). There
is a C on the A string (and I'm assuming you're playing a G on the E
string to start with). To get to that C, you simply walk up from A in
one fret increments. In a 4/4 blues shuffle type thing, you'd begin
your walk on beat 2 (A) then 3 (Bb), then 4 (B) just in time to hit
the C on '1' from the next bar. This trick is also handy if you want
to return to G in bar 7, but in a higher octave. Simply start your
'walk up' from the E on the D string on beat 2 of bar 6. You'll end up
on the higher G and it sounds cool! You can do these walks all day and
it'll sound like proper blues. Experiment with changing to higher and
lower octaves and try some walk downs too.
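The walk-up in THING4 can be sketched as fret numbers on the A string (a toy helper of my own, not standard notation): to land on C (fret 3) on beat 1, the last three beats of the previous bar climb one fret at a time.

```python
def walk_up(target_fret, steps=3):
    """Fret numbers for the beats leading chromatically into the target."""
    return [target_fret - steps + i for i in range(steps)]

print(walk_up(3))   # [0, 1, 2] -> open A, Bb, B ... then C lands on the '1'
```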

Some Points to Ponder:
- Bass is a rhythm instrument and in most cases, needs to sound like
it's part of the drum kit.

- Bass can have a strong influence over the 'groove' of a song, based
on when you actually 'pick up' the kick drum with the bass note.
(for example, hitting the front end of the kick beat will tend to
make the drums and the band sound held back or even sluggish)


I've made some drum tracks to play along with:

http://www.nickfletcherproductions.com/PracticeBeats/

 18 
 on: January 26, 2015, 03:10:36 PM 
Started by Nick - Last post by Nick
For absolute beginners!

LESSON 1
LESSON 2
LESSON 3
LESSON 4
LESSON 5

 19 
 on: January 26, 2015, 01:30:02 PM 
Started by Nick - Last post by Nick
PART 1:

How to begin?
The most important aspect of being a performer is the performance. You need a quality product and you need to present something that people will want more of. If you’re not prepared and can’t put on an impressive performance, you won’t be asked back or even worse, you won’t be given a gig in the first place. You also need a way of showing potential employers what you can do. If you don’t have your own website, you can at least set up a Facebook account solely for the purpose of your live work. I recommend setting up your own website because it shows that you take this stuff seriously and you’ve made a commitment to doing things to the best of your ability. There’s nothing wrong with having only a Facebook account, but I feel a combination of a good Facebook account and a website is more impressive. These days, it will only cost you a few dollars a month to have a professional online presence and it’s not too hard to build the site yourself!
I should stress the importance of having a quality online presence as it’s probably the first impression you’ll get to make on the person responsible for making bookings. They’ll want to see how you present yourself and how you sound. They’ll be interested to see where you've played in the past and how many bookings you currently have. Imagine the type of things you look for when you’re checking out service providers yourself. You would be more likely to look further into a business that instills confidence from its well presented website over a business that gives off a bit of a dodgy feel!
So, whatever means you use to represent yourself to potential bookers is very important, but next to that, is the presentation of what you actually do. No one will book you based solely on the quality of your web presence. They’ll want to hear and/or see what you can actually do. They’ll picture you performing in their venue and try to imagine whether or not their usual patrons will like you. I’m not saying to second guess what any particular patrons may or may not think of you, I’m saying you should try and represent accurately what YOU do. You won’t be suitable for every type of venue, but you want the best chance of being selected for the ones that you are suitable for.
Here, I’ll guide you through producing your own high quality demo that you can point people to.

 20 
 on: January 24, 2015, 01:32:03 PM 
Started by Nick - Last post by Nick
Have you ever recorded to a machine based click track and felt constrained somehow? Sometimes it seems like you're fighting against the machine, trying to inject a bit of feeling into the performance, but you're dealing with a clinical time master. For a lot of cases, it's worth it to work this way, as the more you add to your project, the better things will sound if they're added to a solid grounding. However, for simple songs, there are ways to keep all the instruments in time, yet preserve subtle time dilations and performance dynamics.

One method I have often used when producing acoustic-based artists is to have them perform a guide track with a microphone positioned near their foot. Place something hard under their foot so you can record a definite sound; the sound itself doesn't matter, as long as it's audible. As the artist records the guide track, you'll also be recording a separate, organic click track to use when adding more instruments to the project.

Another method is basically the opposite of the above. You have the performer record their basic guide with one instrument and a vocal, usually on the same track with one microphone. You then make a copy of this guide track and, playing it back, slice it on every bar or second bar (just play it back with your finger on the slice key!). Then, approximate the tempo of the song and snap each slice's beginning to the nearest bar marker of the project. Finally, go back through the song and time-stretch each chopped-up section so its end snaps to the end of its bar. You'll have a natural-sounding performance, but it'll be timed to perfection. Why would you want this? Having a machine-synced performance makes it easy to arrange sections of the song and also makes it very convenient for adding sequenced sections. You can hand-sequence a keyboard or string part and then just drop copies of it wherever you like in the project and they'll fit perfectly. It also allows you to add a simple drum track to use instead of a click; many people find it hard to play well to a click track, and a simple drum beat seems to aid them.
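The slice-and-stretch step above boils down to one ratio per slice: stretch each chunk so its played duration becomes exactly one bar. A sketch, assuming you've already measured each slice's length in seconds (the function name is mine, not any DAW's):

```python
def stretch_factors(slice_lengths, bpm, beats_per_bar=4):
    """Time-stretch ratio for each slice to make it exactly one bar long."""
    bar_seconds = beats_per_bar * 60.0 / bpm
    return [bar_seconds / length for length in slice_lengths]

# Three roughly one-bar slices at 120 BPM (one bar = 2.0 s):
print([round(f, 3) for f in stretch_factors([1.9, 2.1, 2.0], 120)])
```

A factor above 1.0 means the slice was rushed and gets stretched; below 1.0 means it dragged and gets compressed.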

NOTE: In most modern software, there are options to automatically chop up a track and sync it to the project tempo. I suggest trying the manual method only if the automatic results are problematic. The automatic method may work best if you combine the foot generated click track with your guide performance, as the software will have a steady 'beat' to identify with.
