Guitar Australia

Please login or register.

Login with username, password and session length
Advanced search  

News:

How many conductors does it take to screw in a light bulb?
No one knows, no one ever looks at him.

Pages: [1]   Go Down

Author Topic: How CPUs Work  (Read 520 times)

Nick

How CPUs Work
« on: June 27, 2018, 07:21:29 PM »

Get on my bus(!), I’m going to a magical place. I’m going to have to strap you all in before
we begin, as this is an experimental journey; one that travels through uncharted and scary
territory. It’s a journey from uninterested layperson, to enlightened computer LOVER. You’ll
wonder how you never felt this way before about the machine you call a ‘box’.

Like magicians protecting the tricks of the trade, the uber nerds don’t want you to know
about this place! Well, maybe they don’t care, but it sounds dramatic, right? And it’s
damned dramatic. You’ll touch on parts of your mind that’ll have you linking the Universe
to bunnies and hair ties. You’ll scratch your left butt cheek as your knowing grin spreads
with the realisation that you know more about computers than 99% of Earth’s population.

Yes. I’m selling tickets for a bus ride. A bus ride through the very heart of a computer
brain. I mean, the brain of a computer, stopping off at the heart on the way. Or, was that
bypass the heart, then blow by the lungs as we circumnavigate the brain? It’s not important.
What you should know before we leave the depot is this: What you are about to learn will
make you an uber nerd. You will, from now and ever after, be hated by everyone you meet.
Not through jealousy at your broad knowledge, not through disdain at not being able to keep
up with your dinner table banter… Oh no. It’s just that you’ll bore the crap out of everyone
you meet! Once you know the secret of the computer, you’ll be consumed. You’ll want to build
your own CPU, just like I have. And that’s a good thing.

Oh yeah… Uber nerds stay out! You already know all this stuff, so I ask that you give up
your seat for a lesser, pimplier nerd. Thank you. Are we ready, set, primed? Good, let’s
begin.

First Stop - Information:
You are standing in a dusty room. It’s in an arid outback location. Two people are talking
to the desk clerk about a journey to the summit of the nearby mountain. It’s named Mt Uberknowledge.
It’s a pretty big mountain, lucky we’re taking my bus to the top!

Our aim, by the end of this article, is to get intimate with the workings of an electronic
computer. Our aim, by the end of this article, is to know what it is to compute. The computers
we’ll be studying are built and designed by me. They are not real! They are software simulations
of my designs. The first computer is primitive and simplistic, but it works. We will begin
our journey with a look at some of the items required in order to begin computing.

First off, what is the goal of designing a computer? It needs to process data. Apart from how
we interpret that data, everything a computer does is just processing data. You put something
in, and expect the right output. OK, that’s pretty bog standard knowledge. We all know
that. What we don’t all know is, how simple it really is inside the computer.

It’s simple, but there’s a lot of simple, which ultimately makes it complex… Please don’t
get the wrong idea about my computer designs that we’ll be using for this article. They
are not up to scratch by any modern or 1950s standard. What they are though, is a perfect
example of what a computer does. They can play games, process words and balance your budget
(we’ll get to those things). Clever folk come along and make the basic design quicker and
cheaper to build, but the concept remains the same. So onwards we go!

We process data, but what is data? To a computer, data is an electronic signal. Well that’s
what you’ll hear all the time. We call it a signal, but it’s nothing more than a voltage,
or not a voltage at some point in a circuit. We place our own meanings on what the value
of those ‘signals’ might be. How can this allow a machine to process human information?
This is the question that has driven my simple brain to learn what I now know. This is
the magic part of computers, and more importantly, the magic part of humans. Computers are
human things, they came from human minds. They are tools that work with our minds, like
a PCI card for our minds.

Have you ever stood at the end of a passageway that has a light switch at both ends, for
the same light? It’s so you can turn it on, walk to the other end, then switch it off;
all without being attacked by a monster. The passage light is something I find very useful
as your first example of digital electronics. The light is wired to the two switches in
such a way that they have a relationship to it. The light can be turned on by different
combinations of the switches. Let’s see. If one switch is turned on, the light will be lit.
If both are on, the light will be lit. But you need at least one switch to be on before
you’ll light the light. This is a logical arrangement. And it has a special name within
the world of digital electronics, it’s called an ‘OR’ gate. Digital what the??? Digital
electronics is just a fancy name for circuits that work based on levels being above or below
a certain threshold. For instance, anything above 5 volts DC will be considered an ‘on’ signal.
Anything below, will be considered an ‘off’ signal. But we’ll see more of this as we progress.
Back to the ‘OR’ gate. These ‘gates’ (no relation to Bill) are just switches. If you placed
five volts at one end of a wire, you’d get five volts at the other end. Unless you broke
the wire, then you’d have two pieces of wire. Not a bad deal really, except that the two
new pieces are not as long as the original. But let’s see a video!

Here is a generalisation of an ‘OR’ gate:

http://youtube.com/watch?v=L2fKHesYYNY

In a computer, there are a lot of these things, although they don’t look like that! The switches
are made using ‘semi-conductors’ and they are microscopic switches that rely on electricity
to switch them on or off. But the idea is simple. The ‘OR’ gate/switch has two input signals
and one output signal. It needs at least one input signal in order to output a signal. Two
input signals is fine and will yield the same output as one input signal. It should be noted
that the ‘OR’ gate may have many inputs, but it only ever has one output.

What you have seen is simply amazing. That simple switch arrangement is a major piece of
the computer puzzle. There are only two other pieces. Close your mouth…! I didn’t mention
that the computer puzzle uses the same three pieces over and over and over and over… Did
I?

Here is the second piece, it’s called the ‘AND’ gate. I’ll show you the video first, then
an explanation. Those at the front of the bus should already be onto this!

http://youtube.com/watch?v=gBQ23T4Ss6Y

The inputs aren’t so clear for this gate, but once again, they are the switches. See how
the light only comes on if switch one AND switch two are closed? As with the ‘OR’ gate,
the ‘AND’ gate can have multiple inputs. It can have one million inputs if you want, but
only one output. And for there to be an output, every input must bear a signal.

Step back! Signals, volts, gates? Yeah, I’m talking pretty generally, as it serves no purpose
to get down as far as the electronics behind these things. We can assume that five volts
means ‘a’ signal and anything less means no signal. This ‘signal’ is not really travelling
along. It’s more like either present on a wire or not. But we’ll see this in more detail
when I show you the design of my first computer. Hehehe..

There is one final gate that I can show you. It’s the only other gate used in computer circuits.
It’s the ‘NOT’ gate. Not that it’s not a gate, its name is the ‘NOT’ gate. It is a gate.
Not, not a gate.

http://youtube.com/watch?v=Eq8xAYmOy08

Simplicity itself. Any input is reversed. A signal going in results in nothing coming out
and verse vice. There is no end to how useful this gate is when applied to digital circuits.
And you’ll certainly see this when we look at the microcode for my computer designs.
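
If videos aren’t your thing, here’s a little sketch of the three gates written as Python functions. This is purely my own illustration (the function names are mine, and real gates are transistors, not code), but it behaves exactly as described above:

Code: [Select]
# A minimal sketch of the three basic gates as Python functions.
# True stands in for 'signal present' (high), False for 'no signal' (low).

def or_gate(*inputs):
    # Output is high if at least one input is high.
    return any(inputs)

def and_gate(*inputs):
    # Output is high only when every input is high.
    return all(inputs)

def not_gate(a):
    # Any input is reversed.
    return not a

# Quick check against the descriptions above:
print(or_gate(False, False))   # False - no inputs high, no output
print(or_gate(True, False))    # True  - one input is enough for OR
print(and_gate(True, False))   # False - AND needs every input high
print(and_gate(True, True))    # True
print(not_gate(True))          # False - the inverse comes out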

These pretty animations are pretty, right? We all agree on that. What you may be wondering
though, is how do electronic ‘gates’ form a machine that can process data? A machine that
can beat you at chess? A machine that can connect to another machine, via the utilisation
of yet more machines, to a machine across the globe? This is why you’re strapped to your
seat. You will certainly try to leave the bus during this next section. Whatever magic was
there, will be stifled and possibly murdered by the boredom that is about to follow. Your
eyes will glaze over. Your brain will ask you “what have I ever done to you???”. You’ll
want to scream, just to liven things up a little. But relax. I strapped you down for a reason.
For it’s the simple things in life that are often the best. And when it comes to binary
logic, there is nothing more beautiful and succinct. Argue with me now, and later, you’ll
agree. Certainly, later, you’ll agree.

Binary logic is the paper work that makes electronic computers possible. Their operations
are designed using this form of mathematics, largely credited to George Boole who lived
in the mid part of the 19th century.

Binary logic is the description of what results from the application of operations on logic
states. Or, what happens when you try to find the truth of adding two false things together.
You’ll see this terminology crop up often. True, false, on, off, one and zero. They all
mean the same thing though. The same thing in the circuit of a computer. Either there is
a voltage, or there is not. Generally, a voltage is mapped to true, on, 1. A lack of voltage
is mapped to false, off or zero.

We combine gates in order to perform operations on data. That’s it. You can go home now!
Oh, you want to know ‘how’ we combine those gates? Well read on. The number system we use
has ten symbols, ranging from the symbol ‘0’ to the symbol ‘9’, right? There’s nothing
special about it. It’s boring and silly and I want a new one. But that’s not important right
now. Back to the decimal number system…

The decimal number system has like some kind of ‘add in’ functionality that may be applied
to any number system with any amount of symbols. This functionality is how the columnar
positions of a numeral lend weight to that numeral. The rightmost numeral has a value of
the numeral, multiplied by (10 to the power of 0). Anything to the power of zero is always
one. So this first column is just one, multiplied by whatever numeral is there. If it’s
a three, the value of the rightmost column is three.
Moving to the column left of the rightmost column, we have a different imposed ‘weight’ over
whatever numeral resides there. This time, the column multiplies its numeral by 10 to the
power of 1. In effect, we just multiply everything in the second from the right column by
ten. Pretty straightforward. There is a generalisation for how this weighted column system
applies to the decimal system:

Code: [Select]
ColumnValue EQUALS ColumnNumeral X 10^ColumnNumber

Note that the column number is a zero-based count from the right (the rightmost column is
numbered zero) to left for ‘n’ number of columns. And forgetting all that garbage, we can
see that columns moving to the left apply powers of ten increasing by a factor of ten per
left column move. When you see a number written, like say 666, you are really seeing this
system in play. Let’s go thr…

INTERJECTION: This is brain numbing boring crap, I agree, but please bear with me,
as this turns into something beautiful. And it’s all yellow.


…ough this number. The leftmost six is in column number ‘2’, if we count from the right
and start our count with zero. That means that we multiply the numeral in this column by
10^2, or 100. Giving us, 600. The column to the right of the leftmost column contains the
numeral ‘6’. This column is number one, so its power will be 10^1. We need to multiply
the numeral in this column by ten. OK, we have a total of 660 so far. I’ll let you guess
what happens for the rightmost column. It has something to do with multiplying the numeral
there by 10^0, which is ‘1’.
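
If you’d like to see that column arithmetic spelled out by a machine, here’s a tiny Python sketch of my own that walks through 666 exactly as described above (nothing official, just the formula in loop form):

Code: [Select]
# Worked example of the weighted column system for the decimal number 666.
number = "666"

total = 0
# Column numbers count from the right, starting at zero.
for column_number, numeral in enumerate(reversed(number)):
    column_value = int(numeral) * 10 ** column_number
    print(f"column {column_number}: {numeral} x 10^{column_number} = {column_value}")
    total += column_value

print("total:", total)   # 600 + 60 + 6 = 666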

The magic of numbers and computers and toast starts to come into play, RIGHT NOW. We have
just generalised our whole system of number symbols. You’ve just seen how flimsy our decimal system is,
and how we can generalise how we show amounts, any way we like. The fact that we grew up
with decimal means we ‘think’ it’s a great way to work with numbers. And I can’t see any
problem with it, but computer manufacturers could. As you may recall from earlier on in
this essay, I mentioned that logic gates are electronic circuits, on a microscopic sc…

INTERJECTION: Why all this talk of numbers and columns and crap?? Listen punk! It’s
all part of it, OK? We’re learning about how the output of an electronic circuit can be
mapped to something a person needs to know, OK??


…ale. They aren’t microscopic for the fun of it. It has to do with productivity and competition.
Computer manufacturers need to work to a supply and demand basis like all companies. This
means efficiency, and simplicity in a mind numbingly complex field. When you hear someone
mention that computers can only work with binary numbers, tell them they are wrong. Explain
to them that any number base is possible, binary just happens to translate to cheaper circuits!
Cheaper, for many reasons, as we’ll see.

Binary? That is the focus of the last part of this first essay (there’ll be more!). Try to
forget for a moment, decimal. Try to think of it as an arbitrary system for arranging symbols
to represent an amount of something. We use ten unique symbols in the decimal system. Binary
uses but two. The numeral one and the numeral zero. How do you show the number zero in the
binary number system? I think you all know the answer to this. How about this then… How
would you represent one of something, using the binary number system? Yes, the numeral one!
Easy as wetting yourself.

By now you may be thinking that there’s nothing special about the decimal number system.
You may be thinking that you can generalise the representation of the amount of something
using any symbols you like. And you can. And further to this, why use a fancy name for a
number system? Why not use the number of symbols available as the name? So for decimal,
we’d call it a ‘base ten’ number system. For binary (which has two symbols), we’d call it
a base two number system. And to take this all the way home, let’s apply our weighted column
system to the binary number system and see where that takes us.

In the base ten (decimal) number system, the exponent used to calculate the value of each
column is the number base itself. So it’s ten in decimal. In binary, the exponent is two.
So our new generalised way of looking at a weighted column amount representation scheme
is:

 
Code: [Select]
ColumnValue EQUALS ColumnNumeral X 2^ColumnNumber

…for binary numbers. The rightmost column represents a direct amount based on the numeral
there, so it can be either zero or one. The column to the left of the rightmost column has
a weight of 2^1 applied to it. This simply means that a numeral of zero appearing here will
give the column a value of zero, but a numeral of one appearing here will give the column
a value of two. This pattern continues, with the value of each column doubling as we move
one column to the left.
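
To tie the two bases together, here’s the same column loop again, this time with the base as a parameter. Again, this is just my own sketch of the idea; the function name is invented for illustration:

Code: [Select]
# The same weighted column idea, with the number base as a parameter.
def column_value_total(numerals, base):
    total = 0
    for column_number, numeral in enumerate(reversed(numerals)):
        total += int(numeral) * base ** column_number
    return total

print(column_value_total("666", 10))   # 666 in base ten
print(column_value_total("101", 2))    # 1x4 + 0x2 + 1x1 = 5
print(column_value_total("1111", 2))   # 8 + 4 + 2 + 1 = 15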

If you thought this was magic, wait until part two when I show you how to take this theory
and use it to create a machine that can add numbers! Yes. We’ll build a machine that can
dumbly take two binary numbers and come up with their sum, completely unaided by us. Is
that magic? Well it’s certainly yellow!

Nick

Re: How CPUs Work
« Reply #1 on: June 27, 2018, 07:29:53 PM »

Now we’ve looked at binary number systems and how electronic signals can be used to represent
our human notions of on/off, true/false, etc., it’s time to pull some of these ideas together
and see our first computer component. Nearly…

Before we delve into the heady world of digital electronics, I want to get some things
straight. The level of detail I’m going to show you is purely for education. It doesn’t
help you to understand the computer system as a whole, as you could never envisage the entire
system from this close up. The sole intent of this detail detour is to make you
feel better inside. It’s to help you get the magic zing flowing through your veins. You
see, the real magic begins in the electronic circuits, but once you’ve been close enough
to see it, you don’t need to worry about it again. So I’ll show you inside each item
or device, then put the lid on the box. From then on, we’ll be working with the box and
not consider what’s inside it.

The first device we’ll look at is called a ‘register’. It’s used for remembering a signal
that was sent to it. A good way to think about a register is to imagine a row of pins
stuck into a block of polystyrene foam. They are all at the same height, but if you push
on a few of them, those pins will remain deeper in the foam. The same kind of thing happens
inside a register. It’s not quite the same, but you get the idea, right?

Now this register device will have a certain number of ‘bits’ that it can handle. This
is the number of unique signals it can remember in one hit. For instance, if you plugged
eight wires into a register that could accept eight connections, you’d have an eight bit
register. If you sent an electric signal down a selection of those eight wires, the register
would be able to store the ’on’ signals in the same places as they appeared in the group
of wires. Maybe I over complicated that? It’s pretty simple though, just think of the
wires as some kind of parallel cable. The difference between the pins in polystyrene
and a real register is that a new group of signals will replace whatever was stored there
previously.
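
If it helps, here’s the ‘row of pins’ idea as a few lines of Python. It’s only a sketch of the behaviour (a real register is built from gates, as we’re about to see), but it shows the one important property: loading a new pattern replaces the old one:

Code: [Select]
# A sketch of the 'row of pins' idea: an eight bit register is just
# eight remembered signals, and loading a new group replaces the old one.
register = [0] * 8                     # eight bits, all 'off' to start

def load(register, signals):
    # Storing a new group of signals replaces whatever was there before.
    register[:] = signals

load(register, [1, 0, 1, 1, 0, 0, 0, 1])
print(register)                        # [1, 0, 1, 1, 0, 0, 0, 1]
load(register, [0, 0, 0, 0, 1, 1, 1, 1])
print(register)                        # the old pattern is gone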

It’s time to remove the lid on the register box. Looking inside, we can see lots of AND
gates, NOR gates and OR gates. They’re all hooked up in neat little circuits. Those circuits
are what we will now examine, one small step at a time.

If you take an AND gate and place a NOT gate at the end of it, it becomes a new type of
gate called a NOT AND gate. This name is condensed down to NAND. Remember back to what
the AND gate did? It accepted any number of inputs and had one output. If any one of
its inputs did not have a signal, it would not output any signal. Only when all input
signals are ‘high’ will an AND gate output a ‘high’ signal. See how I snuck a new term
in there?

Now a NOT gate is the essence of simplicity. You may recall (I can’t!) that it’s AKA an
inverter, and placing it at the output of the AND gate will invert any output from the
AND gate.

So the AND gate has two inputs in this case, and connected to the single output of the
AND gate is the NOT gate. Things are different now. For clarity, have a look at these
tables, known as ‘truth tables’. The first table represents a normal, two input AND gate.
Each column represents the state of each input and the output:

Code: [Select]
A TWO INPUT AND GATE
INPUT1    INPUT2    OUTPUT
0         0         0
0         1         0
1         0         0
1         1         1

AND as we’ve seen, only outputs a high signal if all of its inputs are high, as represented
by the last line in the above table. Now, I’ll show you the truth table for an AND gate,
with a NOT gate connected to its output:


Code: [Select]
A TWO INPUT AND GATE WITH AN INVERTER (A NOT GATE) CONNECTED TO ITS OUTPUT

INPUT1    INPUT2    OUTPUT
0         0         1
0         1         1
1         0         1
1         1         0

We can compound this arrangement into a new gate called a NAND gate. It does the same
thing, but saves space on our schematic diagrams. You can think of this new gate as NOT-AND.
Its symbol is in the above image.
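
If you want to confirm that truth table for yourself without wiring anything, here’s a throwaway Python check (my own sketch, reusing the idea from the first post):

Code: [Select]
# NAND is just an AND gate with a NOT gate on its output.
def nand_gate(a, b):
    return not (a and b)

for a in (False, True):
    for b in (False, True):
        print(int(a), int(b), int(nand_gate(a, b)))
# Prints the inverted AND pattern: outputs 1, 1, 1 and then 0.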

There is an interesting switcheroo you can perform with NOT, AND and OR gates. All of
these gates can be purchased from any electronics store, and they come on little chips,
usually with four or more gates on each chip. The legs on the chip are the inputs and
outputs for the gates within. I’m not going to go into any more detail about chips, but
it serves my example to let you know about them! Just say you were making ten NAND gates
from NOT gates and AND gates. You would be inverting the outputs of your AND gates with
the NOT gates. An inverted output results in the opposite, as you know. Now let’s assume
that you have a chip called a CMOS HEX AND or something like that. All that means is
that CMOS is the technology used to build the circuits inside the chip (complementary
metal oxide semi-conductor) and that you have six separate AND gates on that chip. Now,
you’re busily hooking up NOT gates to the outputs of your AND gates and you use up the
six AND gates from your CMOS chip. Reaching into your box of supplies, you realise you
have no more AND gates! Shock!!! All is not lost, thanks to Augustus De Morgan. He worked
out that inverting the inputs of an OR gate would allow you to use an OR gate exactly
as you would a NAND gate. Well that’s not entirely correct… You see, Mr DeMorgan did
not live to see digital electronics, but his work with binary logic gave us lucky ones
plenty of handy knowledge. Anyway, below is what your OR gates would look like with inverters
placed on each input.

The above shows the OR gate equivalent to a NAND gate. It’s up to you which one you’d
rather use.
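
You can verify De Morgan’s switcheroo in a couple of lines too. This is just an illustrative check of the equivalence, not anything from a datasheet:

Code: [Select]
# De Morgan's switcheroo: an OR gate with both inputs inverted
# behaves exactly like a NAND gate.
def nand_gate(a, b):
    return not (a and b)

def or_with_inverted_inputs(a, b):
    return (not a) or (not b)

for a in (False, True):
    for b in (False, True):
        assert nand_gate(a, b) == or_with_inverted_inputs(a, b)
print("The two circuits agree on every input combination.")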

Staying with OR gates for a bit, let’s look at what happens with a normal OR gate, just
as a little refresher. We can work our NOT wonder with OR gates also. When we apply a
NOT gate to an OR gate’s output, it becomes a NOR gate.

Code: [Select]
A TWO INPUT OR GATE

INPUT1    INPUT2    OUTPUT
0         0         0
0         1         1
1         0         1
1         1         1

A TWO INPUT OR GATE WITH AN INVERTER ON ITS OUTPUT
CAN BE SIMPLIFIED INTO A ‘NOR’ GATE (Not-OR)


Code: [Select]
INPUT1    INPUT2    OUTPUT
0         0         1
0         1         0
1         0         0
1         1         0

Pretty simple, right? As with the AND gate, the output pattern is inverted by placing
an inverter on the output. Funny that!

Deep breaths now… We’re about to tackle the problem of building a register. The register
can accept a high or low input and can remember that input. The first thing to learn
about, is called a ‘latch’. The latch is an arrangement of gates that ‘remember’ the
last input given to them, until a ‘clear’ or ‘reset’ signal is given. The most basic
latch circuit is the ‘SET/RESET’ (RS) latch and here is its schematic.

In this example, we use two NOR gates. Here is the truth table for the NOR-SR Latch:


Code: [Select]
AN SR-LATCH BUILT WITH NOR GATES

RESET    SET        Q     !Q
0        0          No change (latch holds)
0        1          1     0
1        0          0     1
1        1          Not allowed!

Hmmm. I hope this hasn’t caused anyone to stop reading! It’s really simple; even I can understand
it, and you will soon too. Firstly, we need to think about what is going on inside each
of the NOR gates. NOR gates only have an output if their inputs are all low/no signal.
So what is happening in the circuit? The first thing to note is the labels on the outputs.
They are ‘Q’ and ‘!Q’ (the correct way to write ‘not Q’ is to place a bar above Q or
a tick before it, `Q like that). Q is used to denote output in schematic diagrams;
it prevents confusing O with zero. So the outputs are always opposite from each other,
we can see that. Also, each output is driven by only one corresponding gate. Let’s run
through some inputs and trace the current flow (I’ll use S for ‘set’ and R for ‘reset’;
‘low’ means off/no signal/false):

NOTE The inverse of low is high, so don‘t think that anything with a bar over it, or
a tick mark before it means that it‘s low. It simply means that whatever is there, is
inverted.

• Input S is high, input R is low. The signal flows into the top input of NOR #2. Since
the presence of any signal produces a low output from a NOR gate, that is what happens.

• The low output from NOR #2 is sent to the bottom input of NOR #1.

• NOR #1 has two low inputs, causing its output to go high.

• The output of NOR #2 remains low, reflecting that it is always the opposite of output #1.

• The input to S now goes low… (read on)

Now we see the latching quality that causes us to call this type of circuit a latch! You
see, the key to this type of circuit is that there are really two inverters feeding
back the opposite signals to each other. Let’s continue and pick up from where we left
off, with the S input going low…

• When S goes low, NOR #1 remains high, as if to remember the previous S signal.

The reason for this becomes clear if you watch this little video:

http://www.youtube.com/watch?v=cc9VQ_rMeaQ

Notice that once the signal goes low on ‘S’, NOR #1 is still outputting a high signal,
as both of its inputs are low. This high output from NOR #1 is also going into the top
input of NOR #2, hence its output remains low.

So now it should be clear how the RESET half of the circuit got its name! You can observe
from the video that a high signal into R will cause the output of NOR #1 to fall to low,
and since Q is that output, Q falls to low too. With NOR #2 now seeing two low inputs, !Q
becomes high at the output of NOR #2. Magic!

Magic yes, but useful, not very. There are some obvious problems here. The main problem
occurs when both R and S are high. The results cannot be known beforehand. It’s known
as a ‘race’ condition. Also, this type of latch is not very useful when it comes to grouping
eight or more together to form a proper component.
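
If the video still leaves you scratching, here’s a rough software model of the NOR SR latch. It’s only my own approximation (it settles the two cross-coupled gates by re-evaluating them until they stop changing, which is a crude stand-in for real electronics), but it shows the feedback and the ‘remembering’:

Code: [Select]
# A rough model of the cross-coupled NOR SR latch.
# Q comes from NOR #1 (fed by R and !Q); !Q comes from NOR #2 (fed by S and Q).
def nor_gate(a, b):
    return not (a or b)

def settle(s, r, q, nq):
    # Re-evaluate both gates until the feedback loop stops changing.
    for _ in range(10):
        new_q = nor_gate(r, nq)
        new_nq = nor_gate(s, new_q)
        if (new_q, new_nq) == (q, nq):
            break
        q, nq = new_q, new_nq
    return q, nq

q, nq = False, True                            # start in the 'reset' state
q, nq = settle(s=True,  r=False, q=q, nq=nq)   # pulse Set high
print(q, nq)                                   # True False - latch is set
q, nq = settle(s=False, r=False, q=q, nq=nq)   # Set goes low again
print(q, nq)                                   # True False - the latch remembers!
q, nq = settle(s=False, r=True,  q=q, nq=nq)   # pulse Reset high
print(q, nq)                                   # False True - back to reset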

Before we move on, I’d just like to note that the RS-Latch can be produced using NAND
gates too. The differences are listed below:

• The race condition happens if both R and S become low, instead of high like in the NOR
version.

• To prevent the race condition, the inputs of the NAND RS latch are inverted.
That’s all for this instalment, but I think that’s a fair bit! The main thing to take
away from this is that we are building up specific functions from simple building blocks.
This first step does not yield useable results yet, but it shows the direction we will
be heading. In the next instalment, we will complete the register and see how we can
use a ‘clock’ to control it properly.

I’d suggest playing around with these gates and try building the latches yourself. You
can do this by downloading Multimedia Logic, a free program for simulating logic circuits:

http://www.softronix.com/logic.html

After playing around at your own pace, the next part will be a lot easier to swallow.

Nick

Re: How CPUs Work
« Reply #2 on: June 27, 2018, 07:34:06 PM »

We’re getting closer to being able to produce useful digital components, which is a vast
step up from knowing nothing about them in the first instalment! We have a long way
to go before the plans for a working CPU and computer system are laid out before us
though, so let’s get stuck in and look at ways we can improve our latching circuit from
part two.

Hopefully, you’ll recall how in part two we were able to make a circuit that could remember
the last input given to it, depending on certain conditions and control signals. This
circuit relied on NAND or NOR gates that fed back to each other.

We have kind of a chicken and the egg branch in the discussion now! You see, I need to
explain how to improve this circuit in order to make it more useful for our CPU purposes,
but those improvements may not make sense right away. In other words, just bear with
me until the picture begins to become clear. It will become clear, if not right away.

Our computer system is going to have a conductor (in the musical sense, not the electrical!)
that directs many various components. And that’s the key right there. The latching circuit
will form one particular component that has the ability to store the last signals given
to it, but the machine needs to do all of the setting and resetting without the aid
of a human. The conductor is of course the ‘clock’ and the clock is nothing more than
another digital circuit. We’re not ready to see how the clock works, yet, but we are
ready to see how to incorporate the conducting qualities of the clock into our register
design.

How Should Our Register Incorporate the Clock?
Well, the clock sends out a pulse of digital information. This pulse has two states,
high and low. It just keeps repeating these two states at some rate per second (measured
in Hertz (Hz)). This is why you will have heard your computer’s clock speed measured
in Gigahertz. Our clock will not be operating at that speed. We’ll be more in
the range of thousands of cycles per second! The speed is of no interest to us at this
stage though. The function is what interests us. We can add a new input to the latch
circuit, one that affects the operation of the latch depending on the high/low status
of the signal. We could have a button on the side of our computer’s box, or… use the
clock! Tick tock tick tock tick… Thanks Gwen.
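
To make the clock idea concrete, here’s a toy clock in Python: something that simply alternates between high and low at a fixed rate. It’s an illustration of the idea only, not of any particular circuit:

Code: [Select]
import time

# A toy 1 Hz clock: it alternates HIGH and LOW, half a second in each state.
def run_clock(cycles, rate_hz=1.0):
    half_period = 0.5 / rate_hz
    state = False
    for _ in range(cycles * 2):
        state = not state
        print("clock is", "HIGH" if state else "LOW")
        time.sleep(half_period)

run_clock(cycles=3)   # three full tick-tocks, then stop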

Suddenly, our latch seems to be growing to monstrous proportions! Remember that I stated
that computers are just complex connections between MANY very simple things? Well,
we begin to see that concept in practice now, with the circuit below. Don’t worry, it’s
still quite simple, but it may look a bit daunting upon first inspection.

It’s not too difficult at all really, I lied! All that’s happened is the addition of
two NAND gates and the clock input. Remember, the clock is nothing more than a circuit
that outputs alternating high then low signals; as if someone were pushing a button
repeatedly.
Think ahead a bit now, to where the same clock is connected to more than
one component. Ha ha! The possibilities!

Here is the truth table for the clocked RS Latch:

Code: [Select]
CLK     R       S       Q
0       0       0       No Change
0       0       1       No Change
0       1       0       No Change
0       1       1       No Change
1       0       0       No Change
1       0       1       Set
1       1       0       Reset
1       1       1       BAD!! (Not allowed)

And here is a video, showing what happens in the circuit during operation.
http://www.youtube.com/watch?v=iNHAirYzeJE

Note, the clock is actually taking one second to go from off, to on, and then off again. So the clock
in this video is running at one Hertz: one complete cycle per second. I’ve slowed the video down
to make it easier to see what happens.

I have added NAND gates with the clock, but AND gates would have worked too. Think about
the truth table for the NAND gate:

Code: [Select]
Input1    Input2    Q (output)
0         0         1
0         1         1
1         0         1
1         1         0     

See that any combination except all HIGH inputs results in a HIGH output? It’s the inverse
of the AND gate, as we have seen. Using AND gates in place of the NAND gates would simply
mean a LOW clock input could be used to trigger either the R or the S input. It’s up
to the designer of the circuit and you’ll see this kind of decision cropping up a lot
in digital circuits. It’s handy to be able to have a circuit’s behaviour be the opposite
of what’s expected, particularly when multiple circuits are connected.

But back to the circuit. The only difference to this ‘clocked’ version of our RS Latch
is that the setting and resetting can only occur at a specific clock output. Watch
the video carefully to see this happening. I click the Set input, but nothing changes
until the clock goes HIGH. All of a sudden, we have a device that may be synchronised
to another device. We can still decide whether to give the latch an input or a reset
signal, but the clock dictates when exactly that input will take effect.
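
In software terms, the clock gating is nothing more than an extra gate in front of each input. Here’s a behavioural sketch of my own (it models what the circuit does, not how the gates are wired):

Code: [Select]
# A sketch of the clocked RS latch: Set and Reset only get through
# to the latch while the clock is HIGH.
def clocked_rs(latched_q, clk, s, r):
    gated_s = clk and s
    gated_r = clk and r
    if gated_s and gated_r:
        raise ValueError("Not allowed! Both Set and Reset while the clock is high.")
    if gated_s:
        return True     # set
    if gated_r:
        return False    # reset
    return latched_q    # no change - the latch keeps remembering

q = False
q = clocked_rs(q, clk=False, s=True, r=False)   # Set pressed, but clock is LOW
print(q)                                        # False - nothing happens yet
q = clocked_rs(q, clk=True, s=True, r=False)    # clock goes HIGH
print(q)                                        # True - now the set takes effect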

The D Latch
Mr D Latch is a simple relation of the RS Latch. The idea is to eliminate the [R]eset
input and have only a [S]et input, which becomes known as the [D]ata input. If there’s
a high Data input, the latch is set. If not, the latch is reset. Below is an image of
a simple D Latch, built with NAND gates.

http://www.bandofgreen.com/cpu/prt3/img/NAND-DLATCH.jpg

And it’s truth table:

Code: [Select]
Data    Q (output)
0       0
1       1

And in operation, it acts like this:
http://www.youtube.com/watch?v=cY92QMIGyqQ

Simple little bugger isn’t it? Something to note about the D Latch is that there can
never be the ‘not allowed’ condition that occurs when the set input (S) and the reset
input (R) are both HIGH. The inverter or NOT gate in this circuit guarantees that the
two internal inputs are always opposite. Huh? Well, in theory, there are still two inputs,
even though only one is available to the outside world. Another thing to note about the
D Latch is that it’s nearly completely useless! It’s only passing its input straight through
to its output, which a plain piece of wire could do, and we’ve actually used a NOT gate
and a handful of NAND gates in the circuit! With the addition of the clock though, we have
something very useful:

SIDE NOTE: A Better Way To Read The Clock
The clock we have been using thus far remains HIGH or LOW for exactly half the duration
of its cycle length. Computers do not read the clock in this fashion. They take an
instantaneous ‘sampling’ of the clock state and use that instead. The effect is such
that what is really being detected is the change of the clock from one state (HIGH/LOW)
to the other. It’s easy to achieve too; it just takes a simple circuit built with a
resistor and a capacitor. Our next generation D Latch will rely on the rising ‘edge’
of the clock signal in order to operate and change its output.
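
Here’s how you might model that rising edge behaviour in software: compare the clock’s current state with its previous state and only act at the moment it has just gone from LOW to HIGH. Again, this is a behavioural sketch of my own, not the resistor-and-capacitor circuit itself:

Code: [Select]
# Modelling an edge-triggered D latch: the output only updates at the
# instant the clock changes from LOW to HIGH (the rising edge).
class EdgeTriggeredD:
    def __init__(self):
        self.q = False
        self.last_clk = False

    def tick(self, clk, d):
        rising_edge = clk and not self.last_clk
        if rising_edge:
            self.q = d          # sample Data only on the rising edge
        self.last_clk = clk
        return self.q

latch = EdgeTriggeredD()
print(latch.tick(clk=True, d=True))    # True  - rising edge, Data captured
print(latch.tick(clk=True, d=False))   # True  - clock stays high, no new edge
print(latch.tick(clk=False, d=False))  # True  - falling edge is ignored
print(latch.tick(clk=True, d=False))   # False - next rising edge captures new Data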

And in operation, things work as expected. The output reflects the data signal, but only
in synchronisation with the clock signal. In other words, things only change if the
clock changes.

http://www.youtube.com/watch?v=yo7UtNhdIHE

That wraps up this instalment, but we are really getting somewhere now. The Clocked D Latch
(with some mods!) is what we’ll use to build our first register in the next instalment. Try and
design your own in the meantime; you can use the digital logic sim I referenced in the
other instalment.