Blog Archive

Friday, August 22, 2008

Boolean Logic Works


Have you ever wondered how a computer can do something like balance a check book, or play chess, or spell-check a document? These are things that, just a few decades ago, only humans could do. Now computers do them with apparent ease. How can a "chip" made up of silicon and wires do something that seems like it requires human thought?

If you want to understand the answer to this question down at the very core, the first thing you need to understand is something called Boolean logic. Boolean logic, originally developed by George Boole in the mid-1800s, allows quite a few unexpected things to be mapped into bits and bytes. The great thing about Boolean logic is that, once you get the hang of it, Boolean logic (or at least the parts you need in order to understand the operations of computers) is outrageously simple. In this article, we will first discuss simple logic "gates," and then see how to combine them into something useful.

Simple Gates

There are three, five or seven simple gates that you need to learn about, depending on how you want to count them (you will see why in a moment). With these simple gates you can build combinations that will implement any digital component you can imagine. These gates are going to seem a little dry here, and incredibly simple, but we will see some interesting combinations in the following sections that will make them a lot more inspiring...

The simplest possible gate is called an "inverter," or a NOT gate. It takes one bit as input and produces as output its opposite. The table below shows a logic table for the NOT gate and the normal symbol for it in circuit diagrams:

NOT Gate
A Q
0 1
1 0

You can see in this figure that the NOT gate has one input called A and one output called Q ("Q" is used for the output because if you used "O," you would easily confuse it with zero). The table shows how the gate behaves. When you apply a 0 to A, Q produces a 1. When you apply a 1 to A, Q produces a 0. Simple.

The AND gate performs a logical "and" operation on two inputs, A and B:

AND Gate
A B Q
0 0 0
0 1 0
1 0 0
1 1 1

The idea behind an AND gate is, "If A AND B are both 1, then Q should be 1." You can see that behavior in the logic table for the gate. You read this table row by row, like this:

AND Gate
A B Q
0 0 0 If A is 0 AND B is 0, Q is 0.
0 1 0 If A is 0 AND B is 1, Q is 0.
1 0 0 If A is 1 AND B is 0, Q is 0.
1 1 1 If A is 1 AND B is 1, Q is 1.

The next gate is an OR gate. Its basic idea is, "If A is 1 OR B is 1 (or both are 1), then Q is 1."

OR Gate
A B Q
0 0 0
0 1 1
1 0 1
1 1 1
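These three truth tables are easy to check in code. Here is a minimal sketch in Python; the function names and the 0/1 integer encoding are our own choices, not anything standard:

```python
# The three basic gates, with bits represented as the integers 0 and 1.

def NOT(a):
    return 1 - a          # 0 becomes 1, 1 becomes 0

def AND(a, b):
    return a & b          # 1 only when both inputs are 1

def OR(a, b):
    return a | b          # 1 when either input (or both) is 1

# Reproduce the logic tables by trying every input combination:
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", AND(a, b), OR(a, b))
```

Each printed row matches the corresponding row of the AND and OR tables above.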

Those are the three basic gates (that's one way to count them). It is quite common to recognize two others as well: the NAND and the NOR gate. These two gates are simply combinations of an AND or an OR gate with a NOT gate. If you include these two gates, then the count rises to five. Here's the basic operation of NAND and NOR gates -- you can see they are simply inversions of AND and OR gates:

NOR Gate
A B Q
0 0 1
0 1 0
1 0 0
1 1 0

NAND Gate
A B Q
0 0 1
0 1 1
1 0 1
1 1 0

The final two gates that are sometimes added to the list are the XOR and XNOR gates, also known as "exclusive or" and "exclusive nor" gates, respectively. Here are their tables:

XOR Gate
A B Q
0 0 0
0 1 1
1 0 1
1 1 0

XNOR Gate
A B Q
0 0 1
0 1 0
1 0 0
1 1 1

The idea behind an XOR gate is, "If either A OR B is 1, but NOT both, Q is 1." The reason why XOR might not be included in a list of gates is that you can implement it easily using the original three gates listed. Here is one implementation:

If you try all four different patterns for A and B and trace them through the circuit, you will find that Q behaves like an XOR gate. Since there is a well-understood symbol for XOR gates, it is generally easier to think of XOR as a "standard gate" and use it in the same way as AND and OR in circuit diagrams.
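One common way to build XOR from the three basic gates is Q = (A AND NOT B) OR (NOT A AND B); the circuit in the figure may be arranged differently, but any correct combination produces the same table. A quick Python sketch (function names are our own) confirms the behavior:

```python
def NOT(a): return 1 - a
def AND(a, b): return a & b
def OR(a, b): return a | b

def XOR(a, b):
    # "A or B, but not both": true when exactly one input is 1
    return OR(AND(a, NOT(b)), AND(NOT(a), b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", XOR(a, b))
```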

Simple Adders

In the article on bits and bytes, you learned about binary addition. In this section, you will learn how you can create a circuit capable of binary addition using the gates described in the previous section.

Let's start with a single-bit adder. Let's say that you have a project where you need to add single bits together and get the answer. The way you would start designing a circuit for that is to first look at all of the logical combinations. You might do that by looking at the following four sums:

  0     0     1     1
+ 0   + 1   + 0   + 1
---   ---   ---   ---
  0     1     1    10

That looks fine until you get to 1 + 1. In that case, you have that pesky carry bit to worry about. If you don't care about carrying (because this is, after all, a 1-bit addition problem), then you can see that you can solve this problem with an XOR gate. But if you do care, then you might rewrite your equations to always include 2 bits of output, like this:

  0     0     1     1
+ 0   + 1   + 0   + 1
---   ---   ---   ---
 00    01    01    10

From these equations you can form the logic table:

1-bit Adder with Carry-Out
A B Q CO
0 0 0 0
0 1 1 0
1 0 1 0
1 1 0 1

By looking at this table you can see that you can implement Q with an XOR gate and CO (carry-out) with an AND gate. Simple.
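That observation translates directly into code. Here is a sketch of this 1-bit "half adder" in Python, using the `^` (XOR) and `&` (AND) operators; the function name is our own:

```python
def half_adder(a, b):
    """Add two 1-bit values; return (sum, carry_out)."""
    q = a ^ b    # the XOR gate produces the sum bit
    co = a & b   # the AND gate produces the carry-out
    return q, co

# Reproduce the table above:
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", half_adder(a, b))
```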

What if you want to add two 8-bit bytes together? This becomes slightly harder. The easiest solution is to modularize the problem into reusable components and then replicate components. In this case, we need to create only one component: a full binary adder.

The difference between a full adder and the previous adder we looked at is that a full adder accepts an A and a B input plus a carry-in (CI) input. Once we have a full adder, then we can string eight of them together to create a byte-wide adder and cascade the carry bit from one adder to the next.


Full Adders

The logic table for a full adder is slightly more complicated than the tables we have used before, because now we have 3 input bits. It looks like this:

One-bit Full Adder with Carry-In and Carry-Out
CI A B Q CO
0 0 0 0 0
0 0 1 1 0
0 1 0 1 0
0 1 1 0 1
1 0 0 1 0
1 0 1 0 1
1 1 0 0 1
1 1 1 1 1

There are many different ways that you might implement this table. I am going to present one method here that has the benefit of being easy to understand. If you look at the Q bit, you can see that the top 4 bits are behaving like an XOR gate with respect to A and B, while the bottom 4 bits are behaving like an XNOR gate with respect to A and B. Similarly, the top 4 bits of CO are behaving like an AND gate with respect to A and B, and the bottom 4 bits behave like an OR gate. Taking those facts, the following circuit implements a full adder:

This definitely is not the most efficient way to implement a full adder, but it is extremely easy to understand and trace through the logic using this method. If you are so inclined, see what you can do to implement this logic with fewer gates.
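The idea described above, where CI selects between two pairs of gates, can be written out directly. In this Python sketch, CI chooses between the XOR/AND pair (the top four rows of the table) and the XNOR/OR pair (the bottom four rows); the function name is our own:

```python
def full_adder(ci, a, b):
    """One-bit full adder, structured the way the text describes."""
    if ci == 0:
        q = a ^ b          # XOR of A and B (top four rows of the table)
        co = a & b         # AND of A and B
    else:
        q = 1 - (a ^ b)    # XNOR of A and B (bottom four rows)
        co = a | b         # OR of A and B
    return q, co

# Reproduce the full-adder table:
for ci in (0, 1):
    for a in (0, 1):
        for b in (0, 1):
            print(ci, a, b, "->", full_adder(ci, a, b))
```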

Now we have a piece of functionality called a "full adder." What a computer engineer then does is "black-box" it so that he or she can stop worrying about the details of the component. A black box for a full adder would look like this:

With that black box, it is now easy to draw a 4-bit full adder:

In this diagram the carry-out from each bit feeds directly into the carry-in of the next bit over. A 0 is hard-wired into the initial carry-in bit. If you input two 4-bit numbers on the A and B lines, you will get the 4-bit sum out on the Q lines, plus 1 additional bit for the final carry-out. You can see that this chain can extend as far as you like, through 8, 16 or 32 bits if desired.
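The ripple-carry chain is easy to simulate. The sketch below is Python, not a description of any real hardware: it uses the standard sum/carry expressions Q = A XOR B XOR CI and CO = (A AND B) OR (CI AND (A XOR B)) for each full adder, and the bit lists run least-significant bit first:

```python
def full_adder(ci, a, b):
    q = a ^ b ^ ci                  # sum bit
    co = (a & b) | (ci & (a ^ b))   # carry-out
    return q, co

def ripple_carry_add(a_bits, b_bits):
    """Chain one full adder per bit; each carry-out feeds the next
    carry-in. The initial carry-in is hard-wired to 0, as in the diagram."""
    carry = 0
    q_bits = []
    for a, b in zip(a_bits, b_bits):
        q, carry = full_adder(carry, a, b)
        q_bits.append(q)
    return q_bits, carry            # sum bits plus the final carry-out

# 6 + 7 = 13, with bits listed least-significant first:
q, co = ripple_carry_add([0, 1, 1, 0], [1, 1, 1, 0])
print(q, co)    # [1, 0, 1, 1] 0 -- i.e. binary 1101 = 13, no overflow
```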

The 4-bit adder we just created is called a ripple-carry adder. It gets that name because the carry bits "ripple" from one adder to the next. This implementation has the advantage of simplicity but the disadvantage of speed problems. In a real circuit, gates take time to switch states (the time is on the order of nanoseconds, but in high-speed computers nanoseconds matter). So 32-bit or 64-bit ripple-carry adders might take 100 to 200 nanoseconds to settle into their final sum because of carry ripple. For this reason, engineers have created more advanced adders called carry-lookahead adders. The number of gates required to implement carry-lookahead is large, but the settling time for the adder is much better.


Flip Flops

One of the more interesting things that you can do with Boolean gates is to create memory with them. If you arrange the gates correctly, they will remember an input value. This simple concept is the basis of RAM (random access memory) in computers, and also makes it possible to create a wide variety of other useful circuits.

Memory relies on a concept called feedback. That is, the output of a gate is fed back into the input. The simplest possible feedback circuit using two inverters is shown below:

If you follow the feedback path, you can see that if Q happens to be 1, it will always be 1. If it happens to be 0, it will always be 0. Since it's nice to be able to control the circuits we create, this one doesn't have much use -- but it does let you see how feedback works.

It turns out that in "real" circuits, you can actually use this sort of simple inverter feedback approach. A more useful feedback circuit using two NAND gates is shown below:

This circuit has two inputs (R and S) and two outputs (Q and Q'). Because of the feedback, its logic table is a little unusual compared to the ones we have seen previously:

R S   Q  Q'
0 0   Illegal
0 1   1  0
1 0   0  1
1 1   Remembers

What the logic table shows is that:

  • If R and S are opposites of one another, then Q follows S and Q' is the inverse of Q.
  • If both R and S are switched to 1 simultaneously, then the circuit remembers what was previously presented on R and S.
There is also the funny illegal state. In this state, R and S both go to 0, which has no value in the memory sense. Because of the illegal state, you normally add a little conditioning logic on the input side to prevent it, as shown here:

In this circuit, there are two inputs (D and E). You can think of D as "Data" and E as "Enable." If E is 1, then Q will follow D. If E changes to 0, however, Q will remember whatever was last seen on D. A circuit that behaves in this way is generally referred to as a flip-flop.
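Behaviorally, this D/E latch can be sketched in a few lines of Python (the class and method names are our own invention, not standard terminology):

```python
class DLatch:
    """While E (enable) is 1, Q follows D; when E drops to 0,
    Q remembers whatever was last seen on D."""
    def __init__(self):
        self.q = 0

    def update(self, d, e):
        if e == 1:
            self.q = d   # transparent: Q follows D
        return self.q    # when e is 0, Q simply holds its value

latch = DLatch()
latch.update(d=1, e=1)   # enabled: Q follows D and becomes 1
latch.update(d=0, e=0)   # disabled: D changes, but Q stays 1
print(latch.q)           # 1
```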

The J-K Flip-Flop

A very common form of flip-flop is the J-K flip-flop. It is unclear, historically, where the name "J-K" came from, but it is generally represented in a black box like this:

In this diagram, P stands for "Preset," C stands for "Clear" and Clk stands for "Clock." The logic table looks like this:

P C Clk     J K   Q Q'
1 1 1-to-0  1 0   1 0
1 1 1-to-0  0 1   0 1
1 1 1-to-0  1 1   Toggles
1 0 X       X X   0 1
0 1 X       X X   1 0

Here is what the table is saying: First, Preset and Clear override J, K and Clk completely. So if Preset goes to 0, then Q goes to 1; and if Clear goes to 0, then Q goes to 0 no matter what J, K and Clk are doing. However, if both Preset and Clear are 1, then J, K and Clk can operate. The 1-to-0 notation means that J and K are latched at the low-going edge of the clock (the transition from 1 to 0): if J and K are opposites at that moment, Q takes on the value of J. However, if both J and K happen to be 1 at the low-going edge, then Q simply toggles. That is, Q changes from its current state to the opposite state.

You might be asking yourself right now, "What in the world is that good for?" It turns out that the concept of "edge triggering" is very useful. The fact that J-K flip-flop only "latches" the J-K inputs on a transition from 1 to 0 makes it much more useful as a memory device. J-K flip-flops are also extremely useful in counters (which are used extensively when creating a digital clock). Here is an example of a 4-bit counter using J-K flip-flops:

The outputs for this circuit are A, B, C and D, and they represent a 4-bit binary number. Into the clock input of the left-most flip-flop comes a signal changing from 1 to 0 and back to 1 repeatedly (an oscillating signal). The counter will count the low-going edges it sees in this signal. That is, every time the incoming signal changes from 1 to 0, the 4-bit number represented by A, B, C and D will increment by 1. So the count will go from 0 to 15 and then cycle back to 0. You can add as many bits as you like to this counter and count anything you like. For example, if you put a magnetic switch on a door, the counter will count the number of times the door is opened and closed. If you put an optical sensor on a road, the counter could count the number of cars that drive by.
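This counting behavior is easy to simulate. In the Python sketch below (names are our own), each flip-flop has J and K tied to 1, so it toggles on every 1-to-0 edge of its clock input, and each stage's Q serves as the clock for the next stage:

```python
class ToggleFlipFlop:
    """A J-K flip-flop with J = K = 1: Q toggles on each 1-to-0 clock edge."""
    def __init__(self):
        self.q = 0
        self.prev_clk = 0

    def clock(self, clk):
        if self.prev_clk == 1 and clk == 0:   # low-going edge detected
            self.q = 1 - self.q
        self.prev_clk = clk
        return self.q

# Four stages; each stage's output clocks the next (a ripple counter).
stages = [ToggleFlipFlop() for _ in range(4)]

def pulse():
    """One full cycle of the incoming oscillating signal: 1, then 0."""
    for clk in (1, 0):
        signal = clk
        for ff in stages:
            signal = ff.clock(signal)

for _ in range(5):
    pulse()
count = sum(ff.q << i for i, ff in enumerate(stages))
print(count)   # 5 low-going edges counted
```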

Another use of a J-K flip-flop is to create an edge-triggered latch, as shown here:

In this arrangement, the value on D is "latched" when the clock edge goes from low to high. Latches are extremely important in the design of things like central processing units (CPUs) and peripherals in computers.

Implementing Gates

In the previous sections we saw that, by using very simple Boolean gates, we can implement adders, counters, latches and so on. That is a big achievement, because not so long ago human beings were the only ones who could do things like add two numbers together. With a little work, it is not hard to design Boolean circuits that implement subtraction, multiplication, division... You can see that we are not that far away from a pocket calculator. From there, it is not too far a jump to the full-blown CPUs used in computers.

So how might we implement these gates in real life? Mr. Boole came up with them on paper, and on paper they look great. To use them, however, we need to implement them in physical reality so that the gates can perform their logic actively. Once we make that leap, then we have started down the road toward creating real computation devices.

The easiest way to understand the physical implementation of Boolean logic is to use relays. This is, in fact, how the very first computers were implemented. No one implements computers with relays anymore -- today, people use sub-microscopic transistors etched onto silicon chips. These transistors are incredibly small and fast, and they consume very little power compared to a relay. However, relays are incredibly easy to understand, and they can implement Boolean logic very simply. Because of that simplicity, you will be able to see that mapping from "gates on paper" to "active gates implemented in physical reality" is possible and straightforward. Performing the same mapping with transistors is just as easy.

Let's start with an inverter. Implementing a NOT gate with a relay is easy: What we are going to do is use voltages to represent bit states. We will define a binary 1 to be 6 volts and a binary 0 to be zero volts (ground). Then we will use a 6-volt battery to power our circuits. Our NOT gate will therefore look like this:

[If this figure makes no sense to you, please read How Relays Work for an explanation.]

You can see in this circuit that if you apply zero volts to A, then you get 6 volts out on Q; and if you apply 6 volts to A, you get zero volts out on Q. It is very easy to implement an inverter with a relay!

It is similarly easy to implement an AND gate with two relays:

Here you can see that if you apply 6 volts to A and B, Q will have 6 volts. Otherwise, Q will have zero volts. That is exactly the behavior we want from an AND gate. An OR gate is even simpler -- just hook two wires for A and B together to create an OR. You can get fancier than that if you like and use two relays in parallel.

You can see from this discussion that you can create the three basic gates -- NOT, AND and OR -- from relays. You can then hook those physical gates together using the logic diagrams shown above to create a physical 8-bit ripple-carry adder. If you use simple switches to apply A and B inputs to the adder and hook all eight Q lines to light bulbs, you will be able to add any two numbers together and read the results on the lights ("light on" = 1, "light off" = 0).

Boolean logic in the form of simple gates is very straightforward. From simple gates you can create more complicated functions, like addition. Physically implementing the gates is possible and easy.


From the discussion above, you know that digital devices depend on Boolean gates. You also know that one way to implement gates involves relays. However, no modern computer uses relays -- it uses "chips."

What if you want to experiment with Boolean gates and chips? What if you would like to build your own digital devices? It turns out that it is not that difficult. In this article, you will see how you can experiment with all of the gates discussed above. We will talk about where you can get parts, how you can wire them together, and how you can see what they are doing. In the process, you will open the door to a whole new universe of technology.

Setting the Stage

Earlier, we looked at seven fundamental gates. These gates are the building blocks of all digital devices. We also saw how to combine these gates into higher-level functions, such as full adders. If you would like to experiment with these gates so you can try things out yourself, the easiest way to do it is to purchase something called TTL chips and quickly wire circuits together on a device called a solderless breadboard. Let's talk a little bit about the technology and the process so you can actually try it out!

If you look back at the history of computer technology, you find that all computers are designed around Boolean gates. The technologies used to implement those gates, however, have changed dramatically over the years. The very first electronic gates were created using relays. These gates were slow and bulky. Vacuum tubes replaced relays. Tubes were much faster but they were just as bulky, and they were also plagued by the problem that tubes burn out (like light bulbs). Once transistors were perfected (transistors were invented in 1947), computers started using gates made from discrete transistors. Transistors had many advantages: high reliability, low power consumption and small size compared to tubes or relays. These transistors were discrete devices, meaning that each transistor was a separate device. Each one came in a little metal can about the size of a pea with three wires attached to it. It might take three or four transistors and several resistors and diodes to create a gate.

In the early 1960s, integrated circuits (ICs) were invented. Transistors, resistors and diodes could be manufactured together on silicon "chips." This discovery gave rise to SSI (small scale integration) ICs. An SSI IC typically consists of a 3-mm-square chip of silicon on which perhaps 20 transistors and various other components have been etched. A typical chip might contain four or six individual gates. These chips shrank the size of computers by a factor of about 100 and made them much easier to build.

As chip manufacturing techniques improved, more and more transistors could be etched onto a single chip. This led to MSI (medium scale integration) chips containing simple components, such as full adders, made up of multiple gates. Then LSI (large scale integration) allowed designers to fit all of the components of a simple microprocessor onto a single chip. The 8080 processor, released by Intel in 1974, was the first commercially successful single-chip microprocessor. It was an LSI chip that contained 4,800 transistors. VLSI (very large scale integration) has steadily increased the number of transistors ever since. The first Pentium processor was released in 1993 with 3.2 million transistors, and current chips can contain up to 20 million transistors.

In order to experiment with gates, we are going to go back in time a bit and use SSI ICs. These chips are still widely available and are extremely reliable and inexpensive. You can build anything you want with them, one gate at a time. The specific ICs we will use are of a family called TTL (Transistor Transistor Logic, named for the specific wiring of gates on the IC). The chips we will use are from the most common TTL series, called the 7400 series. There are perhaps 100 different SSI and MSI chips in the series, ranging from simple AND gates up to complete ALUs (arithmetic logic units).


The 7400-series chips are housed in DIPs (dual inline packages). As pictured on the right, a DIP is a small plastic package with 14, 16, 20 or 24 little metal leads protruding from it to provide connections to the gates inside. The easiest way to construct something from these gates is to place the chips on a solderless breadboard. The breadboard lets you wire things together simply by plugging pieces of wire into connection holes on the board.


A solderless breadboard

All electronic gates need a source of electrical power. TTL gates use 5 volts for operation. The chips are fairly particular about this voltage, so we will want to use a clean, regulated 5-volt power supply whenever working with TTL chips. Certain other chip families, such as the 4000 series of CMOS chips, are far less particular about the voltages they use. CMOS chips have the additional advantage that they use much less power. However, they are very sensitive to static electricity, and that makes them less reliable unless you have a static-free environment to work in. Therefore, we will stick with TTL here.


Assembling Your Equipment

In order to play with TTL gates, you must have several pieces of equipment. Here's a list of what you will need to purchase:
  • A breadboard
  • A volt-ohm meter (also known as a multimeter)
  • A logic probe (optional)
  • A regulated 5-volt power supply
  • A collection of TTL chips to experiment with
  • Several LEDs (light emitting diodes) to see outputs of the gates
  • Several resistors for the LEDs
  • Some wire (20 to 28 gauge) to hook things together
These parts together might cost between $40 and $60, depending on where you get them.

Let's walk through a few details on these parts to make you more familiar with them:

  • As described on the previous page, a breadboard is a device that makes it easy to wire up your circuits.

  • A volt-ohm meter lets you measure voltage and current easily. We will use it to make sure that our power supply is producing the right voltage.

  • The logic probe is optional. It makes it easy to test the state (1 or 0) of a wire, but you can do the same thing with an LED.

  • Of the parts described above, all are easy to find except the 5-volt power supply. No one seems to sell a simple, cheap 5-volt regulated power supply. You therefore have two choices. You can either buy a surplus power supply from Jameco (for something like a video game) and use the 5-volt supply from it, or you can use a little power-cube transformer and then build the regulator yourself. We will talk through both options below.


    A resistor and an LED
  • An LED (light emitting diode) is a mini light bulb. You use LEDs to see the output of a gate.

  • We will use the resistors to protect the LEDs. If you fail to use the resistors, the LEDs will burn out immediately.

This equipment is not the sort of stuff you are going to find at the corner store. However, it is not hard to obtain these parts. You have a few choices when trying to purchase the components listed above:

  1. Radio Shack

  2. A local electronics parts store - Most major cities have electronics parts stores, and many cities are blessed with good surplus electronics stores. If you can find a good surplus store in your area that caters to people building their own stuff, then you have found a goldmine.

  3. A mail-order house like Jameco - Jameco has been in business for decades, has a good inventory and good prices. (Be sure to download their PDF catalog or get a paper catalog from them -- it makes it much easier to traverse the Web site.)
The following table shows you what you need to buy, with Jameco part numbers listed.

Part                                     Jameco #
Breadboard                               20722
Volt-ohm meter                           119212
Logic probe (optional)                   149930
Regulated 5-volt power supply            See below
7400 (NAND gates)                        48979
7402 (NOR gates)                         49015
7404 (NOT gates)                         49040
7408 (AND gates)                         49146
7432 (OR gates)                          50235
7486 (XOR gates)                         50665
5 to 10 LEDs                             94529*
5 to 10 330-ohm resistors                30867
Wire (20 to 28 gauge)                    36767

For the Power Supply (optional)
(See next section for details)
Part                                         Jameco #
Transformer (7 to 12 volts, 300 mA)          149964
7805 5-volt voltage regulator (TO-220 case)  51262
2 470-microfarad electrolytic capacitors     93817

Notes

  • *Jameco also has "assorted LEDs" (or grab bags) that are much cheaper on a per-LED basis. Look around and see what's available. This is one place where a surplus electronics shop will have much better prices.

  • If you are shopping at Jameco, you may want to get two or three of each chip just in case -- they only cost about 30 cents each. You might also want to purchase an extra 7805 or two.

  • You will also need a pair of wire cutters and wire strippers. In a pinch, you can use scissors and your fingernails, but having the proper tool makes it easier. You can get wire cutters and wire strippers at Jameco, Wal-mart, Radio Shack, and tons of other places. I also find that a small pair of needle nose pliers is helpful at times.

The Power Supply

You will definitely need a regulated 5-volt power supply to work with TTL chips. As mentioned previously, neither Radio Shack nor Jameco seems to offer a standard, inexpensive 5-volt regulated power supply. One option you have is to buy from Jameco something like part number 116089. This is a 5-volt power supply from an old Atari video game. If you look in the Jameco catalog, you will find that they have about 20 different surplus power supplies like this, producing all sorts of voltages and amperages. You need 5 volts at a minimum of 0.3 amps (300 milliamps) and no more than 2 amps, so do not purchase more power supply than you need. What you can do is buy the power supply, then cut off the connector and get access to the 5-volt and ground wires. That will work fine, and it is probably the easiest path. You can use your volt meter (see below) to make sure the power supply produces the voltage you need.

Your alternative is to build a 5-volt supply from a little power-cube transformer. What you need is a transformer that produces 7 to 12 DC volts at 100 milliamps or more. Note that:

  • The transformer MUST produce DC voltage.
  • It MUST produce 7 to 12 volts.
  • It MUST produce 100 milliamps (0.1 amps) or more.
You may have an old one lying around that you can use -- read the imprint on the cover and make sure it meets all three requirements. If not, you can purchase a transformer from Radio Shack or Jameco.

Radio Shack sells a 9-volt 300-milliamp transformer (part number 273-1455). Jameco has a 7.5-volt 300-milliamp model (part number 149964). Clip the connector off the transformer and separate the two wires. Strip about a centimeter of insulation off both wires. Now plug the transformer in (once it is plugged in, NEVER let the two wires from the transformer touch one another or you are likely to burn out the transformer and ruin it). Use your volt meter (see below) to measure the voltage. You want to make sure that the transformer is producing approximately the stated voltage (it may be high by as much as a factor of two -- that is okay). Your transformer is acting like a battery for you, so you also want to determine which wire is the negative and which is the positive. Hook the black and red leads of the volt meter up to the transformer's wires randomly and see if the voltage measured is positive or negative. If it is negative, reverse the leads. Now you know that the wire to which the black lead is attached is the negative (ground) wire, while the other is the positive wire.

Using a Volt-Ohm Meter
A volt-ohm meter (multimeter) measures voltage, current and resistance. It has two "leads" (wires), one black and one red. What we want to do with the meter right now is learn how to measure voltage. To do this, find a AA, C or D battery to play with (not a dead one). We will use it as a voltage source.

Every meter is different, but in general, here are the steps to get ready to measure a battery's voltage:

  1. Take your black test lead and insert it in the hole marked (depends on the meter) "Common," "Com," "Ground," "Gnd" or "-" (minus).
  2. Take your red test lead and insert it in the hole marked (depends on the meter) "Volts," "V," "Pos" or "+" (plus). Some meters have multiple holes for the red lead -- make sure you use the one for volts.
  3. Turn the dial to the "DC Volts" section. There will usually be multiple voltage ranges available in this section -- on my meter, the ranges are 2.5 volts, 50 volts, 250 volts and 1,000 volts (fancy auto-ranging meters may set the range for you automatically). Your meter will have similar ranges. The battery will have a voltage of about 1.5 volts, so find the closest range greater than 1.5 volts. In my case, that is 2.5 volts.
Now, hold the black lead to the negative terminal of the battery and the red lead to the positive terminal. You should be able to read something close to 1.5 volts off the meter. It is important that you hook the black lead up to negative and the red lead up to positive and stay in the habit of doing that. Now you can use the meter to test your power supply, as well. Change the voltage range if necessary and then connect the black lead to ground and the red lead to what you presume to be the positive 5-volt wire. The meter should read 5 volts.

Building the Regulator

To build the regulator, you need three parts:
  • A 7805 5-volt voltage regulator in a TO-220 case (Radio Shack part number 276-1770)

  • Two electrolytic capacitors, anywhere between 100 and 1,000 microfarads (typical Radio Shack part number 272-958)

An electrolytic capacitor
The 7805 takes in a voltage between 7 and 30 volts and regulates it down to exactly 5 volts. The first capacitor takes out any ripple coming from the transformer so that the 7805 is receiving a smooth input voltage, and the second capacitor acts as a load balancer to ensure consistent output from the 7805.

The 7805 has three leads. If you look at the 7805 from the front (the side with printing on it), the three leads are, from left to right, input voltage (7 to 30 volts), ground, and output voltage (5 volts).


To connect the regulator to the transformer, you can use this configuration:



The two capacitors are represented by parallel lines. The "+" sign indicates that electrolytic capacitors are polarized: There is a positive and a negative terminal on an electrolytic capacitor (one of which will be marked). You need to make sure you get the polarity right when you install the capacitor.

You can build this regulator on your breadboard. To do this, you need to understand how a breadboard is internally wired. The following figure shows you the wiring:


On the outer edges of the breadboard are two lines of terminals running the length of the board. All of these terminals are internally connected. Typically, you run +5 volts down one of them and ground down the other. Down the center of the board is a channel. On either side of the channel are sets of five interconnected terminals. You can use your volt-ohm meter to see the interconnections. Set the meter's dial to its ohm setting, and then stick wires at different points in the breadboard (the test leads for the meter are likely too thick to fit in the breadboard's holes).

In the ohm setting, the meter measures resistance. Resistance will be zero if there is a connection between two points (touch the leads together to see this), and infinite if there is no connection (hold the leads apart to see this). You will find that points on the board really are interconnected as shown in the diagram. Another way to see the connections is to pull back the sticker on the back of the breadboard a bit and see the metal connectors.

Now connect the parts for your regulator:

  1. Connect the ground wire of the transformer to one of the long outer strips on the breadboard.
  2. Plug the 7805 into three of the five-hole rows.
  3. Connect ground from the terminal strip to the middle lead of the 7805 with a wire -- simply cut a short piece of wire, strip off both ends and plug them in.
  4. Connect the positive wire from the transformer to the left lead (input) of the 7805.
  5. Connect a capacitor from the left lead of the 7805 to ground, paying attention to the polarity.
  6. Connect the 5-volt lead of 7805 to the other long outer terminal strip on the breadboard.
  7. Connect the second capacitor between the 5-volt and ground strips.
You have created your regulator. It might look like this when you are done (two views):




In both of the above figures, the lines from the transformer come in from the left. You can see the ground line of the transformer connected directly into the ground strip running the length of the board at the bottom. The top strip supplies +5 volts and is connected directly to the +5 pin of the 7805. The left capacitor filters the transformer voltage, while the right capacitor filters the +5 volts produced by the 7805. The LED connects between the +5 and ground strips, through the resistor, and lets you know when the power supply is "on."

Plug in the transformer and measure the input and output voltage of the 7805. You should see exactly 5 volts coming out of the 7805, and whatever voltage your transformer delivers going in. If you do not, then immediately disconnect the transformer and do the following:

  • Pull out the capacitors. Plug the transformer back in for a moment and see if that changed anything.
  • Make sure the ground wire and positive wire from the transformer are not reversed (if they are, it is likely the 7805 is very hot, and possibly fried).
  • Make sure the transformer is producing any voltage at all by disconnecting it and checking it with your volt meter. See the previous page to learn how to do this.

Once you see 5 volts coming out of the regulator, you can confirm that it is working by connecting an LED to it. You need to connect an LED and a resistor in series -- something that is easy to do on your breadboard. You must use the resistor or the LED will burn out immediately. A good value for the resistor is 330 ohms, although anything between 200 and 500 ohms will work fine. LEDs, being diodes, have a polarity, so if your LED does not light, try reversing the leads and see if that helps.

It might seem like we've had to go to a tremendous amount of trouble just to get the power supply wired up and working. But you've learned a couple of things in the process. Now we can experiment with Boolean gates!


Playing with Boolean Gates

If you used the table on the previous page to order your parts, you should have six different chips containing six different types of gates:
  • 7400 - NAND (four gates per chip)
  • 7402 - NOR (four gates per chip)
  • 7404 - NOT (six gates per chip)
  • 7408 - AND (four gates per chip)
  • 7432 - OR (four gates per chip)
  • 7486 - XOR (four gates per chip)
Inside the chips, things look like this:




Let's start with a 7408 AND chip. If you look at the chip, there will normally be a dot at pin 1, or an indentation at the pin 1 end of the chip, or some other marking to indicate pin 1. Push the chip into the breadboard so it straddles the center channel. You can see from the diagrams that on all chips, pin 7 must connect to ground and pin 14 must connect to +5 volts. So connect those two pins appropriately. (If you connect them backward you will burn the chip out, so don't connect them backward. If you happen to burn a chip out accidentally, throw it away so you do not confuse it with your good parts.) Now connect an LED and resistor between pin 3 of the chip and ground. The LED should light. If not, reverse the LED so it lights. Your IC should look like this:


In this figure, the chip is receiving +5 volts on pin 14 (red wire) and ground on pin 7 (black wire). The resistor leaves pin 3 and connects to the LED, which is also connected to ground. Connect wires from +5 and ground to the gate's A and B inputs to exercise the gate.

Here is what is happening. In TTL, +5 represents a binary "1" and ground represents a binary "0." If an input pin to a gate is not connected to anything, it "floats high," meaning the gate makes an assumption that there is a 1 on the pin. So the AND gate should be seeing 1s on both the A and B inputs, meaning that the output at pin 3 is delivering 5 volts. So the LED lights. If you ground either pin 1 or 2 or both on the chip, the LED will extinguish. This is the standard behavior for an AND gate.

Try out the other gates by connecting them on your breadboard and see that they all behave according to the logic tables in the Boolean logic article. Then try wiring up something more complicated. For example, wire up the XOR gate, or the Q bit of the full adder, and see that they behave as expected.


Bits and Bytes Work


Ones and zeros represent bits, which are grouped into bytes, because computers use a base-2 system.

Both RAM and hard disk capacities are measured in bytes, as are file sizes when you examine them in a file viewer.

You might hear an advertisement that says, "This computer has a 32-bit Pentium processor with 64 megabytes of RAM and 2.1 gigabytes of hard disk space." In this article, we will discuss bits and bytes so that you understand exactly what a claim like that means.

Decimal Numbers
The easiest way to understand bits is to compare them to something you know: digits. A digit is a single place that can hold numerical values between 0 and 9. Digits are normally combined together in groups to create larger numbers. For example, 6,357 has four digits. It is understood that in the number 6,357, the 7 is filling the "1s place," while the 5 is filling the 10s place, the 3 is filling the 100s place and the 6 is filling the 1,000s place. So you could express things this way if you wanted to be explicit:

(6 * 1000) + (3 * 100) + (5 * 10) + (7 * 1) = 6000 + 300 + 50 + 7 = 6357

Another way to express it would be to use powers of 10. Assuming that we are going to represent the concept of "raised to the power of" with the "^" symbol (so "10 squared" is written as "10^2"), another way to express it is like this:

(6 * 10^3) + (3 * 10^2) + (5 * 10^1) + (7 * 10^0) = 6000 + 300 + 50 + 7 = 6357

What you can see from this expression is that each digit is a placeholder for the next higher power of 10, starting in the first digit with 10 raised to the power of zero.

That should all feel pretty comfortable -- we work with decimal digits every day. The neat thing about number systems is that there is nothing that forces you to have 10 different values in a digit. Our base-10 number system likely grew up because we have 10 fingers, but if we happened to evolve to have eight fingers instead, we would probably have a base-8 number system. You can have base-anything number systems. In fact, there are lots of good reasons to use different bases in different situations.

Computers happen to operate using the base-2 number system, also known as the binary number system (just like the base-10 number system is known as the decimal number system).

The Base-2 System and the 8-bit Byte

The reason computers use the base-2 system is because it makes it a lot easier to implement them with current electronic technology. You could wire up and build computers that operate in base-10, but they would be fiendishly expensive right now. On the other hand, base-2 computers are relatively cheap.

So computers use binary numbers, and therefore use binary digits in place of decimal digits. The word bit is a shortening of the words "Binary digIT." Whereas decimal digits have 10 possible values ranging from 0 to 9, bits have only two possible values: 0 and 1. Therefore, a binary number is composed of only 0s and 1s, like this: 1011. How do you figure out what the value of the binary number 1011 is? You do it in the same way we did it above for 6357, but you use a base of 2 instead of a base of 10. So:

(1 * 2^3) + (0 * 2^2) + (1 * 2^1) + (1 * 2^0) = 8 + 0 + 2 + 1 = 11

You can see that in binary numbers, each bit holds the value of increasing powers of 2. That makes counting in binary pretty easy. Starting at zero and going through 20, counting in decimal and binary looks like this:

 0 =     0
 1 =     1
 2 =    10
 3 =    11
 4 =   100
 5 =   101
 6 =   110
 7 =   111
 8 =  1000
 9 =  1001
10 =  1010
11 =  1011
12 =  1100
13 =  1101
14 =  1110
15 =  1111
16 = 10000
17 = 10001
18 = 10010
19 = 10011
20 = 10100

When you look at this sequence, 0 and 1 are the same for decimal and binary number systems. At the number 2, you see carrying first take place in the binary system. If a bit is 1, and you add 1 to it, the bit becomes 0 and the next bit becomes 1. In the transition from 15 to 16 this effect rolls over through 4 bits, turning 1111 into 10000.

Bits are rarely seen alone in computers. They are almost always bundled together into 8-bit collections, and these collections are called bytes. Why are there 8 bits in a byte? A similar question is, "Why are there 12 eggs in a dozen?" The 8-bit byte is something that people settled on through trial and error over the past 50 years.

With 8 bits in a byte, you can represent 256 values ranging from 0 to 255, as shown here:
  0 = 00000000
  1 = 00000001
  2 = 00000010
...
254 = 11111110
255 = 11111111
If you look at how audio CDs store sound, you learn that a CD uses 2 bytes, or 16 bits, per sample. That gives each sample a range from 0 to 65,535, like this:

    0 = 0000000000000000
    1 = 0000000000000001
    2 = 0000000000000010
...
65534 = 1111111111111110
65535 = 1111111111111111

The Standard ASCII Character Set

Bytes are frequently used to hold individual characters in a text document. In the ASCII character set, each binary value between 0 and 127 is given a specific character. Most computers extend the ASCII character set to use the full range of 256 characters available in a byte. The upper 128 characters handle special things like accented characters from common foreign languages.

You can see the 128 standard ASCII codes below. Computers store text documents, both on disk and in memory, using these codes. For example, if you use Notepad in Windows 95/98 to create a text file containing the words, "Four score and seven years ago," Notepad would use 1 byte of memory per character (including 1 byte for each space character between the words -- ASCII character 32). When Notepad stores the sentence in a file on disk, the file will also contain 1 byte per character and per space.

Try this experiment: Open up a new file in Notepad and insert the sentence, "Four score and seven years ago" in it. Save the file to disk under the name getty.txt. Then use the explorer and look at the size of the file. You will find that the file has a size of 30 bytes on disk: 1 byte for each character. If you add another word to the end of the sentence and re-save it, the file size will jump to the appropriate number of bytes. Each character consumes a byte.

If you were to look at the file as a computer looks at it, you would find that each byte contains not a letter but a number -- the number is the ASCII code corresponding to the character (see below). Skipping the word "score" to keep the table narrow, the numbers on disk look like this:

        F    o    u    r         a    n    d         s    e    v    e    n
       70  111  117  114   32   97  110  100   32  115  101  118  101  110
By looking in the ASCII table, you can see a one-to-one correspondence between each character and the ASCII code used. Note the use of 32 for a space -- 32 is the ASCII code for a space. We could expand these decimal numbers out to binary numbers (so 32 = 00100000) if we wanted to be technically correct -- that is how the computer really deals with things.

The first 32 values (0 through 31) are codes for things like carriage return and line feed. The space character is the 33rd value, followed by punctuation, digits, uppercase characters and lowercase characters. To see all 128 values, check out Unicode.org's chart.


Byte Prefixes and Binary Math

When you start talking about lots of bytes, you get into prefixes like kilo, mega and giga, as in kilobyte, megabyte and gigabyte (also shortened to K, M and G, as in Kbytes, Mbytes and Gbytes or KB, MB and GB). The following table shows the binary multipliers:

Name   Abbr.  Size
Kilo   K      2^10 = 1,024
Mega   M      2^20 = 1,048,576
Giga   G      2^30 = 1,073,741,824
Tera   T      2^40 = 1,099,511,627,776
Peta   P      2^50 = 1,125,899,906,842,624
Exa    E      2^60 = 1,152,921,504,606,846,976
Zetta  Z      2^70 = 1,180,591,620,717,411,303,424
Yotta  Y      2^80 = 1,208,925,819,614,629,174,706,176


You can see in this chart that kilo is about a thousand, mega is about a million, giga is about a billion, and so on. So when someone says, "This computer has a 2 gig hard drive," what he or she means is that the hard drive stores 2 gigabytes, or approximately 2 billion bytes, or exactly 2,147,483,648 bytes. How could you possibly need 2 gigabytes of space? When you consider that one CD holds 650 megabytes, you can see that just three CDs worth of data will fill the whole thing! Terabyte databases are fairly common these days, and there are probably a few petabyte databases floating around the Pentagon by now.

Binary math works just like decimal math, except that the value of each bit can be only 0 or 1. To get a feel for binary math, let's start with decimal addition and see how it works. Assume that we want to add 452 and 751:

  452
+ 751
-----
 1203

To add these two numbers together, you start at the right: 2 + 1 = 3. No problem. Next, 5 + 5 = 10, so you save the zero and carry the 1 over to the next place. Next, 4 + 7 + 1 (because of the carry) = 12, so you save the 2 and carry the 1. Finally, 0 + 0 + 1 = 1. So the answer is 1203.

Binary addition works exactly the same way:

  010
+ 111
-----
 1001
Starting at the right, 0 + 1 = 1 for the first digit. No carrying there. You've got 1 + 1 = 10 for the second digit, so save the 0 and carry the 1. For the third digit, 0 + 1 + 1 = 10, so save the zero and carry the 1. For the last digit, 0 + 0 + 1 = 1. So the answer is 1001. If you translate everything over to decimal you can see it is correct: 2 + 7 = 9.


To sum up, here's what we've learned about bits and bytes:

  • Bits are binary digits. A bit can hold the value 0 or 1.
  • Bytes are made up of 8 bits each.
  • Binary math works just like decimal math, but each bit can have a value of only 0 or 1.
There really is nothing more to it -- bits and bytes are that simple.

Microprocessors Work

The computer you are using to read this page uses a microprocessor to do its work. The microprocessor is the heart of any normal computer, whether it is a desktop machine, a server or a laptop. The microprocessor you are using might be a Pentium, a K6, a PowerPC, a Sparc or any of the many other brands and types of microprocessors, but they all do approximately the same thing in approximately the same way.


Intel 4004 chip
A microprocessor -- also known as a CPU or central processing unit -- is a complete computation engine that is fabricated on a single chip. The first microprocessor was the Intel 4004, introduced in 1971. The 4004 was not very powerful -- all it could do was add and subtract, and it could only do that 4 bits at a time. But it was amazing that everything was on one chip. Prior to the 4004, engineers built computers either from collections of chips or from discrete components (transistors wired one at a time). The 4004 powered one of the first portable electronic calculators.


Microprocessor Progression: Intel

Intel 8080
The Intel 8080 was the first microprocessor in a home computer. See more microprocessor pictures.
The first microprocessor to make it into a home computer was the Intel 8080, a complete 8-bit computer on one chip, introduced in 1974. The first microprocessor to make a real splash in the market was the Intel 8088, introduced in 1979 and incorporated into the IBM PC (which first appeared in 1981). If you are familiar with the PC market and its history, you know that the PC market moved from the 8088 to the 80286 to the 80386 to the 80486 to the Pentium to the Pentium II to the Pentium III to the Pentium 4. All of these microprocessors are made by Intel and all of them are improvements on the basic design of the 8088. The Pentium 4 can execute any piece of code that ran on the original 8088, but it does it about 5,000 times faster!

The following table helps you to understand the differences between the different processors that Intel has introduced over the years.

Name                  Date  Transistors  Microns  Clock speed  Data width           MIPS
8080                  1974        6,000    6      2 MHz        8 bits               0.64
8088                  1979       29,000    3      5 MHz        16 bits, 8-bit bus   0.33
80286                 1982      134,000    1.5    6 MHz        16 bits              1
80386                 1985      275,000    1.5    16 MHz       32 bits              5
80486                 1989    1,200,000    1      25 MHz       32 bits              20
Pentium               1993    3,100,000    0.8    60 MHz       32 bits, 64-bit bus  100
Pentium II            1997    7,500,000    0.35   233 MHz      32 bits, 64-bit bus  ~300
Pentium III           1999    9,500,000    0.25   450 MHz      32 bits, 64-bit bus  ~510
Pentium 4             2000   42,000,000    0.18   1.5 GHz      32 bits, 64-bit bus  ~1,700
Pentium 4 "Prescott"  2004  125,000,000    0.09   3.6 GHz      32 bits, 64-bit bus  ~7,000

Compiled from The Intel Microprocessor Quick Reference Guide and TSCP Benchmark Scores

Information about this table:

    What's a Chip?
    A chip is also called an integrated circuit. Generally it is a small, thin piece of silicon onto which the transistors making up the microprocessor have been etched. A chip might be as large as an inch on a side and can contain tens of millions of transistors. Simpler processors might consist of a few thousand transistors etched onto a chip just a few millimeters square.

  • The date is the year that the processor was first introduced. Many processors are re-introduced at higher clock speeds for many years after the original release date.
  • Transistors is the number of transistors on the chip. You can see that the number of transistors on a single chip has risen steadily over the years.
  • Microns is the width, in microns, of the smallest wire on the chip. For comparison, a human hair is 100 microns thick. As the feature size on the chip goes down, the number of transistors rises.
  • Clock speed is the maximum rate that the chip can be clocked at. Clock speed will make more sense in the next section.
  • Data Width is the width of the ALU. An 8-bit ALU can add/subtract/multiply/etc. two 8-bit numbers, while a 32-bit ALU can manipulate 32-bit numbers. An 8-bit ALU would have to execute four instructions to add two 32-bit numbers, while a 32-bit ALU can do it in one instruction. In many cases, the external data bus is the same width as the ALU, but not always. The 8088 had a 16-bit ALU and an 8-bit bus, while the modern Pentiums fetch data 64 bits at a time for their 32-bit ALUs.
  • MIPS stands for "millions of instructions per second" and is a rough measure of the performance of a CPU. Modern CPUs can do so many different things that MIPS ratings lose a lot of their meaning, but you can get a general sense of the relative power of the CPUs from this column.
From this table you can see that, in general, there is a relationship between clock speed and MIPS. The maximum clock speed is a function of the manufacturing process and delays within the chip. There is also a relationship between the number of transistors and MIPS. For example, the 8088 clocked at 5 MHz but only executed at 0.33 MIPS (about one instruction per 15 clock cycles). Modern processors can often execute at a rate of two instructions per clock cycle. That improvement is directly related to the number of transistors on the chip.

Microprocessor Logic


Photo courtesy Intel Corporation
Intel Pentium 4 processor
To understand how a microprocessor works, it is helpful to look inside and learn about the logic used to create one. In the process you can also learn about assembly language -- the native language of a microprocessor -- and many of the things that engineers can do to boost the speed of a processor.

A microprocessor executes a collection of machine instructions that tell the processor what to do. Based on the instructions, a microprocessor does three basic things:

  • Using its ALU (Arithmetic/Logic Unit), a microprocessor can perform mathematical operations like addition, subtraction, multiplication and division. Modern microprocessors contain complete floating point processors that can perform extremely sophisticated operations on large floating point numbers.
  • A microprocessor can move data from one memory location to another.
  • A microprocessor can make decisions and jump to a new set of instructions based on those decisions.
There may be very sophisticated things that a microprocessor does, but those are its three basic activities. The following diagram shows an extremely simple microprocessor capable of doing those three things:


This is about as simple as a microprocessor gets. This microprocessor has:

  • An address bus (that may be 8, 16 or 32 bits wide) that sends an address to memory
  • A data bus (that may be 8, 16 or 32 bits wide) that can send data to memory or receive data from memory
  • An RD (read) and WR (write) line to tell the memory whether it wants to set or get the addressed location
  • A clock line that lets a clock pulse sequence the processor
  • A reset line that resets the program counter to zero (or whatever) and restarts execution
Let's assume that both the address and data buses are 8 bits wide in this example.

Here are the components of this simple microprocessor:

  • Registers A, B and C are simply latches made out of flip-flops. (See the section on "edge-triggered latches" in How Boolean Logic Works for details.)
  • The address latch is just like registers A, B and C.
  • The program counter is a latch with the extra ability to increment by 1 when told to do so, and also to reset to zero when told to do so.
  • The ALU could be as simple as an 8-bit adder, or it might be able to add, subtract, multiply and divide 8-bit values. Let's assume the latter here.
  • The test register is a special latch that can hold values from comparisons performed in the ALU. An ALU can normally compare two numbers and determine if they are equal, if one is greater than the other, etc. The test register can also normally hold a carry bit from the last stage of the adder. It stores these values in flip-flops and then the instruction decoder can use the values to make decisions.
  • There are six boxes marked "3-State" in the diagram. These are tri-state buffers. A tri-state buffer can pass a 1, a 0 or it can essentially disconnect its output (imagine a switch that totally disconnects the output line from the wire that the output is heading toward). A tri-state buffer allows multiple outputs to connect to a wire, but only one of them to actually drive a 1 or a 0 onto the line.
  • The instruction register and instruction decoder are responsible for controlling all of the other components.
Although they are not shown in this diagram, there would be control lines from the instruction decoder that would:
  • Tell the A register to latch the value currently on the data bus
  • Tell the B register to latch the value currently on the data bus
  • Tell the C register to latch the value currently output by the ALU
  • Tell the program counter register to latch the value currently on the data bus
  • Tell the address register to latch the value currently on the data bus
  • Tell the instruction register to latch the value currently on the data bus
  • Tell the program counter to increment
  • Tell the program counter to reset to zero
  • Activate any of the six tri-state buffers (six separate lines)
  • Tell the ALU what operation to perform
  • Tell the test register to latch the ALU's test bits
  • Activate the RD line
  • Activate the WR line
Coming into the instruction decoder are the bits from the test register and the clock line, as well as the bits from the instruction register.

Microprocessor Memory

The previous section talked about the address and data buses, as well as the RD and WR lines. These buses and lines connect either to RAM or ROM -- generally both. In our sample microprocessor, we have an address bus 8 bits wide and a data bus 8 bits wide. That means that the microprocessor can address 2^8 = 256 bytes of memory, and it can read or write 8 bits of the memory at a time. Let's assume that this simple microprocessor has 128 bytes of ROM starting at address 0 and 128 bytes of RAM starting at address 128.


ROM chip

ROM stands for read-only memory. A ROM chip is programmed with a permanent collection of pre-set bytes. The address bus tells the ROM chip which byte to get and place on the data bus. When the RD line changes state, the ROM chip presents the selected byte onto the data bus.


RAM chip
RAM stands for random-access memory. RAM contains bytes of information, and the microprocessor can read or write to those bytes depending on whether the RD or WR line is signaled. One problem with today's RAM chips is that they forget everything once the power goes off. That is why the computer needs ROM.

By the way, nearly all computers contain some amount of ROM (it is possible to create a simple computer that contains no RAM -- many microcontrollers do this by placing a handful of RAM bytes on the processor chip itself -- but generally impossible to create one that contains no ROM). On a PC, the ROM is called the BIOS (Basic Input/Output System). When the microprocessor starts, it begins executing instructions it finds in the BIOS. The BIOS instructions do things like test the hardware in the machine, and then the BIOS goes to the hard disk to fetch the boot sector. This boot sector is another small program, and the BIOS stores it in RAM after reading it off the disk. The microprocessor then begins executing the boot sector's instructions from RAM. The boot sector program will tell the microprocessor to fetch something else from the hard disk into RAM, which the microprocessor then executes, and so on. This is how the microprocessor loads and executes the entire operating system.

Microprocessor Instructions

Even the incredibly simple microprocessor shown in the previous example will have a fairly large set of instructions that it can perform. The collection of instructions is implemented as bit patterns, each one of which has a different meaning when loaded into the instruction register. Humans are not particularly good at remembering bit patterns, so a set of short words are defined to represent the different bit patterns. This collection of words is called the assembly language of the processor. An assembler can translate the words into their bit patterns very easily, and then the output of the assembler is placed in memory for the microprocessor to execute.

Here's the set of assembly language instructions that the designer might create for the simple microprocessor in our example:

  • LOADA mem - Load register A from memory address
  • LOADB mem - Load register B from memory address
  • CONB con - Load a constant value into register B
  • SAVEB mem - Save register B to memory address
  • SAVEC mem - Save register C to memory address
  • ADD - Add A and B and store the result in C
  • SUB - Subtract A and B and store the result in C
  • MUL - Multiply A and B and store the result in C
  • DIV - Divide A and B and store the result in C
  • COM - Compare A and B and store the result in test
  • JUMP addr - Jump to an address
  • JEQ addr - Jump, if equal, to address
  • JNEQ addr - Jump, if not equal, to address
  • JG addr - Jump, if greater than, to address
  • JGE addr - Jump, if greater than or equal, to address
  • JL addr - Jump, if less than, to address
  • JLE addr - Jump, if less than or equal, to address
  • STOP - Stop execution
If you have read How C Programming Works, then you know that this simple piece of C code will calculate the factorial of 5 (where the factorial of 5 = 5! = 5 * 4 * 3 * 2 * 1 = 120):

    a = 1;
    f = 1;
    while (a <= 5)
    {
        f = f * a;
        a = a + 1;
    }

At the end of the program's execution, the variable f contains the factorial of 5.

Assembly Language
A C compiler translates this C code into assembly language. Assuming that RAM starts at address 128 in this processor, and ROM (which contains the assembly language program) starts at address 0, then for our simple microprocessor the assembly language might look like this:

    // Assume a is at address 128
    // Assume f is at address 129
    0 CONB 1 // a=1;
    1 SAVEB 128
    2 CONB 1 // f=1;
    3 SAVEB 129
    4 LOADA 128 // if a > 5 then jump to 17
    5 CONB 5
    6 COM
    7 JG 17
    8 LOADA 129 // f=f*a;
    9 LOADB 128
    10 MUL
    11 SAVEC 129
    12 LOADA 128 // a=a+1;
    13 CONB 1
    14 ADD
    15 SAVEC 128
    16 JUMP 4 // loop back to if
    17 STOP

ROM
So now the question is, "How do all of these instructions look in ROM?" Each of these assembly language instructions must be represented by a binary number. For the sake of simplicity, let's assume each assembly language instruction is given a unique number, like this:

  • LOADA mem - 1
  • LOADB mem - 2
  • CONB con - 3
  • SAVEB mem - 4
  • SAVEC mem - 5
  • ADD - 6
  • SUB - 7
  • MUL - 8
  • DIV - 9
  • COM - 10
  • JUMP addr - 11
  • JEQ addr - 12
  • JNEQ addr - 13
  • JG addr - 14
  • JGE addr - 15
  • JL addr - 16
  • JLE addr - 17
  • STOP - 18
The numbers are known as opcodes. In ROM, our little program would look like this:

    // Assume a is at address 128
    // Assume f is at address 129
    Addr opcode/value
    0 3 // CONB 1
    1 1
    2 4 // SAVEB 128
    3 128
    4 3 // CONB 1
    5 1
    6 4 // SAVEB 129
    7 129
    8 1 // LOADA 128
    9 128
    10 3 // CONB 5
    11 5
    12 10 // COM
    13 14 // JG 17
    14 31
    15 1 // LOADA 129
    16 129
    17 2 // LOADB 128
    18 128
    19 8 // MUL
    20 5 // SAVEC 129
    21 129
    22 1 // LOADA 128
    23 128
    24 3 // CONB 1
    25 1
    26 6 // ADD
    27 5 // SAVEC 128
    28 128
    29 11 // JUMP 4
    30 8
    31 18 // STOP

You can see that seven lines of C code became 18 lines of assembly language, and that became 32 bytes in ROM.

Decoding
The instruction decoder needs to turn each of the opcodes into a set of signals that drive the different components inside the microprocessor. Let's take the ADD instruction as an example and look at what it needs to do:

  1. During the first clock cycle, we need to actually load the instruction. Therefore the instruction decoder needs to:
    • activate the tri-state buffer for the program counter
    • activate the RD line
    • activate the data-in tri-state buffer
    • latch the instruction into the instruction register
  2. During the second clock cycle, the ADD instruction is decoded. It needs to do very little:
    • set the operation of the ALU to addition
    • latch the output of the ALU into the C register
  3. During the third clock cycle, the program counter is incremented (in theory this could be overlapped into the second clock cycle).
Every instruction can be broken down as a set of sequenced operations like these that manipulate the components of the microprocessor in the proper order. Some instructions, like this ADD instruction, might take two or three clock cycles. Others might take five or six clock cycles.

Microprocessor Performance and Trends

The number of transistors available has a huge effect on the performance of a processor. As seen earlier, a typical instruction in a processor like an 8088 took 15 clock cycles to execute. Because of the design of the multiplier, it took approximately 80 cycles just to do one 16-bit multiplication on the 8088. With more transistors, much more powerful multipliers capable of single-cycle speeds become possible.

More transistors also allow for a technology called pipelining. In a pipelined architecture, instruction execution overlaps. So even though it might take five clock cycles to execute each instruction, there can be five instructions in various stages of execution simultaneously. That way it looks like one instruction completes every clock cycle.

Many modern processors have multiple instruction decoders, each with its own pipeline. This allows for multiple instruction streams, which means that more than one instruction can complete during each clock cycle. This technique can be quite complex to implement, so it takes lots of transistors.

Trends
The trend in processor design has primarily been toward full 32-bit ALUs with fast floating point processors built in and pipelined execution with multiple instruction streams. The newest thing in processor design is 64-bit ALUs, and people are expected to have these processors in their home PCs in the next decade. There has also been a tendency toward special instructions (like the MMX instructions) that make certain operations particularly efficient, and the addition of hardware virtual memory support and L1 caching on the processor chip. All of these trends push up the transistor count, leading to the multi-million transistor powerhouses available today. These processors can execute about one billion instructions per second!

64-bit Microprocessors

Sixty-four-bit processors have been with us since 1992, and in the 21st century they have started to become mainstream. Both Intel and AMD have introduced 64-bit chips, and the Mac G5 sports a 64-bit processor. Sixty-four-bit processors have 64-bit ALUs, 64-bit registers, 64-bit buses and so on.



One reason why the world needs 64-bit processors is because of their enlarged address spaces. Thirty-two-bit chips are often constrained to a maximum of 2 GB or 4 GB of RAM access. That sounds like a lot, given that most home computers currently use only 256 MB to 512 MB of RAM. However, a 4-GB limit can be a severe problem for server machines and machines running large databases. And even home machines will start bumping up against the 2 GB or 4 GB limit pretty soon if current trends continue. A 64-bit chip has none of these constraints because a 64-bit RAM address space is essentially infinite for the foreseeable future -- 2^64 bytes of RAM works out to roughly 17 billion gigabytes (16 exabytes) of RAM.

With a 64-bit address bus and wide, high-speed data buses on the motherboard, 64-bit machines also offer faster I/O (input/output) speeds to things like hard disk drives and video cards. These features can greatly increase system performance.

Servers can definitely benefit from 64 bits, but what about normal users? Beyond the larger RAM limit, it is not clear that a 64-bit chip offers "normal users" any real, tangible benefits at the moment. They can process very large data sets (complex data featuring lots of real numbers) faster. People doing video editing and people doing photographic editing on very large images benefit from this kind of computing power. High-end games will also benefit, once they are re-coded to take advantage of 64-bit features. But the average user who is reading e-mail, browsing the Web and editing Word documents is not really using the processor in that way.