Welcome to the Geek Author series on Computer Organization and Design Fundamentals. I’m David Tarnoff, and in this series we are working our way through the topics of Computer Organization, Computer Architecture, Digital Design, and Embedded System Design. If you’re interested in the inner workings of a computer, then you’re in the right place. The only background you’ll need for this series is an understanding of integer math, and if possible, a little experience with a programming language such as Java.
In this episode, we’re going to discuss the anatomy of a binary signal by going through some of the terminology that you’ll see throughout the rest of the series.
Most digital systems do their work using millions of tiny switches called transistors. When operating in a discrete mode, transistors can attain only one of two possible values, on or off. In order to represent quantities beyond on or off, these transistors must be organized into groups, which collectively could represent…well…they could represent anything.
There is a wonderful quote from arguably the first person on the planet who understood what computers might be capable of, Augusta Ada King-Noel, Countess of Lovelace, otherwise known as Lady Ada Lovelace. In the mid-1800s, she observed of Charles Babbage’s design for a mechanical computer that the machine “might act upon other things besides number.” (1) She went on to suggest that the machine, known as the Analytical Engine, might be capable of composing elaborate music “of any degree of complexity or extent.” She was right. Today’s digital circuits use transistors to represent scores of things ranging from position to color, from mass to velocity, and from temperature to time. Each value is interpreted as needed by the application. The on and off settings of a transistor can just as easily represent 1 or 0, yes or no, true or false, up or down, or high or low. Combine multiple transistors together, and they could represent a whole number, the amount of blue in a color, or the volume of a song.
At this point, it is immaterial what the digital values represent. What matters is that there are only two possible values per transistor. The complexity of the computer comes in how the millions of transistors work together. For the purpose of this discussion, the two values of a transistor will be referred to as logic 1 and logic 0, the term “logic” being added so as to distinguish it from a whole number.
Now let’s examine some of the methods used to represent binary data by first looking at a single binary signal. Assume we are recording the binary values present on a single wire controlling, say, a light bulb. Ignoring the possibility that this light bulb may be on a dimmer circuit, this is a binary system; the light is either on or off, in other words, a logic 1 or a logic 0. Picture the waveform of this binary signal: as time passes, the state of the light bulb switches between the two levels, following the position of the switch.
This representation is much like a mathematical x-y plot where the x-axis represents time and the y-axis identifies either logic 1 or 0. Sometimes, two or more binary lines are grouped together to perform a single function. For example, the overall lighting in a room may be controlled by three different switches controlling independent banks of lights. Each of the three switches can take on one of two possible positions. Using the multiplication principle, we know that the total number of possible ways these three switches can be set is equal to the number of ways the first switch can be set times the number of ways the second switch can be set times the number of ways the third switch can be set. That’s two times two times two, which equals eight. That means that collectively, the three binary switches can represent eight different lighting levels.
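If you’d like to see the multiplication principle for yourself, here is a short Java sketch. It is my own illustration rather than something from the episode, and the class and variable names are simply made up; it loops through every setting of three on/off switches and counts them.

```java
// A minimal sketch (not from the episode) that enumerates every setting of
// three two-position switches to confirm the multiplication principle:
// 2 x 2 x 2 = 8 possible lighting levels.
public class SwitchCombinations {
    public static void main(String[] args) {
        int count = 0;
        for (int a = 0; a <= 1; a++) {          // first switch: 0 = off, 1 = on
            for (int b = 0; b <= 1; b++) {      // second switch
                for (int c = 0; c <= 1; c++) {  // third switch
                    System.out.println(a + " " + b + " " + c);
                    count++;
                }
            }
        }
        System.out.println("Total settings: " + count);  // prints 8
    }
}
```

Run it and you’ll see all eight combinations listed, from all three switches off to all three switches on.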
Now let’s talk about the parts that make up one of these binary signals. Like a lightbulb that never turns on, a binary signal that stays at a constant, unchanging level is not of much use. At some point, a binary signal will change to the opposite logic level. That moment of change is referred to as an edge. An edge may be a change from a logic 0 to a logic 1 or vice versa. An edge that is a transition from a logic 0 to a logic 1 is referred to as a rising edge while an edge that is a transition from a logic 1 to a logic 0 is referred to as a falling edge.
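To make the idea of an edge a little more concrete, here is a small Java sketch, again my own and not part of the episode, that compares each new sample of a binary signal against the previous one and reports a rising edge, a falling edge, or no edge at all.

```java
// A hedged illustration (not from the episode): detecting edges by comparing
// the current sample of a binary signal with the previous one.
public class EdgeDetector {
    private boolean previous = false;  // assume the signal starts at logic 0

    // Describes what happened on the latest sample.
    public String sample(boolean current) {
        String result;
        if (!previous && current) {
            result = "rising edge";    // logic 0 -> logic 1
        } else if (previous && !current) {
            result = "falling edge";   // logic 1 -> logic 0
        } else {
            result = "no edge";        // level unchanged
        }
        previous = current;
        return result;
    }

    public static void main(String[] args) {
        EdgeDetector detector = new EdgeDetector();
        boolean[] signal = {false, true, true, false, true};  // made-up samples
        for (boolean level : signal) {
            System.out.println(detector.sample(level));
        }
    }
}
```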
Edges are an essential component of a binary signal. Taking a picture with a camera is a good way to understand the importance of edges. Most of the time, we want a single picture to be taken, and we want it taken the moment our finger presses the shutter button. That moment is when the edge occurs. If the camera were set up to take pictures based on the logic 1 or logic 0 level of the button rather than when an edge occurs, then it would keep taking pictures as long as our finger stayed on the button. This is good for movies or burst mode, perhaps, where several photographs are captured quickly, but not so good for the ordinary selfie.
One edge followed by a second edge in the opposite direction gives us a pulse. A binary pulse occurs when a signal changes from one logic level to the other for a short period, then returns to its original logic level. A soda fountain might serve as an example of this. The moment that a cup (or anything really) presses against the lever beneath a soda selection, a stream of soda will start to pour from that spigot. That stream will continue to flow until the cup is removed.
Like the edges, there are two types of pulses. The first is called a positive-going pulse, which starts at a logic zero, goes briefly to a logic one, and then returns to a logic zero. The other type is a negative-going pulse, which starts at a logic one, goes briefly to a logic zero, and then returns to a logic one. The duration of the pulse, whether it is positive-going or negative-going, is referred to as the pulse width.
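As a rough illustration of measuring pulse width, the following Java sketch is my own, with hypothetical sample values rather than anything from the episode. Assuming the signal is sampled once every millisecond, it finds a rising edge, waits for the matching falling edge, and reports how long the signal stayed at logic one.

```java
// A small sketch (my own illustration) that measures the width of a
// positive-going pulse: the time from a rising edge to the next falling edge.
public class PulseWidth {
    public static void main(String[] args) {
        // Hypothetical samples of a binary signal taken every 1 millisecond.
        boolean[] samples = {false, false, true, true, true, false, false};
        double samplePeriodMs = 1.0;

        int riseIndex = -1;
        for (int i = 1; i < samples.length; i++) {
            if (!samples[i - 1] && samples[i]) {
                riseIndex = i;                           // rising edge found
            } else if (samples[i - 1] && !samples[i] && riseIndex >= 0) {
                double widthMs = (i - riseIndex) * samplePeriodMs;
                System.out.println("Pulse width: " + widthMs + " ms");  // 3.0 ms
                riseIndex = -1;                          // ready for the next pulse
            }
        }
    }
}
```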
Like the soda fountain that dispenses soda only when the lever is pressed, most pulses are used to represent when a signal goes “active”. The time before and after the pulse is considered the idle or inactive state. In these cases, a positive-going pulse might also be referred to as an active high pulse while a negative-going pulse might be described as an active low pulse. We will encounter these terms again when we begin discussing memory circuits.
Now let’s put these pulses together to do something. A continuous stream of pulses is referred to as a pulse train. Digital pulse trains occur in all sorts of systems. The pulses of some digital signals may appear to have a very random pattern with no predictable pulse widths or time between pulses. These are referred to as non-periodic pulse trains.
Picture a single key on the keyboard of a piano. As the pianist plays the notes of a song, the duration of each pulse of that single note and the spaces between the times when that note is played can be longer or shorter. In fact, if you were to only hear that one note played from a song, it would sound rather random and without meaning.
Non-periodic pulse trains also have this seemingly random nature. The signal on a wire carrying data from one computer to another on a network may look random, but an understanding of the rules governing that transmission would reveal the data embedded in the pulses. And like the notes of a song, when multiple non-periodic pulse trains are combined, large amounts of information can be passed from one digital circuit to another. This is the case with the interface between a processor and its memory devices.
There is a second type of pulse train. Like the drum beat to a song, a periodic pulse train is meant to synchronize events and keep processes moving forward. A simple example of a periodic pulse train might be the blinking cursor on your computer’s display. The duration of each pulse of the cursor is consistent and the pulses are coming at a regular cadence.
The primary characteristic of a periodic pulse train is the measured time in seconds between the start of one pulse and the start of the next. This value is referred to as the period, T, and regardless of where you measure the period on the pulse train, it should always be the same. This measurement is given the units of seconds/cycle.
Another way to represent this characteristic of a periodic pulse train is to describe how fast the pulses are coming, in other words, the number of pulses per second. This measurement is called the frequency of the periodic pulse train, and it has units of cycles/second, otherwise known as Hertz (Hz). Note that the units for frequency are the inverse of the units used to describe the period. That means that we can calculate the frequency of a periodic pulse train from the period simply by inverting the measurement of the period.
For example, if it takes 0.1 seconds for a periodic pulse train to make a complete cycle or period, the frequency is one over 0.1, or 10 cycles/second. We can use this formula in the other direction too. If a computer’s system clock is 2 gigahertz, or 2 x 10^9 Hz, then the duration of a single period of its system clock is 1/2,000,000,000, or 5 x 10^-10 seconds, which is 0.5 nanoseconds.
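If you want to check that arithmetic yourself, here is a tiny Java sketch, my own and not from the episode, that performs both conversions using nothing more than the relationship that frequency is one over the period.

```java
// A quick sketch confirming the period/frequency arithmetic worked above.
public class PeriodFrequency {
    public static void main(String[] args) {
        double periodSeconds = 0.1;                      // T = 0.1 seconds/cycle
        double frequencyHz = 1.0 / periodSeconds;
        System.out.println(frequencyHz + " Hz");         // prints 10.0 Hz

        double clockHz = 2.0e9;                          // a 2 GHz system clock
        double clockPeriodSeconds = 1.0 / clockHz;
        System.out.println(clockPeriodSeconds + " s");   // prints 5.0E-10, i.e., 0.5 ns
    }
}
```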
The duration of the period does not fully describe a periodic pulse train, however. A second measurement, the width of the positive-going pulse, tw, is also needed. For example, two different binary signals may have positive-going pulses coming once every second. The positive-going pulse width for one of them may be half a second, meaning that the signal is a logic one half of the time. The other may have a positive-going pulse width of a quarter of a second, meaning that the signal is a logic one only 25% of the time. Although the cadence is the same, the first signal spends more time at logic one and therefore carries more power.
Like the period, pulse widths are measured in seconds. A pulse width will always be greater than or equal to zero and less than or equal to the period. A tw of zero implies the signal has no positive-going pulses, in other words, it is a constant logic zero. If tw equaled the period, then the signal would always be a logic one.
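One last illustration, again my own rather than the episode’s: the ratio of tw to the period tells you what fraction of the time the signal sits at logic one. This ratio is commonly called the duty cycle, a term we haven’t needed yet. The short Java sketch below computes it for the two one-second signals described above.

```java
// A small sketch (my own illustration) computing the fraction of each period
// a signal spends at logic one: tw divided by the period T.
public class DutyCycle {
    public static void main(String[] args) {
        double period = 1.0;   // T: one positive-going pulse per second
        double tw1 = 0.5;      // first signal: high for half a second
        double tw2 = 0.25;     // second signal: high for a quarter of a second

        System.out.println("Signal 1 is high " + (tw1 / period) * 100 + "% of the time"); // 50.0%
        System.out.println("Signal 2 is high " + (tw2 / period) * 100 + "% of the time"); // 25.0%
    }
}
```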
Now that we’ve covered the parts of a binary signal, join us in the next episode where we will take a closer look at the periodic pulse train and see how it’s used to turn binary into analog. Until then, remember that while the scope of what makes a computer is immense, it’s all just ones and zeros.
1 – Ada Augusta, Countess of Lovelace, notes on L. F. Menabrea’s “Sketch of the Analytical Engine Invented by Charles Babbage,” Scientific Memoirs (Sept. 1843)