Welcome to the Geek Author series on Computer Organization and Design Fundamentals. I’m David Tarnoff, and in this series we are working our way through the topics of Computer Organization, Computer Architecture, Digital Design, and Embedded System Design. If you’re interested in the inner workings of a computer, then you’re in the right place. The only background you’ll need for this series is an understanding of integer math, and if possible, a little experience with a programming language such as Java.
Imagine a world with just two colors. Understand that we are not talking about a world like a black and white photo with its infinite shades of gray – we mean exactly two colors, each with exactly one shade. Black and white – one bit. The world would look like a Rorschach test with all of its intricacies melted down to indistinguishable shapes. Oh sure, the human brain may be able to extract meaning out of some of the shapes, but many objects would fuse together to create, well, meaninglessness.
What does this have to do with computers? Well, if we used a single bit, a one or a zero, to represent the color of a minute point in an image, the image itself could only exhibit two colors, no matter how many of these minute points or picture elements the image comprised. If the image were made up of a 100 by 100 square array of these picture elements, also known as pixels, then we would need 100 times 100, or 10,000, bits to represent the picture, albeit a nearly useless one.
Now let’s double the number of bits representing the color of a pixel to two. This would give us four possible colors in our image. At least now the sky appears to be a different color than the ground. If we double the number of bits again, we’ll have 16 colors. Now we see that there were clouds in that sky.
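If you’d like to see the arithmetic, here’s a quick sketch in Java (our own example, not something from the episode) that computes the number of colors, two raised to the bit depth, along with the total storage for our 100 by 100 pixel image at a few different bit depths.

```java
public class BitDepthMath {
    public static void main(String[] args) {
        int width = 100, height = 100;
        for (int bitDepth : new int[] {1, 2, 4}) {
            long colors = 1L << bitDepth;                  // 2^bitDepth colors
            long totalBits = (long) width * height * bitDepth;
            System.out.printf("%d bit(s) per pixel: %d colors, %d bits total%n",
                    bitDepth, colors, totalBits);
        }
    }
}
```

Running it shows the 1-bit image needs 10,000 bits for just two colors, while four bits per pixel buys us 16 colors at a cost of 40,000 bits.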
This is an example of quantization noise, and it happens to some degree every time we convert an analog measurement to a digital value. In analog-to-digital conversion, quantization noise is the error that is introduced when the analog-to-digital converter, or ADC, maps an analog measurement to its closest corresponding digital value. We can attain better accuracy by giving our digital value more bits. This is called increasing the bit depth.
Remember from our discussion regarding analog-to-digital conversion that the minimum and maximum of the infinite analog range are mapped to the all-0’s and the all-1’s digital binary values, respectively. The range in between is divided into evenly spaced intervals based on the number of digital values between the minimum and maximum. The greater the bit depth, the more digital values we have and the smaller the interval. Smaller intervals mean less rounding will be needed, and therefore, there will be less quantization noise.
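Here’s a minimal sketch in Java of that mapping. It’s our own construction, assuming a converter that divides the range into 2^n – 1 evenly spaced increments and simply rounds to the nearest digital value; a real ADC is a hardware circuit, not software.

```java
public class Quantizer {
    // Map an analog value onto the nearest of the 2^n digital values.
    // The span from vMin to vMax is divided into 2^n - 1 increments.
    static int quantize(double v, double vMin, double vMax, int bitDepth) {
        long increments = (1L << bitDepth) - 1;
        double step = (vMax - vMin) / increments;    // size of one interval
        return (int) Math.round((v - vMin) / step);  // nearest digital value
    }

    public static void main(String[] args) {
        // Example: 0.37 volts on an assumed 0-to-1 volt range, 3-bit ADC
        int code = quantize(0.37, 0.0, 1.0, 3);
        double step = 1.0 / 7;                       // increment size
        System.out.printf("digital value = %d (binary %s)%n",
                code, Integer.toBinaryString(code));
        System.out.printf("reconstructed = %.4f volts, error = %.4f volts%n",
                code * step, 0.37 - code * step);
    }
}
```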
The same is true when it comes to sound. If you were to plot a graph of the pressure waves that create sound, you’d see that sound is a continuous waveform. In other words, each of the infinite points that make up the waveform has infinite precision. Computers don’t have an infinite amount of memory, nor can they take an infinite number of measurements over any discrete period of time.
Computers need to take measurements referred to as samples. A sample is a measurement of an analog signal taken at a fixed rate called the sampling rate. There are two consequences of sampling. We discussed the first of these in our last episode when we covered the effects of not taking samples fast enough. In this episode, we’ll be discussing the effects of using a finite number of bits for our measurement. This means that each sample won’t be exactly the same as the analog signal’s value. It will be close, but not exact.
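As a rough illustration (ours, with an assumed sampling rate of 8,000 samples per second), here’s what taking samples of a continuous sine wave looks like in Java:

```java
public class Sampler {
    public static void main(String[] args) {
        double sampleRate = 8000.0;   // assumed: 8,000 samples per second
        double frequency = 440.0;     // the tone we'll hear later on
        // take the first eight samples of the continuous waveform
        for (int i = 0; i < 8; i++) {
            double t = i / sampleRate;                    // time of sample i
            double sample = Math.sin(2 * Math.PI * frequency * t);
            System.out.printf("t = %.6f s, sample = %.5f%n", t, sample);
        }
    }
}
```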
Quantization error occurs when we round the analog signal in order to map it to its nearest digital value. In general, an analog-to-digital converter with a bit depth of n divides the analog range into 2^n – 1 increments. The worst possible rounding error occurs when an analog value falls halfway between the two nearest digital values. In that case, our error is one half of the increment size, and the increment size equals the analog range divided by 2^n – 1. How do we make this potential error smaller? Why, we increase the bit depth, of course! Larger values of n will divide the range into smaller increments.
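To get a feel for the numbers, here’s a quick back-of-the-envelope calculation in Java. The 0-to-1 volt range is our assumption; the formula is the one we just described: half of the range divided by 2^n – 1.

```java
public class WorstCaseError {
    public static void main(String[] args) {
        double range = 1.0;    // assume a 0-to-1 volt analog range
        for (int n : new int[] {3, 8, 16}) {
            double increments = (1L << n) - 1;        // 2^n - 1
            double worstError = range / (2 * increments);
            System.out.printf("n = %2d bits: worst-case error = %.8f volts%n",
                    n, worstError);
        }
    }
}
```

At 3 bits the worst-case error is about 0.07 volts, while at 16 bits it drops to under 8 microvolts.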
Quantization noise will always be present in a digitized signal, but it is much worse with a low bit depth. As the analog signal varies, the quantization error between the analog value and the digital value will get larger until the rounding takes us to the next digital value. That means that the error is rising and falling between positive one half of the increment size and negative one half of the increment size. Another way of saying this is that the quantization noise is confined to the least significant bit, i.e., the bit that changes when we go from one digital level to the next.
This noise, however, isn’t random white noise like a hiss. As the original analog signal passes from one digital value to the next, the error ramps up and then back down. This isn’t random, and it can add a harshness to the stored data.
Let’s take a listen to an audio example. This is a 440 Hz digitized audio tone that was generated using a bit depth of 16. Now let’s listen to that same audio tone generated using a bit depth of 3. Quite different, huh? Now let’s listen to the difference between the two signals, in other words, the noise that was added to our sine wave. This is found by subtracting the digital values of the 3-bit signal from the digital values of the 16-bit signal. The only way to remove this harsh noise is to increase the bit depth.
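You can reproduce a version of this experiment yourself. Here’s a rough sketch in Java (our own construction; the sampling rate and the details of the mapping are assumptions) that quantizes the same 440 Hz sine wave at 16 bits and at 3 bits and subtracts the two to expose the noise:

```java
public class QuantizationNoiseDemo {
    // Round an analog value in the range -1..1 to the nearest digital
    // level for the given bit depth, then convert it back to a double.
    static double quantize(double v, int bitDepth) {
        long increments = (1L << bitDepth) - 1;       // 2^n - 1
        long code = Math.round((v + 1.0) / 2.0 * increments);
        return code * 2.0 / increments - 1.0;
    }

    public static void main(String[] args) {
        double sampleRate = 44100.0;   // assumed CD-style sampling rate
        double frequency = 440.0;      // the episode's test tone
        for (int i = 0; i < 5; i++) {
            double analog = Math.sin(2 * Math.PI * frequency * i / sampleRate);
            double hi = quantize(analog, 16);
            double lo = quantize(analog, 3);
            // the difference is the noise added by the 3-bit conversion
            System.out.printf("sample %d: 16-bit = %+.6f, 3-bit = %+.6f, noise = %+.6f%n",
                    i, hi, lo, hi - lo);
        }
    }
}
```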
That brings us to the end of another episode of the Geek Author series on Computer Organization and Design Fundamentals. In our next episode, we will head back to the world of binary integers by taking a look at Gray code. Nope, we’re not talking about the color gray, but rather a special way of re-ordering all of the n-bit integers so that in binary, consecutive numbers differ by only one bit. And unlike unsigned binary, the bit that changes from one number to the next could be at any bit position, not necessarily the least significant one.
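For the curious who can’t wait, here’s a small preview in Java (ours, ahead of the episode) using the standard binary-reflected Gray code mapping, where each integer i maps to i XOR (i shifted right by one). Notice how each value differs from its neighbor in exactly one bit:

```java
public class GrayCodePreview {
    public static void main(String[] args) {
        // list the 3-bit integers alongside their Gray codes
        for (int i = 0; i < 8; i++) {
            int gray = i ^ (i >> 1);   // binary-reflected Gray code
            String bits = String.format("%3s",
                    Integer.toBinaryString(gray)).replace(' ', '0');
            System.out.printf("%d -> %s%n", i, bits);
        }
    }
}
```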
For transcripts, links, or other podcast notes, please check us out at intermation.com where you will also find links to our Instagram, Twitter, Facebook, and Pinterest pages. Until then, remember that while the scope of what makes a computer is immense, it’s all just ones and zeros.