ADC and Your Digital Camera's Bit Depth

It may come as a surprise to learn that all image sensors are analog, not digital, devices. In fact, the entire universe, except the world that exists inside a computer, is analog. Including film. Digital's sole inhabitants are zeroes and ones, period.

So how do you get digital data from an analog capture device?

That's what the ADC (analog-to-digital converter) does inside your digital camera (and, in fact, the reason it's called a digital camera). All digital cameras have an ADC chip inside that converts the picture captured by the image sensor into digital data. Not all ADCs are created equal, however. How well and how quickly an ADC does its job depends upon the manufacturer, the chip design, and, most importantly, the number of data bits it can process.

The ADC works by taking the analog data stream from your camera's image sensor, registering the charge (the number of electrons) collected at each photoreceptor site, and then deciding whether each bit of data should be a zero or a one.
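
To make that step concrete, here is a minimal sketch in Python of the quantization an ADC performs. The specific numbers (an analog range of 0 to 1 and the sample charge level) are illustrative assumptions, not any camera's actual firmware.

    # Quantization sketch: map an analog level to the nearest of the
    # 2**bit_depth digital codes the ADC can output.
    def quantize(analog_level, full_scale, bit_depth):
        max_code = 2 ** bit_depth - 1
        # Clamp to the measurable range, then round to the nearest code.
        level = min(max(analog_level, 0.0), full_scale)
        return round(level / full_scale * max_code)

    # The same charge read at two different bit depths:
    print(quantize(0.37, 1.0, 8))    # 94, out of 256 possible codes
    print(quantize(0.37, 1.0, 12))   # 1515, out of 4,096 possible codes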

These digital pixels consist of data bits that establish exactly what color each pixel will be. The more bits, the more precisely the color's hue, saturation, and brightness can be defined. Most inexpensive digital cameras process 8-bit pixels. Better digital cameras may have 10 bits or 12 bits per pixel. That's good, because more information usually translates into better color, more subtle transitions or gradations, and increased clarity of detail in the highlights and shadows. Professional and studio digital cameras may even associate 14 or 16 bits of data with each pixel.
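
A quick bit of arithmetic shows why those extra bits matter: each additional bit doubles the number of tonal levels the ADC can distinguish for each color channel. The short Python loop below simply prints those counts.

    # Each extra bit doubles the levels available per color channel.
    for bits in (8, 10, 12, 14, 16):
        print(f"{bits}-bit: {2 ** bits:,} levels per channel")
    # 8-bit: 256   10-bit: 1,024   12-bit: 4,096
    # 14-bit: 16,384   16-bit: 65,536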

Since it's the job of the ADC to convert the analog pixel that the image sensor captures into a digital pixel that your computer can recognize and use, how many bits the ADC can handle (or its bit depth) is an important factor in rating the power and quality of your camera.

As with image sensors, you can't specify what kind of ADC your digital camera should have. But you can choose to buy a camera that has, for example, a 12-bit ADC rather than a 10-bit ADC by carefully looking over the specs on the manufacturer's Web site. If the bit depth isn't listed, your digital camera probably has an 8-bit or 10-bit ADC, as do many inexpensive consumer models. Better, but pricier, models will list the camera's bit depth.

But remember, an ADC's bit depth is only one of many components that contribute to a digital camera's image quality. So, avoid obsessing over buying only the model with the highest bit depth, since the lens, the programming, and the image sensor are equally important.

Not All Electronic Cameras Are Digital Cameras

Back in the early 1990s, when we first started working with and writing about filmless photography, we scrupulously avoided calling every electronic camera a digital camera. Why? Because most of the electronic cameras back then weren't, strictly speaking, digital devices.

Remember, all cameras are analog devices. A camera is only considered to be digital if it has a built-in ADC chip that gives it the ability to instantly convert captured analog information into digital data.

Almost all early electronic cameras used CCD image sensors, but the analog models lacked a built-in ADC chip. So, instead of instantly converting the just-shot image to digital data, they stored the analog information on tiny, silver-dollar-sized floppy diskettes in hard plastic shells. (It's very similar to how nondigital camcorders record video, except that camcorders save to analog tape instead of floppies.) Once saved, the floppy was removed from the camera and inserted into a very expensive, shoebox-sized disk drive attached to a PC or Mac via a SCSI (Small Computer System Interface) cable. When activated, the drive would read the tiny floppy, process the data through its built-in ADC chip, and transfer the digital file to the computer. It was a cumbersome, time-consuming procedure.

Not surprisingly, external ADCs quickly went the way of the dodo when affordable digital cameras began reaching the marketplace. But the concept of post-shooting processing lives on, not in hardware, but as the RAW file format so popular with professional photographers. (See more about RAW in Chapter 5.)

Tip: 8-bit Color Is the Same as 24-bit Color

Sometimes, it feels as though computer jargon is purposely complicated just to make things more difficult. When talking about color inside a camera or scanner, bits are counted per primary color. Since all cameras and scanners use the RGB color model, there are three primary colors: red, green, and blue. When those bits are brought into the computer, they are counted across all the colors combined. Therefore, 8-bit color in your digital camera becomes 24-bit color in your computer.
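
Spelled out in a few lines of Python (the variable names are just for illustration): multiply the bits per channel by the three RGB channels to get bits per pixel, and raise 2 to that power to get the number of colors a pixel can represent.

    bits_per_channel = 8
    channels = 3                                  # red, green, blue
    bits_per_pixel = bits_per_channel * channels  # 24
    total_colors = 2 ** bits_per_pixel            # 16,777,216 colors
    print(bits_per_pixel, f"{total_colors:,}")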
