With more indie filmmakers shooting on video-enabled DSLRs and the introduction of new low-cost HD video cameras, the charge-coupled device (CCD) is now competing with the complementary metal-oxide-semiconductor (CMOS) sensor for the hearts and minds of industrial, government and mid-level professional videographers.
by Wayne Cole
Each technology has its advocates who assert their favorite's superiority over the other. But a more objective view of their respective plusses and minuses might show that the resulting image quality varies more with the talent of the videographer than the imager's construction.
Each sensor is like a tiled surface where each individual tile is a photo-sensitive element. However, they use different methods to sense or sample the incident light energy and to shift that data from the sensor to the image processor for output or recording.
There are two CCD implementations. Virtually all of the surface area of the frame transfer (FT) CCD is used for sensing incident light. The light energy captured by all the photosensitive elements on the FT CCD's exposed surface gets transferred at the same time to a shielded storage area on the chip for access by the image processing system.
[Figure: Two frames show what banding might look like from a CMOS video sensor's "rolling shutter" when an unsynchronized photo flash goes off. Manufacturers now implement image processing to partially suppress such artifacts.]

The larger sensing area and frame-at-once transfer give FT CCDs a larger dynamic range than interline transfer (IT) CCDs; they are not susceptible to smearing and show very little aliasing. However, the design requires a mechanical shutter to shield the light-sensing surface while the frame transfer occurs, increasing the FT CCD's complexity and cost.
For each frame, IT CCDs sequentially transfer the light readings from a single row of photodiodes at a time until all rows of the frame have been sent to the image processor. This helps cut down on the overall size of the chip, since less storage area is required.
However, a portion of each photodiode in the array must be shielded from light to provide a "holding area," or shift register, for the latched sensor value during the line's transfer period. These shift registers can take up as much as 60 percent of the sensor surface, hence the lower dynamic range. Current generations of IT CCDs try to offset this by placing micro lenses over each photodiode to refocus some of the light falling on the shift registers onto the photosensitive surface. If a photodiode is oversaturated, some of its voltage may bleed into the shift registers for that line and into neighboring photodiodes, producing a vertical smear in the highlights.
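To see why charge bleeding into a shift register reads as a vertical streak, here is a toy numerical sketch. This is not a physical model of a CCD; the `bleed_fraction` value and the uniform spread of overflow down the column are assumptions made purely for illustration:

```python
import numpy as np

def simulate_vertical_smear(frame, saturation=1.0, bleed_fraction=0.02):
    """Toy model: any charge above the saturation level at a photo site
    bleeds a small fraction into every row of that column's vertical
    shift register, appearing as a streak through the highlight."""
    smeared = np.clip(frame, 0, saturation)
    overflow = np.clip(frame - saturation, 0, None)  # excess charge per site
    # Sum the overflow in each column and spread it down the whole column.
    column_bleed = overflow.sum(axis=0) * bleed_fraction
    return np.clip(smeared + column_bleed[np.newaxis, :], 0, saturation)

# A dark frame with one grossly over-exposed highlight (e.g., a bare bulb).
frame = np.zeros((8, 8))
frame[3, 4] = 50.0
result = simulate_vertical_smear(frame)
# Column 4 now carries a faint streak from the top row to the bottom row.
```

Even this crude model reproduces the characteristic look: the streak runs the full height of the frame, but only through the column containing the blown-out highlight.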
In-camera digital image processing has greatly reduced IT CCDs' susceptibility to vertical smearing. However, at higher frame rates (i.e., for slow motion) or shorter exposure times, smearing will rear its ugly head.
In both types of CCD, once the photo gate or photodiode's value is shifted into the storage area, it is reset or zeroed in preparation for the next exposure cycle.
Each photo site on a CMOS chip is surrounded by transistors that do some of the image processing "on chip." These transistors take up space, so the light-sensing area of a CMOS imager is similar to that of an IT CCD, and CMOS sensors likewise use micro lenses to partially offset the loss of sensing area. The surrounding amplifiers, however, will not bleed into the other amplifiers in a line or into adjacent photo sites when saturated, so smearing is not possible.
Another advantage over CCDs is that the photo site is not reset after sampling, so in practice individual photo sites could be sampled at different rates. This opens the door to image processing algorithms that deliver significantly better dynamic range than CCDs, along with greater noise reduction and thus higher apparent resolution.
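As a rough illustration of how independent sampling could extend dynamic range, the sketch below merges a hypothetical short and long readout of the same photo sites. The function name, the 4:1 exposure ratio, and the substitute-on-saturation rule are all inventions for the example, not any manufacturer's actual algorithm:

```python
import numpy as np

def combine_exposures(short_exp, long_exp, ratio, saturation=1.0):
    """Where the long readout has clipped at saturation, substitute the
    short readout scaled by the exposure ratio, recovering highlight
    detail a single readout would lose."""
    return np.where(long_exp >= saturation, short_exp * ratio, long_exp)

scene = np.array([0.001, 0.05, 0.4, 3.0])   # true scene radiance per site
long_exp = np.clip(scene, 0, 1.0)           # brightest site clips at 1.0
short_exp = np.clip(scene / 4.0, 0, 1.0)    # 1/4 the integration time
hdr = combine_exposures(short_exp, long_exp, ratio=4.0)
# hdr recovers all four values, including the highlight the long
# exposure clipped: [0.001, 0.05, 0.4, 3.0]
```

The dark sites keep the cleaner long-exposure reading, while the clipped site is reconstructed from the short one, which is the basic idea behind multi-slope and dual-gain CMOS readout schemes.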
CMOS detractors point to "rolling shutter" issues as a show-stopper for CMOS sensors in video. These manifest as distortions caused by rapid motion or by rapid lighting changes in a scene, such as a photo flash. Horizontal motion can make rectangles appear as parallelograms, and rapid vertical motion will either stretch or compress an object vertically, depending on whether the motion follows or counters the direction in which sampling progresses across the sensor. "Banding" occurs when a light source in the scene changes more rapidly than all the rows of photo sites can be sampled, producing frames with horizontal bands that have different light/exposure values.
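A rough sketch shows why a brief flash produces a band rather than a uniformly brighter frame: each row starts integrating light a fixed interval after the row above it, so only the rows whose exposure window overlaps the flash record it. All timing numbers here are illustrative, not taken from any real sensor:

```python
def banded_rows(n_rows, row_time_us, exposure_us, flash_start_us, flash_len_us):
    """Toy model: row r integrates light from r*row_time_us to
    r*row_time_us + exposure_us. Return the rows whose exposure window
    overlaps the flash -- together they form a horizontal band."""
    lit = []
    for row in range(n_rows):
        start = row * row_time_us
        end = start + exposure_us
        if start < flash_start_us + flash_len_us and end > flash_start_us:
            lit.append(row)
    return lit

# 1080 rows read 15 us apart, 4 ms exposure per row, 1 ms flash at t = 8 ms.
band = banded_rows(1080, row_time_us=15, exposure_us=4000,
                   flash_start_us=8000, flash_len_us=1000)
# Only a contiguous slice of rows (267 through 599) sees the flash.
```

The result is exactly the artifact described above: one contiguous stripe of rows exposed by the flash, with normally exposed rows above and below it.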
This progression of CMOS sampling produces the same effect as a curtain with a slit being pulled across the sensor repeatedly, but always in the same direction, hence the term "rolling shutter." What is important to remember is that criticisms often focus on still frames extracted from video. As still photos, such artifacts "ruin" the picture. In a full-motion video context, these artifacts may last a frame or two, and subjectively may not be noticed by most viewers. Banding is more serious because it can cause some display devices to momentarily lose sync and give the appearance of a vertical hold defect.
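The "curtain with a slit" picture reduces to a one-line model: each row samples the scene slightly later than the row above, so a vertical edge on a horizontally moving object lands at a shifted x position in every row. A minimal sketch, with the speed and frame size invented for illustration:

```python
def skewed_edge_positions(n_rows, edge_x, speed_px_per_row):
    """Each row is read out one interval later than the row above, so a
    vertical edge moving horizontally appears at a different x position
    in each row -- a rectangle renders as a parallelogram."""
    return [edge_x + row * speed_px_per_row for row in range(n_rows)]

# A vertical edge at x=10 on an object moving 0.5 px per row-readout interval.
positions = skewed_edge_positions(n_rows=8, edge_x=10.0, speed_px_per_row=0.5)
# The edge slants from x=10.0 at the top row to x=13.5 at the bottom row.
```

Reversing the sign of `speed_px_per_row` slants the edge the other way, and applying the same per-row delay to vertical motion gives the stretch or compression effect described above.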
Should "rolling shutter" be a reason to dismiss CMOS video? If you absolutely must have zip-pans and pushes, or you absolutely have to shoot sports close-ups and use your editing system to create slow-motion effects, then CMOS is not for you. If you shoot mainly talking heads and use slow or controlled pans and pushes, it will not be an issue. Manufacturers are also becoming smarter with image processing, so CMOS rolling shutter artifacts will be greatly reduced as subsequent generations of CMOS video cameras appear.
In the final analysis, CMOS vs. CCD is no more problematic than film vs. video, or uncompressed vs. MPEG. Each possesses its own issues and requires skilled camera operators who know how to produce the best imagery within those limitations. On balance, the advantages of CMOS (sensitivity, resolution, processing flexibility) may more than offset the minor concerns presented by rolling shutter effects.