
One-Chip Color

Several trends in video are leading to broad acceptance of one-chip video devices at all ends of the market spectrum.

by Wayne Cole




Several trends in video are leading to broad acceptance of one-chip video devices at all ends of the market spectrum. Many of the miniature POV cameras used in sports, surveillance, security and field combat or policing operations rely on a single hi-res chip.

High-speed cameras for slow-motion recording, traditionally used in scientific and engineering applications, are now expanding significantly into Hollywood filmmaking and broadcast advertising content creation. Meanwhile, many indie filmmakers are using RED or even less expensive video-enabled DSLRs like the Panasonic DMC-GH1K, Canon EOS-5D Mark II, or Nikon D5000 for HD and 2K projects. So how, exactly, does one get color from a one-chip camera?

EYE OF THE BEHOLDER

In the early days of photography a lot of research went into how the human visual system (HVS) perceives light and color. As you may recall from high school physical science, a light wave's color is a function of its frequency, while its intensity is a function of its amplitude. The two combine in a non-linear fashion (intensity grows with the square of amplitude) to determine the energy carried by a light "particle," wave or beam.


Black-and-white drawing from Bayer's original patent, granted in 1976 (with color added by the author). A 2x2 block from this array is used to form one pixel.

Once the move was made from photo-chemical imaging to electronic imaging, the initial impetus was to encode only intensity, interpreting it as "luminance" for black-and-white pictures.

The first approach to color entailed the use of a prism to split light into separate red, green and blue components before electrically encoding the intensity of each. These components were then added together at each geometrically corresponding picture element (pixel) to reproduce the original color; hence the term "additive color." The transition from Plumbicon tubes and analog sensors to digital processing and CCDs continued to rely on the beam-splitting, additive-color approach for most applications. But industrial and scientific applications such as machine vision, high-speed photography, photomicroscopy and security/surveillance often needed small and rugged units, which precluded the bulk and delicate alignment required for a prism system to work without significant smearing.
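
If you think of each sensor behind the prism as delivering one grayscale plane, additive color amounts to stacking the three aligned planes back together. Here is a minimal Python/NumPy sketch of that idea (the array sizes and values are stand-ins, not data from any real camera):

```python
import numpy as np

# Stand-ins for the three single-channel exposures, one per sensor behind
# the prism. Each is an H x W array of intensities in the range 0.0-1.0.
height, width = 480, 640
red_plane = np.random.rand(height, width)
green_plane = np.random.rand(height, width)
blue_plane = np.random.rand(height, width)

# Additive color: stack the geometrically aligned planes so each pixel gets
# its red, green and blue intensities from the same point in the scene.
rgb_frame = np.dstack([red_plane, green_plane, blue_plane])
print(rgb_frame.shape)  # (480, 640, 3)
```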

In 1976, Bryce Bayer, an Eastman Kodak engineer, was granted a patent for a "Color Imaging Array" that came to be known as the Bayer filter. It defined an array of individual "luminance- and chrominance-sensitive elements" arranged with uniform high-frequency luminance sampling and two different low-frequency chrominance sampling elements. Every other element in a row or column would be a luma element (Y), while the alternating elements in odd-numbered rows would be one chroma element (C1) and the alternating elements of even-numbered rows would be the second chroma element (C2).

Since the HVS is most sensitive to shades of green, Bayer proposed that Y elements be green-sensitive, while C1 would sense blue and C2 would sense red. If each element were made small enough, the combined effect would appear to the HVS very much like a spot of "natural color." A dense enough array of these "spots" would fool the HVS into thinking it was seeing a natural color scene.
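
To see how little data a single chip actually records per photosite, here is a short Python sketch that simulates Bayer sampling by keeping only one primary color at each element of a full-color test image. The checkerboard phase and the function name are illustrative assumptions, not any manufacturer's actual layout:

```python
import numpy as np

def bayer_mosaic(rgb):
    """Simulate a one-chip sensor: keep only one primary per photosite.

    Illustrative layout: green on a checkerboard, blue on the remaining
    sites of even rows, red on the remaining sites of odd rows.
    """
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 1]  # green ("luminance") sites
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 1]  # green ("luminance") sites
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 2]  # blue sites
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 0]  # red sites
    return mosaic

# A tiny synthetic test image: the 4x4 result shows one sample per photosite.
print(bayer_mosaic(np.random.rand(4, 4, 3)))
```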

Shortly after Bayer's patent filing, a number of variants using the same 2x2 smallest element group appeared. They generally added a transparent or "white" element for filters that defined CMYW or RGBW sensing systems.

MAKING IT WORK

Various implementations evolved using thin films, microlenses, and small dichroic mirrors. Electronic filtering in the frequency domain can also be used to implement such filtration. But the end result of popular Bayer-filter sensing, whether on a single CMOS or CCD sensor, is that each element on the sensor is "exposed" to only one primary color of the incoming light. Each 2x2 element block is then mathematically interpolated into a single pixel of the resulting picture.
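
A minimal sketch of that interpolation, under the simplifying assumption that each non-overlapping 2x2 block becomes exactly one output pixel (real cameras use more elaborate interpolation that preserves full resolution), might look like this in Python:

```python
import numpy as np

def demosaic_2x2(mosaic):
    """Collapse each non-overlapping 2x2 Bayer block into one RGB pixel.

    Assumes the green/blue/red/green layout sketched earlier. This is the
    simplest possible reconstruction; practical demosaic algorithms also
    interpolate across neighboring blocks to keep the sensor's full resolution.
    """
    g_even = mosaic[0::2, 0::2]  # green samples on even rows
    blue   = mosaic[0::2, 1::2]  # one blue sample per block
    red    = mosaic[1::2, 0::2]  # one red sample per block
    g_odd  = mosaic[1::2, 1::2]  # green samples on odd rows

    green = (g_even + g_odd) / 2.0        # average the two green samples
    return np.dstack([red, green, blue])  # half-resolution RGB picture

mosaic = np.random.rand(8, 8)      # stand-in for raw single-chip data
print(demosaic_2x2(mosaic).shape)  # (4, 4, 3)
```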

MULTIPLE EYES ON NASA MOON IMPACT

Artist's rendering of the LCROSS spacecraft and Centaur separation. IMAGE: NASA

The Oct. 9 impact of the NASA Centaur module on the surface of the moon was captured by several different devices around the world, as well as by LCROSS (the Lunar Crater Observation and Sensing Satellite), and could change the way scientists look at the moon. NASA TV wasted no time getting the moving images of the impact up on its channel and Website.

Excited NASA officials on NASA TV, speaking from locations including a special event at the Newseum in Washington, spoke of the enormous amount of data the experiment obtained from several Earth-based and orbital stations. "This is NASA at its very best," said one official.

No longer will scientists look at the moon as a desolate, unchanging place, but as a dynamic body changing over time and with plenty of secrets yet to reveal, NASA said. It remained to be seen if the mission would show evidence of water on the moon.

Some reporters at a NASA press conference were disappointed by the lack of a spectacular debris cloud or other obvious visual sign of the impact beyond a crater that looked small on the released video clips. NASA officials said that more answers would be forthcoming but that the data, including spectrometry, would be studied for a long time to come.

The Japanese Aerospace Exploration Agency provided early access to data from its lunar explorer, and the Indian Space Research Organization provided radar mapping data of the lunar poles.

This "demosaic" operation forms the basis of what digital photographers have come to know as the RAW picture format. And if you've ever dealt with it, you know this is not a uniform format; it is generally manufacturer, make and model specific. Due to differences in filtration methods, materials, sensor properties and even a manufacturer's color design preferences, the processor in each single-chip imaging device implements a different, often proprietary algorithm to get from the RAW data to a color image.
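
If you want to peek at that pipeline yourself, one way (an illustrative sketch, not the workflow of any particular camera maker) is the open-source rawpy library, a Python wrapper around LibRaw, which carries demosaic recipes and color data for many camera models; the file name below is hypothetical:

```python
import rawpy  # third-party wrapper around LibRaw: pip install rawpy

# Hypothetical file name; real extensions (.CR2, .NEF, .RW2, .DNG, ...) vary
# by manufacturer, and so does the mosaic layout LibRaw has to know about.
with rawpy.imread("frame_0001.dng") as raw:
    print(raw.raw_image.shape)                 # the un-demosaiced sensor data
    rgb = raw.postprocess(use_camera_wb=True)  # demosaic plus the camera's own white balance

print(rgb.shape, rgb.dtype)                    # e.g. (height, width, 3) uint8
```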

You can think of this RAW color balance as similar to the color characteristics of different film stocks. As a cinematographer, you get to choose a "baseline" by choosing your sensor or camcorder, then make modifications using the sensitivity, white balance, exposure and other controls the unit provides.
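
White balance, for example, typically comes down to per-channel gains applied to that baseline image. A quick sketch, with gain values invented purely for illustration:

```python
import numpy as np

# Stand-in for a linear RGB frame straight out of the demosaic step.
frame = np.random.rand(480, 640, 3)

# Invented white-balance gains for a warm "tungsten" scene: trim red,
# leave green (the reference channel) alone, boost blue.
wb_gains = np.array([0.85, 1.00, 1.60])  # R, G, B multipliers

balanced = np.clip(frame * wb_gains, 0.0, 1.0)
print(balanced.mean(axis=(0, 1)))  # per-channel averages after balancing
```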

The technology to handle the required bandwidth for video recording and display has always limited both the resolution and sensitivity of video capture devices. Three separate sensors made sense in order to partially alleviate these problems. But film imagery was always king in both departments until digital image sensors caught up at the beginning of this century.

For a while, digital camcorders made a big deal of being able to also record digital still images. But recent advances in digital imaging devices make a joke of the still-image resolution possible with a video camcorder. The reverse, however, is not true. Recent DSLRs with video recording capability can take advantage of high-bandwidth solid-state and external disk-based recording systems to record uncompressed HD video with 16- or 32-bit color depths. They have been used successfully for theatrical releases at 2K and 4K resolution for a fraction of the equipment cost of a traditional film shoot.

ONE-CHIP FUTURE?

CMOS has closed the gap with CCDs on sensitivity and resolution without a major difference in cost. To be sure, each technology has some differences for which you need to account in your shooting habits. But the success of Bayer filtering and single-chip imaging in professional digital photography may point to the end of 3-chip video camcorders. And the trend toward professional, video-capable DSLRs may mean we are on the verge of seeing one-chip "multi-media" recording devices whose raw imagery can be batch-processed for video, film, print and Web from a single application. It is clear the technology exists. Now it is only a matter of changing the mindset of photographers, videographers and cinematographers into "mediographers," who in turn will create the demand to which developers and manufacturers will respond.
