From Wakapon
Revision as of 23:24, 29 December 2011 by Patapom (talk | contribs)

The Color Pipeline : A Compendium of colorimetry and light perception for the Computer Graphics programmer

This article has 3 knowledge levels:

  • The first level is the article itself, which attempts to sum up what I could grasp of the vast subject (!!) that is color perception
  • The article will sometimes refer to another page called Colorimetry, where technical information regarding specific details is available
  • The Colorimetry page itself will sometimes refer to even more detailed information and equations that will help perform various tasks

This article is a poor attempt at gathering all the important notions: a digest of the enormous amount of available information about colors, color spaces, color profiles, color correction, color acquisition and color display.


Color Spaces

RGB

XYZ

Quick overview of the pipeline

The typical color pipeline for a photographer involves acquisition by a camera, storage to disk (usually in JPEG or RAW), processing (in Photoshop or Gimp), then perhaps another storage step and finally a hard print to paper.

In CG, the pipeline is considerably broader and no longer limited to a single, unique path, since images can come from different sources (a camera, a hand-painted texture, a rendering software).

Typically, you have the following scenarios for image generation:

Real Scene → Camera → Storage (real scene acquisition scenario)
Photoshop → Storage (hand-painted scenario)
Renderer → Storage (generated scenario)

Then, you have optionally one or more instances of the processing stage:

Storage → Photoshop → Storage

Finally, the main pipeline:

Storage → 3D Renderer → Frame Buffer → Display


Obviously, the ultimate goal is to display the exact same color (perceptually speaking) as the one that was originally viewed/captured/painted.

It would be easy if:

  • The camera had the same adaptation range as the eye and stored the luminance in a lossless HDR format.
  • All the various stages in the pipeline worked in linear-space colors.
  • The display device could render the same luminance levels as the ones stored by the camera.

Unfortunately, there are various clipping, compression and transform limitations in each of the described stages.
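The linear-space requirement above deserves a quick illustration (this example is mine, not from the original text, and uses the common 2.2 approximation of display gamma). Averaging two gamma-encoded pixel values gives a different, physically wrong result compared to averaging the underlying linear light:

```python
# Averaging pixels in gamma-encoded space vs. linear space.
# GAMMA = 2.2 is only the usual approximation of a display's response.

GAMMA = 2.2

def decode(v):
    """Gamma-encoded [0,1] value -> linear light."""
    return v ** GAMMA

def encode(v):
    """Linear light -> gamma-encoded [0,1] value."""
    return v ** (1.0 / GAMMA)

black, white = 0.0, 1.0

# Naive average of the encoded values:
naive = (black + white) / 2.0        # 0.5

# Physically correct average, computed on linear light then re-encoded:
correct = encode((decode(black) + decode(white)) / 2.0)

print(naive, correct)  # 0.5 vs ~0.73: a visibly different gray
```

Any stage that filters, blends or lights gamma-encoded values without decoding them first makes this kind of error on every pixel.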


Acquisition

A Bayer filter on a CCD

The color acquisition by a camera is performed by CCD (charge-coupled device) sensors. These sensors commonly respond to about 70 percent of the incident light, making them far more efficient than photographic film, which captures only about 2 percent of the incident light.

The sensor is usually covered with a Bayer mask. Each square of four pixels has one filtered red, one blue, and two green (the human eye is more sensitive to green than to either red or blue). As a result, luminance information is collected at every pixel, but the color resolution is lower than the luminance resolution.
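As a small sketch of the mask layout (assuming the common RGGB arrangement; actual layouts vary by manufacturer), each sensor pixel coordinate maps to the single color filter covering it:

```python
# Assumed RGGB Bayer layout: which color filter covers pixel (x, y)?

def bayer_filter(x, y):
    """Return 'R', 'G' or 'B' for pixel (x, y) in an RGGB mosaic."""
    if y % 2 == 0:
        return 'R' if x % 2 == 0 else 'G'
    return 'G' if x % 2 == 0 else 'B'

# Every 2x2 block holds two green samples, one red and one blue,
# matching the eye's higher sensitivity to green.
block = [bayer_filter(x, y) for y in (0, 1) for x in (0, 1)]
print(block)  # ['R', 'G', 'G', 'B']
```

Reconstructing full RGB at every pixel from this mosaic (demosaicing) is then an interpolation problem, which is part of the RAW development step discussed below.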

Typical digital cameras have a 12-bit CCD, high-end cameras can have a 14-bit CCD, while scientific telescopes or scientific cameras can have a 16-bit CCD.

This means each RGB component of a 12-bit sensor can take a value in [0,4095], although the usable range is usually limited by noise.
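In practice this means a raw 12-bit sample is normalized against a black level (the noise floor) and a white level. The helper below is a hypothetical sketch; the black level of 64 counts is invented for illustration, and real cameras report their own calibration values in the RAW metadata:

```python
# Normalize a 12-bit raw sample to [0,1].
# BLACK_LEVEL = 64 is an assumed noise floor, not a real camera value.

BLACK_LEVEL = 64      # assumed noise floor, in raw counts
WHITE_LEVEL = 4095    # maximum of a 12-bit sample

def normalize12(raw):
    v = (raw - BLACK_LEVEL) / float(WHITE_LEVEL - BLACK_LEVEL)
    return min(max(v, 0.0), 1.0)   # clamp to [0,1]

print(normalize12(64))    # 0.0 -> pure noise maps to black
print(normalize12(4095))  # 1.0 -> sensor saturation maps to white
```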


An image is projected through a lens onto the capacitor array (the photoactive region), causing each capacitor to accumulate an electric charge proportional to the light intensity at that location.

This means the RGB values stored internally by the camera are proportional to the perceived light intensity in the red, green and blue parts of the spectrum. But that does not mean these values are standardized and the same for all cameras!

Ignoring the minor discrepancies of optics, the main problem comes from the RGB filters applied to the CCD, which react differently depending on the coating material and method used by the camera manufacturer. Each camera has its own [Profile], which we will discuss later. But this is where the nightmare begins! Check out a nice video showing how to correctly set up the color profile of your camera: http://www.pinkbike.com/news/Camera-color-Profile-2010.html.
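For linear data, a camera profile essentially boils down to a 3x3 matrix taking the camera's device-dependent RGB to a standard space such as CIE XYZ. The matrix below is made up purely for illustration; a real one is obtained by profiling the specific camera:

```python
# Hypothetical camera-RGB -> CIE XYZ matrix (invented values; a real
# matrix comes from profiling the actual camera).

CAMERA_TO_XYZ = [
    [0.41, 0.36, 0.18],
    [0.21, 0.72, 0.07],
    [0.02, 0.12, 0.95],
]

def camera_rgb_to_xyz(rgb):
    """Apply the 3x3 profile matrix to a linear camera RGB triplet."""
    return [sum(row[i] * rgb[i] for i in range(3)) for row in CAMERA_TO_XYZ]

print(camera_rgb_to_xyz([1.0, 1.0, 1.0]))
```

Two cameras pointed at the same scene produce different device RGB values, but after each is pushed through its own profile matrix, both should land on the same XYZ coordinates.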


With pro and semi-pro DSLR cameras (the only ones we will be discussing here), you usually have 2 main options for storage:

RAW

RAW is the method of choice for photographers since it lets you defer many decisions (e.g. white balance and color grading) to the processing stage (typically Adobe Photoshop or Adobe Lightroom).

Raw image files are sometimes called digital negatives, as they fulfill the same role as negatives in film photography: that is, the negative is not directly usable as an image, but has all of the information needed to create an image. Likewise, the process of converting a raw image file into a viewable format is sometimes called developing a raw image, by analogy with the film development process used to convert photographic film into viewable prints. The selection of the final choice of image rendering is part of the process of white balancing and color grading.

Like a photographic negative, a raw digital image may have a wider dynamic range or color gamut than the eventual final image format, and it preserves most of the information of the captured image. The purpose of raw image formats is to save, with minimum loss of information, data obtained from the sensor, and the conditions surrounding the capturing of the image (the metadata).

Although RAW is a good choice for exporting images, it cannot be read directly by a 3D renderer, as the image will certainly require a lot of processing before being saved to a "ready" format that we will later be able to use directly.

Typical processing in the CG pipeline involves white balancing, color grading, level and curve adjustments, as well as a fair deal of hand painting (e.g. tiling) by artists that transforms the raw image into a usable texture. This is described later in the [Processing] section.
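Of these steps, white balancing is simple enough to sketch here (my example, not from the original text): scale each channel so that a patch known to be neutral gray comes out with equal R, G and B. The patch value below is invented for the example:

```python
# Hypothetical white-balance step on linear camera RGB.
# gray_patch is an invented reading of a neutral gray card.

gray_patch = [0.40, 0.50, 0.55]   # linear camera RGB of a gray card

# Per-channel scale factors that make the patch neutral
# (green kept as the reference channel):
wb = [gray_patch[1] / c for c in gray_patch]

def white_balance(rgb):
    """Apply the per-channel white-balance gains."""
    return [c * s for c, s in zip(rgb, wb)]

balanced = white_balance(gray_patch)
print(balanced)  # all three channels now equal
```

The same gains are then applied to every pixel of the image, which is why this must happen on linear data: on gamma-encoded values, a per-channel scale shifts hues instead of neutralizing them.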

It is important to note that RAW still stores RGB colors in the color profile of the camera, as RGB is not a device-independent format.

JPEG

JPEG is a very compact format that can be useful for storing many images in a minimum amount of space. However, although the processing software embedded in cameras is pretty good (since it was calibrated for the camera itself), JPEG has the main disadvantage of forcing the photographer to decide immediately on the white balance and color balance to apply to the image, without the possibility of retrieving the image as it was originally shot.

Also, JPEG only supports 8 bits per component, losing a significant amount of precision compared to the 12-bit depth of a standard CCD sensor.
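To make that loss concrete, here is a small illustration (mine, not from the original text) of how many distinct levels survive when a 12-bit sample is requantized to 8 bits:

```python
# Precision lost when a 12-bit raw sample is stored as 8 bits:
# roughly 16 distinct raw values collapse into a single 8-bit code.

def to8bit(raw12):
    """Requantize a 12-bit value [0,4095] to an 8-bit value [0,255]."""
    return round(raw12 / 4095.0 * 255.0)

distinct_12bit = 4096
distinct_8bit = len({to8bit(v) for v in range(4096)})
print(distinct_12bit, distinct_8bit)  # 4096 vs 256
```

And since JPEG stores gamma-encoded values, the loss is not even spread uniformly: shadows keep relatively more codes than highlights, which is the whole point of gamma encoding discussed in the [Gamma Correction] section.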


Storage

Processing

Photoshop

Loading

Lighting

Writing

Displaying

Color Acquisition

Taking a Picture

Artist Painting

Color Storage

Color Profiles

When editing an image, the image is in some particular color profile. This is called your "working profile" or "working color space". There are three primary working spaces when editing images in Adobe Photoshop: ProPhoto RGB, Adobe RGB, and sRGB. These are ICC profiles designed to be assigned to out-of-camera image files.

When shooting in RAW mode, the color space setting on your camera is essentially an afterthought, since you will be selecting your working space when converting the RAW to an editable file. However, if you are shooting JPEG, it is important to choose the right color space setting in your camera. (Source: http://www.steves-digicams.com/knowledge-center/color-management-picking-the-right-working-space.html)

Gamma Correction

Image Formats

JPEG

PNG

TGA

TIFF

HDR

Color Loading

Very important !

We Want Linear Space

Fixing Things

Nuaj' Code

The bitmap class


Lighting in Linear Space

Writing (to the FrameBuffer)

Monitor display

CRT

LCD

Plasma