The Color Pipeline: A Compendium about colorimetry and light perception for the Computer Graphics programmer
This article is a poor (and still confusing) attempt at gathering all the important notions; it is a digest of the enormous amount (!!) of available information about colors, color spaces, color profiles, color correction, color management, color grabbing and color display.
This article has 3 levels of knowledge:
- The first level is the article itself, which attempts to sum up in a very general way what I could grasp of the vast subject that is color perception
- It will sometimes refer to other pages called Colorimetry and Color Profile where technical information regarding specific details is available
- The Colorimetry and Color Profile pages will sometimes themselves refer to even more detailed information and equations (e.g. Color_Transforms, Illuminant_Computation, Image_Metadata)
Quick overview of the pipeline
The typical color pipeline for a photographer involves acquisition by a camera, storage to disk (usually in JPEG or RAW), processing (in Adobe Photoshop, Adobe Lightroom or Gimp), then perhaps another storage stage and finally a hard print to paper.
In CG, the pipeline is much larger and no longer limited to a single, unique path since images can come from different sources (a camera, a hand-painted texture, a rendering package).
Typically, you have the following scenarios for image generation:
- Real Scene → Camera → Storage (real scene acquisition scenario)
- Photoshop → Storage (hand-painting scenario)
- Renderer → Storage (generated scenario)
Then, you optionally have one or more instances of the processing stage:
- Storage → Photoshop → Storage
Finally, the main pipeline:
- Storage → 3D Renderer → Frame Buffer → Display
Obviously, the ultimate goal is to display the exact same color (perceptually speaking) as the one that was originally viewed/captured/painted.
It would be easy if:
- The camera could have the same adaptation range as the eye and store the luminance in a lossless, device-independent HDR format.
- Every stage in the pipeline could work with device-independent, linear-space colors.
- The display device could render the same luminance levels as the ones stored by the camera.
Unfortunately, there are various clipping, compression and transform limitations at each stage that we will attempt to describe in a quick overview of the pipeline.
Acquisition
First, color acquisition by a camera sensor or a scanner is not device-independent at all.
Although, as we will see later, CCD (charge-coupled device) sensors capture a value proportional to the light intensity reaching the sensor, the sensor has 1) a limited range (bit depth) and 2) the RGB color filter has its very own response curve that is camera-specific.
In short, each camera has its own Color Profile. Even 2 cameras of the same model and brand can have 2 different Color Profiles, and a camera will see its Color Profile drift over time due to sensor degradation and aging.
The Color Profile of an image is the most important concept we need to deal with, as it intervenes in every stage of the pipeline. It is inherent to all stages because (almost) all of them work with the RGB Color Space, which is perhaps the most well-known and relevant color space since it is imposed by the hardware. But, as we saw, the sensors/emitters all have their own profile, making them (and the RGB Color Space) inherently device-dependent.
In conclusion: RGB may be the easiest and most commonly used color space, but it is also the least reliable when transferring colors from one device to another. This is why we need Color Management and Color Profiles to ensure the colors stay "the same" throughout the entire pipeline.
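To make the idea of a device-independent "connection space" concrete, here is a minimal sketch (my own illustration, assuming the pixel values are already linear, i.e. not gamma-compressed) that converts linear sRGB to CIE XYZ and back using the standard sRGB/D65 matrices. Converting between two devices amounts to going device A → XYZ → device B, each leg using that device's own matrix taken from its Color Profile.

```cpp
#include <array>

using Vec3 = std::array<double, 3>;

// Multiply a 3x3 matrix (row-major) by a column vector.
static Vec3 mul(const double m[3][3], const Vec3& v) {
    return { m[0][0]*v[0] + m[0][1]*v[1] + m[0][2]*v[2],
             m[1][0]*v[0] + m[1][1]*v[1] + m[1][2]*v[2],
             m[2][0]*v[0] + m[2][1]*v[1] + m[2][2]*v[2] };
}

// Standard linear sRGB (D65 white point) -> CIE XYZ matrix.
static const double sRGB_to_XYZ[3][3] = {
    { 0.4124564, 0.3575761, 0.1804375 },
    { 0.2126729, 0.7151522, 0.0721750 },
    { 0.0193339, 0.1191920, 0.9503041 },
};

// Its inverse: CIE XYZ -> linear sRGB (D65).
static const double XYZ_to_sRGB[3][3] = {
    {  3.2404542, -1.5371385, -0.4985314 },
    { -0.9692660,  1.8760108,  0.0415560 },
    {  0.0556434, -0.2040259,  1.0572252 },
};

// A "color managed" transfer: source RGB -> XYZ -> destination RGB.
// Here both ends are sRGB so the round trip is (almost) the identity;
// with a real device profile you would swap in that device's matrices.
Vec3 srgbRoundTrip(const Vec3& rgbLinear) {
    Vec3 xyz = mul(sRGB_to_XYZ, rgbLinear);
    return mul(XYZ_to_sRGB, xyz);
}
```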
Storage
Image storage, on top of having to deal with different Color Profiles, is a difficult task due to the multitude of image formats and their limitations: there is no one-size-fits-all perfect format.
A few of the commonly used image formats we will be dealing with include JPEG, PNG, TIFF, BMP, GIF, TGA, RAW, DDS and HDR.
Each format can:
- Apply lossy or lossless compression, or no compression at all.
- Store an Alpha Channel or not
- Use 8, 16, 24 or 32 bits per component
- Apply gamma-compression or not
- Use an explicit Color Profile or not
- Store metadata or not
- Use the RGB color space or some other color spaces like YCbCr or CIELAB in the case of TIFF extensions (even CMYK for printing purpose but we will ignore that since it's not part of our pipeline)
Basically, the RGB Color Space is used in almost every case. As stated in the previous Acquisition section, RGB is the method of choice due to hardware constraints but is also a device-dependent format. This is why the most recent formats usually include metadata that give information about the Color Profile the image was created with.
If each stage of the pipeline follows the Color Profile correctly from beginning to end, then ultimately we will display the same colors independently of the display device used, and these colors will be the same no matter how many storage and processing stages the image sustained, no matter the camera or software used to create them.
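As a small illustration of that metadata, here is a sketch (my own, not part of the original pipeline) that walks the chunks of a PNG file and reports whether it carries an embedded ICC profile (iCCP), an sRGB tag, or an explicit gamma value (gAMA). Other formats store the equivalent information differently (EXIF, TIFF tags, ...), but the principle is the same: the pixels alone do not tell you their Color Profile, the metadata does.

```cpp
#include <cstdint>
#include <cstdio>
#include <cstring>

// Scan a PNG file and print which color-related chunks it contains.
// PNG layout: an 8-byte signature, then chunks of
// [4-byte big-endian length][4-byte type][data][4-byte CRC].
void reportPngColorChunks(const char* path) {
    static const unsigned char kPngSig[8] = { 137, 80, 78, 71, 13, 10, 26, 10 };

    FILE* f = std::fopen(path, "rb");
    if (!f) { std::printf("Cannot open %s\n", path); return; }

    unsigned char sig[8];
    if (std::fread(sig, 1, 8, f) != 8 || std::memcmp(sig, kPngSig, 8) != 0) {
        std::printf("%s is not a PNG file\n", path);
        std::fclose(f);
        return;
    }

    unsigned char header[8];
    while (std::fread(header, 1, 8, f) == 8) {
        uint32_t length = (uint32_t(header[0]) << 24) | (uint32_t(header[1]) << 16)
                        | (uint32_t(header[2]) << 8)  |  uint32_t(header[3]);
        char type[5] = { char(header[4]), char(header[5]), char(header[6]), char(header[7]), 0 };

        if (!std::strcmp(type, "iCCP")) std::printf("Embedded ICC profile (iCCP)\n");
        if (!std::strcmp(type, "sRGB")) std::printf("Tagged as sRGB\n");
        if (!std::strcmp(type, "gAMA")) std::printf("Explicit gamma value (gAMA)\n");
        if (!std::strcmp(type, "IEND")) break;

        std::fseek(f, long(length) + 4, SEEK_CUR);  // skip chunk data + CRC
    }
    std::fclose(f);
}
```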
Processing
We will be talking only about Adobe Photoshop here as it is certainly the main tool used by artists in the CG and video game industry. Photoshop has embedded Color Profiles and Color Management since the CS2 version, certainly due to the explosion of the digital camera and LCD display market: the birth of the digital generation.
Loading
Lighting
Writing
Displaying
Summary of the Problems
Limitation of Color Spaces
First of all, we should talk about the various color spaces used.
RGB
XYZ
Color Profiles
Color Acquisition
The color acquisition by a camera is performed by CCD sensors. These sensors commonly respond to 70 percent of the incident light, making them far more efficient than photographic film, which captures only about 2 percent of the incident light.
The sensor is usually covered with a Bayer mask. Each square of four pixels has one filtered red, one blue, and two green (the human eye is more sensitive to green than either red or blue). The result of this is that luminance information is collected at every pixel, but the color resolution is lower than the luminance resolution.
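To make the mosaic concrete, here is a tiny sketch that tells you which filter covers a given photosite, assuming an RGGB arrangement (the actual arrangement varies between sensors); demosaicing then reconstructs the two missing channels at each pixel by interpolating from the neighbours.

```cpp
enum class BayerChannel { Red, Green, Blue };

// Which filter covers photosite (x, y) in an assumed RGGB mosaic:
//   R G R G ...
//   G B G B ...
// Other cameras use GRBG, GBRG or BGGR; only the offsets change.
BayerChannel bayerChannelAt(int x, int y) {
    const bool evenRow = (y % 2) == 0;
    const bool evenCol = (x % 2) == 0;
    if (evenRow  &&  evenCol) return BayerChannel::Red;
    if (!evenRow && !evenCol) return BayerChannel::Blue;
    return BayerChannel::Green;  // the two remaining sites are green
}
```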
Typical digital cameras have a 12-bit CCD, high-end cameras can have a 14-bit CCD, while scientific telescopes or scientific cameras can have a 16-bit CCD.
For a 12-bit sensor, this means each RGB component can take a value in [0, 4095], although the actual maximum is usually limited by noise.
An image is projected through a lens onto the capacitor array (the photoactive region), causing each capacitor to accumulate an electric charge proportional to the light intensity at that location.
This means the RGB values stored internally by the camera are proportional to the perceived light intensity in the Red, Green and Blue parts of the spectrum. But that does not mean these values are standardized and the same for all cameras!
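In practice, the first step is simply to bring those integer counts back to a normalized linear scale, as in the sketch below. The black and white levels used here are hypothetical; the real ones come from the camera metadata, and the result is still in the camera's own device-dependent RGB.

```cpp
// Map a raw 12-bit sensor count to a normalized linear value in [0, 1].
// blackLevel and whiteLevel are hypothetical defaults; real values come
// from the camera metadata (and the result is still camera-specific RGB).
float normalizeRawSample(int rawCount,
                         int blackLevel = 64,      // sensor pedestal (assumption)
                         int whiteLevel = 4095) {  // 12-bit saturation point
    float v = float(rawCount - blackLevel) / float(whiteLevel - blackLevel);
    return v < 0.0f ? 0.0f : (v > 1.0f ? 1.0f : v);  // clamp to [0, 1]
}
```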
Ignoring the minor discrepancies of the optics, the main problem comes from the RGB filters applied to the CCD, which react differently depending on the coating material and method used by the camera manufacturer. Each camera has its own Color Profile, which we will discuss later. But this is where the nightmare begins! Check out a nice video showing you how to correctly set up the color profile of your camera: http://www.pinkbike.com/news/Camera-color-Profile-2010.html.
With pro and semi-pro DSLR cameras (the only ones we will be discussing here), you usually have 2 main options for storage:
RAW
RAW is the method of choice for photographers since you can leave many things (e.g. the white balance and color grading) to the processing stage (typically Adobe Photoshop, Adobe Lightroom).
Raw image files are sometimes called digital negatives, as they fulfill the same role as negatives in film photography: that is, the negative is not directly usable as an image, but has all of the information needed to create an image. Likewise, the process of converting a raw image file into a viewable format is sometimes called developing a raw image, by analogy with the film development process used to convert photographic film into viewable prints. The selection of the final choice of image rendering is part of the process of white balancing and color grading.
Like a photographic negative, a raw digital image may have a wider dynamic range or color gamut than the eventual final image format, and it preserves most of the information of the captured image. The purpose of raw image formats is to save, with minimum loss of information, data obtained from the sensor, and the conditions surrounding the capturing of the image (the metadata).
Although RAW is a good choice for exporting images, it cannot be read back directly by a 3D renderer: the image will certainly require a lot of processing before being saved to a "ready" format that we can later use directly.
Typical processing in the CG pipeline involves white balancing, color grading, level and curve adjustments, as well as a fair deal of hand painting (e.g. tiling) by artists who will transform the raw image into a usable texture. This is described later in the Processing section.
It is important to note that RAW still stores RGB colors in the color profile of the camera and is not a device-independent format.
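To give an idea of what "developing" that camera-space data involves, here is a rough sketch of the first two steps: per-channel white balance gains, followed by a camera-specific 3×3 matrix toward a device-independent space. Both the gains and the matrix below are made-up placeholders; the real values are per-camera (and per-shot for the white balance) and live in the RAW metadata or in a calibrated Color Profile.

```cpp
#include <array>

using Vec3f = std::array<float, 3>;

// First steps of a RAW "develop": white balance, then camera RGB -> XYZ.
// The gains and matrix below are placeholders; real values are per-camera
// (and per-shot for the white balance) and come from the RAW metadata.
Vec3f developCameraRgbToXYZ(const Vec3f& cameraRgbLinear) {
    const Vec3f wbGain = { 2.1f, 1.0f, 1.6f };     // hypothetical as-shot gains
    const float camToXYZ[3][3] = {                 // hypothetical calibration matrix
        { 0.41f, 0.36f, 0.18f },
        { 0.21f, 0.72f, 0.07f },
        { 0.02f, 0.12f, 0.95f },
    };

    // Apply the white balance gains per channel.
    Vec3f wb = { cameraRgbLinear[0] * wbGain[0],
                 cameraRgbLinear[1] * wbGain[1],
                 cameraRgbLinear[2] * wbGain[2] };

    // Project into the device-independent XYZ space.
    Vec3f xyz;
    for (int i = 0; i < 3; ++i)
        xyz[i] = camToXYZ[i][0]*wb[0] + camToXYZ[i][1]*wb[1] + camToXYZ[i][2]*wb[2];
    return xyz;
}
```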
JPEG
JPEG is a very compact format that is useful for storing many images in the minimum amount of space. Although the processing software embedded in cameras is pretty good (since it was calibrated for the camera itself), JPEG has the main disadvantage of forcing the photographer to decide immediately on the white balance and color balance to apply to the image, without the possibility of retrieving the image as it was originally shot.
Also, JPEG is a lossy format that only supports 8 bits per component, losing a significant amount of precision compared to the 12-bit depth of the standard CCD sensor and the lossless RAW format.
Finally, JPEG stores a gamma-compressed version of the image (as will be seen later) that further increases the loss of information.
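Since gamma compression will come up again and again, here is a small reference sketch of the standard sRGB transfer functions: encoding is what squeezes linear light into the 8 bits of a JPEG, and decoding is what must happen before doing any math (lighting, blending) on the pixels. The actual curve a given file uses may differ; this is just the common sRGB case.

```cpp
#include <cmath>

// Gamma-compress a linear value into the sRGB transfer curve ([0,1] -> [0,1]).
float linearToSrgb(float c) {
    return (c <= 0.0031308f) ? 12.92f * c
                             : 1.055f * std::pow(c, 1.0f / 2.4f) - 0.055f;
}

// Invert it: bring an sRGB-encoded value back to linear light.
float srgbToLinear(float c) {
    return (c <= 0.04045f) ? c / 12.92f
                           : std::pow((c + 0.055f) / 1.055f, 2.4f);
}
```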
Taking a Picture
Artist Painting
Color Storage
Color Profiles
When editing an image, the image is always in some color profile. This is called your "working profile" or "working color space." There are three primary working spaces when editing images in Adobe Photoshop: ProPhoto RGB, Adobe RGB, and sRGB. These are ICC profiles designed to be assigned to out-of-camera image files.
When shooting in RAW mode, the color space setting on your camera is essentially an afterthought, since you will be selecting your working space when converting the RAW to an editable file. However, if you are shooting JPEG, it is important to set the ideal setting for color space in your camera. (Source: http://www.steves-digicams.com/knowledge-center/color-management-picking-the-right-working-space.html)
Gamma Correction
Image Formats
JPEG
PNG
TGA
TIFF
HDR
Color Loading
Very important!
We Want Linear Space
Fixing Things
Nuaj' Code
The bitmap class