Color is a Slippery Trickster

“How do you know, when you think blue — when you say blue — that you are talking about the same blue as anyone else?”

Christopher Moore, Sacre Bleu: A Comedy d'Art

Two questions come up again and again, including in my mailbox:

  • "I have color management properly set up on my computer; why is it that the color is different between an out-of-camera JPEG and, say, Lightroom (substitute with your favorite 3rd-party converter)?"
  • "Why is it that the particular color on a photo is different from the actual color?"

Figure 1. Left image – embedded JPEG, right image – RAW, opened in ACR

Which one has the red umbrella in the correct color?

Though these two questions seem different, they have much in common: answering either requires considering the stages it takes to get from captured data to color, as well as the limitations of color models and output media (mostly monitors and prints) when it comes to color reproduction.


Figure 2. Left image – embedded JPEG, right image – RAW, opened in ACR:

the colors of the red T-shirt (samplers 1 and 2) and blue letters on the sign “New York Police” (samplers 3 and 4) are quite different

The goals of this article are twofold: the first is to demonstrate that out-of-camera JPEGs, including in-camera previews, can’t be used implicitly, without checking, to evaluate color (as we already know, the in-camera histogram is misleading, too). The second is to show that the camera manufacturer-recommended converter is not necessarily tuned to match the out-of-camera JPEG.

Let’s start with an example.

Recently, I got an email from a photographer asking essentially the same question we quoted in the beginning: how is it that the color in an out-of-camera JPEG is nothing like the color of the original subject of the shot? The photographer was taking shots of blue herons and hummingbirds, relying on the previews to evaluate the shots, and was rather confused: the camera was displaying strongly distorted blues in the sky and on the birds. One could say that the camera's LCD and EVF are “calibrated” to an unknown specification, so this “calibration” and the viewing conditions might be what causes the color issue. However, the color on a computer monitor also looked wrong. Naturally, the photographer decided to dig deeper and take a picture of something blue to check the accuracy of the color. The test subject was a piece of stained glass, and… (drumroll, please)… the out-of-camera JPEGs looked off not just on the camera display, but (as expected from examining the shots of birds and sky) on a computer monitor as well.

Here is the out-of-camera JPEG (the camera was set to sRGB, and the photographer told me that setting it to Adobe RGB didn't really make much of a difference). The region of interest is the pane of glass in the middle, the one that looks cyan-ish.


Figure 3. SONY a6500. Embedded JPEG

The photographer said it was painted in a much deeper blue. Obviously, I asked for details and got a raw file.

I looked into the metadata (EXIF/Makernotes) and ruled out any issues with the camera settings – they were all standard. Opening the raw file in Adobe Camera Raw with “as shot” white balance, I got a much more reasonable blue, and the photographer confirmed that it looked much closer to the real thing, maybe lacking a tad of depth in the blue, as if it were from a slightly different palette. So, this is not a problem with white balance. Moreover, the default conversion in ACR proved that the color could be rendered better than in the out-of-camera JPEG, even with a 3rd-party converter.


Figure 4. SONY a6500. RAW opened in ACR

The shot was taken with a SONY a6500, so my natural impulse was to recommend that the photographer use the "SONY-recommended" converter, which happens to be Capture One (Phase One).

One thing to keep in mind as you’re reading this: this is in no way an attack on any specific product. You can check for yourself whether this effect occurs with your camera and preferred RAW converter. The reason we’re using this as an example is simply because it happened to fall into our lap. That said, we certainly wouldn’t mind if SONY and Phase One fixed this issue.

Back to our image. Here comes the unpleasant part.

The first thing I see upon opening the raw file in Capture One is the blue pane of glass covered with an "overexposure" overlay. Easily enough fixed: change the curve in Capture One from "normal film simulation" to linear, and the overexposure indication is gone. Next, I move the exposure slider to +0.66 EV: overexposure doesn’t kick in (it takes +0.75 EV for faint overexposure overlay spots to appear). Here is the result; it's distinctly different in color from what we have in the embedded JPEG, in spite of the fact that white balance was left at “As Shot”, but it's still wrong, with a color shift into purple:


Figure 5. SONY a6500. RAW in Capture One

Let’s have a closer look at the region of interest:

Figure 6. The blue area is the main region of interest

So, let’s reiterate the two points we made at the beginning:

  • First, not only is the JPEG histogram misleading, the JPEG color preview may not be very useful either, be it on the camera or on a computer monitor; check how it is with your camera;
  • Second, for some reason, Capture One renders things differently from SONY, in spite of being "the recommended" and free option for SONY users. Not just differently, in fact, but also, for certain hues, incorrectly.

First digression

When we say “render things differently”, we mean not just the obvious things, like the different color profiles (color transforms: ICC, DCP, or some other format) used in different converters, different contrast (tone) curves, or minor differences in sharpening and noise reduction: we also mean how white balance is applied.

Somehow it is often forgotten that the methods of white balance calculation and application are also different between various converters, leading to differences in color renditions.

Discussions of white balance commonly operate with color temperature and tint values. However, white balance is not measured as color temperature and tint – it is only reported this way. Such a report is an approximation, and there are multiple ways to derive it from white balance measurement data. If you compare color temperature and tint readings for the same raw file in different converters, you will most probably find that the readings differ. That’s because the methods of calculating color temperature (actually, correlated color temperature, CCT) and tint vary, and there is no exact or standard way to calculate those parameters from the primary data recorded in the camera (essentially, this primary data is the ratios of red to green and blue to green for a neutral area in the scene, or for the whole scene, averaged; see the “Gray World” method and its variations).
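To make this concrete, here is a minimal Python sketch (the white balance ratios and both camera-to-XYZ matrices are made up for illustration) that derives a CCT reading from the same neutral-patch ratios using McCamy's approximation; swapping the matrix, which is effectively what different converters do, changes the reported color temperature even though the underlying ratios are identical:

```python
import numpy as np

def cct_mccamy(x, y):
    """Approximate correlated color temperature (McCamy, 1992) from CIE xy."""
    n = (x - 0.3320) / (0.1858 - y)
    return 449.0 * n**3 + 3525.0 * n**2 + 6823.3 * n + 5520.33

# Hypothetical white balance data for a neutral patch, expressed as the
# ratios mentioned above: red-to-green and blue-to-green.
r_over_g, b_over_g = 0.52, 0.71
cam_neutral = np.array([r_over_g, 1.0, b_over_g])

# Two plausible but made-up camera-RGB -> XYZ matrices, standing in for the
# different color transforms two converters might use for the same camera.
M_converter_A = np.array([[0.61, 0.27, 0.10],
                          [0.28, 0.65, 0.07],
                          [0.02, 0.12, 0.92]])
M_converter_B = np.array([[0.66, 0.22, 0.10],
                          [0.30, 0.62, 0.08],
                          [0.01, 0.10, 0.95]])

for name, M in (("A", M_converter_A), ("B", M_converter_B)):
    X, Y, Z = M @ cam_neutral
    x, y = X / (X + Y + Z), Y / (X + Y + Z)
    print(f"converter {name}: CCT = {cct_mccamy(x, y):.0f} K")
```

The same ratios yield different Kelvin readings depending on the matrix used, which is one reason different converters report different color temperatures for the same raw file.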

Consider the Canon EOS 5D Mark II. Across different samples of this camera, the Flash preset color temperature recorded in the EXIF data of unmodified raw files varies from 6089 K to 6129 K. Across a range of Canon camera models, the Flash preset varies from 6089 K on the EOS 5D Mark II to 7030 K on the EOS 60D, reaching 8652 K on the EOS M3. Meanwhile, Adobe converters use 5500 K as the Flash preset for any camera. If you dig deeper, the variations in tint are also rather impressive.

Quite often the color temperature and tint readings differ between converters even when you establish white balance in them using the “click on gray” method.

Some converters calculate white balance back from color temperature and tint, themselves derived using various methods; some (like Adobe) re-calculate the color transform matrices based on color temperature and tint; and some apply the white balance coefficients (those ratios we mentioned above) directly. Obviously, the neutrals will look nearly the same either way, but the overall color changes depending on the method in use and on the color transform matrices a converter has for the camera.

Of course, it is rather strange that Capture One indicates overexposure in its default mode. Opening the raw file in RawDigger or FastRawViewer makes it clear that the raw data is not even close to overexposure; it’s easily 1 1/3 EV below the saturation point [the maximum value in the shot is 6316 (red rectangle on the left histogram), while the camera can go as far as 16116: log2(16116/6316) = 1.35 EV]. If the exposure compensation is raised to +1.5 EV, only 194 pixels in the green channels are clipped (red rectangle in the “OE+Corr” column of the Exposure Stat panel), as the statistics in FastRawViewer demonstrate.


Figure 7.

left part: FastRawViewer - RAW histogram and RAW Exposure Statistics (with & without Exposure Compensation)

right part: RawDigger – detailed RAW histogram, showing the maximum values in channels

So, Capture One is indicating “overexposure” for no good reason, effectively cutting more than 1 stop of dynamic range from the highlights in the default film simulation mode, and about 1/3 EV in linear mode.
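For reference, the headroom arithmetic behind Figure 7 takes a couple of lines of Python; the raw values are the ones quoted above:

```python
import math

raw_max_in_shot = 6316    # maximum raw value found in the frame (Figure 7)
camera_clipping = 16116   # raw value at which this camera actually clips

print(f"headroom: {math.log2(camera_clipping / raw_max_in_shot):.2f} EV")  # ~1.35 EV
```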

Now completely hooked, I downloaded a scene from Imaging Resource that was shot with the same SONY a6500 camera model (a different camera sample, of course).

Let’s compare the embedded JPEG with Capture One’s default rendering, both in sRGB, the embedded JPEG first:


Figure 8. Sony a6500. A6500hSLI00100NR0.arw, embedded JPEG

Now, to Capture One’s “all defaults” rendition:


Figure 9. Sony a6500. A6500hSLI00100NR0.arw, opened in Capture One

I’m left completely mind-boggled: comparing the two side by side, it is easy to see not only the differences in the yellows, reds, deep blues, and purples, but also a different level of color “flatness”; compare, for example, the color contrast and the amount of color detail in the threads:


Figure 10. SONY a6500. A6500hSLI00100NR0.arw:

top row: crops from embedded JPEG;

bottom row: crops from Capture One render

For Figure 10, we are not suggesting you pick the best or the sharpest rendition; we are just pointing out how different the renditions are. Look, for example, at the crayon box. The SONY JPEG renders it as a very cold yellow, nearly greenish, like Pantone 611 C, while Capture One rendered it as a warm yellow, slightly reddish, like Pantone 117 C. The red stripes on the “Fiddler’s” label of the bottle: JPEG – close to Pantone 180 C, Capture One rendition – close to Pantone 7418 C. The deep purple hank (eighth from the right): JPEG – Pantone 161 C, Capture One rendition – Pantone 269 C. Another spot on the crayon box, the strip that is supposed to be green: in the JPEG it is a muted green, while Capture One rendered it into a purer and more saturated variety.

Finally, I took a scene from DPReview, put it through PatchTool, and came up with the following color difference report for the embedded JPEG vs. Capture One’s version (I used the dE94 metric because I think the differences are too large for dE00 to be applicable):


Figure 11. SONY a6500. DPReview.com DSC00879.arw. Embedded JPEG vs. Capture One render
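If you want to run a similar comparison yourself, here is a minimal sketch of the dE94 metric (graphic-arts weighting) used for the report above; the two Lab triplets are placeholders standing in for readings of the same patch from the two renditions:

```python
import numpy as np

def delta_e_94(lab_ref, lab_sample, kL=1.0, kC=1.0, kH=1.0):
    """CIE dE*94 (graphic-arts weighting) between two CIELAB colors."""
    L1, a1, b1 = lab_ref
    L2, a2, b2 = lab_sample
    dL = L1 - L2
    C1, C2 = np.hypot(a1, b1), np.hypot(a2, b2)
    dC = C1 - C2
    dH_sq = max((a1 - a2) ** 2 + (b1 - b2) ** 2 - dC ** 2, 0.0)
    S_L, S_C, S_H = 1.0, 1.0 + 0.045 * C1, 1.0 + 0.015 * C1
    return np.sqrt((dL / (kL * S_L)) ** 2
                   + (dC / (kC * S_C)) ** 2
                   + dH_sq / (kH * S_H) ** 2)

# Hypothetical Lab readings of one patch: embedded JPEG vs. converter render
print(delta_e_94((32.0, 18.0, -42.0), (30.5, 26.0, -48.0)))
```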

The question we’re left with: how is it that the color is so different and so wrong?

How does it happen that different converters render the same color differently and incorrectly?

The real problem is a combination of:

  1. The necessity to substitute out-of-gamut colors
  2. The perceptual non-uniformity of the CIE color model, which is especially pronounced in the blue-purple regions
  3. Sensor metamerism being different from human observer metamerism (more on this later).

Because of that non-uniformity, a constant hue angle does not correspond to a constant perceived hue, and substituting an out-of-gamut color with a less saturated color of the same hue number (we need to decrease saturation in order to fit into the gamut) results in hue discontinuity. "Blue-turns-purple" and "purple-turns-blue" are quite common problems caused by exactly these perceptual inaccuracies of the color model. Another hue twist causes a "red-turned-orange" effect (we showed an example at the beginning of this article). With certain colors (often called “memory colors”), the problem really catches the eye. The same problem also causes a perceived change in color when brightness changes.
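Here is a minimal sketch of that substitution, using the standard CIELAB-to-sRGB formulas (the starting LCh values are placeholders): chroma is reduced at a fixed Lab hue angle until the color fits the sRGB gamut. The hue number is preserved, but because Lab hue lines are not perceptually straight in the blue-purple region, the desaturated result can still read as a different hue:

```python
import numpy as np

D65 = np.array([0.95047, 1.0, 1.08883])
XYZ_TO_SRGB = np.array([[ 3.2406, -1.5372, -0.4986],
                        [-0.9689,  1.8758,  0.0415],
                        [ 0.0557, -0.2040,  1.0570]])

def lab_to_linear_srgb(L, a, b):
    """CIELAB (D65 white) -> linear sRGB, standard formulas."""
    fy = (L + 16.0) / 116.0
    fx, fz = fy + a / 500.0, fy - b / 200.0
    def f_inv(t):
        return t ** 3 if t ** 3 > 0.008856 else (t - 16.0 / 116.0) / 7.787
    xyz = D65 * np.array([f_inv(fx), f_inv(fy), f_inv(fz)])
    return XYZ_TO_SRGB @ xyz

def fit_to_srgb_constant_hue(L, C, h_deg, step=0.5):
    """Reduce chroma at a fixed Lab hue angle until the color fits sRGB."""
    h = np.radians(h_deg)
    while C > 0.0:
        rgb = lab_to_linear_srgb(L, C * np.cos(h), C * np.sin(h))
        if np.all((rgb >= 0.0) & (rgb <= 1.0)):
            break
        C -= step
    return L, max(C, 0.0), h_deg

# A deep out-of-sRGB blue (made-up LCh values): the hue angle stays at 290,
# but the clipped, desaturated color may no longer look like the same blue.
print(fit_to_srgb_constant_hue(30.0, 90.0, 290.0))
```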

One of the things that we have come to expect from color management is consistent and stable color. That is, if color management is properly set up, we expect the color to be the same on different monitors and printers, within the gamut constraints of those output devices.

Color management maintains color in a consistent manner by mapping the color numbers in one color space to the color numbers in a different color space, taking corrective measures when the source color space and the destination color space have different gamuts (those measures depend on the rendering intent chosen for the conversion, and on the particular implementation of that intent). In short, color management is guided by a strict and rather unambiguous set of rules.
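For comparison, here is what that rule-driven behavior looks like in code, using Pillow's ImageCms module as one possible tool (the file names are placeholders): the source profile, the destination profile, and the rendering intent fully determine the mapping, so any color-managed application doing the same conversion arrives at the same result:

```python
from PIL import Image, ImageCms

# Built-in sRGB profile as the source; any output profile (e.g. a printer
# profile) as the destination. "printer.icc" and "photo.jpg" are placeholders.
src_profile = ImageCms.createProfile("sRGB")
dst_profile = ImageCms.getOpenProfile("printer.icc")

transform = ImageCms.buildTransform(
    src_profile, dst_profile,
    inMode="RGB", outMode="RGB",
    renderingIntent=ImageCms.INTENT_RELATIVE_COLORIMETRIC,
)

img = Image.open("photo.jpg").convert("RGB")
converted = ImageCms.applyTransform(img, transform)
converted.save("photo_for_print.jpg")
```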

Why is it that we do not enjoy the same color consistency and stability when converting RAW, even when the utmost care is used to apply the correct white balance?

Another digression

If you have been reading carefully, you may be asking why we are not limiting this discussion to cross-converter consistency. The answer is: a color model used in a converter may be very good in general, but not very suitable for some particular shooting conditions or particular colors. Worse, some types of lights, like the mercury vapor lamps used for street lighting, certain fluorescent bulbs, and some “white” LEDs, have such strong spectral deficiencies that color consistency is out of the question. Oddly enough, some not-so-good color models behave better when dealing with low-quality lights.

And while we are discussing consistency, there is another problem. The question “why do my consecutive indoor sports shots have different color/brightness” is also among the recurring ones. The reason for this is unrelated to RAW processing and affects RAW and JPEGs equally: some light sources flicker. That is, for the same white balance and exposure set in the camera, the result depends on what part of the light cycle you are capturing. For example, ordinary fluorescent lights tend to flicker each half-period of the mains supply frequency. Because of that flicker, it is safe to shoot at a shutter speed of X / (2 × mains frequency), X being 1, 2, 3, …, n, as whole bulb cycles are captured this way; with 60 Hz mains, safe speeds are 1/120, 1/60, 1/40 (if you have it on your camera), 1/30, …, while for 50 Hz they are 1/100, 1/50, …

You can test the lights for flicker by setting your camera to a fixed white balance, like fluorescent, and shooting at different shutter speeds, say, 1/200 and 1/30. If the color changes between the shots, it is the flicker. Of course, nearly the same is true when it comes to shooting monitor screens and various LCDs. If the refresh rate is 60 Hz, for consistent results try shooting with a shutter speed of 2·X/60; again, X being 1, 2, 3, …. Some modern cameras help reduce this problem by synchronizing the start of the exposure with the light cycle. However, for lights with a non-smooth spectrum that changes during the cycle this is not a complete solution; the shutter speed still needs to be set according to the flicker frequency.
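A small sketch of the shutter-speed arithmetic above, assuming lights that flicker at twice the mains frequency:

```python
def safe_shutter_speeds(mains_hz, n_max=5):
    """Shutter speeds that capture whole flicker cycles of mains-powered lights.

    Fluorescent tubes flicker at twice the mains frequency, so 'safe'
    durations are X / (2 * mains_hz) seconds for X = 1, 2, 3, ...
    """
    flicker_hz = 2 * mains_hz
    return [f"1/{flicker_hz // x}" if flicker_hz % x == 0 else f"{x}/{flicker_hz}"
            for x in range(1, n_max + 1)]

print(safe_shutter_speeds(60))  # ['1/120', '1/60', '1/40', '1/30', '1/24']
print(safe_shutter_speeds(50))  # ['1/100', '1/50', '3/100', '1/25', '1/20']
```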

When we attempt to apply familiar color management logic to digital cameras, we need to realize that color management, somewhat tautologically, manages colors; and it can’t be applied directly to RAW numbers, because there is no color in raw image data to begin with. Raw image data is just a set of light measurements from a scene. Those measurements (except, for now, for Foveon sensors) are taken through color filters (a color filter array, CFA), but the regular in-camera filtration (and this includes Foveon sensors) does not result in something one can unambiguously map to a reference color space, hence such measurements do not constitute color. But again, color management deals with color, and for color management to kick in we first need to convert the measurements contained in raw image data to color.

One more digression

Filtrations that result in color spaces that can be mapped to reference color spaces do exist, but currently they work acceptably well only with smooth, continuous, non-spiky spectra – that is, many light sources and many artificial color pigments will cause extreme metameric errors. On top of that, such filters have very low transmittance, demanding an increase in exposure that isn’t acceptable for general-purpose cameras. However, a CFA is not the only possible filtration method, and 3-color filtration has alternatives.

So, what’s the problem? Why can’t we just… convert raw data numbers to color? Well, we can, but it is not an unambiguous conversion. This ambiguity is among the main reasons for the differences in output color.

Why is it ambiguous? Because we need to fit the measurements made by the camera into the color gamut of some “regular” color space: the profile connection space, a working color space, or the output color space. A bit of alchemy is needed here; we’re performing a transmutation between two different physical essences. To better understand the problem, we need to take a short excursion into some color science concepts.

The term “color gamut” is commonly abused; in many cases we hear chatter discussing “camera gamut” and such. Let’s try to address this misconception because it’s an important one for the topic at hand.

Color gamut is defined as the entire range of colors available at an output, be it a TV, a projector, a monitor, a printer, or a working color space. In other words, a color gamut pertains to working color spaces and to devices that render color for output. Digital cameras, however, are input devices. They just measure light, and the concept of a color gamut is not relevant for such measurements: gamut means some limited range, a subset of something, but a sensor responds in some way to all visible colors present in a scene. Also, sensors are capable of responding to color at very low luminance levels, where our ability to discriminate colors is decreased or even absent. More than that: the range of wavelengths a sensor records is wider than the visible spectrum and is not limited by the CIE chromaticity diagram; that’s why UV and IR photography is possible even with a non-modified camera. As you can see, the term color gamut does not apply to RAW. Only the range of relative lightnesses of colors imposes limitations on the sensor response, and that is a wholly different matter – dynamic range.

Thus, a sensor doesn’t have a gamut, and there is no single, standard, or even preferred set of rules defining how we map the larger into the smaller, raw data numbers into color numbers; nothing like what we have in color management. One needs to be creative here, making trade-offs to achieve agreeable, expected, and pleasing color most of the time.

- OK, and what happens when we set a camera to sRGB or Adobe RGB? Those do have gamuts!

- Well, nothing happens to the raw data; only a tag indicating the preferred rendering changes in the metadata, and the out-of-camera JPEGs, including the JPEG preview(s) embedded in the RAW file and the external JPEGs, are rendered accordingly. Whatever color space you set your camera to, only the JPEG data and, consequently, the in-camera histogram are affected. Here is a curveball: pseudo-raw files, like some small RAW variants (sRAW), which are in fact not raw but JPEGs, have white balance applied to them.

Color is a sensation, meaning color simply does not exist outside of our perception, and so we need to map measurements to sensation. In other words, we need a bridge between the compositions of wavelengths (together with their intensities) that our eye registers and the colors that we perceive. Such a bridge, or mapping, is called a color matching function (CMF), or an observer. It tries to emulate the way we humans process the information our eyes gather from a scene. In other words, observers model typical human perception, based on experimental data.
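Here is a minimal numeric sketch of what an observer does. The bell curves below are toy stand-ins for the tabulated CIE 1931 color matching functions (the real ones are measured, not Gaussian); the point is that the spectrum has no color until it is projected onto the observer:

```python
import numpy as np

wavelengths = np.arange(380.0, 781.0, 5.0)   # nm
dlam = 5.0                                   # sampling step, nm

def bell(lam, mu, sigma):
    return np.exp(-0.5 * ((lam - mu) / sigma) ** 2)

# Toy stand-ins for the CIE 1931 x-bar, y-bar, z-bar color matching functions.
x_bar = bell(wavelengths, 600, 40) + 0.35 * bell(wavelengths, 450, 20)
y_bar = bell(wavelengths, 555, 45)
z_bar = 1.8 * bell(wavelengths, 450, 25)

# A toy stimulus spectrum: a narrow, deep-blue light.
stimulus = bell(wavelengths, 460, 15)

# "Color" appears only here: the spectrum is weighted by the observer and summed.
X = np.sum(stimulus * x_bar) * dlam
Y = np.sum(stimulus * y_bar) * dlam
Z = np.sum(stimulus * z_bar) * dlam
print(X, Y, Z)
```

A camera does the same kind of projection, but through its own spectral filters rather than the observer's curves.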

And here comes yet another source of ambiguity: the spectral response functions (SRFs) of the sensors we have in our cameras do not match typical human perception.


Figure 12. Human LMS vs. camera SRF

From Figure 12 it is pretty obvious that there is no simple transform that can convert camera RGB output into what we perceive. Moreover, the above graph is based on data at nearly the hottest exposure possible (white with faint texture is close to 92% of the maximum). When the exposure is decreased (say, the blue patch on a ColorChecker is about 4 stops darker than the white one), restoring the hue of a dark saturated blue becomes more problematic: the red curve flattens a lot, and small changes in the red response are now comparable to noise – but we need that red to identify the correct hue of the blue. Now, suppose you are (mis-)led by the in-camera exposure meter, in-camera histogram, and / or “blinkies” into underexposing the scene by a stop; and there are surely darker blues in real life than that blue patch on the ColorChecker… That’s how color becomes unstable, and that’s how it comes to depend on exposure.

This difference between SRF and LMS leads to what is known as metameric error: wavelength / intensity combinations that look the same to a human (that is, we recognize them as having the same color) are recorded by the camera as different, with different raw numbers. This is especially the case with colors at both ends of the lightness scale, dark colors and light colors, as well as with low-saturation, close-to-neutral pastel colors. The reverse also happens: colors that are recorded the same in raw data look different to a human. Metameric error can’t be corrected through any math, as the spectral information characterizing the scene is absent by the time we deal with raw data. This makes exact, unambiguous color reproduction impossible.

Yet another one

What follows from here is that instead of talking about some vague “sensor color reproduction” deficiencies, we can operate with metameric error, comparing sensors on this defined parameter. Incidentally, this characteristic can be calculated independently of any raw converter, as a characteristic of the camera per se; but it can also be used to evaluate the mappings produced by raw converters. However, measuring metameric error by shooting targets is a limited method. To quote the ISO 17321-1:2012 standard, the method based on shooting targets (the standard refers to it as method B) “can only provide accurate characterization data to the extent that the target spectral characteristics match those of the scene or original to be photographed”; that is, it is mostly suitable for in-studio reproduction work.

To reiterate.

What immediately follows from sensors having no gamut and from their spectral response functions differing from our perception mechanism is this: raw data needs to be interpreted to fit the gamut of an output or working color space (sRGB, Adobe RGB, a printer profile…), and some approximate transform between a sensor’s spectral response functions and the human observer needs to be applied.

There are multiple ways to perform such an approximate transform, depending on the constraints and assumptions involved. Some of those ways are better than others. By the way, “better” needs to be defined here. When it comes to “optimum reproduction”, it can mean “appearance matching”, “colorimetric matching”, or something in between. That is, “better” is pretty subjective: it is a matter of interpretation, and quite often it is an aesthetic call on the part of a camera or raw converter manufacturer, especially if one is using the default out-of-camera or out-of-converter color. It’s actually the same as with film; accurate color reproduction was never the goal for most popular emulsions, but pleasing color was.
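One of the simplest families of such transforms is a single 3×3 matrix fitted by least squares over the patches of a chart. The sketch below uses made-up camera RGB and reference XYZ values; note that minimizing plain XYZ error, rather than a perceptual ΔE or a preference for “pleasing” color, is itself one of the trade-offs just discussed:

```python
import numpy as np

# Hypothetical data: white-balanced, linear camera RGB and measured XYZ for
# six chart patches. Real numbers would come from raw readouts and from a
# spectrophotometer or a target reference file.
camera_rgb = np.array([[0.42, 0.31, 0.20],
                       [0.15, 0.22, 0.41],
                       [0.55, 0.48, 0.35],
                       [0.08, 0.12, 0.30],
                       [0.70, 0.65, 0.52],
                       [0.25, 0.18, 0.10]])
target_xyz = np.array([[0.38, 0.35, 0.22],
                       [0.18, 0.20, 0.45],
                       [0.50, 0.52, 0.40],
                       [0.10, 0.11, 0.33],
                       [0.66, 0.68, 0.58],
                       [0.21, 0.19, 0.12]])

# Least-squares fit of a 3x3 matrix M such that camera_rgb @ M ~ target_xyz.
M, residuals, *_ = np.linalg.lstsq(camera_rgb, target_xyz, rcond=None)
print("camera-to-XYZ matrix:\n", M)
print("per-channel residual error:", residuals)
```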

Earlier, we mentioned that there are two major reasons for output color differences. We discussed the ambiguity; now let’s get to the second one: the procedure and the quality of the measurements used to calculate the color transforms for mapping raw data to color data.

Imagine you are shooting one of those color targets we use for profiling, like a ColorChecker. What light are you going to use for the shot? It seems logical to use the illuminant that matches the one the future profile will be based upon. However, standard color spaces are based on synthetic illuminants, mostly D50 and D65 (except for two: CIE RGB, based on the synthetic illuminant E, and NTSC, based on illuminant C, which can hardly be used for studio lighting – one needs a filter composed of two water-based solutions to produce it). It is rather problematic to directly obtain camera data for D-series illuminants simply because they are synthetic, and it is very hard, if at all possible, to come by studio lights that match, for example, the D65 spectrum accurately enough.

To compensate for the mismatch between the actual in-studio illuminant and the synthetic one, profiling software needs to resort to one of the approximate transforms from studio lighting to standard illuminants. The accuracy of such a transform is very important, and the transform itself is often based not only on accuracy, but also on perceptual quality. Transforms work well only over rather narrow ranges; don’t expect to shoot a target under some incandescent light and produce a good D65-based profile. This, of course, is not the only problem responsible for color differences while obtaining the source data for color transforms; others include problems with the light and camera setup, the choice of targets, and the accuracy of target reference files.
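One common building block of such transforms is chromatic adaptation, for example the Bradford transform sketched below (the patch XYZ value is a placeholder). It moves a measurement from the white point of the actual light to that of a standard illuminant, and it is only an approximation of how adaptation really works, which is part of why the accuracy mentioned above is hard to guarantee:

```python
import numpy as np

# Bradford chromatic adaptation matrix (XYZ -> sharpened, LMS-like space).
BRADFORD = np.array([[ 0.8951,  0.2664, -0.1614],
                     [-0.7502,  1.7135,  0.0367],
                     [ 0.0389, -0.0685,  1.0296]])

WHITE_A   = np.array([1.09850, 1.00000, 0.35585])   # incandescent-like illuminant A
WHITE_D65 = np.array([0.95047, 1.00000, 1.08883])   # synthetic daylight D65

def bradford_adapt(xyz, white_src, white_dst):
    """Adapt an XYZ color from one white point to another (von Kries scaling)."""
    lms_src = BRADFORD @ white_src
    lms_dst = BRADFORD @ white_dst
    M = np.linalg.inv(BRADFORD) @ np.diag(lms_dst / lms_src) @ BRADFORD
    return M @ xyz

# Hypothetical XYZ of a patch measured under illuminant A, adapted to D65:
print(bradford_adapt(np.array([0.30, 0.25, 0.08]), WHITE_A, WHITE_D65))
```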

This is in no way to say that shooting a ColorChecker does not lead to usable results. We provide an example of its usefulness towards the end of this article.

Yet another (but minor, compared to the two above) consideration is that color science is imperfect, especially when it comes to describing the human perception of color (remember those observers we mentioned earlier?). Some manufacturers are using more recent/more reliable models of human perception, while others may be stuck with older models and/or using less precise methods of calculations.

To sum up, the interpretations differ depending on the manufacturer’s understanding of “likeable” color, processing speed limitations, the quality of the data used to calculate the necessary color transforms, the type of transforms (anything from a simple linear matrix to complex 3D functions), the way white balance is calculated and applied, and even noise considerations (matrix transforms are usually smoother compared to transforms that employ complex look-up tables). All of these factors together form what is informally called the “color model”. Since the color models differ, the output color may be quite different between converters, including the in-camera converter that produces out-of-camera JPEGs. And as you can see, it is not always the case that the in-camera converter produces the most pleasant or accurate color.

And thus we feel that we have proved both statements that we made at the very beginning of this article:

  • Out-of-camera JPEGs, including in-camera previews, can’t be used implicitly, without any checking, to evaluate color (as we already know, the in-camera histogram is misleading, too);
  • The camera manufacturer-recommended converter is not necessarily tuned to match the out-of-camera JPEG.

So, we definitely know how we feel about it, but what can we do about it?

What can we do to ensure that our RAW converter renders the colors close to the colors we saw?

A custom camera profile can help with such issues.

We calculated a camera profile for the SONY a6500, based on RAW data extracted with RawDigger from the DPReview studio scene, and used this profile in our preferred RAW converter to open the source ARW. That’s how we obtained the right part of the figure below:


Figure 13. SONY a6500.

Embedded JPEG (left) vs. a rendering using the custom profile, with the blue very close to the actual color in the scene (right).

Here is the report of profile accuracy:


Figure 14. SONY a6500. Profile accuracy report

Looking at the profile accuracy report in Figure 14, one may notice that though the accuracy is pretty good, reds are generally reproduced with less error than blues, and the darker neutral patches E4 and D4 exhibit larger errors than the others. The main reason behind the irregularity in the reproduction of neutrals is most likely that I was forced to use a generic ColorChecker reference, as DPReview does not publish the reference for the target they shoot. Profiling offers an approximation, a best fit, and it might be that the E4 and D4 patches on the target they use deviate from the generic reference rather significantly. The BabelColor web site offers a very good comparison on the matter of target variation.

The imbalance between the error in reds and the error in blues can be attributed mainly to two factors: the first is the use of the generic reference we just mentioned, and the second is the sensitivity metamerism we discussed earlier in the article.

There are some secondary factors to watch, too. It is difficult to make a good profile if the spectral power distribution of the studio lighting is not measured; flare and glare can reduce the profile quality significantly, and so can non-uniformity of the light, whether in intensity or in the spectral composition of the light across the target. However, the flat field mode in RawDigger can help take care of light non-uniformity; please have a look at the “How to Compensate for Non-Uniform Target Illumination” chapter in our Obtaining Device Data for Color Profiling article. You can use RawDigger-generated CGATS files with the free ArgyllCMS, with MakeInputICC (our free GUI over ArgyllCMS; Windows version, OS X version), or with basICColor Input.

As we can see, there is certainly good value in a ColorChecker when it comes to creating camera profiles. Color becomes more stable and more predictable, and overall color is improved. Even when the exact target measurements and light measurements are unknown, a ColorChecker still allows one to create a robust camera profile, taking care of most of the color twists and allowing for better color reproduction. Of course, you can use a ColorChecker SG or other dedicated camera targets, but due to their semi-gloss nature you may find them more difficult to shoot. So, before going to the next level and using more advanced targets, refine your shooting setup to have as little flare as possible, and use a spectrophotometer to measure your ColorChecker target and your studio lights – this often proves more beneficial for color profile quality than jumping to a complex target.

We would like to thank Thom Hogan and Andrew "Digital Dog" Rodney for their input.

3 Comments


Hi Iliah,
Great article. And if I look at all the professional photographers who set white balance by "gusto": many of them don't use a color card!
And how many shoot only out-of-camera JPEG. And even then, only the JPEG is used for the magazine!

Oh my god, why can't the manufacturers implement this in the camera body! Why is it so difficult? Why don't they take the time to implement a "real" raw converter in the camera body? Why don't they want to change to "real" color?
Why does the customer have to go with "their" color? Canon has their color look, Nikon has their color look, Sony has their color look, and so on. That is insane.
And how many people use RAW files and correct the color afterwards with a RAW converter? Less than 20%!
Why .......
You see, if all the other 90% of JPEG shooters do not complain, then nothing will happen! ....

Color appearance

Iliah, when you say, "The perceptual non-uniformity of the CIE color model, non-uniformity that is especially pronounced when it comes to blue-purple color regions", you are referring to CIEL*a*b*, are you not? There are other attempts at perceptually-uniform space that are also based on CIE color matching functions. None is perfect, but they don't all have that specific problem. For example, Bruce Lindbloom created one that he claims maps the Munsell hues to radial straight lines.


Dear Jim,

> you are referring to CIEL*a*b*

Yes.

In RPP we are using Mr. Lindbloom's UpLab, and indeed it behaves much better.
