Be aware that the common camera (single gray) histogram does NOT show your color image's actual data. There are two types of these histograms, and they are not at all the same thing.
Example: a Nikon D300 camera rear LCD display showing the result of a picture that is entirely one orange color (for histogram simplicity). FWIW, the picture the D300 took (camera A mode) was of my computer monitor screen, filled with the orange color RGB (250, 70, 50). One single color, with three RGB components. This histogram seems to show an auto exposure of reasonably normal data near mid-scale, with certainly no risk of overexposure clipping, right?
Wrong. Histograms can also show the three individual Red, Green, Blue channels (the actual real data we should use), but the one-channel gray histogram shows only a simulated computed value called luminosity. This single gray histogram is just a math computation, a contrivance, and is NOT the real image data any more.
This single graph is a computed value called Luminosity, which instead shows the equivalent grayscale brightness of the colors in this image, i.e., how bright it would appear if seen on a grayscale display (it uses the NTSC television standard, to match grayscale to the human eye's response to colors). The luminosity data shows this image would appear about mid-scale brightness if it were grayscale, which is also how B&W film or B&W TV would show it. Why this is prominently offered, I dunno. Because of course, our RGB camera data is Not grayscale, this is Not the actual image data our camera records, and we are usually not concerned with grayscale today. However, we do need to be aware of clipping in any of the three RGB channels, which this single gray histogram simply cannot show. We definitely need to watch the three RGB histograms.
The color photographed was intended to be RGB (250, 70, 50), but it came out more like RGB (255, 101, 80) = Luminosity 145, slightly above mid-scale. Of course, the monitor screen being photographed may not have been accurate either, but now all three RGB components are a bit higher, and the red channel is clipped. The point is, this is our result, and the single Luminosity graph does Not begin to show that.
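A minimal sketch of that situation (illustrative only, not any camera's actual firmware), using the NTSC luminosity weighting given later on this page, shows why the single gray histogram looks safe while a channel is actually clipped:

```python
def luminosity(r, g, b):
    """Grayscale brightness as the eye perceives it (NTSC weights)."""
    return round(r * 0.3 + g * 0.59 + b * 0.11)

def clipped_channels(r, g, b):
    """Which RGB channels are piled up at 255 (clipped)?"""
    return [name for name, v in (("R", r), ("G", g), ("B", b)) if v >= 255]

# The orange test image came out RGB (255, 101, 80):
print(luminosity(255, 101, 80))        # 145 -- mid-scale, looks safe
print(clipped_channels(255, 101, 80))  # ['R'] -- but red is clipped
```

The luminosity value of 145 sits harmlessly near mid-scale, while the per-channel check immediately reveals the clipped red channel.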
The down arrow control on the camera scrolls through several data screens, one of which has the three (actual) RGB graphs. This same image shows rather different actual RGB data. The red channel is obviously clipping, which needs our attention (red in this instance, but it could be any of the three channels). The single histogram does not show this, nor does it show accurate Green or Blue data positions.
Do notice that the single gray histogram peak is NOT in a location where any of the red or green or blue data actually is. In more wide range images, the single histogram might sometimes coincidentally appear to nearly agree, but the single gray histogram is simply the wrong concept, and is unable to represent clipping. The single gray histogram is not actual real data, but is instead just a mathematical abstraction for other purposes. It is a false representation, not actually exposure, NOT real data. We must instead look at all of the three RGB histograms.
The single Luminosity histogram cannot give hints that the RGB data is clipping. It shows very different numbers computed for a different purpose. The RGB data is the Real image data - the single gray histogram is not real data.
If selecting Monochrome pictures in the camera, then all four histograms will be identical. That is not actually what the RGB sensor exposure was, but all channels are made identical to the single gray channel then. But if shooting color, they are quite different things.
(Numerical details of Luminance below)
If not seeing three RGB channels, on a Nikon DSLR, the Playback Display (in Playback menu) enables the three RGB histograms on the LCD, if not already enabled. You can also enable Highlights there, to cause the clipped pixels to blink warning.
We should always watch only the three Actual RGB channels. The camera's single histogram is NOT the real data; it does not show sensor clipping meaningfully. Generally, our only purpose in looking at the histogram is to ensure overexposure does not stack up pixels at 255 (clipping). Luminosity does not show that.
This mention of a "single" histogram speaks only of the camera's single gray luminosity histogram. This is NOT speaking about Photoshop Levels, which actually shows all three channels overlaid onto one graph, all at the proper positions.
When Adobe software (Lightroom, Photoshop, etc) shows the same corresponding actual D300 camera image (same orange image as represented by the LCD display above), its histogram is a different type, the good type at right. It shows the three actual RGB channels overlaid in place, all in the right place (nothing is shifted). It appears as one single display, but it is RGB data, and NOT luminosity. The Photoshop menu Window - Histogram does have an option to show Luminosity.
The ACR (Adobe Camera Raw) histogram tries to differentiate color channels (top right).
The Photoshop Levels histogram does not show colors (bottom right). However, both histograms are the same data and alignment. In both, the three RGB channels are simply shown overlaid in place, showing real data values. Basically, for each horizontal position 0..255, that column shows the tallest count in any channel (unshifted). Histogram data is the number of pixels that have that 0..255 value. We are just looking for clipping, and we normally don't care which channel is clipping. But note that for this same image, the Adobe histograms show the actual data shape, very different from the camera's single gray histogram (which does not).
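That column-wise overlay can be sketched in a few lines (a simplified illustration, not Adobe's actual code; the tiny four-pixel channels are hypothetical):

```python
# Hypothetical four-pixel image, one flat list per channel:
red   = [255, 255, 254, 255]
green = [101, 102, 101, 100]
blue  = [80, 80, 81, 79]

def channel_counts(channel):
    """Count how many pixels have each value 0..255."""
    counts = [0] * 256
    for v in channel:
        counts[v] += 1
    return counts

r_counts, g_counts, b_counts = map(channel_counts, (red, green, blue))

# Overlaid display: each column shows the tallest count in any channel.
overlay = [max(r_counts[i], g_counts[i], b_counts[i]) for i in range(256)]

print(overlay[255])  # 3 -- three red pixels piled up at 255 (clipping)
```

Nothing is shifted: each channel's counts stay at their true 0..255 positions, so a pile-up at the right end is immediately visible.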
Photoshop does also have a Histogram Window (as opposed to Levels above) which can show different modes, including Luminosity.
FWIW, since all the image pixels are about the same in this orange image, we would imagine the counts of the RGB components of those pixels would be about the same (same histogram height). But the red channel is clipped, becoming a narrow spike piled up at 255, with fewer shades of red now, and an accumulated higher count at 255. But green and blue still have a wider color distribution, reducing their counts over more individual tone shades. None of this is particularly important; we only want to know if the exposure is clipping or not. These individual RGB curves show this; the camera's single gray luminosity histogram does not. Clipping loses detail in the clipped highlights, and clipping one channel changes the color.
All histograms reach full height. Height is a count of pixels of each tone value, which only has relative meaning in histograms. Histograms are shown normalized, intentionally scaled so that the top of the tallest peak always reaches full scale height. Scale is relative, shown percentage-wise. This is intentional to enlarge the lowest values along the base line to be easier to see there. Note that EVERY histogram on this page (and elsewhere) always reaches full height. Note if there is no tall peak, then the entire curve is raised toward the top. So height is relative, and absolute height has no meaning in histograms, except a tall spike does indicate "many pixels" of that color value, many pixels representing a large area of that one color in the picture. The exact count is not important. These many pixels might be scattered into many areas of the picture, or it might be one large area. See "Determining which pixels are clipped", below.
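The normalization described above can be sketched directly (an illustration of the idea, not any particular program's code):

```python
def normalize(counts, full_height=100):
    """Scale histogram counts so the tallest peak reaches full height."""
    peak = max(counts)
    return [c * full_height / peak for c in counts]

# Hypothetical tiny histogram with one tall peak:
counts = [2, 50, 4, 1]
print(normalize(counts))  # [4.0, 100.0, 8.0, 2.0] -- peak now full height
```

The absolute counts are discarded; only the relative shape survives, which is exactly why every displayed histogram reaches the top and why height alone tells us nothing about exposure.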
Notice in these Adobe histograms here, the red peak (a clipped narrow peak at 255) is about double height of the green or blue (which are spread over a wider range, decreasing individual color pixel counts). But in the three individual RGB histograms at page top, all three are full height. Histograms are scaled to be full height. Height does not mean the peak is clipping, height is just a count of pixels (area), not related to intensity. And height is scaled so the peak reaches the top. Only a location at the far right end means clipping (data cannot exceed 255, so clipping just piles up there, instead of representing a correct higher value). So the absolute height is not important, but it is extremely important if the peak is piled up at the right end at 255, denoting clipping.
A peak in the middle somewhere doesn't mean anything more than that there was some larger area of the same color tone there. That is a property of this specific scene, it simply contained more area of that color. A histogram peak does NOT mean intensity or exposure, it only marks a larger area of that color tone, in location from 0 to 255. Only a peak at the right end (255) means overexposure. Values trying to be greater than 255 simply cannot, they get clipped to stack up at 255. Eight bit data is limited to values 0 to 255. The numbers 256 or more cannot be stored in 8 bits, so they pile up at 255. A histogram shows the distribution of TONES in the image. They are what they are, and we normally really don't care what they are, except a peak piled up at 255 shows clipping, which is not recoverable, except by reducing camera exposure.
"Average" scenes... Many scenes contain a wide range of colors, and usually have some bright colors, a white shirt or cloud or church steeple or picket fence or porcelain dish or pizza sign... even the whites of eyes. Of course other colors can be very bright too, yellow or red or green, etc. The concept of bright means the histogram data should normally extend towards the right end, but it should not actually touch the end to clip (possible exceptions here too, depending). The very vague and general assumption is that most of our pictures have a wide range of color, and probably should Not have much empty gap at either end of the histogram data. This "depends" of course, there are many exceptions, a picture of the proverbial black cat in a coal mine ought to look near black. There is no general rule, we have to be guided by the specific image itself, how we want it to look. The histogram does not define the image, the histograms shape depends on what the image tones are, and they are what they are. But the histogram does help to interpret the specific image. A little experience makes a big difference in skill. That means it is time to get started.
"Contrast" in grayscale means blacker blacks and whiter whites. Grayscale prints are kind of bland without some areas of real black and real white, to give them more dramatic contrast, which we can add (Ansel Adams prints for example, he always insisted on some area of pure black and some of pure white). Contrast means blacker blacks and brighter whites, to have wider and more dramatic range. A little intentional clipping (at both ends) is often done in grayscale for increased contrast (so long as it helps and does not hurt).
Color also creates contrast for color prints, and it's not hard to get too much tonal contrast in color images, so we are less extreme for color than for grayscale. Clipping can change colors in color images. And while it always "depends", this contrast definition vaguely and generally suggests our histogram range should be filled from end to end (more contrast). In the histogram Levels tool, raising the Black Point, and lowering the White Point, does this, for wider range, more contrast. A standard Contrast tool also does exactly that, it simply squeezes both ends together in equal amounts (sort of an unthinking way to do it). But really, more selective attention relating to what the specific image needs is usually better than the general Contrast tool (which simply squeezes both ends, regardless). Digital cameras normally have plenty of black end data, but scanners may not, often needing special attention there. In raw editors, increasing the raw "Exposure" slider lowers the white point, boosting brightness and increasing contrast (whiter whites).
In Adobe software, the histogram represents the marked selection area.
Any histogram always shows 256 levels, even for 16-bit data. A histogram is 256 pixels wide; our video monitor width simply cannot deal with the 65536 values of 16-bit data.
Histograms show Gamma Data: The important thing you might not realize is that the data shown in the histogram represents the gamma-encoded RGB data in our image files. The histogram we see is NOT linear data, as implied in almost all examples. The middle gray midpoint is no longer at 128 mid-scale; instead, linear 128 is gamma-encoded to be value 186, at about 3/4 scale. An intentional one-stop underexposure moves 255 down to about 186 in the histogram; it is NOT shown at 128. This is never exact; it will always vary some due to the camera also manipulating the data with white balance and contrast and color profiles (like Vivid). But the histogram is not linear data. See more about gamma.
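That 128-becomes-186 claim is easy to verify with a simple power-law gamma of 2.2 (a sketch; real sRGB encoding adds a small linear toe near black, so exact values differ slightly):

```python
def gamma_encode(linear_255, gamma=2.2):
    """Gamma-encode a linear 0..255 value (simple power-law sketch)."""
    return round(255 * (linear_255 / 255) ** (1 / gamma))

print(gamma_encode(128))  # 186 -- linear mid-scale lands near 3/4 scale
print(gamma_encode(0))    # 0   -- end points never move
print(gamma_encode(255))  # 255 -- so gamma cannot hide or cause clipping
```

Note the end points: 0 and 255 (normalized 0 and 1) map to themselves at any gamma, which is why gamma shifts the histogram's interior but never affects clipping at the ends.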
The image shown above was a Raw image, which has significance.
Except for full bright direct sun, we never really know the proper white balance, because there are many WB variations of every other lighting: incandescent, CFL and fluorescent, flash, cloudy or shade, whatever. And the camera has only crude controls even if we did know. We can use Auto white balance to guess at it for us, but Auto WB is not all that reliable; it has no way to actually know what the light or subject colors were. So that's the beauty of raw: we can correct it properly after we can see it. But setting white balance can cause clipping.
Raw is raw, and the raw data has no camera settings in it (raw data contains no white balance, no Vivid or contrast, no saturation, no gamma, etc.). It is raw data, like raw meat, totally unprocessed. Raw data cannot even be shown on the camera's RGB LCD, so the raw file also contains an embedded JPG image (a Large JPG, so we can zoom it on the LCD), which does have the camera settings in it, with camera white balance and gamma, etc. The JPG is actual RGB data which can be shown on the LCD, and the camera histogram is also computed from this JPG data. This JPG histogram has the camera settings in it, but it is not necessarily very close to how we might adjust the image later in the raw editor. I don't trust Auto WB reliability, but it does try, so I do use it in the camera with raw, only to approximate a hopefully correct white balance on the rear LCD and in the histograms. But then I correct WB in Raw. Hopefully the two settings will be in approximate agreement, but if very different, it is possible some clipping can occur. Or, since we can always easily boost exposure in raw, an intentional mild underexposure precaution is really no big deal, always easily corrected properly in raw. We often have to tweak exposure anyway, after we can see it.
For raw images, Photoshop shows the histogram after ACR conversion to RGB and gamma is added. But White Balance is also added.
Gamma is no issue, it shifts things temporarily, but NEVER shifts the end points of 0 and 255, which are normalized to 0..1 for gamma, so the end points are absolutely fixed in gamma (0 or 1 to any power are still 0 or 1). The only histograms that we ever see do show gamma data. But gamma is always removed by the monitor or printer, so our eye never sees gamma data. We hope our eye always sees an accurate reproduction of the original linear scene.
However, adjustments like White Balance and Contrast and Saturation can shift data past the end point. Raising the color temperature higher (like toward Shade WB) tends to push the red channel into clipping, detrimental to highlight detail (same happens to JPG too, but at least the histogram shows it happened then). Lowering color temperature (toward Incandescent WB) raises the blue channel and can lose shadow detail. Both can cause clipping.
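A white balance correction is, roughly, a per-channel gain, and a gained value above 255 simply piles up at 255. A minimal sketch (the 1.2 gain is an illustrative number only, not any real camera's WB coefficient):

```python
def apply_gain(value, gain):
    """Apply a channel gain, clipping at the 8-bit maximum of 255."""
    return min(255, round(value * gain))

red = 230                    # bright, but not clipped in the raw data
print(apply_gain(red, 1.2))  # 255 -- a warmer WB pushed red into clipping
print(apply_gain(red, 1.0))  # 230 -- a neutral gain leaves headroom
```

This is why a warmer WB (higher gain on red) can clip highlights that the sensor itself never clipped, and why the same exposure can be fine under one WB setting and clipped under another.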
We rarely know what exact white balance should be when shooting, but my reasoning is that if the camera used an approximately correct white balance, even Auto WB, maybe it sees some of this, so that the embedded JPG histogram might indicate clipping at shoot time, when I can see it. Then if I correct the exposure, my own raw adjusted amount should come out similar later. But otherwise, expect that extremes of color temperature can cause clipping (JPG too). So even for raw, which does not use camera settings, it is good for the camera settings to approximate the final expected white balance (not because it affects the raw file, but because we can see a reasonable histogram at shoot time). We will see the same approximate result when we properly correct the similar raw file white balance. I just use Auto WB with raw, not because I think Auto WB is good, but because I think it is often halfway ballpark (good enough for histogram), and I never have to give WB a thought until later (there are still a few surprises, but normally not too serious).
White Balance can drastically affect the RGB histogram, shifting the red and blue channels. Warmer WB (daylight, cloudy, shade) moves blue left and red right (spreads them in this case). Cooler WB (incandescent) is the opposite, moving red and blue the opposite directions (in this case, compresses them together). White balance corrections typically do align the white peaks at the right end (but not in this case, because this case is one orange color, there are no white peaks). Both graphs below (in ACR) are this same orange image as before, just two different White Balance corrections from Raw. The JPG histogram will match the camera settings, but Raw processed later may not.
Next is a real photo case, a portrait that temporarily includes a white balance card. The tall spike at right is the white card. Neutral white is a special color that has (should have) equal RGB components. If we know the spike should be neutral, we can simply adjust the peaks to align. Or if we click on a known neutral color with the WB Tool, it will do that. Then when a known neutral color is made to be neutral, we have perfect white balance.
Note that the WB Temperature slider runs from Blue to Yellow. There is also a Tint slider that runs from Green to Magenta (two Lab Color axes). As seen in our RGB histogram, Temperature moves Blue and Red peaks in opposite directions. This effect is better seen in this animated image which is from the White Balance page 2 (the action is correcting a white card peak).
This next rose image used Daylight WB in the camera (for the LCD and histogram when I shoot Raw), and then I would add Daylight WB when processing the Raw version. The camera settings (WB, Contrast, saturation, Vivid, etc) do not affect the Raw image (Raw is Raw), but do affect this JPG and camera histogram. Then we probably do similar things in the Raw software, but not necessarily the same as the LCD JPG shows.
White Balance can cause clipping, often in red, routinely on red flowers, since Daylight WB shifts Red up and Blue down. This exposure was already set to -2/3 EV Exposure Compensation, but it still surprised me. I intentionally left the overexposure to show here.
This is a more typical photo than the orange one, here of a red rose (D800 LCD, camera A mode). Some moire in the photo of the LCD screen now (very slight defocus would be a good thing for closeup pictures of LCD screens).
These three RGB channels are where your attention should be. Clipping in any one is a bad thing. Here, the single luminosity or grayscale channel (top of the four) does not show any clipping at 255 (no blinking), but the Red channel does. Nor does the gray channel show any clipping at the black end, but the blue and green channels do (which is common, since more exposure is the only thing that can help show shadow detail). Again, the single gray channel is NOT real data. All four channels have a peak reaching the top, but clipping is about reaching the right end.
The single luminosity histogram is NOT showing real data. It is just a mathematical manipulation, trying to show a concept about grayscale brightness values (called luminance). It fails to show clipping of the real data.
On some models, the "Select" message at bottom is how the "Blinkies" can be assigned to watch any one channel. It is set to the red channel here, and the blinking now makes it harder to photograph the LCD. :) The Luminosity histogram might blink when bad enough (it did not blink here), but the meaning and real data are only in the individual RGB channels.
If any individual channel is clipping, then back off on the exposure, or with -EV Exposure Compensation, until there is no clipping. We might sometimes choose to clip intentionally later in the editor (for greater contrast), but clipping in the camera is unrecoverable, no choice then, no going back.
For example, to judge if a little clipping is OK or not: clipping loses detail in the areas where all tones are the same 255, so it matters what is clipped. We may not care (sky, for example, has no detail to lose, but clipping in one channel can shift the color). But we certainly don't want clipping in our important detail.
In Adobe Levels (CTRL L in Elements and Photoshop), hold the ALT key down (Option key on a Mac) while touching or slightly moving the White Point slider with the mouse. If Preview is checked, the image will go black, and only the clipped pixels will show. If there is no clipping, nothing shows until you lower the White Point to cause it. This will change as you move the White Point slider, reflecting value at slider position. This works at the black end too, holding ALT with the Levels Black Point slider.
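Conceptually, what that ALT preview shows can be sketched like this (a simplified one-channel illustration of the idea, not Adobe's actual implementation):

```python
def clipping_preview(pixels, white_point=255):
    """Black out everything below the White Point; show only clipped pixels."""
    return [v if v >= white_point else 0 for v in pixels]

# Hypothetical flat list of one channel's pixel values:
pixels = [120, 250, 255, 200, 255]

print(clipping_preview(pixels))       # [0, 0, 255, 0, 255]
print(clipping_preview(pixels, 245))  # [0, 250, 255, 0, 255]
```

Lowering the White Point (second call) reveals more pixels, matching the behavior described above: nothing shows until the slider position reaches down into the data.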
In Adobe Camera Raw (Lightroom, Photoshop, Elements), the Exposure slider is the White Point Slider, same thing, hold ALT key to see what gets clipped. Increasing Exposure will clip more and more. The Blacks slider is the Black Point slider.
My notion of a good rule of thumb is that skin highlights on the faces of portraits should not exceed 240, if even quite that, maybe 238 or 235. A mouse-over can show that, and the ALT key can show us the lay of the land there.
Our camera image is RGB color, but when the camera shows only the single gray histogram, it is showing luminosity, an artificial number which is not the actual RGB data. It is the computed grayscale brightness equivalent of the colors, as perceived when viewing this image on a B&W monitor screen or print. Green looks brighter to human eyes, and blue looks darker, and grayscale should come out that way (B&W film does). The Luminosity formula computes this grayscale difference by "weighting" the RGB components differently, to match the brightness the eye perceives them to have in grayscale. The idea is about which grayscale tone properly shows red lipstick or green grass, etc.
Luminance is computed from this NTSC television formula (designed for B&W cameras recording color scenes for B&W TV):
Luminosity = Red x 0.3 + Green x 0.59 + Blue x 0.11. If now RGB (255, 101, 80), this comes out as:
Luminosity = 255 x 0.3 + 101 x 0.59 + 80 x 0.11 = 145, the Sum is slightly above mid-scale, far from indicating clipping.
The three-channel sum is one grayscale luminosity value for each pixel. If our scene had contained many shades of red, green, and blue, then our pixels would have had many sums, and a more continuous histogram. But this scene was intentionally simple, one single RGB color. The luminosity histogram shows how those gray pixels would be distributed over the range, but this image is all the same one color, so one peak. The sum of the coefficients (0.3 + 0.59 + 0.11) is 1, but each reduces its channel to a fraction, in a way that green is weighted about twice red, and about 5 times blue, representing how human eyes perceive the brightness of those colors. For example, if the RGB were (255, 255, 255), the grayscale would be 255 (white), but if (0, 0, 255), bright blue, the grayscale is 0.11 x 255 = 28. This is how a B&W photo should look, but it is not real data for color.
Some users imagine the single-channel histogram is showing only the Green channel directly. Possibly some cameras vary in technique, but this one does not. Raw is RGGB, and Green is weighted most heavily in luminosity, so sometimes luminosity can look similar (but lower, at 0.59x the value). Multiplying these tones by a fraction reduces their values, which eliminates any possibility of still showing clipping: 255 becomes a smaller number.
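A quick check of that point, using the formula above: even a fully clipped pure green of RGB (0, 255, 0) computes to a luminosity well below 255, so the clipping vanishes from the gray histogram.

```python
def luminosity(r, g, b):
    """NTSC-weighted grayscale brightness (formula given above)."""
    return round(r * 0.3 + g * 0.59 + b * 0.11)

# Pure clipped green: the green channel is at 255, but luminosity is not.
print(luminosity(0, 255, 0))  # 150 -- 0.59 x 255, clipping hidden
```

The same reduction happens for any single clipped channel: only when a pixel clips in all three channels at once can its luminosity reach 255.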
The goal of luminosity is that this computed value indicates the "relative brightness" of this color, which for this first orange picture ends up slightly above mid-range, as perceived by the human eye and brain, and also on B&W film (negatives are inverted; overexposure makes negatives black, which is inverted to white). It is a "grayscale" numerical result, and this method computes the histogram to match that perception. But it does not show real data, and so it often does not show clipping; the actual RGB data in the RGB image is something rather different. In the camera, we need to watch the real data (the individual RGB channels). The three RGB channels are what is recorded in the data in our image file.
This luminosity formula was developed by television as the accurate way to convert RGB colors to grayscale, according to luminosity of the colors (how bright the colors look in grayscale). This is the way the RGB color components of every RGB pixel are "weighted" to create the pixel's one gray value. That luminosity result value represents the relative brightness or the shade of gray that these colors would appear to the human eye, based on our eye being more sensitive to green, and less sensitive to blue (in grayscale, should red lipstick look darker than green grass?) In photography, luminosity is about the shade of gray that colored objects appear on B&W film. For example, the relative brightness of red lipstick, green grass, or blue skies, etc. - are different shades of gray on film. There is one correct way our brains perceive the brightness of that actual color. Luminosity computes these colors to come out the same brightness as grayscale does, matching their brightness the way it is perceived by the human eye.
But which is NOT our concern when seeking the correct exposure in our digital cameras, i.e., our concern about clipping. And if a color image, we want to see the real color.
Most normal images have wider-range content and a more continuous graph response, and so may appear "different" than these three spikes, but any image is the same concept. This luminance sum shows the "relative brightness", specifically, the brightness that color would appear in B&W film. Here its luminosity peak is near the middle, but the red display is in fact bright, not dim. It may be a theoretical concept, and the luminosity formula and histogram have use to show perceived brightness, but the physical reality that exists is the three RGB channels, and RGB and clipping is the important factor in the camera today.
Speaking of grayscale, some people are attracted to the more exotic alternate creative methods of converting color images to grayscale. The Adobe Channel Mixer for example, which provides tonal editing controls to modify the standard grayscale results for nonstandard results, about controlling how dark is red or green or blue made to actually appear as gray (effects are somewhat like using color filters on the camera with B&W film). Or Desaturation, which just removes color information, which then simply weights the three RGB colors equally (so blue becomes brighter, green dimmer), instead of according to the accurate grayscale formula that matches the human eye response. It is your picture, and this is creative license (to edit and modify it), but the techie dazzle sometimes lets us forget that there is an excellent reason for the standard Grayscale menu, provided for when you want the accurate standard conversion (luminosity), which is THE numerically precise way to convert to grayscale, to represent colors the same way real B&W film would have seen it.
The standard conversion menu is named Grayscale, and it is about retaining standard luminosity of the colors in grayscale... like our eye perceives the brightness of it, like B&W film sees it. The other choices are about changing the way grayscale comes out.
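The difference between Desaturation's equal weighting and the standard luminosity conversion is easy to see numerically (a sketch of the two weightings, not Adobe's actual code paths), using pure bright blue as the test color:

```python
def desaturate(r, g, b):
    """Equal-weight grayscale: just the average of the three channels."""
    return round((r + g + b) / 3)

def luminosity(r, g, b):
    """Standard NTSC eye-response weighting (the Grayscale menu's way)."""
    return round(r * 0.3 + g * 0.59 + b * 0.11)

print(desaturate(0, 0, 255))  # 85 -- too bright for how blue looks
print(luminosity(0, 0, 255))  # 28 -- dark, as the eye and B&W film see blue
```

Equal weighting makes blue three times too bright (and green too dim) compared to how the eye, and B&W film, actually perceive those colors.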
The histogram is NOT a light meter. It shows where the image tones came out, but it knows absolutely nothing about how they should have come out. White or bright things will appear at the right, and black or dark things at the left, but just where they ought to be is simply not known here. It really depends on the color and reflectivity of the subject. That's the photographer's job, to control exposure. We learn a lot through experience, but we should judge the actual picture.
And note that Gamma and White Balance (and Contrast and Saturation and other adjustments) shift the tones. The higher you advance White Balance (toward higher degree K values), the more the red channel is shifted right, and the more the blue channel is shifted left.
The histogram is simply not a light meter. Its importance is to show if we are clipping the digital data at the right end (255). We cannot get that clipped data back if we do. However, we can and do use it to be sure our exposure is approaching the right end (which simply assumes there is commonly some white or bright data that ought to be there, somewhere). The data should never be clipped at the right end.
Regardless of histogram type, all data in all RGB images is gamma encoded (which affects the histogram data, but it does NOT affect clipping).
So there are two ironies about histograms ...
1) Our only use for the histogram in the camera is to judge clipping (or proximity to clipping), but the standard luminosity histogram does not show the real RGB data.
2) The RGB data is gamma encoded, so that mid-point is not at middle, but instead up closer to 3/4 scale.
It all goes better when we understand what we see.