

Histograms show gamma-encoded data

It seems we never realize that our photo histogram shows gamma-encoded numbers. But whether we are aware of it or not, it does. The numbers in the histogram may not be what we imagine them to be, for two reasons: gamma-encoded data and, in some cases, grayscale (luminosity) histograms. This article is about gamma, and it also covers what gamma is and why it exists.

Histograms

A histogram is a simple bar chart showing the count of image pixels at each data tone, 0 to 255. The height of each bar represents how many pixels have tonal value 0 (black), how many have value 1, and 2, and 3, ... all the way to how many, if any, have tonal value 255 (white). It shows the distribution of the pixel tones over the tonal range. More exposure shifts the graph data right, and less exposure shifts it left (because relatively more pixels will have brighter or darker values). With enough exposure, we can make black appear white, or vice versa, but correct exposure is important to put the tones where they ought to be, which makes them be the tones they should be. A histogram is NOT a light meter; nothing there has any clue what the subject is or how it "ought to be". But the histogram can show what we do have, and can help show relationships, for us to decide.
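
For the concept in code form, here is a minimal sketch of exactly this counting (Python with NumPy is my choice for illustration, not anything a camera actually runs, and the random image is hypothetical):

    import numpy as np

    # A hypothetical 8-bit grayscale image: one tonal value (0..255) per pixel.
    image = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)

    # One count ("bar height") per possible tone 0..255.
    counts = np.bincount(image.ravel(), minlength=256)

    print(counts[0])    # how many pixels are pure black (value 0)
    print(counts[255])  # how many pixels are pure white (value 255)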

Histogram bar charts are commonly shown 256 pixels wide on video screens, and thus always necessarily represent 8-bit color. Our monitor's width simply cannot display a 12-bit histogram 4096 values wide, so don't kid yourself about the histograms for your 16-bit data (65,536 values wide).

Where should the numeric histogram data be?   Well, that depends on the scene in front of our camera. If it is a correctly exposed black cat in a coal mine, most pixels will be near the left end. If it is a correctly exposed polar bear in the sun on the snow, most pixels will be near the right end. However, unless we compensate to correct it, the reflective meter in the camera will try to put both of these cases somewhere around the middle, as shown at How Light Meters Work. That numeric value will also have gamma correction applied; for example, the value representing an 18% gray card should be 0.18 to the power of 1/gamma (more below). But proper exposure also shifts the tones to be what they should be. Most "average" or "typical" general scenes will usually meter about accurately enough, because the subject includes a wide range of stuff covering most of the full range - some bright, some dark, and all in between. The typical general scene really does often average out near middle gray, and the reflective meter tries to put all scenes at middle gray, so this often works out about right (mostly). That is simply how reflective light meters work. But many scenes are exceptions too, which we need to watch for.

The one unquestionable thing the histogram shows us, and why we watch it, is clipping: if we see a thin spike right at the right edge (too many pixels of value 255), then we have overexposed, and any brighter tones are clipped to remain at 255 (we lose the ability to distinguish tones there, among the pixels at 255). Generally, we do like the right end of our data to be up fairly high, though not right at 255. Which is not absolute; it merely assumes our wide-range image actually contains some tones which ought to be up high - which is not always correct. We must use our heads too, and the image preview on the camera's rear LCD may be better for judging this.

The problem: Regarding the specific numbers, we need to understand what histograms actually show. Many seem to imagine that 128 is the midpoint of the histogram, that 18% gray cards are middle gray, and that the gray card therefore ought to show up at the midpoint of the histogram too. We are led to believe this by much of the literature. But none of that is quite right; there is more to it. Specifically, at the most fundamental level, the histogram shows gamma-encoded data, and all the numbers in the histogram are different from "theory" - which I hope to make obvious.

The overwhelmingly significant one-line answer about numeric values:
Our histograms show gamma-encoded data values.

What is Gamma?

Gamma is very mysterious, and invisible to us. We are told that it is in there somewhere, it just happens somehow, but we can never detect any evidence of it (except that our histogram data does show its values). We do know our digital images use profiles like sRGB, which specify gamma 2.2. Unfortunately, what no one ever seems to realize is that this plainly specifies that our digital picture data is gamma-encoded. Unequivocally, all of the digital pictures in the world are gamma-encoded. It needs to be overtly said that all of our histograms specifically show this gamma-encoded data. We seem to ignore that. The technical literature assumes it of course, but it never seems to be mentioned in any basics. The basics presented to laymen seem to just blunder around it, avoiding the subject, which is no explanation at all - misleading at best.

Gamma is a correction factor applied to data for all RGB images (moving and still images) to correct for the non-linear response of CRT monitors used to view them. Yes, you will hear some sources incorrectly claiming gamma corrects for the human eye, but that's very wrong, the eye does just fine on its own, thank you.

Here is the tricky part: there are two concepts involved. One is analog light in the real world, traveling through the air and through our lenses, including what is captured on film, and also the Raw data at the camera sensor itself. These are linear data. And then there is the gamma-encoded digital data in the output image today - which is NOT linear. All digital photo data is gamma-encoded. In the math sense, linear means data which graphs as a straight line, so that twice as much input outputs exactly twice as much. Opening each f-stop doubles the light and halves the shutter speed; that's an example of linear. Non-linear means the response does not exactly track the original data. Gamma is an exponential math curve, not linear, and common usage in video work is that "linear" often simply denotes "not yet gamma-encoded" - because it will be, before it goes into an RGB image file. The data is still linear at the digital sensor itself, and in the Raw file, but all output RGB data has been gamma-encoded.
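
In code terms, encoding and decoding are just opposite exponents applied to values normalized to the 0..1 range. A minimal sketch of the idea, using the plain 2.2 power curve described here:

    GAMMA = 2.2

    def gamma_encode(linear):    # linear 0..1  ->  gamma-encoded 0..1
        return linear ** (1 / GAMMA)

    def gamma_decode(encoded):   # gamma-encoded 0..1  ->  linear 0..1
        return encoded ** GAMMA

    half = 0.5                               # one stop below full scale, in linear terms
    print(gamma_encode(half))                # ~0.73: the value that gets stored in the file
    print(gamma_decode(gamma_encode(half)))  # 0.5 again: the round trip restores linear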

Note that film and gray cards and such are analog concepts, but the histogram is a digital concept.

My entire point is that if we are going to ponder the numerical values in the histogram, then first we must understand what it shows. And we need to realize that all discussion of gray cards, step charts of gray tones, Ansel Adams and his Zone System, anything about film or paper, and human eyes too, is all analog stuff. Nothing wrong with it - we used it for about a century, processing film to paper prints - until we try to imagine our digital data is the same. It is not the same. Analog is 1) not digital, and 2) not gamma-encoded data. There are no histograms in analog data. Our digital data is a different representation of the original analog scene, and the concepts differ. Histograms show gamma-encoded binary RGB data. It seems obvious that Ansel Adams never saw histograms or digital data (back in the late 1930s).

The sources never tell us anything about gamma... because then they would have to explain gamma, and that's like math, and many of us would tune out. However, you are never going to "get it" about the histogram until you are at least aware that gamma exists. Gamma can be made complicated, but I'm going to tell you the simple story, and it really is pretty easy... really, not hard at all, so hang in there. It is the least you should know about the histogram numbers.

Why Gamma?

Frankly, we don't even care why anymore, it just always happens. Automatically, silently in the background, simple fact, all of our digital images have always been gamma-encoded. Gamma is easy to ignore, it takes care of itself, we are not even aware of it... Until the time we get involved with the details, like pondering the numerical values in the histogram, then we have to pay attention, and know more. The histogram shows that gamma-encoded data.

Here is why gamma:

Until the thin LCD monitors of recent times, for years and years we watched television and computer screens on cathode ray tube (CRT) displays - the old stuff. But this took some doing, because CRT screens do not have a linear response: the electron gun has an exponential response curve, meaning only the brightest signals showed up on the face of the CRT, and dim values simply disappeared. Without correction, it was not possible to show tonal data.

This non-linear CRT response curve was called gamma (gamma is just a Greek letter, used in science like X in algebra). Many other things are also called gamma, typically scientific properties used in equations. Film gamma for example is a completely different concept, not remotely related in any way to CRT monitor response gamma (however, both are response curves).

Earliest television (the first NTSC spec, in 1941) solved this CRT problem (the non-linear response curve) by encoding the transmitted broadcast signal with compensation opposite to the monitor's response. Television stations intentionally boosted the darkest values more, and the brighter values less. This was called gamma correction (which is the proper search term). The CRT problem was that it showed output response to data X as if it were instead X to the power of 2.2; otherwise, CRT screen images came out unusably dark, with only the brightest areas remaining. So to offset this problem, each data value was first intentionally encoded the opposite way, using an exponent of 1/2.2. This 1/2.2 exponent happens to be numerically near 1/2, so the idea roughly approximates taking the square root of each data value. This reduces the bright values more, but after subsequent amplification, in effect, the low values have been boosted more than the strong values. Then, when viewed with the CRT losses, it came out right.
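
A numeric sketch of that compensation, assuming the nominal 2.2 exponent from the text:

    CRT_GAMMA = 2.2

    def crt_display(signal):     # the CRT's lossy response to its input signal
        return signal ** CRT_GAMMA

    tone = 0.5                           # a mid-level scene tone, linear
    print(crt_display(tone))             # ~0.22: uncorrected, the CRT shows it far too dark
    encoded = tone ** (1 / CRT_GAMMA)    # ~0.73: the gamma-corrected broadcast value
    print(crt_display(encoded))          # 0.5: the CRT's losses exactly cancel the boost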

We do see some silly ideas in a couple of places on the internet, saying that gamma is done to correct for the human eye response somehow. That is totally wrong; they are just making stuff up. The eye never sees gamma data - the image is decoded back to linear (one way or another) before any human eye ever sees it. Our eyes expect to see a faithful reproduction of the same linear scene that the camera lens sees. It would be bizarre otherwise. The eye of course handles only linear data, and gamma is done as a correction to help the CRT output linear data again, to be a reproduction of the original linear scene, as our eye expects. Even the Wikipedia article has been corrupted that way now, but to assure you of this truth, see a correct description of gamma correction from the W3C standards body.


That corrective 1/2.2 gamma boost corresponds to about how much the CRT display loses at low signal levels, and the result comes out looking just about right on the CRT screen. Doing this encoding once at the broadcast transmitter was much better than adding circuits in EVERY television receiver to do it (at least in 1941). Boosting the low-level signals helped noise on the transmitted analog signal too. Because the data was gamma-encoded, all a CRT had to do was display it, which was lossy, but the losses were precomputed, so it came out right. It became the original linear signal again, the correct brightness on the CRT screen output, same as the original scene, now for the human eye to see.

When we started using CRTs for computer monitors, they had the same losses as TV. This didn't matter in the earliest days, when all we viewed was text (early text had only eight colors and two intensities, and the hardware managed the levels). But when tonal computer images arrived (showing photos was increasingly popular by 1990), this gamma correction necessarily became the world standard for all images... all digital images, computer and TV, are gamma-encoded with an exponent of 1/2.2. Again, this was to compensate for the CRT losses, which were roughly exponential with a 2.2 exponent - the curve we call gamma.

This means EVERY photo image (and every television or movie frame) digitized into any computer has been gamma-encoded (originally for the CRT monitor on every computer then). The only exceptions are Raw images (not gamma-encoded yet; our monitors cannot show Raw anyway, but certainly they will be encoded later, before we can have any use for them), and one-bit images (called line art, bi-level, B&W, fax, etc.), which are not tonal data - not photos - containing only the two values 0 and 1. But any photo image (color or grayscale, still or movie) that we can view on a computer is gamma-encoded (and digital televisions are computers today). Other exceptions are of course film images while actually on film or print paper - still analog - but once in a computer, they were scanned to be digital, and everything is gamma-encoded then. All scanners and all digital cameras have always output gamma-encoded digital image data.

So printers have to deal with it too. Publishing and printer devices also need gamma, not as much as 2.2 for the CRT, but the dot-screen methods need most of it (for dot gain, etc.). Until recently (2009), Apple Mac computers used gamma 1.8 images. They used the same CRT monitors as Windows computers, and those monitors obviously required gamma 2.2, but Apple split the correction up. The 1.8 value was designed for Apple's early laser printers (and for publishing), to be what the printer needed, and the Mac video hardware added the remaining correction for the CRT monitor, so the net video result was an unspoken gamma of roughly 2.2, even though the files were gamma 1.8. Recent Mac versions (since OS X 10.6) observe the world-standard gamma 2.2 in the file, because all the world's images are already encoded that way, and we indiscriminately share them via the internet now. But yes, printers are programmed to deal with the gamma 2.2 data, and to adjust it to their needs.

Today's LCD monitors have a linear response - a very different technology. LCDs have no gamma losses like CRTs, so their images really could do without gamma correction. However, the world standard now is that all of the image data in the world is gamma-encoded, digital TV signals too. The original reason, years ago, was the CRT, but now the LCD has to deal with it too. An LCD monitor specifically must have a chip to decode gamma first, where a CRT did the same merely by showing it. Either way, it is built in; it just always happens. And either way, it is simply easier to continue this now than to change everything - and I mean absolutely everything. ALL of the existing digital image data in the world has been gamma-encoded, and continues to be; the LCD monitor must deal with it, and the CRT monitor is still compatible. Today's silicon chips make that an easy job. Your little pocket mp3 player that shows images deals with it too.
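
Conceptually, the LCD's decode chip only has to do this bit of math (a sketch of the idea, not any particular monitor's firmware):

    def lcd_decode(stored_value):            # gamma-encoded 0..255 pixel value
        normalized = stored_value / 255.0
        return normalized ** 2.2             # back to linear light intensity, 0..1

    print(lcd_decode(187))  # ~0.5: the stored 187 lights the pixel at half intensity
    print(lcd_decode(255))  # 1.0: full scale is unaffected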

The gamma image is always decoded back to linear analog, one way or another, before it is shown to the analog human eye - so that it is hopefully still a faithful reproduction of the original real world analog scene. The CRT and printer merely output it to analog, and their losses decode it to be linear again. The LCD monitor just does some math - it is only numbers, nothing physical which can fade or erode. When this image is in the air on its path to the eye from the monitor, the image is no longer digital. So linear is always sent to the eye, as the eye expects - to be the same linear view that the camera lens saw.

But then (another tricky part) the human eye's own response is roughly logarithmic. This is NOT related to gamma in any way; any encoded image is always decoded back to analog before any human eye sees it. But it is said that our brain perceives 18% as "middle gray" (in common lighting situations, not too bright or dark). This human eye response is the ONLY relationship of "18%" with the term "middle gray" (technically, 18% is not the middle of anything; it is only 18%). The term "middle" is certainly not about histograms or digital (we hear the word "middle" used various ways, and then make far too many false assumptions about what it means). 18% is 18%, and fairly dark, and the term midpoint is only about how the human eye perceives actual 18% tones on paper. In the late 1930s, Ansel Adams hoped his 18% Zone V on film and print would be perceived as middle gray by the human eye. It is certainly NOT about histograms.

The digital camera's CCD or CMOS sensor is linear (not yet gamma-encoded), speaking of the Raw data at the sensor. However, we don't have tools to see Raw data; our monitors and our printers display RGB data. The camera's Raw file includes an RGB JPG to show a preview on the camera's rear LCD, and the histogram shows that JPG's gamma-encoded data. All digital image data (all but Raw) is gamma-encoded.

The word linear has two meanings in video: one is the "graphs as a straight line" idea in math, but in video, linear usually simply implies "not yet gamma-encoded", because it will be gamma-encoded at RGB output. This is the world standard - all image data in the world is gamma-encoded. All cameras and all scanners, and all Raw processors and all photo editor programs, output gamma-encoded images. Excluding Raw images (which we cannot view anyway), all photo images are gamma-encoded. All histograms show gamma-encoded data. The histogram numbers are simply different from the linear numbers we might imagine (chart below). That's just how life is. We may as well expect it. :)

Simplest test to show the obvious truth - histograms show gamma data

With your camera, carefully expose a white subject, adjusted so the histogram data edge lines up very near 255 (but without significant clipping). Then intentionally underexpose it by exactly one stop. We know that one stop is half of the light, so we might imagine the histogram's right edge will move down to 255/2 = 128. And it would, in a linear histogram of the Raw data at the sensor, just as we have always been told. However, we cannot see Raw data (no tools). Anything we can see on a monitor is RGB data, and anything we can ever see in a histogram is the gamma-encoded RGB data. 128 may be the linear midpoint of 0..255 Raw data, but the RGB data we see is gamma-encoded, which shifts linear 128 up to about 187 (73%, about 3/4 scale) in the gamma-encoded histogram we see. In that histogram, it is a mistake to call 128 the midpoint. This concept is sort of big, and it obviously exists - just look. It doesn't actually affect our pictures, because the data is decoded back to linear before our eyes see it, but it obviously affects the numbers in "storage", in the encoded histogram data.

-1 stop = (0.5 ^ (1/2.2)) = 0.73, ×255 = 187, or 73% of full scale. This is obviously the midpoint we see, closer to 3/4 scale. (The 0..255 data is normalized to 0..1 so the gamma exponent works right; 128 becomes 0.5, midrange.)

Here is the ugly part: these numbers vary. This 187 is not a precise number. It depends... the camera is busy doing other shifts - white balance, contrast S-curves, Vivid picture controls, and other manipulations like brightness and saturation. These are tonal changes, and of course they change the data. But it is clear that this new "half intensity" value is far from 128 in gamma data: 255 shifted down one stop lands much closer to 3/4 scale than to 1/2 scale. This is obvious when you just look once. It is due to gamma-encoding the data, done according to the sRGB specification. The histogram data we see is gamma-encoded, and the midpoint we see is not at 128, not even close.

If we use incident metering to correctly expose a photo of an 18% gray card, 18% is of course about 18% in the linear Raw data. It is not the middle of anything digital. But the required gamma encoding coincidentally does make it appear close to center, which confuses the troops, who heard it called a middle gray card, and who heard 128 called the midpoint of the histogram. So, without understanding the terms, they assume 18% somehow becomes 50%. But 18% is 18%, not 50%; 46% is the gamma-encoded representation of linear 18%. 46% of what? The meaning of 18% is that the card reflects 18% of the incident light hitting it, which our meter might see. If we make that 18% amount be 18% of full histogram scale (linear data), then 100% is full scale, and gamma-encoded 18% is 46% of full scale. That is not the middle of anything; it still represents 18%.

After gamma encoding, 18% gray card = (0.18 ^ (1/2.2)) = 0.46, ×255 = 117, or 46% of full scale - a little below "center". (The ^ symbol here means "raised to the power of".)

46% is coincidentally near center, but not because anything is the midpoint of anything: the actual midpoint moved up to 187 at 73%, and gamma-encoded 18% is 117 at 46%. Our light meters are calibrated to a result nearer 12.5% (three stops down), regardless of what we aim at. That is not the midpoint of anything either (although nonlinear human eyes and brains do seem to perceive it as a middle tone anyway).
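
A quick check of the computations above, in plain Python (the same numbers reappear in the charts below):

    def encode(linear):                  # gamma-encode a normalized 0..1 value
        return linear ** (1 / 2.2)

    print(encode(0.5) * 255)     # ~186: one stop down, about 73% of full scale
    print(encode(0.18) * 255)    # ~117: the 18% gray card, about 46% of full scale
    print(encode(0.218) * 255)   # ~128: linear 21.8% is what lands at the apparent "center"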

If we normalize the 0..255 range to be 0.0..1.0 (percentages), and compute underexposed stops down from 255, we should see about this:

Gamma data

Full scale = (1.0 ^ (1/2.2)) = 1.00, ×255 = 255,    100%
-1 stop = (0.5 ^ (1/2.2)) = 0.73, ×255 = 187,    73% of full scale
-2 stops = (0.25 ^ (1/2.2)) = 0.54, ×255 = 137,    54%
Center = (0.218 ^ (1/2.2)) = 0.50, ×255 = 128,    50%
18% gray card = (0.18 ^ (1/2.2)) = 0.46, ×255 = 117,    46%
-3 stops = (0.125 ^ (1/2.2)) = 0.39, ×255 = 100,    39%
-4 stops = (0.0625 ^ (1/2.2)) = 0.29, ×255 = 73,    29%
-5 stops = (0.0312 ^ (1/2.2)) = 0.21, ×255 = 54,    21%
-6 stops = (0.0156 ^ (1/2.2)) = 0.15, ×255 = 39,    15%
-7 stops = (0.00781 ^ (1/2.2)) = 0.11, ×255 = 29,    11%
-8 stops = (0.00390 ^ (1/2.2)) = 0.08, ×255 = 21,    8%
-9 stops = (0.00195 ^ (1/2.2)) = 0.06, ×255 = 15,    6%
-10 stops = (0.00098 ^ (1/2.2)) = 0.04, ×255 = 11,    4%
Zero = (0.0 ^ (1/2.2)) = 0.0, ×255 = 0,    0%

Linear data - Just for reference

Full scale = 255,    100%
-1 stop = 128,    50% of full scale
-2 stops = 64,    25%
18% gray card = 46,    18%
-3 stops = 32,    12.5%
-4 stops = 16,    6.25%
-5 stops = 8,    3.12%
-6 stops = 4,    1.56%
-7 stops = 2,    0.78%
-8 stops = 1,    0.39%
Zero = 0,    0%
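
Both charts can be regenerated with a few lines (a pure 2.2 power curve, so results may differ by a count or so from the rounded figures above):

    # Print linear and gamma-encoded values for each stop below full scale.
    for stop in range(0, 11):
        linear = 0.5 ** stop               # each stop down halves the light
        encoded = linear ** (1 / 2.2)      # gamma encoding, exponent 1/2.2
        print(f"-{stop:2d} stops:  linear {round(linear * 255):3d} ({linear:7.2%}),"
              f"  gamma {round(encoded * 255):3d} ({encoded:7.2%})")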

Again, camera manipulations (like contrast, white balance, brightness, etc) can shift these values a little.

I am not saying our data has 8 stops of range. Sure, we can compute any infinitesimally small fractional value, but no claim is made that we can distinguish the seventh stop from the sixth. The gamma encoding does increase these weak values (which was also very good for noise suppression in transmitted analog signals), but all data is decoded back to linear again before we ever see it.

The linear chart applies only to the Raw data at the camera sensor. The only reason to mention it here is that much of the literature refers only to it. However, we are unable to see Raw data - our eyes and our monitors are RGB only. And any digital RGB data we can see has previously been gamma-encoded (and is decoded just before our eyes see its analog equivalent).

Idle discussion, some tricky points: Gamma does not affect the two end points, values exactly 0 or 255, which are 0 and 1 when normalized to the 0..1 range; 0 and 1 raised to any power are still 0 and 1. So while a clipped 255 in the histogram is gamma-encoded data, that full-scale value was in fact 255 at the raw sensor too. Gamma therefore does not affect clipping, or the decoded value. However, other data shifts, like white balance or saturation, certainly can. The RGB histogram shows the JPG data at output (or shows an embedded JPG thumbnail in Raw files).

In the gamma encoding, the end points do not move - the dynamic range stays the same - but the binary data definitely is gamma-encoded (in the histogram too). And of course, the data is ultimately decoded back to linear analog just before the analog human eye sees it.

The funny thing is that some people recalibrate their light meter to make a gray card come out at histogram 128 (calling it "midpoint"). They heard 18% was middle gray, and they heard 128 was the midpoint of the histogram, so naturally middle is middle (if we don't know or care what the terms refer to), and they assume some magic makes the gray card come out at the middle. It doesn't; 18% is not the middle of anything digital. Simply unrelated. If it were a 21.8% card, gamma would put it at midpoint. Plus, with a reflected meter, it does not matter if we meter an 18% card, a white card, or a black card... the reflected meter adjusts exposure to try to make them all come out the same, near middle, more or less. The right idea is that we can meter the 18% card (an average subject) to get a reflected reading to use for the external scene, also assumed to be average. It is NOT about photographing the dumb card.

Coincidentally, this "midpoint" misunderstanding causes only a small error, maybe 8%, not really the worst thing as a rough guide. Certainly it can be compensated for, but it is pointless, and plainly the wrong idea: it is only coincidentally close to "center" because of gamma. 18% is not the midpoint of anything digital, and the actual midpoint is up near 187 at 73% anyway. And it can vary with the different shifting manipulations in digital cameras. Better to read Sekonic's calibration procedure again, which does not mention gray cards. They know a thing or two, and according to Sekonic (and the ISO organization), the right way to calibrate your meter is to first verify that it repeatedly and consistently gives the same error over a wide range of scenes, and THEN adjust the meter to compensate. Any one reading is suspect anyway; too many variables.

Yes, Ansel Adams did promote 18% as middle gray at Zone V back in the 1930s, but he was not using digital, histograms, or gamma-encoded data; he likely never saw a histogram. His reference was the human eye's perception of analog tones printed on paper. And Kodak said to open 1/2 stop after metering on their 18% gray card.

The value used by light meters today is nearer 12.5% (which is a half stop under Kodak's 18%).


Copyright © 2011-2014 by Wayne Fulton - All rights are reserved.
