
Gamma Correction

Histograms show gamma-encoded data

Many of us may not realize that our photo histogram shows gamma-encoded numbers. But whether we are aware of it or not, it does. The numbers in the histogram may not be what we imagine them to be, for two reasons: the data is gamma-encoded, and in some cases we are not using the proper RGB histogram in the camera. Frankly, the article some of us probably need to read is Surprises in the Use of Histogram. But this article is about gamma, including "what is gamma, and why?"

Histograms

A histogram is a simple bar chart showing the count of image pixels at each data tone, 0 to 255. The height of each bar represents how many pixels have tonal value 0 (black), how many have value 1, 2, 3, ... all the way to how many, if any, have tonal value 255 (white). The histogram simply shows the distribution of the pixel tones over the tonal range. The absolute counts are not important; graph height is relative, and the histogram is scaled so that the tallest peak always reaches the top of the chart (which helps the lowest counts along the baseline be more visible). The histogram is NOT about absolute count or height; it is about the distribution over the tonal range.
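For a concrete picture of that counting, here is a minimal sketch (assuming an 8-bit grayscale image held in a NumPy array; the function name and details are illustrative, not from this article):

    import numpy as np

    def histogram(pixels):
        # Count how many pixels hold each tonal value 0..255 (one bin per tone).
        counts = np.bincount(pixels.ravel(), minlength=256)
        # Height is relative: scale so the tallest peak reaches the top of the chart.
        return counts / counts.max()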

More exposure will shift the graph data right, and less exposure shifts it left (because relatively more pixels will have brighter or darker values). With enough exposure, we can make black appear white, or vice versa, but correct exposure is important to put the tones where they ought to be, which makes them the tones they should be. Normally, the only histogram detail we are very concerned about is the warning that we are clipping at 255 (overexposure). Clipping is unrecoverable loss of detail, and should be avoided. However, seeing clipping requires using the three RGB histograms, because (as the histogram page shows) the single gray histogram (luminosity) CANNOT show clipping (or any real data). The three-channel RGB histograms can and do show it.

A histogram is NOT a light meter, and it has no clue what the scene is, or how the result "ought to be". The human photographer has a good idea, but the camera has no brain able to use any experience. The data curve is what it is (simply what the scene data is), but the histogram can show "what we do have", and can help show relationships, for us to decide.

What should the histogram look like? Regarding the shape of the curve, that depends on the scene in front of our camera. If it is a correctly exposed black cat in a coal mine (a black picture), most pixels will be near the left end. If it is a correctly exposed polar bear in the sun on the snow (a white picture), most pixels will be near the right end. Both are correct, desirable results which show the scene correctly.

However, unless we compensate to correct it, the reflective meter in the camera will try to put both of these cases somewhere around the middle (overexposing the black scene, or underexposing the white scene), as shown at How Light Meters Work. That numeric value will also have gamma correction added; for example, the value represented by an 18% gray card should reasonably be 18% to the power of 1/gamma (to maybe 46%, more below). But exposure also shifts the tones. Most "average" or "typical" general scenes will usually meter accurately enough, because their subject includes a wide range of content that covers most of the full range, with the brightest colors reaching towards the right end of the histogram. The typical general scene really does often average out near middle gray, and regardless, the reflective meter tries to put all scenes at middle gray. This often works out about right (mostly, but with many exceptions). That is simply how reflective light meters work. But many scenes are exceptions, which we need to watch for.

The one unquestionable thing the RGB histogram data shows us, and the reason why we watch it, is that if we see a thin spike right on the right edge (too many pixels of value 255), then we overexposed, and are clipping our tones. Clipping means that brighter tones cannot exceed 255, and so are clipped to remain at 255 (and we lose the ability to distinguish tones there, in pixel values of 255). Again, seeing clipping requires using the three RGB histograms (NOT the single gray one). Generally, speaking vaguely of average scenes of many mixed colors and some brighter tones, we do like an exposure where the data approaches the right end, not quite touching it, but fairly high. Which is not absolute; it depends on the scene's colors. It merely assumes our wide-range image actually contains some tones which ought to be up high, which is not always correct. We must use our heads too, and the appearance of the image preview on the camera rear LCD is probably better to judge this, appearance being what counts.

Regarding the specific numbers, we need to understand that histograms show the gamma data values, and this article is about gamma. Many seem to imagine that 128 is the midpoint of their histograms, that 18% gray cards are middle gray, and that the gray card ought to show up at the midpoint of their histogram too. We are led to believe this by much of the literature being too simplified. But in the histogram, none of that is right; there is more to it. Specifically, at the most fundamental level, the histogram shows gamma-encoded data, and the numbers in the histogram are different than "popular theory" leads us to believe - which I hope to make obvious.

Gamma Calculator

Gamma: 2.2 is the standard profile. Values are converted both ways, as linear or gamma.

1. Value [0..255]
2. Percent [0..100%]
3. Difference of two values [0..255], for both linear and gamma
4. Difference of two values [0..100%], for both linear and gamma
5. Stops down from the 255 end (formats can be 1 or 1/3 or 1 1/3 or 1.3333)
6. Convert one linear value to gamma and back (for the article below)
7. All values, linear to gamma and back (for the article below)

A Truncate gamma values checkbox controls whether decoded gamma values are truncated to 8 bits (never rounded up) or fully rounded. Numbers only: an error result of NaN means the input is Not A Number.

The "values" are the 0..255 values in a histogram (which are gamma numbers in the histogram, and linear numbers at the sensor). The percentages are of the 255 full scale histogram value. We never see linear histograms for our photo images, tonal images are always gamma (one bit line art is not). The camera sensor chip was linear, and raw files are linear, but RGB images are gamma, and even raw files show gamma histograms (from an embedded JPG image in the raw file).

Histogram values are integers [0..255], so a number like 127.5 cannot actually exist in our JPG files. The calculator will use the decimal fraction value though, if it pleases you.

Linear value 1 is Gamma value 20.5 (at gamma 2.2), so Gamma less than 20 is not very meaningful.

Note that digital cameras also make several tonal adjustments (for White Balance, a Color Profile like Vivid, Saturation, Contrast, etc.) which shift the histogram. These adjustments are unknown to a calculator, so the gamma values in histogram data may be a little different than the exact values predicted. The ballpark concept is still very visible though, and Option 5 (stops down from 255) offers a rough approximation of how exposure affects the gamma histogram. 1/3 stop down from 255 should be 230, at 90%, and 1 stop down from 255 should be 186, or 73%, but the numbers are not exact due to the other actions going on. Still, this ballpark is an excellent guess.
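The arithmetic behind that Option 5 ballpark is simple to sketch (a hypothetical helper, assuming gamma 2.2 and ignoring the camera's other tonal adjustments):

    def stops_down(n, gamma=2.2):
        # Each stop down halves the linear value; then encode to the gamma scale.
        linear = 0.5 ** n                       # normalized 0..1, n stops below 255
        return round(255 * linear ** (1 / gamma))

    print(stops_down(1 / 3))  # 230, about 90% of full scale
    print(stops_down(1))      # 186, about 73%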

Raw images are of course affected by camera exposure settings, but raw file data is NOT affected by the camera tonal adjustments mentioned just above (nor does raw have gamma). But we don't and can't see raw data or raw images (raw data is NOT RGB format, so we cannot show it). So raw files also embed a JPG image, which is shown on the camera rear LCD, and which is the source of the histogram that we see. That embedded JPG DOES correspond to the current camera settings and gamma. But the camera tonal settings do NOT affect the raw data, and this JPG histogram might not match the way you adjust the raw image later. Shooting raw is of course a desirable philosophy, and it IS more a philosophy than just a setting.

Gamma Correction

Terms linear and gamma:   All digital tonal image data is gamma-encoded (tonal data contains many tones, meaning color or grayscale images; 1-bit line art doesn't need or get gamma). In video use, the common meaning of these terms is that gamma data means gamma correction has been added, and linear data means still linear, not gamma-encoded (either not yet encoded, or no longer encoded).
So linear also describes the analog scene data in the lens, and the way the linear reproduction is presented to our eye - what we see either standing in front of the original scene, or viewing a reproduction on a video screen. Our eye always sees only the linear version.

Linear in math means a straight-line graph: twice the input produces twice the output (proportionally, 10x the input produces 10x the output, etc.). Gamma is non-linear power-law processing, done to correct for CRT screens, which show data as if it had been raised to the exponent 2.2 (roughly approximate to squared, exponent 2), with the high values leaving the low values far behind (lost, dark). So for the CRT, we have to prepare by first doing Gamma Correction, applying exponent 1/2.2 to the data first, offsetting the expected losses, so the image will come out of the CRT right, i.e., linear again for the eye to see.
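The whole chain fits in a few lines (a sketch, with values normalized to 0..1 as described later in this article):

    v = 0.50                      # a normalized linear mid tone
    corrected = v ** (1 / 2.2)    # gamma correction applied first, about 0.73
    displayed = corrected ** 2.2  # what the CRT's nonlinear response then does
    print(corrected, displayed)   # ~0.729, then 0.5 again - linear for the eye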

Gamma is NOT in any way related to the human eye response

Now and then, I get email... :) Charles Poynton's gamma articles have made the statement that we would still need to use gamma even if we no longer used CRT monitors. It's impossible to agree with that or his reason. However, I am definitely all for continuing to use gamma for the reason we actually do continue: compatibility. Continuing gamma prevented obsoleting and rendering incompatible 100% of the world's existing images and video systems. That must have been a very easy decision; it is vastly easier than starting over, especially since gamma has become very easy and inexpensive to implement (it's just a chip now). Compatibility is the best reason to continue using gamma.

The big problem I do see is that this bad notion (that we still need gamma for the eye somehow?) has recently caused some false internet "explanations" (just a notion which is impossible to explain). Their only argument is that the eye has a similar nonlinear response, which even Poynton says is "coincidental". But human eyes never see gamma data; they only see analog linear scenes or data. Our eyes of course evolved without our gamma, and eyes don't need gamma now. We did use gamma for decades, knowing it was designed only for the CRT. Now CRT is gone, so suddenly newbies decide gamma must still be done for the human eye response somehow? Come on guys, don't be dumb. :) No matter what gamma might do, the eye simply never has any opportunity to see any gamma data, and gamma data is still totally unrelated to our vision response. Our eye only wants to view a properly decoded linear image reproduction, the same as the lens saw, the same as our eyes would see if we were still standing there. However, all of the world's images are in fact already gamma-encoded, so we do continue gamma for that compatibility (LCD displays simply decode it first). The "why" we use gamma may not actually matter much; gamma is pretty much an invisible automatic operation to us, a no-op now, encoded and then decoded. But still, novices are being told wrong things, and it's a pet peeve for me.

Our eye of course always expects to see only linear data, a linear reproduction of the original linear scene, same as if we were still standing there. Any "need" for gamma for the eye or for the LCD is laughable technically, but our need for compatibility is still extremely important, so we still do it. The eye has no use for gamma, and never sees gamma data. We may still encode gamma into the data for other reasons (compatibility), but it is always necessarily decoded back to linear before any eye ever sees it. Anything else would be distortion.

So my argument is about fundamentals, impossible to ignore: Our eye only looks at linear scenes, or at linear reproductions of scenes. The eye never sees gamma, which would look like distorted, unnatural tones if it did. For that reason, any gamma encoding is of course always decoded back to the same original linear values before any eye ever sees it. Gamma is obviously not done for the eye response.

The only reason to do gamma correction is to correct CRT response. However, the big reason we still continue gamma today is for compatibility, which is all-important.

But gamma in 8-bit files and 8-bit target video space can create tiny errors in the numbers ("off by one"). This is no big deal (it is what we do, and it obviously works fine), and this is Not related to the eye.

A Summary of the Gamma Procedure, and how it works:

A CRT display is not a linear device. CRT displays have serious tonal losses; only the brightest tones get through, and the rest are lost (dark). We used CRT displays for over 75 years for television, so this is very well understood. This nonlinear CRT response curve is named gamma, which is shown at right. The graph axes represent the 0..255 range, but the numbers are normalized to show 0..1 values (percentage scales). Read it like a regular curve tool, with input at bottom and output at left. The straight 45-degree line shows a hypothetical perfect unchanged linear response, where any input is the same numerical output (50% in is 50% out, no change). But the CRT response curve (named gamma) shows 50% response is down to 22% (so much of the output will be very dark).

To correct this nonlinear response, the image data is first boosted nonlinearly, modified to new values equally in the opposite direction of the losses. This curve is named gamma correction, after which the image will display properly (linearly). Even after suffering these expected CRT losses, the corrected output will come out linear (the straight line). That correction curve is shown; the image data is boosted so that midpoint 50% is raised to 73% (calculator above, Option 2, 50%).

The correct Google search term for this subject is Gamma Correction. Don't believe everything you read now though. It is the internet, and there are good and poor sources, and the part you see about the purpose of gamma being to aid the human eye response is utter nonsense. The eye never sees gamma data; the eye only sees the decoded data, exactly reversed back to be the same original linear version again. The purpose of gamma is only to correct the response of the CRT.

Today, LCD displays are linear and do not need gamma; however, gamma is still necessarily continued to provide compatibility with all the world's previous images and video systems. The LCD display simply uses a chip to decode it first. Note that gamma is a Greek letter used for many variables in science (like X is used in algebra, used many ways), so there are also several other unrelated uses of the word gamma, in math and physics, or for film contrast, etc., but all are very different concepts. One use of the term gamma is to describe the CRT response curve.

Film cannot be shown on a CRT directly (we digitize an image first), so CRT is not a concern for film. But digital cameras and scanners always automatically add gamma to all tonal images. A color or grayscale image is tonal (has many tones), but a one-bit line art image (two colors, black or white, 0 or 1) does not need or get gamma.

Gamma correction is automatically done to any image from any digital camera (still or movie), from any scanner, or any way a digital tonal image might be created. Gamma is an invisible background process; it just always happens. This does mean that all of our image histograms contain and show gamma data. The 128 value that we may think of as midscale is not the middle tone of the histograms we see. That original middle value (50% linear data, 1 stop down from 255) is up at about 186 in gamma data, and in our histograms.

Normalization: Every raw camera image is eventually converted to RGB, and then for computing gamma, the [0..255] data is normalized into a [0..1] range (divide each value by 255 to be a fraction [0..1], see formula). Normalization is necessary because the values 0 or 1 raised to any exponent are still 0 or 1 (unchanged). Therefore, the overall range is not extended or changed, no clipping is added, etc. The end points remain fixed (see the curve). Then for each red, green, or blue component of every pixel, the encode math is:

    Gamma = 255 * (Linear / 255)^(1/2.2)

Then we store the result in an 8-bit file, as an integer in range [0..255]. We probably take time to do any necessary rounding, but it is an integer value (the file cannot store the fractional number). Then when displayed, the goal is to get that same tonal number back when decoded to linear again, to view an accurate reproduction of the original image.

CRT displays take care of their own decoding (the CRT losses do the decode simply in the act of showing the image on a CRT). This is of course the purpose that gamma plans for, but even with LCD today, we still continue to do gamma for compatibility with the world.

LCD displays are considered to be linear, not needing gamma, but our images are all gamma-encoded, so an LCD chip simply decodes them back to original linear (so we can show our gamma images). There is more than one way to do it, but LCD monitors and televisions normally use lookup tables. Math is slow for millions of pixels, so these lookup tables have previously been computed for all values, and are used by the device to avoid the math (see example LUT below). Then the table can simply provide the linear values for substitution to be shown. The decode math is:

    Linear = 255 * (Gamma / 255)^2.2

It's just math; the formulas graph out the curves above. There's no mumbo-jumbo involved, and it's not rocket science. Gamma is used to exactly offset CRT losses, to be able to correct and use a CRT display. However, on an LCD display, a LUT chip first decodes back with the exact reversed math operation, simply recovering the same linear value we started with (leaving no change the eye could see). Decode uses exponent 2.2 instead of the reciprocal 1/2.2 for encode, which is reversible math. It's like 8/4 = 2, and 4x2 is 8 again: reversible math, we simply get the same value back. However, there can be slight 8-bit rounding variations of gamma in between, which might change the value by a difference of one sometimes. A small error, but not really a big deal; virtually all of our images and video systems and printers are 8 bits. If it were not acceptable, we would be doing something else.
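The reversible pair looks like this (a minimal sketch of the two formulas above; the function names are mine):

    def encode(linear, gamma=2.2):
        # Gamma = 255 * (Linear / 255)^(1/2.2), rounded to an 8-bit integer
        return round(255 * (linear / 255) ** (1 / gamma))

    def decode(g, gamma=2.2):
        # Linear = 255 * (Gamma / 255)^2.2, back to an 8-bit integer
        return round(255 * (g / 255) ** gamma)

    print(encode(127))          # 186: the linear midpoint rides high in gamma data
    print(decode(encode(127)))  # 127 again; some other values come back off by one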

The reason we use gamma. For many years, CRT was the only video display we had. But CRT is not linear, and requires heroic efforts to use it for tonal images (photos and TV). The technical reason we needed gamma is that the CRT light beam intensity efficiency varies with the tube's electron gun signal voltage. The CRT does not compute the decode formula; that formula is simply a description of what the non-linear CRT losses already do in the act of showing the image - the same effect. The non-linear CRT simply shows the tones roughly as if the values were squared first (2.2 is near 2). These losses have variable results, depending on the tone's value, but the values that were not bright will pretty much go dark.

How does CRT gamma correction actually do its work? Gamma 2.2 is roughly 2, and this example will use 2 instead because it is simpler math. Encoding input to the power of 1/2.2 is roughly 1/2, or square root, which condenses the image data range smaller (yes, Poynton fans, the tones are compressed, CLOSER together). Gamma values less than 60 decode to 10 or less linear. And 78% of the linear values encode to be above the 127 50% point (boosted; see the LUT below, or see the curve above). So gamma boosts the low values higher; they move up nearer the big boy bright values. For an example (two tones, 225 and 25, using the easier exponent 2 instead of 2.2): value 225 is 9x brighter than 25 (225/25). But the square roots are 15 and 5, which is only 3 times more, compressed together... 3² is 9 (and only 2.7 times more if we used 2.2). So we simply store the square root, and then show it squared, for no net change. The reason of course is that the CRT losses are going to show it squared regardless (specifically, the CRT result is the power of 2.2).
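That compression is easy to verify (the same 225 and 25 example, with exponent 2 for easy math):

    a, b = 225, 25
    print(a / b)                    # 9.0: 225 is 9x brighter than 25 in linear
    print((a ** 0.5) / (b ** 0.5))  # 3.0: square roots 15 and 5, only 3x apart
    print((a / b) ** (1 / 2.2))     # ~2.7: the ratio using the real 1/2.2 exponent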

Not to worry, our eye is NEVER EVER going to see any of these gamma values. The non-linear CRT output is a roughly squared response which expands the data back (restoring our first 225 and 25 linear values, by the actual CRT losses that we planned for). CRT losses still greatly reduce the low values, but they were first boosted in preparation for it. So this gamma correction operation can properly show the dim values linearly again (since dim starts off condensed, up much closer to the strong values, and then becomes properly dim when expanded by CRT losses). It has worked great for many years. But absolutely nothing about gamma is related to the human eye response. We don't need to care how the eye works. :) The eye NEVER sees any gamma data. The eye merely looks at the final linear reproduction of our image on the screen, after it is all over. The eye only wants to see an accurate linear reproduction of the original image. How hard is that?

Then we more recently invented LCD displays, and these are linear devices, so technically, they don't need gamma anymore. But if we did create and use gamma-free devices, then we couldn't show any of the world's images properly, and the world could not show our images properly. So we're locked into gamma, and for full compatibility, we simply continue encoding our images with gamma like always before. This is easy to do; it just means the LCD device includes a chip to first decode gamma and then show the original linear result. Perhaps a slight wasted effort, but it's easy, exactly reversible, and the compatibility reward is huge (because all the world's images are gamma-encoded). So no big deal, no problem, it works great. Again, the eye never sees any gamma data; it is necessarily decoded first, back to the linear original. We may not even realize gamma is a factor in our images, but it always is. Our histograms do show this numerical gamma data, but the eye never sees it. Never ever.

Our printers naturally expect to receive gamma images too (because that's all that exists). And like CRT, a printer actually does need most of it anyway (dot gain losses, etc.). Not exactly the same gamma 2.2 number internally, but printers must expect 2.2, and then they know what they need, and how to adapt it. But compatibility is a huge deal, because all the world's grayscale and color images are gamma images. All tonal images are gamma-encoded.

The sticky part: We do have to store computed gamma into 8-bit JPG files or 8-bit video space. A decimal value like 100.73 has to become 101 or 100. We can round it or truncate it. Even if we use the rounded value, some of the values work out very well, but other possible 8-bit values might still be off by one (one is a tiny number, any number's least significant bit). For example, linear values of 72 or 80 or 86 and others are simply not exactly reproducible in rounded 8 bits (see Option 7 above, or the LUT can show it too), so these will always be off by one (in 8 bits). (Poynton fans, note the low values are NOT the problem.) Using the truncated value may sound crude, but it is fast, and in fact not so bad. Rounding is random, and it somewhat changes which values are affected, and changes the count of values affected by 8-bit one-off errors from 28% rounded to 50% truncated.

However, these rounded 28% errors are roughly evenly distributed between +1 or -1 difference, a variation range of 2. But the 50% truncated errors are all -1 (except the one value 143, which was -2, only because the actual 141.986 was truncated to 141). An engineer might claim the overall truncated result is better: more consistent, less variation (evidence offered in Option 7). But we are speaking of tiny rounding effects on precision, and for speed, LUTs are commonly used, which are rounded.
FWIW, any 16-bit file converted to 8 bits always just uses truncation. We never notice, and it is fast.
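Option 7's tally is easy to sketch (assuming, as described above, that the encode side always rounds and the checkbox only controls the decode side; the counts reported above were about 28% of values off when rounding and 50% when truncating):

    def roundtrip(v, truncate, gamma=2.2):
        g = round(255 * (v / 255) ** (1 / gamma))  # encode rounds to the 8-bit file value
        back = 255 * (g / 255) ** gamma            # decode back toward linear
        return int(back) if truncate else round(back)

    for truncate in (False, True):
        off = sum(1 for v in range(256) if roundtrip(v, truncate) != v)
        print("truncated" if truncate else "rounded", off, "of 256 values differ")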

If we did not use gamma for an LCD, then our 8-bit JPG file could instead simply contain the linear values directly. We could store the linear value in the file, then read the same original value back out, and simply show it as-is (speaking of an LCD). No technical issues then. But we instead necessarily and desirably still use gamma, for compatibility with all the images and video and printer systems in the world.

How big a deal is 8-bit gamma? What actually does happen? We've come a long way, but 8-bit gamma had a pretty large effect in film scanners before technology allowed them to use 12 bits. It seems much less critical in video display. I offer evidence to examine this 8-bit rounding effect in the calculator above, Options 6 & 7.

Gamma values stored in an 8-bit JPG file, or decoded into 8-bit video space, are necessarily 8-bit integers, which means the values can only be a limited set of integers in [0..255]. When computing gamma, we can compute a number like 19.67, but the result can only be 19 or 20, not a fraction. So the values might change by a value of one (almost always only one, Option 7). Rounding up or down appropriately certainly helps accuracy, and video lookup tables can provide that. Or code could do it, but that is slow processing math. For the huge load of millions of pixels (x3 for RGB), an expedient method to convert to 8 bits is to truncate. It's really not all that bad.

The Truncate gamma values checkbox in the calculator will do the same rounding down, or not (just click it, on and off, and watch the values). Output storage results go into 8-bit integers [0..255]; floating-point fractions are NOT stored, and may not be rounded, same as storing into 8 bits would do. Random values might change by one. 8 bits may not be perfect, but the point is to show that the worst case isn't too bad.

So 8 bits has an effect, but not a big difference. Our accepted computer color system plan is 8 bits (called 24-bit color, 8 bits each of RGB). We all use 8 bits and find no problems. Yes, linear values might decode to come back one less than they went in. A difference of 1 down at 5 or 10 or 20 could possibly be a significant percentage at the low end (where it is very black, and our monitors can hardly show it anyway), but this is nothing at higher values. And this change of one is random, no pattern to it; don't count on the eye to help figure it out. The 8-bit "problem" is largely only about whether the integer should have rounded up instead of down. The computer can of course compute gamma conversions to any high precision desired, but the final act of storing a precise result value into an 8-bit file MAY expediently truncate it to a value of maybe one less. It is the tiniest error, which depends on the decode procedure.

For example, if stored in an 8-bit file, linear 20 goes to 80 in 2.2 gamma, which then decodes back to 19 or 20 in 8-bit video space (to see this, just enter 20 into calculator field 6, choose Option 6, and then toggle the Truncate gamma values checkbox repeatedly). Option 6 is only for this purpose, and Option 7 just counts the values with the different differences for the two rounding cases. You probably will never detect this difference in a screen image. And it's just math; we certainly don't see any way here that the human eye could help with it. Notice that this difference is NOT just because the gamma data was 8 bits; it is also because the target video space was 8 bits. But 8 bits is not a big problem. It is our standard.

The math: You can repeat the math yourself for the concept. Here is how: The data must be normalized to a [0..1] range (divide by 255) before we do the math. Because when normalized to [0..1], the end point values 0 or 1 raised to any exponent are still 0 or 1, so the gamma boost never extends end ranges or causes clipping. The end points are fixed; it only boosts the midrange, more at the low end (see the graph). Then our computation must be rounded to an integer, both in the 8-bit file and in the 8-bit video space.

This next shows the work to convert linear value 20 to 2.2 gamma value 80 and then back to linear 19 (19 because it was stored as 8-bit integers). Again, the computer can do any precise calculations, but the 8-bit file is limited to storing integers.

Normalize: 20 / 255 = 0.078431 [0..1] range (so this is 7.8% of 255 full scale)
Gamma value: 0.078431 ^ (1/2.2) = 0.314409 [0..1] range (this is 31% scale on the histogram)
Value = 0.314409 * 255 = 80.174369, rounded to store as integer 80 [0..255] in 8-bit file.

Converting 80 gamma back to linear value 20:

Normalize: 80 / 255 = 0.313725 [0..1] range
Gamma value: 0.313725 ^ 2.2 = 0.078057 [0..1] range
Value = 0.078057 * 255 = 19.90443, stored as 19 [0..255] (8-bit video space)

19, or 20, depending on how the decoding software processes the rounding. It has to be an integer, which the receiving hardware can choose to process by truncating or rounding. If computed, truncation is simpler and faster than rounding. But either way, if 20 comes back as 19, this is what Option 7 calls a difference of one.
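The same worked example, condensed to code (a sketch; the variable names are mine):

    linear = 20
    g = round(255 * (linear / 255) ** (1 / 2.2))  # 80, stored in the 8-bit file
    back = 255 * (g / 255) ** 2.2                 # 19.904..., in 8-bit video space
    print(g, int(back), round(back))              # 80, then 19 truncated or 20 rounded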

Lookup Tables (LUT)

This article is about gamma, so to be more complete, and as an example of the concept, here are 8-bit video lookup tables (LUT) for gamma 2.2. Both Encode and Decode are shown. Instead of computing the math millions of times (three times per RGB pixel), a scanner or camera would use Encode, and an LCD display or printer would use Decode. The table can be in a ROM, with the input address being each of the possible gamma values, and the output being the corresponding rounded linear value for that input. Low gamma inputs are rather coarse (and gamma less than 25 is 0 or 1 linear). For an example of use, input gamma 132 (decode) simply reads address 132 to see and use the output, rounded 60 linear. Or linear 60 (encode table) outputs 132. The idea is that the table is fast, and saves the video hardware from having to do the math on millions of pixels.
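Such tables are trivial to build (a sketch of the idea; a real device would hold the precomputed entries in ROM):

    GAMMA = 2.2
    encode_lut = [round(255 * (v / 255) ** (1 / GAMMA)) for v in range(256)]
    decode_lut = [round(255 * (g / 255) ** GAMMA) for g in range(256)]

    print(encode_lut[60])   # 132: linear 60 encodes to gamma 132, as above
    print(decode_lut[132])  # 60: read straight from the table, no math at display time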


An example from the LUT is that Encode address linear 81 is gamma value 151, and then Decode address gamma 151 is linear value 81. It works without any additional math (fast and simple), but in 8 bits, some of these can be one off, like 80 is (80 encodes to 151, which decodes back to 81). Now we could of course store any modified value we wish in the table, but that can't help; it's already computed right.

79 -> 150, 150 -> 79
80 -> 151, 151 -> 81
81 -> 151, 151 -> 81   But what could we change?
82 -> 152, 152 -> 82

FWIW, notions about the human eye can't help either (the eye never sees gamma data; an analog CRT is the only thing that receives gamma directly, yet the eye argument is the opposite, that gamma is still somehow needed without CRT?). Anyway, gamma is simply not related to the eye's response. Our only problem is that 8-bit values can only show values 150, 151, 152, ... but linear 80 computes to about gamma 150.5. Off by one is not a big deal though, since there are so many other variables anyway - White Balance for one, Vivid for another, and these skew the data. So this is an instance to not sweat the small stuff.

The table could be modified to additionally correct for any other color nonlinearities in this specific device, the result of applying color calibration procedures for example. That would require three such tables, for each of red, green and blue. This table is for gamma 2.2, but other tables are quickly created. A 12-bit encode table would need 4096 values, a larger table, but it still reads just as fast.

The lowest linear values, like 4, 5, 6, may seem to be coarse steps (percentage-wise), but they are what they are (still the smallest possible steps of 1), and are what we've got to work with, and are the exact values that we need to reproduce, regardless of whether we use gamma or hypothetically somehow bypass it. These low values are black tones, and the monitor may not be able to reproduce them well anyway (monitors cannot show the blackest black).

But gamma is absolutely NOT related to the response of the eye, and gamma is obviously NOT done for the eye (that's nonsense, and of course, we don't even need to know anything about the eye's response). The eye never sees any gamma data, because gamma is always first completely undone, back to linear. We couldn't care less what the eye does with it; it does what it does. But it does want to see an exact linear reproduction of the original linear scene, the same as if it were still standing there looking.

Note that Options 6 & 7 convert linear values to gamma, and then back to linear, looking for a difference due to 8-bit rounding. That's all Option 6 does, but Option 7 does all possible values, to see how things are going. But our photos were all encoded elsewhere at 12 bits (in the camera, or in the scanner, or in raw, etc.), so encoding is not our 8-bit issue. So my procedure is that Options 6 & 7 always round the input encoded values, and only use the Truncate gamma values checkbox for the decoding, which converts the 8-bit output values by either truncating or full rounding. This still presents 8-bit integer values to be decoded, which matches the real world, but rounding the input introduces less error, which the camera's 12 bits likely would not cause.

Yes, it is 8 bits, and not perfect. However, the results are still good, very acceptable, which is why 8-bit video is our standard. It works. Gamma cannot help 8 bits work; actually, gamma is the complication, requiring we compute different values which then require 8-bit round-off. Gamma is a part of that problem, not the solution. But it is only a minor problem, necessary for CRT, and today, necessary for compatibility with the world's images.

Stop and think. This is very obvious. Gamma is absolutely not related to the response of our eye. This perceptual-step business at the low end may be how we might wish the image tones were, but it is NOT an option. Instead, the tones are the numbers they actually are, which we hope to reproduce accurately. The superficial "gamma is for the eye" theory fails as soon as we think about it. If the eye needs gamma, how can the eye see scenes in nature without gamma? And when would the eye ever even see gamma? All data is ALWAYS decoded back to linear before it is ever seen. A linear value of 20 would be given to a CRT monitor as gamma value 80, and we expect the CRT losses will leave the corresponding linear 20 visible. An LCD monitor will simply first decode the 80 to 20, and show that. Our eye will always hopefully see this original linear value 20, as best as our monitor can reproduce it.

But the most obvious fact: if we had instead simply stored this linear integer value 20 in an 8-bit file, then we would of course have easily read back linear 20, which is what computers do, very reliably. There's no problem with that, and no need to alter the values then. It could be a great plan for an LCD (only for LCD, not needing gamma). Showing that original linear 20 value to the eye is the only goal of reproduction; it can't get better than that. We continue to do gamma for compatibility, but certainly this LCD display does not "still need" CRT gamma. In fact, it is obvious that gamma manipulation actually increases the 8-bit problem. It's certainly NOT the solution, regardless of the mumbo-jumbo trying to call it a virtue. However, it's a tiny problem, and it is easy, and it does work, and the actual need and use of gamma now is for compatibility, so we do have much larger considerations. It is still tremendously more worthwhile to continue compatibility with all the world's images and video systems. But gamma is certainly NOT in any way related to matching the response of our eye. :)

Gamma has been done for nearly 80 years, to allow television CRT displays to show tonal images. CRTs are very non-linear, and require this correction to be useful. Gamma is very well understood, and the human eye response is NOT any part of that description. The only way the eye is involved is that we hope the decoded data will show a good linear reproduction of the image for the eye to view (but the eye is NOT part of the correction process). Gamma correction is always done, automatically, pretty much invisibly to us. We may not use CRT today, but CRT was all there was for many years, and the same gamma is still done automatically on all tonal images (which are digital in computers), for full compatibility with all of the world's images and video systems (and it is very easy to just keep doing it). LCD monitor chips simply decode gamma now (the specified 2.2 value does still have to be right). The 8-bit value might be off by one sometimes, but you will never know it. Since all values are affected about the same way, there's very little overall effect. More bits could be better, but the consensus is that 8 bits works OK.

I am all for the compatibility of continuing gamma images, but gamma has absolutely nothing to do with the human eye response. Gamma was done so that the corrected image could be shown linearly on non-linear CRT displays. Gamma is simply history, of CRT monitors. We still do it today, for compatibility with all images and all video and printer systems.

We probably should know that our histogram data values are gamma-encoded. Anything you see in the histogram is a gamma value.


18% gray cards seem a common confusion relating to gamma. It was known (from halftone printing, 1880s) that the eye and brain perceive 18% reflectance as about middle gray. The first 18% gray cards were to help printers regulate their ink flow. Then later, film became popular, and then light meters, and today the cards do still have exposure uses with our meters. But histograms are a digital concept.

Yes, Ansel Adams did popularize the 18% gray card as being middle gray at his Zone V, back in the 1930s. The card was available then, and it was B&W then, and he was not using digital, histograms, or gamma-encoded data. His "middle" was whatever the card was (he understood it was 18% reflectance, but 50% middle gray in his mind's eye). My guess is that he had likely never seen a histogram back then. His reference was how the analog tones printed on paper.

The funny thing is that today, some people recalibrate their light meter to make a gray card come out at histogram 128 (calling it "midpoint"). They heard 18% reflectance appears as middle gray, and they heard 128 was the midpoint of the histogram, so naturally, middle must mean middle. They assume there is some magic so that the gray card ought to come out at the middle.

However, it doesn't; 18% is not the middle of anything digital. 18% is simply not 50%. 18% is 18% in (linear) digital. Hopefully, we reproduce it as still 18%, so the eye will see 18%, and will think it looks middle gray. We do assume proper exposure will make 18% reflectance come out as value 46 (18% of 255, linear, assuming grayscale instead of color). However, when this data becomes gamma-encoded in our histogram, gamma 2.2 makes 46 be value 117, or about 46% (gamma 2.5 could reach 128, or a 21.8% card at gamma 2.2 would compute 128. Except that now, the actual midpoint in gamma 2.2 data is near 187, about 3/4 scale).
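The card numbers are quick to check (a hypothetical helper; the reflectance values are those just mentioned):

    def card_value(reflectance, gamma=2.2):
        # Where a card of this reflectance lands in a gamma-encoded histogram
        return round(255 * reflectance ** (1 / gamma))

    print(card_value(0.18))       # 117, about 46% of full scale at gamma 2.2
    print(card_value(0.18, 2.5))  # 128: gamma 2.5 could reach the "midpoint"
    print(card_value(0.218))      # 128: a 21.8% card at gamma 2.2 computes 128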
So anyway, they shoot the card in a picture, and then examine the histogram result, and gamma at 46% does make it coincidentally close to 50% (a 0.29 stop difference), and they say "Oh yeah, I've heard about that, and it probably ought to be middle".
Another problem, of course: if directly metering a card with a reflected meter, it does not matter if they meter an 18% card, or a white card, or a black card... the reflected meter adjusts exposure to try to make them all come out about the same, near middle (an example is shown at what reflected meters do, and it is the reason for incident meters, which don't). The exposures of each card will vary to do that, but the histogram result, not so much. All three cards should come out middle gray, but that calibrated reference is what the camera meter does, not what the card is.

Still, the right idea of metering on the 18% card is that to us, it actually is "middle gray", and a reflected meter can directly meter the 18% gray card to simulate the exposure of an "average" subject. Some scene colors will be brighter, and some will be darker, but this middle gray should be about in the middle (maximum possible range up and down from it). Real subjects often contain a wide variation of many reflecting colors (sky, trees, snow, beaches, people, red McDonalds signs, white, black, light, dark colors), which average out to some tone value which the meter reads. The scene may not always average out to a middle value, but the meter will set exposure to put it in the middle, whatever that means (probably it means 12.5%, but there are other digital influences, like White Balance or Vivid). But with reflected meters, predominantly white scenes will be underexposed, and predominantly black scenes will be overexposed. It's just how reflective meters work. Scene colors affect the reflective metering. White reflects well and reads high. Black doesn't reflect much, and reads low. The meter adjusts exposure to put both in the middle.

So specifically, the purpose of metering on the 18% card is: 1) it is seen as middle gray in grayscale, hoping for a correct average exposure suitable for the range of most scenes, and 2) to give an actual reading of the LIGHT which is INDEPENDENT of the subject's actual colors, which could meter wrong. The 18% card is assumed to give near-correct exposure in this light for an "average" or "typical" subject, whatever that is, so we hope this middle 18% exposure is often about right for the actual scene in front of us. And it normally is pretty close; it is actually more like using an incident meter, metering the light directly, independent of the subject's reflection. I am certainly NOT knocking the gray card method (but an incident meter would be easier). I do question the wisdom of trying to calibrate our meters with the card.

However, reflected light meters actually use 12.5% (at least Sekonic, Nikon and Canon do). And Kodak always said to open 1/2 stop more if metering on their 18% card, which agrees with the meters. However, there is confusion now... Kodak sold this card printing business over 20 years ago (and it has been sold again since). We can still buy new cards showing Kodak's name, but sadly, the 1/2 stop is no longer mentioned. And the difference of gamma 117 to 128 is 0.29 stop, which used to be rounded to this 1/2 stop, but we can use 1/3 stops today.
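For the stop arithmetic behind those figures (a sketch; log base 2 converts ratios to stops, and in gamma 2.2 data one stop is a factor of 0.5^(1/2.2)):

    import math

    print(math.log2(18 / 12.5))  # ~0.53 stop: 18% cards vs the meters' 12.5%
    print(math.log(117 / 128) / math.log(0.5 ** (1 / 2.2)))  # ~0.29 stop: 117 vs 128 in gamma data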

Coincidentally, this "midpoint" calibration misunderstanding (due to gammas help) causes only this small error, maybe not really the worst thing as a rough guide (which is all reflective metering is). Certainly it can be compensated, but certainly it is pointless, and plainly is the wrong idea. It is only coincidentally close to "middle" because of gamma, not because of exposure. 18% is not the midpoint of anything digital, and actual midpoint in gamma data is up near 187 at 73% anyway, at about 3/4 scale. And it varies with different color shifting manipulations in digital cameras, for example White Balance or Vivid.

Gamma is usually a constant 2.2, but even if you hear pros promoting it, calibration of the meter is certainly NOT about photographing the card one time. Not a great theory. Read Sekonic's calibration procedure, which does not mention gray cards. They know a thing or two, and according to Sekonic (and the ISO organization), the right idea for calibrating your meter is that IF it repeatedly and continually gives a consistent error on a wide range of scenes, THEN adjust the meter to compensate. Any one single reading is suspect anyway; too many variables.

The value used by most reflected light meters today (at least Sekonic, Nikon, Canon) is 12.5% (close to Kodak's half stop under 18%). Reflected K = 12.5 is shown in every Sekonic specification. The Wikipedia light meter article says Minolta, Kenko and Pentax meters used 14%.




Copyright © 2011-2016 by Wayne Fulton - All rights are reserved.
