
What and Why is Gamma Correction?

All of the world's photo image files contain Gamma Correction (gamma 2.2 is specified in sRGB, in digital HDTV, and before that in analog NTSC and PAL television).
Why? Because Gamma Correction oppositely corrects for the deficiencies of CRT monitors (which we used for many years, including television). For this purpose, all of our digital cameras and scanners always output images already corrected for gamma (except that Raw images defer this step until later, and 1-bit line art images don't need gamma).
Gamma is pretty much an invisible operation, it just always happens in the background, but it does affect our histogram data. When you edit an RGB(210, 145, 20) color in Photoshop, that is gamma data, which may not be important to us there. But it also affects the 18% gray card, which in the histogram's gamma data is near 46% (which is Not how many of us think of it).

Terms linear and gamma:   All digital tonal image data is gamma encoded (tonal data contains many tones, meaning color or grayscale images). In video use, the common meaning of these terms is that gamma data means gamma correction has been added, and linear data means the data is still linear, not gamma encoded (either not yet encoded, or no longer encoded).

So linear also implies the analog scene data in the lens, and also implies the linear reproduction that is presented to our eye, same as we see when either standing in front of the original scene, or viewing a reproduction on a video screen. Our eye of course always wants to see only the linear version, like the original scene. Gamma data is always necessarily "decoded" first before any eye ever sees it.

In math, linear means a straight-line graph, meaning twice the input produces twice the output (proportionally, linearly; also 5x input produces 5x the output, etc.) Gamma is non-linear power-law processing, done to correct for CRT monitors, which show data as if it had been raised to the exponent of 2.2 (roughly numerically approximate to squared, exponent of 2), with the high values leaving the low values far behind (lost, dark). So for the CRT, we have to prepare by first doing Gamma Correction, applying exponent 1/2.2 to the data first, offsetting the expected losses, so the image will come out of the CRT right, i.e., linear again for the eye to see.
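For a quick numerical check of that cancellation, here is a rough Python sketch (just an illustration of the two exponents on normalized 0..1 values, not the calculator's actual code):

# Gamma correction pre-distorts with exponent 1/2.2, and the CRT response
# (roughly an exponent of 2.2) undoes it, leaving the original linear value.
linear = 0.50                      # 50% linear intensity
corrected = linear ** (1 / 2.2)    # gamma correction, about 0.73
shown_by_crt = corrected ** 2.2    # CRT response, back to about 0.50
print(corrected, shown_by_crt)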

Gamma is NOT in any way related to the human eye response

Now and then, I get email... :) Charles Poynton's gamma articles have made the statement that we would still need to use gamma even if we no longer used CRT monitors. I can't agree with his suggestion about the eye, however I am definitely all for continuing to use gamma for the reason we actually do continue it: compatibility, because continuing gamma prevented obsoleting and rendering incompatible 100% of the world's existing images and video systems. That must have been a very easy decision, vastly easier than starting over, since the world is full of gamma images, and especially since gamma has become easy and inexpensive to implement (it's just a chip now). Compatibility is the best reason to continue using gamma.

The big problem I do see is that this ridiculous notion (that we still need gamma for the eye somehow?) has recently caused some false internet "explanations" (just notions which are impossible to explain). Their only argument is that the eye has a similar nonlinear response, which even Poynton says is "coincidental". I think it may be an opposite effect, but be that as it may, human eyes simply NEVER see any gamma data. We only see analog linear scenes or data. Our eyes of course evolved without CRT gamma, and eyes don't need gamma now. We did use gamma for decades, knowing it was designed only for the CRT. Now CRT is about gone, so suddenly newbies decide gamma must instead still be done for the human eye response somehow? Come on guys, don't be dumb. :) No matter what gamma might do, the eye simply never has any opportunity to ever see any gamma data, and gamma data is still totally unrelated to our vision response. Our eye only wants to view a properly decoded linear image reproduction, the same as the lens saw, the same as our eyes would see if we were still standing there. However, all of the world's images are in fact already gamma encoded, so we do continue gamma for that compatibility (LCD displays simply decode it first). The "why" we use gamma may not actually matter much, gamma is pretty much an invisible automatic operation to us, a no-op now, encoded and then decoded, but still, novices are being told wrong things, and it's a pet peeve for me.

Our eye of course always expects to see only linear data, a linear reproduction of the original linear scene, same as if we were still standing there. Any deviation would be data corruption. Any "need" for gamma for the eye or for the LCD is laughable technically, but our need for compatibility is still extremely important, so we still do it. We may still encode gamma into the data for other reasons (compatibility now), but it is always necessarily decoded back to linear before any eye ever sees it. Anything else would be distortion. The eye has no use for gamma, and luckily, the eye never sees gamma data.

The fundamentals are impossible to ignore: Our eye only looks at linear scenes, or at linear reproductions of scenes. The eye never sees gamma data, which would look excessively bright and unnatural if it did. The reason to do gamma correction is to correct CRT response. Today, CRT is no longer popularly used; however, since all of the world's tonal images are gamma encoded, the obvious reason we still continue gamma today is compatibility, which is all-important.

Our eyes never see gamma data (it's always decoded first). Our camera sensors also see linear data, images do begin and end as linear, and it seems every minimal article about our histograms portrays them as linear data. However, all our tonal image files and all of their histograms contain gamma data, which changes the numerical values you will see in your images. Our files and histograms contain gamma data; the numerical values are gamma encoded.

Gamma Calculator

Gamma: 2.2 is the standard profile
Option: Show decimal places on computed values
Values are converted both as linear or gamma

1. Value [0..255]
2. Percent [0..100%]
3. Difference of two values [0..255], Linear or Gamma
4. Difference of two values [0..100%], Linear or Gamma
5. Stops down from 255 right end (formats can be 1 or 1/3 or 1 1/3 or 1.3333)
6. Convert one linear value to gamma and back (for the article below)
7. All values, linear to gamma and back (for the article below)
Checkbox: Decoded gamma value truncated, not rounded (used by Options 6 & 7)

Numbers Only.

The "values" are the 0..255 values in a histogram (which are gamma numbers in the histogram, and linear numbers at the sensor). The percentages are of the 255 full scale histogram value.

Histogram values are integer [0..255] values, so numbers like 127.5 cannot actually exist in our JPG files. The calculator math will use a decimal fraction value though, if it pleases you. And you can show a couple of decimal places on the Option 1, 2, 5 histogram values (if interested in rounding, maybe).

Option 5 (measuring stops down from 255 right end) offers a rough approximation of how exposure affects the gamma histogram. In the gamma data, one stop underexposed from 255 should be about 186 or 73%, about 3/4 scale. Or 1/3 stop down from 255 should be about 230, at 90%. However, note that digital cameras are also making a few simultaneous tonal adjustments, for White Balance, Color Profile like Vivid, Saturation, Contrast, etc., which shift the histogram data. Therefore the gamma values in histogram data are probably a little different than the exact values predicted. It is a ballpark guess.
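To see where those 186 and 230 numbers come from, here is a small Python sketch of the Option 5 arithmetic (just an approximation; it ignores the camera's other tonal adjustments mentioned above):

# Stops down from 255: each stop halves the linear value,
# then 1/2.2 gamma encoding gives the histogram value.
def stops_down(stops, gamma=2.2):
    linear = 0.5 ** stops                # fraction of full scale, linear
    encoded = linear ** (1 / gamma)      # gamma-encoded fraction
    return round(255 * encoded), round(100 * encoded)

print(stops_down(1))     # about (186, 73), one stop down
print(stops_down(1/3))   # about (230, 90), 1/3 stop down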

We never see linear histograms for our photo images, our image files and histograms are RGB gamma data. These tonal images are always gamma (but one bit line art is not). The camera sensor chip was linear, and raw files are still linear, but until we show them to the eye, RGB image data is gamma. Even raw files show gamma histograms (from an embedded JPG image included in the raw file).

But gamma in 8-bit files and 8-bit target video space can create tiny rounding errors in the numbers (possibly "off by one"). This is no big deal (8 bits is what we do, and it obviously works fine), and this is Not related to the eye.

The Gamma Procedure, and How It Works:

It can be made to sound like voodoo, but it's actually pretty simple. A CRT display is not a linear device. CRT displays have serious tonal losses, the brightest tones get through and the rest are lost (dark). We used CRT display for over 75 years for television, so this fix is very well understood.

This nonlinear CRT response curve is named gamma, which is shown at right. These graph axes represent the 0..255 range, but the numbers are normalized to show 0..1 values (percentage scales). Read it like a regular curve tool (which I have marked in Blue, with input at bottom, and output at left). The straight 45 degree line shows a hypothetical perfect unchanged linear response, so that any input is the same numerical output (60% input from bottom is 60% output at left, no change). But the CRT response curve (named gamma) shows 50% response is down to 22% (so much of the output will be too dark).

To correct this nonlinear response, the image data is first boosted nonlinearly, modified to new values equally in the opposite direction of the losses. This curve is named gamma correction, after which the image will display properly (linearly) on a nonlinear CRT. Even after suffering these expected CRT losses, the corrected output will come out linear (the straight line in graph). That correction curve is shown, the image data is boosted so that midpoint 50% is raised to 73% in gamma data (calculator above, Option 2, 50%).

The correct Google search term for this subject is Gamma Correction. Don't believe everything you read now though. It is the internet, and there are good sources, and also those that frankly don't know. That part you see about the purpose of gamma being to aid the human eye response is utter nonsense, made-up gobbledygook. The eye never even sees gamma data, the eye only sees the decoded data, exactly reversed back to be the same original linear version again. That's the purpose of gamma, meaning gamma is only to correct the response of CRT, so we can in fact see the original linear image.


The Adobe Levels tool (CTRL L in Elements and Photoshop) has a gamma option. Adobe Help calls the center slider "Midtones", but describes it as "The middle Input slider adjusts the gamma in the image." It is certainly a very good tool to adjust overall image brightness (much better than "Brightness" tools, which merely add a constant to all tones, and can cause clipping). This tool raises the center of the curve, but the endpoints stay fixed (same range is fixed).

It does that by changing gamma, raising the curve shown above. This center slider of Levels shows 1.00 by default, which means 1x of the existing gamma (whatever it was, but probably 2.2, and the default 1 x 2.2 is still 2.2, no change at 1x). But other slider values are multipliers of that existing gamma.


Center slider at 0.45 = 1/2.2, the opposite action, now an image with no gamma correction (1/2.2 x 2.2 = gamma 1, linear). Too dark, because it simulates CRT losses showing an image with no gamma correction, and because your LCD still applies its decode to the gamma 1 data (taking it down to gamma 0.45), also too dark. This is the effect of CRT gamma losses. CRT is why we use gamma correction.

Center slider at 1.0, normal default, normal gamma 2.2. However, an LCD monitor has to specifically decode gamma first, by applying the 0.45 curve before showing it as linear, or gamma 1. CRT losses would also show it this proper way. This is the plan. Your LCD also decodes 2.2 to (2.2 x 1/2.2) = gamma 1, linear, reproducing the original scene in front of the camera.

Center slider at 2.2, applying another gamma 2.2 to the already-2.2 data (i.e., 2.2 x 2.2 = about gamma 4.8 now). Too bright, but we never see this. If we could see gamma data, our histogram data would look this way (too bright; the point is the data does have gamma 2.2 added to linear). This is done so when a CRT shows it darker, suffering the gamma losses, it will look right after all.
The Levels center slider is a multiplier of the current image gamma. I don't find that "multiplier" written about any more, but it was widely known and discussed 15-20 years ago (CRT days, back when we knew what gamma was). Gamma used to be very important, but today, we still encode with 1/2.2, the LCD monitor must decode with 2.2, and it's just an automatic no-op now.
Evidence of the tool as multiplier: An eyedropper on the gray road at the curve ahead in the middle image at gamma 2.2 reads 185 (I'm looking at the red value). Gamma 2.2 puts that linear value at 126 (midscale). 126 at gamma 1 is 126 (measured in the top image, 0.45 x 2.2 = gamma 1). 126 at gamma 4.8 is 220 (measured in the bottom image, 2.2 x 2.2 = gamma 4.8). Q.E.D.
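That eyedropper arithmetic is easy to check in Python (a rough sketch of the same three conversions, not Photoshop's code):

# 185 in the normal gamma 2.2 image corresponds to linear 126, and
# re-encoding that 126 at gamma 4.8 (2.2 x 2.2) lands near 220.
def decode(value, gamma):
    return 255 * (value / 255) ** gamma

def encode(value, gamma):
    return 255 * (value / 255) ** (1 / gamma)

print(round(decode(185, 2.2)))        # about 126 (linear)
print(round(encode(126, 1.0)))        # 126, unchanged at gamma 1
print(round(encode(126, 2.2 * 2.2)))  # about 220 at gamma 4.8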


Today, our LCD display is considered linear and does not need gamma. However we still necessarily continue gamma to provide compatibility with all the world's previous images and video systems. The LCD display simply uses a chip to decode it first (discarding gamma correction to necessarily restore the original linear image). Note that gamma is a Greek letter used for many variables in science (like X is used in algebra, used many ways), so there are also several other unrelated uses of the word gamma, in math and physics, or for film contrast, etc, but all are different unrelated concepts. One use of the term gamma is to describe the CRT response curve.

Film cannot be shown on a CRT directly (we must digitize an image first for our computer video system), so CRT is not a concern for film. But digital cameras and scanners all always automatically add gamma to all tonal images. A color or grayscale image is tonal (has many tones), but a one-bit line art image (two colors, black or white, 0 or 1) does not need or get gamma.

Gamma correction is automatically done to any image from any digital camera (still or movie), from any scanner, or any way a digital tonal image might be created. Gamma is an invisible background process, it just always happens. This does mean that all of our image histograms contain and show gamma data. The 128 value that we may think of as midscale is not the middle tone of the histograms we see. This original linear 128 middle value (middle at 50% linear data, 1 stop down from 255) is up at about 186 in gamma data, and in our histograms.

Normalization: When the digital image is converted to RGB, then for computing gamma, the [0..255] data is normalized into a [0..1] range (divide each value by 255 to be a fraction [0..1], see formula). Normalization is necessary because the end values of 0 or 1 raised to any exponent are still 0 or 1 (unchanged). Therefore, the overall range is not extended or changed, no clipping added, etc. The end points remain fixed (see the curve.) Then for each red, green, or blue component of every pixel,
the encode math is:  Gamma = 255 * (Linear / 255) ^ (1/2.2)

Then we store the result in an 8-bit file, as an integer in the range [0..255]. We probably take time to do any necessary rounding, but it must be an integer value (the file cannot store the fractional number). Then when displayed, the goal is to get that same tonal number back when decoded to linear again, to view an accurate reproduction of the original image.
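As a sketch, the encode step looks like this in Python (just my restatement of the formula above, not any camera's firmware):

def gamma_encode(linear_value, gamma=2.2):
    # Encode one linear [0..255] value to a gamma [0..255] 8-bit integer.
    normalized = linear_value / 255          # into the [0..1] range
    corrected = normalized ** (1 / gamma)    # apply the 1/2.2 exponent
    return round(255 * corrected)            # back to [0..255], stored as integer

print(gamma_encode(128))   # about 186
print(gamma_encode(20))    # about 80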

CRT displays take care of their own decoding (the CRT losses occur to decode simply by showing the image on a CRT). This is of course the purpose that gamma plans for, but even with LCD today, we still continue to do gamma for compatibility with the world.

LCD displays are considered to be linear, not needing gamma, but our images are all gamma encoded, so an LCD chip simply decodes them back to original linear (so we can show our gamma images). There is more than one way to do it, but LCD monitors and televisions normally use lookup tables. Math is slow for millions of pixels, so these lookup tables have previously been computed for all values, and are used by the device to avoid the math (see example LUT below). Then the table can simply provide the linear values for substitution to be shown.
The decode math is:  Linear = 255 * (Gamma / 255) ^ 2.2

It's just math, the formulas graph out the curves above. There's no mumbo-jumbo involved, and it's not rocket science. It is simply a power function (a fixed exponent). Gamma is used to exactly offset CRT losses, to be able to correct and use a CRT display. However, simple display CPUs are not equipped for much math, so on an LCD display, a LUT chip (below) first simply decodes back with the exact reversed math operation, to recover the same linear value we started with (leaving no change the eye could see). Decode uses exponent 2.2 instead of the reciprocal 1/2.2 used for Encode, which is reversible math. It's like 8/4 = 2, and 4 x 2 is 8 again. Reversible math, we simply get the same value back. However, there can be slight 8-bit rounding variations of gamma in between, which might change the value by a difference of one sometimes. A small error, but not really a big deal, since virtually all of our images and video systems and printers are 8 bits. If 8 bits were not acceptable, we would be doing something else.
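And the matching decode step, sketched the same way (again only an illustration of the formula, with a flag for the truncate-versus-round choice discussed below):

import math

def gamma_decode(gamma_value, gamma=2.2, truncate=False):
    # Decode one gamma [0..255] value back to a linear [0..255] 8-bit integer.
    normalized = gamma_value / 255           # into the [0..1] range
    linear = normalized ** gamma             # apply the 2.2 exponent
    result = 255 * linear
    return math.floor(result) if truncate else round(result)

print(gamma_decode(186))    # 127, an off-by-one example (linear 128 encoded to 186)
print(gamma_decode(80), gamma_decode(80, truncate=True))   # 20 rounded, 19 truncated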

The reason we use gamma. For many years, CRT was the only video display we had. But CRT is not linear, and requires heroic efforts to properly use it for tonal images (photos and TV). The technical reason we needed gamma is that the CRT light beam intensity efficiency varies with the tube's electron gun signal voltage. The CRT does not compute the decode formula; that formula is simply a description of what the non-linear CRT losses already actually do in the act of showing the image on a CRT ... the same effect. The non-linear CRT simply shows the tones, and the response is sort of as if the values were squared first (2.2 is near 2). These losses have variable results, depending on the tone's value, but the values that were not bright will pretty much go dark.

How does CRT gamma correction actually do its work? Gamma 2.2 is roughly 2, and my example will use 2 instead because it is simpler math. Encoding input to the power of 1/2.2 is roughly 1/2, or square root, which condenses the image gamma data range smaller (yes Poynton fans, the tones are compressed, CLOSER together). And 78% of the values encode to be boosted above the 127 50% midpoint (see LUT below, or see curve above). So gamma boosts the low values higher, they move up nearer the big boy bright values. Specifically, for a numerical example (two tones 225 and 25, and using the easier exponent 2 instead of 2.2), value 225 is 9x brighter than 25 (225/25). But the square roots are 15 and 5, which is only 3 times more, compressed together... 3² is 9 (and only 2.7 times more if we used 2.2). But we simply store the square root, and then the CRT shows it squared, for no net change, which is the plan. The reason of course is because the CRT losses are going to show it squared regardless (specifically, the CRT response result is the power of 2.2).
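The compression is easy to show with numbers. A short Python sketch of that 225 versus 25 example (using the rough exponent 2 from the text, plus the real 2.2):

# Two linear tones that differ by 9x end up much closer together after encoding.
bright, dim = 225, 25
print(bright / dim)                                   # 9.0 apart in linear data

sqrt_bright, sqrt_dim = bright ** 0.5, dim ** 0.5     # the rough exponent-2 version
print(sqrt_bright, sqrt_dim, sqrt_bright / sqrt_dim)  # 15.0, 5.0, ratio 3.0

encode = lambda v: 255 * (v / 255) ** (1 / 2.2)       # the real 1/2.2 encoding
print(encode(bright) / encode(dim))                   # about 2.7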

Not to worry, our eye is NEVER EVER going to see any of these gamma values. Because then the non-linear CRT gamma output is a roughly squared response that expands it back (restoring our first 225 and 25 linear values by the actual CRT losses that we planned for). CRT losses still greatly reduce the low values, but they were first boosted in preparation for it. So this gamma correction operation can properly show the dim values linearly again (since dim starts off condensed, up much closer to the strong values, and then becomes properly dim when expanded by CRT losses.) It has worked great for many years. But absolutely nothing about gamma is related to the human eye response. We don't need to even care how the eye works. :) The eye NEVER sees any gamma data. The eye merely looks at the final linear reproduction of our image on the screen, after it is all over. The eye can only tolerate seeing an accurate linear reproduction of the original scene. How hard is that?

Then we more recently invented LCD displays, and these were considered linear devices, so technically, they didn't need CRT gamma anymore. But if we did create and use gamma-free devices, then we couldn't show any of the world's images properly, and the world could not show our images properly. No advantage in that, so we're locked into gamma, and for full compatibility, we simply continue encoding our images with gamma like always before. This is easy to do today; it just means the LCD device includes a chip to first decode gamma and then show the original linear result. Perhaps it is a slightly wasted effort, but it's easy, and exactly reversible, and the compatibility reward is huge (because all the world's images are gamma encoded). So no big deal, no problem, it works great. Again, the eye never sees any gamma data, it is necessarily decoded first back to the linear original. We may not even realize gamma is a factor in our images, but it always is. Our histograms do show this numerical gamma data, but the eye never sees it. Never ever.

So our printers naturally expect to receive gamma images too (because that's all that exists). Publishing and printer devices also need some gamma, not as much as 2.2 for the CRT, but the screening methods need most of it (for dot gain, etc). Until recently (2009), Apple Mac computers used gamma 1.8 images. They could use the same CRT monitors as Windows computers, and those monitors obviously were gamma 2.2, but Apple split this up. This 1.8 value was designed for the early laser printers that Apple manufactured then (and for publishing prepress), to be what the printer needed. Then the Mac video hardware added another 0.4 gamma correction for the CRT monitor, so the video result was roughly an unspoken gamma 2.2, even if their files were gamma 1.8. But that was before the internet, and now the last few Mac versions (since OS 10.6) observe the world standard of gamma 2.2 in the file, because all the world's images are already encoded that way, and we indiscriminately share them via the internet now. Compatibility is a huge deal, because all the world's grayscale and color photo images are tonal images. All tonal images are gamma encoded. But yes, printers are programmed to deal with the gamma 2.2 data, and to adjust it to their needs.

While we're on history, this CRT problem (non-linear response curve named gamma) was solved by earliest television (first NTSC spec in 1941). Television broadcast stations intentionally boosted the dark values (with gamma correction, encoded to be opposite to the expected gamma CRT losses). That was less expensive in vacuum tube days than building gamma circuitry into every TV set. Without this "gamma correction", the CRT screen images came out unacceptably dark.

Today, 8 bits is the sticky part: We do store computed gamma into 8-bit JPG files or 8-bit video space. A decimal value like 100.73 has to become integer 101 or 100. We can round it or truncate it. Even if we use the rounded value, some of the values work out very well, but other possible 8-bit values might still be off by one (one is a tiny number, any number's least significant bit). For example, values of linear 72 or 80 or 86 and others are simply not exactly reproducible in rounded 8-bits (see Option 7 above, or the LUT can show it too), so these will always be off by one (in 8-bits). (Poynton fans, note the low values are NOT the rounding problem). Using the truncated value may sound crude, but it is fast, and in fact not so bad. Rounding is randomly up or down too, and it somewhat changes which values are affected, and changes the count of values affected by 8-bit off-by-one errors from 28% rounded to 50% truncated (speaking of Option 7).

Options 6 & 7: However, these rounded 28% errors are roughly evenly distributed between +1 or -1 difference, a variation range of 2. But the 50% truncated errors are all -1 (except the one value 243, which was -2, only because the actual 241.986 was truncated to 241). We might claim the overall truncated result appears better, more consistent, less variation (evidence offered in Option 7). But we are speaking of tiny rounding effects on precision (only "off by one" errors), and for speed, LUTs are commonly used, which are rounded.
FWIW, converting any 16 bit file to 8 bits always only uses truncation. We never notice, and it is fast.
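Option 7 simply tallies those round-trip differences over all 256 values. Here is a Python sketch of that count (my reconstruction of the idea: round the encode, then decode either rounded or truncated):

import math
from collections import Counter

def roundtrip_differences(truncate_decode):
    diffs = Counter()
    for linear in range(256):
        encoded = round(255 * (linear / 255) ** (1 / 2.2))   # 8-bit gamma value
        decoded = 255 * (encoded / 255) ** 2.2               # back toward linear
        decoded = math.floor(decoded) if truncate_decode else round(decoded)
        diffs[decoded - linear] += 1
    return diffs

print(roundtrip_differences(False))   # rounded decode: mostly 0, some +1/-1 (the article reports about 28% off)
print(roundtrip_differences(True))    # truncated decode: about 50% off, nearly all -1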

If we did not use gamma for an LCD, then our 8-bit JPG file could instead simply contain the linear values directly. We could simply store the linear value in the file, then read the same original value back out, and simply show it as is (speaking of an LCD). No technical issues then. But we instead necessarily and desirably still use gamma, for compatibility with all the images and video and printer systems in the world.

How big a deal is 8-bit gamma? What actually does happen? We've come a long way, but 8-bit gamma did have a pretty large effect in film scanners before technology allowed them to use 12 bits. It seems much less critical in video display. I offer evidence to examine this 8-bit rounding effect in the calculator above, Options 6 & 7.

Gamma values stored in an 8-bit JPG file, or decoded into 8-bit video space, are necessarily 8-bit integers, which means the values can only be a limited set of numbers of [0..255]. When computing gamma, we can compute a number like 19.67. But the result can only be 19 or 20, and not a fraction. So the values might change by a value of one (almost always only one, Option 7). Rounding up or down appropriately certainly helps accuracy, and video lookup tables can provide that. Or code could do that, but it's slow processing math. For the huge load of millions of pixels (x3 for RGB), an expedient method to convert to 8-bits is to truncate. It's really not all that bad.

The Truncate gamma values checkbox in the calculator will do that same rounding down, or not (just click it, on and off, and watch the values). Output storage results go into 8-bit integers [0..255], i.e., floating point fractions are Not stored, and may not even be rounded, the same as storing into 8-bits would do. Random values might change by one. 8-bits may not be perfect, but the point is to show the worst case isn't too bad.

So 8-bits has an effect, but not a big difference. Our accepted computer color system plan is 8-bits (called 24-bit color, 8 bits each of RGB). We all use 8-bits and find no problems. Yes, linear values might decode to come back one less than they went in. A difference of 1 down at 5 or 10 or 20 could possibly be a significant percentage at the low end (where it is very black, and our monitors can hardly show it anyway), but this is nothing at higher values. And this change of One is random, no pattern to it, don't count on the eye to help figure it out. The 8-bit "problem" is largely only about whether the integer should have rounded up instead of down. The computer can of course compute gamma conversions to any high precision desired, but the final act of storing a precise result value into an 8-bit file MAY expediently truncate it to a value of maybe one less. It is the tiniest error, which depends on the decode procedure.

For example, if to be stored in an 8-bit file, linear 20 goes to 80 in 2.2 gamma, which then decodes back to 19 or 20 in 8-bit video space (to see this, just enter 20 into calculator field 6, choose Option 6, and then toggle the Truncate gamma values checkbox repeatedly). Then compare it to value 21. Option 6 is only for this purpose, and Option 7 just counts the values with the different differences for the two rounding cases. The point is, 8 bits can cause minor "off by one" errors in gamma data. You probably may never detect this difference in a screen image. And it's just math, we certainly don't see any way here that the human eye could help with it? Notice that this difference is Not just because the gamma data was 8-bits, it is also because the target video space was 8-bits. But 8-bits is not a big problem. It is our standard, and it works well.

The math: You can repeat the math yourself for the concept. Here's how: Gamma must be normalized to a [0..1] data range (divide by 255) before we do the math. Because when normalized to [0..1], end point values 0 or 1 to any exponent are still 0 or 1, so the gamma boost never extends end ranges or causes clipping. The end points are fixed, it only boosts the midrange, more at the low end (see the graph). Then our computation must be rounded to an integer, both in the 8-bit file, and in the 8-bit video space.

This next shows the work to convert linear value 20 to 2.2 gamma value 80 and then back to linear 19 (19 because it was stored as 8-bit integers). Again, the computer can do any precise calculations, but the 8-bit file is limited to storing integers.

Normalize: 20 / 255 = 0.078431 [0..1] range (so this is 7.8% of 255 full scale)
Gamma value: 0.078431 ^ (1/2.2) = 0.314409 [0..1] range (this is 31% scale on the histogram)
Value = 0.314409 * 255 = 80.174369, rounded to store as integer 80 [0..255] in 8-bit file.

Converting 80 gamma back to linear value 20:

Normalize: 80 / 255 = 0.313725 [0..1] range
Gamma value: 0.313725 ^ 2.2 = 0.078057 [0..1] range
Value = 0.078057 * 255 = 19.90443, stored as 19 [0..255] (8-bit video space)

19, or 20, depending on how the decoding software processes the rounding. It has to be an integer, which the receiving hardware can choose to process by truncating or rounding. If computed, truncation is simpler and faster than rounding. But IF 20 comes back as 19, this is what Option 7 calls a difference of one.
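The same arithmetic takes only a few lines of Python, if you want to reproduce it (an illustration only; real hardware would use a lookup table as described below):

import math

gamma = 2.2
linear_in = 20

encoded = 255 * (linear_in / 255) ** (1 / gamma)    # 80.17..., stored as 80
stored = round(encoded)

decoded = 255 * (stored / 255) ** gamma             # 19.90...
print(stored, round(decoded), math.floor(decoded))  # 80, then 20 rounded, 19 truncated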

Look up Tables (LUT)

This article is about gamma, so to be more complete, and for example of concept, here are 8-bit video lookup tables (LUT) for gamma 2.2. Both Encode and Decode are shown. Instead of computing math millions of times (three times per RGB pixel), a scanner or camera could read an Encode LUT, and a LCD display or printer could read a Decode LUT. This table can be in a ROM with the decode input address being all of the possible gamma values, and with the output being the corresponding rounded linear value for that input. Low gamma inputs are rather coarse (and gamma less than 25 is 0 or 1 linear.) For an example of use, input gamma 132 (decode) simply reads at address 132 to see and use output rounded 60 linear. Or linear 60 (encode table) outputs 132. The idea is that the table is fast, and prevents the video from having to do the math on millions of pixels.

An example of use of the LUT is that Encode address linear 82 is gamma value 152. And then Decode address gamma 152 is linear value 82. It works without any additional math (fast and simple), but in 8-bits, some of these come back one off, like 80 does (80 encodes to 151, but 151 decodes back to 81). Now we could of course store any modified value we wish in the table, but that can't help, it's already computed right.

79 -> 150, 150 -> 79
80 -> 151, 151 -> 81   But what could we change?
81 -> 151, 151 -> 81
82 -> 152, 152 -> 82

Our only problem is that 8-bit values can only show integer values 150, 151, 152, ... but linear 80 computes about gamma 150.56, stored in 8-bits as 151. 151 computes linear 80.52, stored in 8-bits as 81. But off by one is not a big deal (and rounding puts it closer), since there are so many other variables anyway. White Balance for one for example, Vivid for another, these skew the camera data. So this is an instance to not sweat the small stuff.
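Building such a pair of tables takes only a few lines. This Python sketch prints that same 79..82 neighborhood (just an illustration of the concept, not any particular device's firmware):

# Precompute 8-bit encode and decode lookup tables for gamma 2.2.
GAMMA = 2.2
encode_lut = [round(255 * (v / 255) ** (1 / GAMMA)) for v in range(256)]
decode_lut = [round(255 * (v / 255) ** GAMMA) for v in range(256)]

for linear in (79, 80, 81, 82):
    g = encode_lut[linear]                         # table lookup instead of math
    print(linear, '->', g, ' and ', g, '->', decode_lut[g])
# 80 comes back as 81, the "off by one" case discussed above.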

In an 8-bit integer, where could any supposed so-called perceptual improvements even be placed? :) The lowest linear steps (1,2,3,4) are separately distinguished (with numbers 1,2,3,4), perhaps coarsely percentage-wise, but in 8 bits, 1,2,3,4 is all they can be called. And of course, they are the exact values we hope to reproduce (however, real world video monitors probably cannot show levels that black, a cause which is neither gamma nor 8-bits). Notions about the human eye can't help (the eye never sees gamma data). An analog CRT is all that ever sees any gamma data directly, but the eye notion is the full opposite, imagining gamma is still somehow needed without CRT? Anyway, gamma is obviously not related to the human eye's response in any way.

Except for the 8-bits, the LUT is simply a faster way to do the same math ahead of time. The LUT is not entirely wasted effort for LCD, because the table can also be modified to additionally correct for any other color nonlinearity in this specific device, monitor color calibration procedures for example, or just routine corrections. The LUT provides the mechanism to expand it further (if the data is this, then show that). Color correction would require three such tables, for each of red, green and blue. This table is for gamma 2.2, but other tables are quickly created. A 12-bit encode table would need 4096 values, a larger table, but it still reads just as fast.

The lowest linear values, like 4, 5, 6, may seem to be coarse steps (percentage-wise), but they are what they are (still the smallest possible steps of 1), and are what we've got to work with, and are the exact values that we need to reproduce, regardless if we use gamma or if we hypothetically somehow could bypass it. These low values are black tones, and the monitor may not be able to reproduce them well anyway (monitors cannot show the blackest black).

But gamma is absolutely Not related to the response of the eye, and gamma is obviously NOT done for the eye (that's nonsense, and of course, we don't even need to know anything about the eye's response). The eye never sees any gamma data, because gamma is always first completely decoded back to linear. We couldn't care less what the eye does with it, the eye does what it does, but it is happiest to see an exact linear reproduction of the original linear scene, the same as if we were still standing there looking.

Note that Options 6 & 7 convert linear values to gamma, and then back to linear, and look for a difference due to 8-bit rounding. That's all Option 6 does, but Option 7 does all possible values, to see how things are going. But our photos were all encoded elsewhere at 12 bits (in the camera, or in the scanner, or in raw, etc), so encoding is not our 8-bit issue (it's already done). So my procedure is that Options 6 & 7 always round the input encoded values, and only use the Truncate gamma values checkbox for the decoding, which will convert the 8-bit output values by either truncating or by full rounding. This still presents 8-bit integer values to be decoded, which matches the real world, but rounding the input introduces less error, error which the camera likely would not cause.

Yes, it is 8-bits, and not perfect. However, the results are still good, very acceptable, which is why 8-bit video is our standard. It works. Gamma cannot help 8-bits work; actually, instead, gamma is the complication requiring that we compute different values and then require 8-bit round-off. Gamma is a part of that problem, not the solution. But it is only a minor problem, necessary for CRT, and today, necessary for compatibility with the world's images.

Stop and think. It is obvious that gamma is absolutely not related to the response of our eye. This perceptual step business at the low end may be how we might wish the image tones were, but it is Not an option, not in 8 bits. Instead, the numbers are what they actually are, which we hope to reproduce accurately. The superficial "gamma is for the eye" theory falls flat (fails) when we think about it once. If the eye needs gamma, how can the eye see scenes in nature without gamma? And when would the eye ever even see gamma? All data is ALWAYS decoded back to linear before it is ever seen. And short of using 10 or 12 bits, how could we possibly improve it? So before you email nonsense to me, proponents need to show a specific numeric example, showing data numbers from start to finish, showing how gamma could possibly help results (of course, it doesn't, so they cannot, it's funny actually). Gamma is obviously NOT related to the response of the eye. A linear value of 20 would be given to a CRT monitor as gamma value 80, and we expect the CRT losses will leave the corresponding linear 20 visible. A LCD monitor will simply first decode the 80 to 20, and show that. Our eye will always hopefully see this original linear value 20, as best as our monitor can reproduce it.

So the most obvious fact: if we had instead simply stored this linear integer value 20 in an 8-bit file, then we would of course have easily read back linear 20, which is what computers do, very reliably. There's no problem with that, no need to alter the values then. It could be a great plan for an LCD (only for an LCD not needing gamma). Showing that original linear 20 value to the eye is the only goal of reproduction. It can't get better than that. If it were possible, it would obviously be far better to simply store and use the original linear number 20, because reproducing it is our only goal. But CRT response made that impossible, and now compatibility with years of gamma makes that impossible. We of course continue to do gamma for compatibility, but certainly an LCD display does not "still need" CRT gamma. In fact, it is obvious that gamma manipulation actually increases the 8-bit problem. Gamma is certainly Not the solution, regardless of the mumbo-jumbo trying to call it a virtue. However, it's necessary, and a tiny problem, and it is easy, and it does work. The actual necessity of gamma now is for compatibility, so we do have much larger considerations. It is tremendously more worthwhile to continue compatibility with all the world's images and video systems. But gamma is certainly Not in any way related to matching the response of our eye. :)

Gamma has been done for nearly 80 years to allow television CRT displays to show tonal images. CRTs are non-linear, and require this correction to be useful. Gamma is very well understood, and the human eye response is NOT any part of that description. The only way the eye is involved is that we hope the decoded data will show a good linear reproduction of the image for the eye to view (but the eye is NOT part of the correction process). Gamma correction is always done, automatically, pretty much invisibly to us. We may not use CRT today, but CRT was all there was for many years, and the same gamma is still done automatically on all tonal images (which are digital in computers), for full compatibility with all of the world's images and video systems (and it is easy to just keep doing it). LCD monitor chips simply decode gamma now (the specified 2.2 value does still have to be right). The 8-bit value might be off by one sometimes, but you will never know that. Since all values are affected in about this same way, there's little overall effect. More bits could be better, but the consensus is that 8-bits works OK.

I am all for the compatibility of continuing gamma images, but gamma has absolutely nothing to do with the human eye response. Gamma was done so that the corrected linear image could be shown on non-linear CRT displays. Gamma is simply history, of CRT monitors. We still do it today, for compatibility with all images and all video and printer systems.

We probably should know that our histogram data values are gamma encoded. Anything you see in the histogram is gamma values.

Copyright © 2011-2016 by Wayne Fulton - All rights are reserved.
