
Have we hit a megapixel resolution limit?

We are doing quite well, but it's actually the opposite of a limit - in a few cases, we have finally achieved the minimum resolution, and some cameras have even started removing the anti-aliasing filters.

Our images have become large. Maybe the camera is 24 megapixels, but our common use is 2 megapixels for the video screen. Or we might print 7 megapixels in an 8x10 inch print. But we might print 30x20 inches, which at 300 dpi is 54 megapixels. Some people say that since some of our camera sensors now contain 200 or 250 pixels per mm, and since good lenses resolve maybe 100 line pairs per mm, we must have hit a limit for resolution. They make a serious mistake though, not understanding how digital sampling works. Sensors have now approached about two pixels per line pair of lens resolution, but that is only the absolute minimum, finally reached.
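Just to make that arithmetic concrete, here is a quick sketch in Python (my own check, simply multiplying the print dimensions by the dpi):

    # Megapixels needed for a print: pixels per side = inches x dpi
    def megapixels(width_inches, height_inches, dpi=300):
        return (width_inches * dpi) * (height_inches * dpi) / 1e6

    print(megapixels(8, 10))    # 7.2  (an 8x10 inch print at 300 dpi)
    print(megapixels(30, 20))   # 54.0 (a 30x20 inch print at 300 dpi)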

We're not near a limit. I'm not sure there is any concept of a limit. The manufacturers keep increasing the megapixels; they obviously don't see a limit. And certainly, as long as we keep saying Wow about the new sensors, we're not there yet.

Resolution test targets are often printed as closely spaced rows of parallel lines. Resolution of film and lenses is generally expressed in line pairs per mm resolved. Black lines have white lines between them; that pair of two lines, black and white, has to be resolved by at least two pixels. So the minimum sampling resolution needs to be 2x the count of black lines, which are called line pairs. And the really big point is, that's just the Minimum. More is good.

35 mm format lenses resolve maybe 100 lp/mm ± 40, depending on the lens. Resolve means we can make it out to be lines. Not necessarily good clear sharp lines; resolve means at least we can recognize vague smudges of lines. But higher resolution would show more distinct lines, with more detail in the lines.

Panatomic-X film can resolve 170 lp/mm. Color film maybe half of that.

But digital and film are extremely different worlds, very different rules. Film will have a limit. For one thing, film cannot oversample. Oversample is a keyword.

In the earliest days of inventing digital, Nyquist (of the Nyquist sampling theorem) showed that we must sample AT LEAST at 2x the detail level to prevent aliasing. Aliasing is false detail created by artifacts of insufficient sampling resolution. One example of false detail is moire patterns, added detail that was not actually in the image. Jaggies (also aliasing) are another example, where straight lines appear as staircase steps (steps of pixel size). Basically the 2x requirement is the line pair thing, but the theorem is much deeper than that. One result is that 2x sampling is the absolute minimum level for accurate reproduction without creating false detail (aliasing). A rate even higher than 2x (oversampling) always gives a better quality of reproduction. However, our camera sensors have (until very recently) always required anti-aliasing filters to reduce the lens detail, to slightly blur the image enough so that the detail will not be greater than our sensor resolution can resolve. Meaning, we had not reached even the minimum resolution. We could still work this way, but only by limiting the full resolution the lens could deliver. And now we are starting to be able to remove some of the anti-aliasing filters (but that's a minimum we have finally reached).
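A small numeric sketch of the aliasing idea (my illustration, not a camera measurement, using numpy): if a signal has more cycles than half the sample count, the samples exactly match a false lower frequency instead.

    import numpy as np

    # 90 cycles of fine detail, but only 100 samples (about 1.1x, well below 2x):
    n = 100
    t = np.arange(n) / n
    detail = np.sin(2 * np.pi * 90 * t)

    # The samples are indistinguishable from a slow 10-cycle wave (the alias),
    # because for frequencies above n/2, f and n - f sample identically.
    alias = np.sin(2 * np.pi * (n - 90) * t)
    print(np.allclose(detail, -alias))   # True: false detail at 10 cycles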

The mistake made is to imagine that image detail corresponds to pixel detail in any one-for-one relationship. Sampling simply doesn't work that way. Some might imagine 1x sampling is a limit, but instead, 1x sampling is simply insufficient. We need lots of pixels. Of course, the detail in vast areas of our images doesn't approach whatever maximum we do accomplish, so the problem is not always difficult. Depth of field sees to that, since we are focused at only one distance. And the scene content also contributes to that. We can do pretty well now at lower levels. But we are not near a maximum limit, if such a limit even exists. We might do more than we need, or more than is convenient to use, but there is no point where things start going bad.

A camera sensor with 256 pixels per mm can at best minimally resolve 128 line pairs per mm, at the Nyquist 2x minimum. That may sound like a limit, but it's a minimum limit (not a maximum limit). I hope to show that oversampling with more than 2x sampling pixels is always better. Making this part up (call it a joke), but possibly 2x sampling could be excellent if we could get all the lines perfectly aligned and exactly centered on the pixels, with the same spacing as the pixels, and very straight, not slanted to the pixels. But the real world is random and chaotic; things don't line up. If the lines were slanted or curved, that is additional finer detail (detail within the lines) that may not be resolved.
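That alignment joke can actually be demonstrated numerically. A sketch (mine): a line pattern sampled at exactly 2x reproduces with full contrast only when pixel boundaries line up with the lines; slide the pattern half a pixel and every pixel averages to the same featureless gray.

    import numpy as np

    # 8 line pairs (black=0, white=1), each line drawn 100 sub-steps wide,
    # sampled at 2x (one pixel per line) by averaging each pixel's area.
    fine = 100
    pattern = np.tile(np.repeat([0.0, 1.0], fine), 8)

    def sample_2x(signal, shift=0):
        s = np.roll(signal, shift)                # slide the lines under the pixels
        return s.reshape(-1, fine).mean(axis=1)   # each pixel averages its own area

    print(sample_2x(pattern))            # aligned: 0, 1, 0, 1, ... full contrast
    print(sample_2x(pattern, fine // 2)) # half-pixel shift: all 0.5, lines gone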

How Digital Sampling Works

Below is an image from a printed Smithsonian Magazine, September 2014, page 52, a 9000 year old man in North America (it being my government's publication, I assume I share their rights to use it). These are scanner images, but a digital camera samples with exactly the same principles, the same sampling concepts, be it pixels per inch or pixels per mm (and the scanner is adjustable, handy here). It is a CCD flatbed scanner, which has a lens in it, focusing the 8.5 inch glass bed onto about a 2 inch digital sensor, which then samples it with normal digital sampling, just like the camera does. The scanner resolution dpi is referenced to the original inches on the glass bed. The magazine image is printed at the normal 150 halftone dots per inch. Our brain may see a subject pattern in those ink dots, but the ink dots are the detail in this image.

This digital reproduction job is to resolve those 150 ink dots per inch. This first image is scanned at 150 dpi, which is 1x sampling (of the 150 dots per inch on the original).

We know sampling theory tells us that 1x sampling will be insufficient. The very word "sampling" means we only see a sample of the actual detail there, less than 100%. The 1x sampling does reproduce a picture, but it has the expected moire (aliasing, which is false detail due to insufficient sampling, less than the Nyquist minimum). This is a 100% view (the scan is shown full size). The scan is 150 pixels per inch, of 150 halftone dots per inch (much like 150 line pairs per inch), which is 1x sampling of that detail. The yellow arrow points to a little bump corresponding to the area that is also shown enlarged, to see some pixels. This enlargement is 3200% size, shown for consistency with those below. This enlarged crop is only about 12 pixels wide. A pixel is just digital numbers for the sample representing one single color, the averaged color of that pixel area (not unlike pictures set with mosaic tile chips: each tile or pixel is one color, and our brain recognizes the image in the pattern they make).

Next scan is 2x sampling, the Nyquist minimum, at 300 dpi. The scanned image was twice as large as the 150 dpi one, so it is shown here at half size (to be the same size). It looks pretty good, better than above. There is no aliasing, because we accomplished 2x sampling. But 2x sampling is a minimum, and in the enlargement of the larger original at 1600% (to be the same size), we still don't see any ink dots, simply not resolved well enough to recognize them. The small image looks fine (it has the minimum resolution), but we don't have sufficient resolution to reproduce the dot detail actually there. What we see is of course the image detail, but it's not an adequate reproduction of the subject detail. We see no halftone dots, but we can recognize where the edge and the little bump are (we see the larger detail).

Let's try more; next scan is 4x sampling at 600 dpi. It is 4x size, reduced here to show the same size. And an 800% enlargement of the original, same size, which is starting to show a strong hint of the dots, spots at least. We could claim to resolve the dots, so far as we can tell something is there, but like minimum line pairs, fuzzy stuff is not a great result. With only about four pixels from dot to dot, we really don't see any circles. Simply not enough sampling resolution. Better is possible.

Again more, to 8x sampling; next scan is 1200 dpi. It is 8x size, reduced to show the same size here. And a 400% enlargement (same size). Bingo, we actually resolve some dots now - with about eight pixels from dot to dot, there are a few pixels across a dot to indicate its round shape. We did not see this before. Of course, it's much greater detail than our small complete picture can use (small was only one choice), but it better shows the actual detail present, the ink dots printing it. Oversampling (more than Nyquist 2x) does improve the resolution seen if shown at full size. Yes, perhaps this resolution is overkill (wasted effort) for our reduced size reproduction. The greatly larger number of pixels is not helping our small reproduction at left, if that was our goal; however, other larger uses might take advantage of the higher resolution.

And next is a 2400 dpi scan, 16x size, shown reduced to 1/16 for the same size, and also shown 200% enlarged (same size). About 16 pixels dot to dot now, and it is noticeably smoother, but not a big difference in detail. Oversampling obviously gives a better image than the earlier point where we could just make out that dots existed. We probably are approaching a limit of usefulness in this case, for this data (but again, it is 16x sampling, or 8x more than Nyquist, 8x oversampled). Of course, when the image is resampled smaller to 1/16 size on the left (resolution is discarded), it's the same as the 300 dpi image then. But there is more detail in it if we see it larger. Again, here we are scanning printed ink dots, NOT a real photo. These dots are the detail in this image.
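The same progression can be mimicked in a few lines of Python (a much simplified one-dimensional sketch of the idea, mine, not the actual scans): a row of "ink dots" block-averaged at 1x through 8x sampling.

    import numpy as np

    # 8 "ink dots" on a fine grid: each dot period is 64 sub-steps,
    # with the dark dot itself 24 sub-steps wide in the middle.
    steps = 64
    period = np.ones(steps)
    period[20:44] = 0.0
    row = np.tile(period, 8)

    for factor in (1, 2, 4, 8):                   # pixels per dot period
        px = steps // factor
        pixels = row.reshape(-1, px).mean(axis=1)
        print(f"{factor}x: darkest pixel = {pixels.min():.2f}")

    # 1x and 2x (with this alignment): every pixel averages to the same
    # gray, and no dot is seen. 4x: some pixels darken (a vague smudge).
    # 8x: pixels land fully inside a dot and read 0.00 black - resolved.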

Note that these black spots are NOT holes in the skull or in the background. They are only black printed ink dots whose purpose is to influence the average color we perceive there. Nevertheless, these dots are the detail in our screened magazine reproduction. Camera images don't do that screening; they just show more pixels of the actual reproduced color. But printing only has four colors of ink, and cannot do that. We may seem to see some random colored dots, but it is a carefully calculated attempt to mimic the actual color being reproduced. Printing uses dithered ink dots of the four colors (magenta, cyan, yellow and black) to average out to simulate one of the 16.7 million possible colors. Notice the skull in this last sample above has black dots on white paper, and also has traces of magenta, cyan, and yellow dots added in, to influence the final average color we see. The background overprints magenta and yellow to make red. Anyway, now we can see that the detail has its own detail (for example, the roundness of the black dots).
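A toy model of that averaging (mine, and much simplified: it ignores dot overlap, dot shape, and real ink behavior) shows how ink coverage fractions average into the color we perceive:

    # Toy halftone color averaging: each ink covers some fraction of the patch
    # and subtracts its complementary primary from the white paper.
    def perceived_rgb(c, m, y, k=0.0):
        r = (1 - c) * (1 - k)    # fraction of red light surviving, on average
        g = (1 - m) * (1 - k)
        b = (1 - y) * (1 - k)
        return tuple(round(255 * v) for v in (r, g, b))

    # Heavy magenta + yellow dot coverage averages toward red,
    # like the overprinted background described above.
    print(perceived_rgb(0.0, 0.9, 0.9))   # about (255, 25, 25), a strong red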

Any of these left side pictures (300 dpi to 2400 dpi which are at least 2x sampling) are fine if the purpose and goal is to reproduce it small (relatively near original size). So it is true that if the 2x image size was all you wanted, then it is enough. Except even then, oversampling significantly, and then reducing significantly smaller, is a noise reduction technique. And if you want to see more detail enlarged, more sampling resolution is simply a better reproduction. There is more detail available that oversampling resolutions can capture. And of course, we can always find a subject needing even greater resolution.
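That noise-reduction side effect is easy to verify numerically (again just a sketch of mine, using numpy): averaging blocks of noisy pixels into one pixel cuts random noise by roughly the square root of the block size.

    import numpy as np

    # Flat gray with random noise, "oversampled" at 1024x1024,
    # then reduced 4x by averaging each 4x4 block into one pixel.
    rng = np.random.default_rng(0)
    noisy = rng.normal(loc=0.5, scale=0.1, size=(1024, 1024))
    small = noisy.reshape(256, 4, 256, 4).mean(axis=(1, 3))

    print(round(noisy.std(), 3))   # ~0.1   noise in the oversampled image
    print(round(small.std(), 3))   # ~0.025 the 4x4 average cuts noise by sqrt(16)=4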

We said above that "A camera sensor with 256 pixels per mm can at best minimally resolve 128 line pairs per mm, at the Nyquist 2x minimum." Hopefully we just showed that 8x oversampling (which would be 2048 pixels per mm for that same 128 lp/mm detail) can more clearly show finer detail than the minimum. However, that can of course be much more detail than you plan to show in your smaller image, but it can always be resampled smaller.

It should also be noted that digital camera sensors have required an anti-aliasing filter - to blur and remove the finest detail (high frequency content, finer than the sensor resolution can resolve) to prevent moire - which was necessary because sensor sampling has always been insufficient. Megapixels are finally becoming sufficient now to allow omitting the AA filter in the general case, implying that sensors are reaching the minimum 2x level - except we can still see moire now and then in special cases, so yet more pixels could help some cases. Or, just as a comment emphasizing the relationship, using a lesser lens with less resolving power could also act as an anti-aliasing filter. Moire is caused by the lens resolving greater detail than the sensor sampling can resolve.
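The filter's effect can be sketched the same way (my illustration, with a crude box blur standing in for the optical filter): blur away the detail the pixels cannot resolve before sampling, and the false pattern largely disappears.

    import numpy as np

    # Detail too fine for the pixels: 90 cycles rendered on a fine grid,
    # then sampled by only 100 pixels (each pixel averages its own area).
    fine = np.sin(2 * np.pi * 90 * np.arange(8000) / 8000)

    def sample(signal, n_pixels=100):
        return signal.reshape(n_pixels, -1).mean(axis=1)

    def box_blur(signal, width=160):     # crude stand-in for an AA filter
        return np.convolve(signal, np.ones(width) / width, mode='same')

    print(np.ptp(sample(fine)))           # roughly 0.2: a false coarse pattern (moire)
    print(np.ptp(sample(box_blur(fine)))) # much smaller: pre-blur removes the alias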

So maybe some sensor megapixel cases have reached Minimum resolution, but certainly we have not reached any Maximum sampling resolution limit, whatever that might mean.

A camera example, Nikon D800 with 36 megapixels (f/8, 70-200 mm lens at 130 mm)


(Images: full frame; 100% crop, no sharpening; 800% enlargement of a 33x30 pixel area)

The 800% view shows this case has about 2 pixels across an eyelash. The aliasing jaggies (false detail due to insufficient resolution for this use) generally cause it to be shown as 3, or maybe 4, pixels. This is certainly not excessive resolution - if we had more and smaller pixels, the detail would be smoother, with smaller jaggies, etc. However, the small photo reproduction here has no use for so many pixels; we only need what our use needs. High resolution simply allows greater enlargement, but is pointless for much less enlargement.

The digital pixels do not create any detail; a pixel merely shows the color of a sample spot of the detail already created by the lens. This one is adequate resolution for most purposes, but if we want to see maximum detail, there is no reason to imagine any sampling limit exists. This is 36 megapixels. At only 9 megapixels, the pixels would be 2x larger in dimension (jaggies become rather large). If we had 144 megapixels, the pixels would be half this size, which could show smaller, finer detail.
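The pixel-size arithmetic there is just square roots (a quick sketch of mine): megapixels count area, so linear pixel size scales with the square root of the megapixel ratio.

    import math

    # Linear pixel size versus megapixels, relative to the 36 MP case.
    for mp in (9, 36, 144):
        scale = math.sqrt(36 / mp)
        print(f"{mp} MP: pixels {scale:g}x the linear size of the 36 MP pixels")
    # 9 MP -> 2x larger pixels, 144 MP -> 0.5x (half size), as stated above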

So if all you want is 2x sampling of most image detail (the absolute minimum requirement), a sensor density of 256 pixels per mm would reproduce 256/2 = 128 line pairs per mm of lens resolution at that level, which probably will prevent moire.
But if you want 8x sampling, a sensor density of 256 pixels per mm would reproduce 256/8 = 32 line pairs per mm of lens resolution at that level.
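As a quick calculation (the same arithmetic in loop form, my sketch):

    # Lens resolution a 256 pixels/mm sensor reproduces at each sampling rate.
    pixels_per_mm = 256
    for factor in (2, 4, 8):
        print(f"{factor}x sampling: {pixels_per_mm // factor} line pairs per mm")
    # 2x: 128 lp/mm (the bare Nyquist minimum), 4x: 64, 8x: 32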

We can have more sampling than we need for our goal, more than many purposes need, more than a small image needs, but in more extreme cases, it's pretty difficult to have too much sampling resolution. So don't let them tell you we already hit a limit. If we incorrectly equate pixels per inch with lines per inch, we can't even start. :) At least not without an anti-aliasing filter (which is required when the sampling resolution is insufficient).

So more sampling is always a better quality reproduction (trying to reproduce the original lens image well). I hate to say it that way (it could be misunderstood), because certainly we can scan at resolutions much higher than our goal needs (to copy a paper photo print for example... scanning at 300 dpi is sufficient, because our printers cannot print more, and the color print probably didn't have more to give anyway). But if we're going to zoom in to examine the finest detail, then it does show, if the detail was there (for example, scanning film).

But certainly there is no one-for-one relationship between pixels and line pairs. If we expect to zoom in and see more detail, we always need lots more pixels.


NOTE, FWIW: There are also other factors.

If we assume some hypothetical image that actually shows detail at, say, 40 line pairs per mm, and we then enlarge that image to view it at double size, obviously its detail now shows only 20 lp/mm. Enlargement reduces resolution.

A DX camera like the 24 megapixel Nikon D7200 has 6000 pixels across a DX 23.5 mm sensor, which computes to 255 pixels per mm. But we must enlarge this to view it, maybe ten times larger, reducing viewed resolution to about 25 pixels per mm; but that is still maybe 600 pixels per inch (capable of greater things).

A 36 megapixel Nikon D810 is 7360 pixels over an FX 35.9 mm sensor, which is "only" 205 pixels per mm (larger pixels).

However, since the DX image is cropped smaller (24x16 mm), it must be enlarged 50% more to be viewed at the same size as the FX (36x24 mm), comparing as if originally 255/1.5 = 170 pixels per mm in what we see. Enlargement reduces resolution.
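A quick sketch pulling those numbers together (mine; crop factor 1.5 assumed for DX):

    # Pixels per mm on each sensor, and the effective figure after enlarging
    # both formats to the same final viewing size (FX as the 1x reference).
    cameras = {
        "D7200 DX": (6000, 23.5, 1.5),   # pixels across, sensor mm, crop factor
        "D810  FX": (7360, 35.9, 1.0),
    }
    for name, (px, mm, crop) in cameras.items():
        per_mm = px / mm
        print(f"{name}: {per_mm:.0f} px/mm on sensor, "
              f"{per_mm / crop:.0f} px/mm compared at equal viewing size")
    # D7200: 255 -> 170 effective; D810: 205. Enlargement reduces resolution.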




Copyright © 2015-2017 by Wayne Fulton - All rights are reserved.
