Diffraction Limited Pixels? Really?

In Support of Depth of Field

We read on the internet how our digital cameras can become "diffraction limited" due to our digital sensor's pixel size. We often hear how our DSLR has an aperture limit, maybe around f/11, due to pixel size. Sounds bad, but such reports never show evidence in comparison pictures; they cannot show if and how any problem exists, or if it actually matters. Their evidence is that they compute some numbers for the Airy disk, which is the larger fuzzy diffraction circle of the tiniest point source... and then they draw it perfectly centered on a pixel, and become alarmed if that Airy disk can be larger than one pixel size. That might make more sense if the Airy disk were somehow magically always centered on a pixel, but I imagine it much more likely straddles two or four pixels. So they assume this is a larger effect, but it is no different than any image detail: large pixels simply can't show detail as well. The actual image detail is not shown as well either. :)

Smaller pixels have absolutely no effect on creating diffraction (only the lens can do that). Smaller pixels are simply higher resolution, to better show ANY detail that is present. More pixels just allow us to see things better, more clearly. A better view of whatever detail is there. We like smaller pixels, always good. The diameter of the Airy disk is a measurement of diffraction, regardless of how many pixels it might cover. Sure, diffraction is real, and can be a problem; however, this pixel relationship has absolutely no importance to diffraction (it is simply NOT a factor of it). The purpose of smaller pixels (more pixels) is to better resolve whatever detail might be there. Higher resolution is always good, but it creates Nothing; it just better shows what is already there.

It also seems quite obvious that greater depth of field is often greatly more important than diffraction. Diffraction is never a good thing, but in real life, this is of course a trade off of different properties. Very often, depth of field can help tremendously more than diffraction hurts. When it's critical, depth of field should easily win. It seems obvious that in some situations, stopping down to f/22 or more can give better results than f/11.

My goal is to point out that when you hear you should never go past about f/11, only because of some notion of a pixel diffraction limit, you can simply laugh and ignore it when and if you have a situation needing more depth of field. When it can obviously help so much, it seems dumb not to do it. The f/stops are provided on the lens to be used when they can help. Just try it both ways, look at the results, and decide if the depth of field helps much more than the diffraction hurts. Of course, diffraction does hurt, a little. Of course, depth of field can help, often a lot, especially obvious when depth of field is limiting you. That is the purpose of those higher f/stops. But if you listen to the wrong information, you might be missing out on the proper tools. Try it, see what happens. Don't just walk away without knowing.

A real world example of the actual evidence, ruler markings showing 1/16 inch rulings (about 1.6 mm). The ruler length shown is about 4.2 to 5.5 inches.

Both pictures are D800 FX, 105 mm, 1:1 macro. Only difference is aperture. Both are cropped to about 1/3 frame height, and then resampled to 1/4 size here.
Falsely imagining that we ought to be limited to f/11 can be a real detriment to results.
Sure, f/40 is very extreme, and certainly it's not perfect, but sometimes depth of field helps far more than diffraction hurts. It can solve problems.
However, f/40 does also require four stops more light and flash power than f/10. :)
But nothing blew up when we reached f/16. Not one of the megapixels said anything to us.

Anyway, advice to "never" exceed about f/11 is obviously pretty dumb advice, so don't shy away, afraid to stop down when necessary, in the special cases when it obviously can help. Certainly I don't mean routinely every time - because diffraction does exist, so do have a need and a reason for it. Yes, f/5.6 and f/8 are special places to routinely be, when possible. But depth of field can really help sometimes. When it is needed, there is no substitute. So try some things, see both versions before deciding. Don't be afraid of stopping down. That's what it's for.

Common situations needing more depth of field: Macro work always needs more depth of field, all we can get (so stop down a lot, at least f/16, and more is probably better). Landscapes with near foreground objects need extraordinary depth of field to also include infinity (using hyperfocal focus distance). Telephoto lenses typically provide f/32 capability, and can often make good use of it. Wide angle lenses, maybe not so much.

Hyperfocal distance is defined as this special intermediate focus distance into the desired depth of field range, chosen so that the depth of field range includes both near and distant extremes: specifically, depth of field extending from half of the hyperfocal distance to infinity. Said another, more casual way, it is the focused distance at which the depth of field will just reach to infinity. Obviously, stopping down will increase the depth of field to aid this effort. And obviously, the focused distance will always be sharper than infinity then, but infinity is still barely within the limits of perceived depth of field.

These are basic ideas which have been known for more than 100 years. The poor alternative of simply focusing on the near side of the subject typically wastes much of the depth of field range in the empty space out in front of the focus point, where there may be nothing of interest. Focusing deeper into the depth centers and maximizes the DOF range, which is generally more useful. We hear it said about moderate distance scenes (not including infinity) that focusing at a point 1/3 of the way into the depth range works for this, which is sometimes true, maybe a little crude, better than knowing nothing, but situations vary from that 1/3 depth. Close and macro focus situations are closer to the middle, at 1/2 way in, and don't include infinity.

A good Depth of Field calculator will show correct hyperfocal focus distance that does include infinity for various situations (focal length, aperture, sensor size).
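The standard hyperfocal formula such calculators use is H = f²/(N·c) + f, where f is focal length, N is the f/stop number, and c is the CoC limit. A minimal Python sketch (the 0.03 mm CoC is the common FX convention, an assumption here, and real calculators may round slightly differently):

```python
def hyperfocal_mm(focal_mm, fstop, coc_mm=0.03):
    """Hyperfocal distance: focus here and depth of field extends
    from half this distance to infinity."""
    return focal_mm ** 2 / (fstop * coc_mm) + focal_mm

# Example: FX sensor, 50 mm lens at f/22
H = hyperfocal_mm(50, 22)
print(round(H / 304.8, 1))      # focus distance, about 12.6 feet
print(round(H / 2 / 304.8, 1))  # near DOF limit, about 6.3 feet
```

Those numbers agree with the 50 mm f/22 lens-scale example of roughly 12 feet focus with depth of field from about six feet to infinity.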

Many lenses have a DOF calculator built into them. Speaking of prime lenses (i.e., lenses that are not zooms), these normally have marks on the distance scale showing the depth of field range at the critical aperture f/stops. However, this tremendous feature is becoming a lost art today. Zoom lenses cannot mark this for their many focal lengths. Also, today's faster AF-S focusing rates can put the marks pretty close together (the 85 mm is shown from an ad, but it still gives a DOF clue). (The "dots" are the focus mark correction for infrared.)

For an example of hyperfocal distance, at right is a 50 mm FX lens, with focus adjusted to place the f/22 DOF mark at the middle of the infinity mark, which then actually focuses at about 12 feet. The other f/22 DOF mark predicts depth of field from about six feet to infinity (assuming we do stop down to f/22). A DOF calculator says this example (FX, 50 mm, f/22, 12 feet) gives DOF from 6 feet to 587 feet, and 12 feet is more like 2% into that range (but infinity is a very awkward number for math). Other focal lengths and other sensor sizes give different numbers.

Or another case, not including infinity. If we instead focus this lens at 7 feet, then the FX f/11 marks suggest DOF from about 5.5 to 10 feet (at f/11, which is about 1/3 back into the range in this case). The idea of the markings (which only appear on prime lenses; zooms are too complex to mark) is to indicate the extents of the DOF range, and it is done because it can be very helpful. Sometimes f/22 is the best idea, sometimes it is not. f/22 causes a little more diffraction, but it can also cause a lot more depth of field.

We cannot read the distance scale precisely, but it should indicate the ballpark, generally adequate to convey the DOF idea. Of course, depth of field numbers are vague anyway. Do note that any calculated depth of field and hyperfocal distances are NOT absolute numbers at all. The numbers instead depend on a common but arbitrary definition of acceptable blurriness (called Circle of Confusion, CoC, the diameter of the blurred point source). This CoC limit is used in DOF calculations, and it varies with sensor size due to the necessary enlargement. That is because CoC specifically assumes the degree of enlargement in a standard viewing situation (an 8x10 inch print held about ten inches from the eye, a size which allows seeing that CoC spot). If your enlargement and viewing situations are different, your mileage will vary... DOF is NOT an absolute number. Greater enlargement reduces perceived depth of field, and less enlargement increases it (it changes the degree of CoC our eye can see).
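For a rough idea of where those CoC values come from, one common (but by no means universal) convention takes CoC as the sensor diagonal divided by about 1500, which lands near the usual 0.03 mm FX and 0.02 mm DX figures. A sketch, with the 1500 divisor being an assumption of this example:

```python
import math

def coc_mm(sensor_w_mm, sensor_h_mm, divisor=1500):
    """Rough CoC estimate: sensor diagonal over a conventional divisor."""
    return math.hypot(sensor_w_mm, sensor_h_mm) / divisor

print(round(coc_mm(36.0, 24.0), 3))   # FX: about 0.029 mm
print(round(coc_mm(23.6, 15.7), 3))   # DX: about 0.019 mm
```

Larger sensors tolerate a larger CoC simply because they need less enlargement to reach the same print size.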

And of course, make no mistake, the sharpest result is at the one distance where the lens is actually focused. Focus is always gradually and continually becoming more blurry as we move away from the actual focus point, up until the DOF math computes some precise numerical value that is suddenly judged not acceptable (thought to become bad enough to be noticeable there, by the enlargement of the arbitrary CoC definition). But of course, focus is about equally blurry on either side of that distance. DOF does Not denote a sharp line where blurriness suddenly happens, it is gradual. The sharpest focus is of course only at the focused distance, but a wider range can often be good enough, within certain criteria based on how well we can see it. DOF numbers are NOT absolutes. But DOF certainly can be a useful helpful guide.

Back to the pixels thing, still wondering how pixel size could affect lens diffraction... There is optical resolution of the image that the lens reproduces on the sensor surface. Then there is also sampling resolution, reproducing that lens image digitally. Comparing the same size sensors, a sensor with smaller pixels has higher sampling resolution, and (other than perhaps greater sensor noise in smaller pixels) there's simply no way higher resolution will ever take a worse picture than less resolution. The optical lens image is what it is, and the better the pixels can reproduce this image digitally, the better (regardless of the detail that is there... a pristine image or one suffering diffraction). However, a small sensor (like in compact cameras) has to be enlarged more to reach the same viewing size as larger sensors, and that enlargement does reduce resolution. I have pictures below, even 100% crops, that certainly indicate diffraction occurs, but they don't show any pixel size limit effect kicking in.

Sure, stopping down the lens more causes more diffraction, but that was always true, of film too. Diffraction certainly can be a problem, but due to the diffraction, not the pixels. The novice geeks blow this pixel thing out of proportion, scaring beginners away from ever considering f/16 or f/22 or f/32 - when it can sometimes in fact obviously be a very great help. They cannot show any evidence; they are just calculating vague notions (that a point source exists, and that it is somehow magically exactly centered on a pixel - which seems ludicrous), when they would instead learn more by going out taking pictures and trying things. Our real photographs instead have larger features and edges that already cover many pixels. Our pixels just try to resolve whatever they see there, which includes showing the diffraction. However, sometimes we really do need the maximum depth of field instead. Diffraction is something that simply always happens. Diffraction certainly does exist, but it is Not due to one pixel. It is the stopping down that makes it worse. And also the greater enlargement of tiny sensors makes it worse. Little compact cameras rarely stop down past f/4, because the greater enlargement necessary for tiny sensors shows the enlarged diffraction so well. Remember the tiniest film sizes (Kodak Disc or 110)? No pixels, but it was the same for them too. Greater diffraction certainly is a factor, but there is no strange effect triggered when an Airy disk reaches the size of a pixel.

My goal here is to suggest that, no matter what you have heard about diffraction limited pixel size, yes, of course you can still usefully stop down to f/16 and f/22 as they are intended to be used for the goal of greater depth of field. The overall result can be a lot better. It can be a great benefit, when you need it. Yes, stopping down so much certainly does cause diffraction losses which should be considered. But Yes, stopping down certainly can help depth of field much more than diffraction can hurt. This is why those f/stops are provided, for when they can help. If they help, they help.

When you need maximum DSLR lens sharpness, of course do think f/5.6, or maybe f/8. But when you need maximum depth of field, consider f/16, or maybe f/22, or more. That's what it's for. Sure, f/8 will be a little sharper for general use, stick with it when you can, but when you need depth of field, that's hard to ignore. So when you need extra depth of field, try stopping down, that's how the basics work. Test it, see it for yourself, and don't believe everything you read on the internet. :) It's good to be able to actually see and verify that which we profess to believe.

Lens resolution certainly can be limited by diffraction. The lens situation has a resolution, and the digital reproduction of that lens image can also be limited if insufficient pixel resolution. Pixel resolution simply tries to reproduce the image that the lens resolution created. Neither maximum is much important if we necessarily resample much smaller anyway, for example to show a 24 megapixel image on a 2 megapixel HD video screen, or to print a 7 megapixel 8x10 print. Today, we typically have digital resolution to spare.

At right is a (random but typical) lens resolution test from Photozone. They have many good lens tests online, tests which actually show the numbers. This one is 24 mm, and the red lines are drawn by me. Lenses do vary in degree, expensive vs. inexpensive is a factor, but in general, all lenses show about the same characteristics. The aperture when wide open is softer (optical aberration issues in the larger glass diameter), but resolution typically increases to a maximum peak when stopped down a couple of stops (not necessarily f/5.6, but two stops is half the diameter, avoiding the difficult outer diameters). Border sharpness can be a little worse (edges are at a larger diameter from the center of the lens).

Then resolution gradually falls off as the lens is stopped down more, due to increasing diffraction as the aperture becomes small. Yes, we can assume f/16 and f/22 get worse. The edge of the aperture hole bends or diffracts the light (the paths very near the edge, causing diffraction and blurring). The clear center area is unobstructed, but a tiny hole is nearly all edge. Diffraction causes a blurring loss of the smallest detail (a loss of maximum resolution), caused by the smaller aperture diameter. The term "diffraction limited" is usually a good thing, meaning and used as: "An optical system with the ability to produce images with angular resolution as good as the instrument's theoretical limit is said to be diffraction limited" - meaning as good as it is possible to be. However, stopped down lens apertures do limit resolution more, regarding how small a detail the lens can reproduce. Still, the real world is that we often have sufficient resolution to spare, to trade for depth of field. Stopping down can be a big benefit, when it is needed.

We don't need to mention pixels. And f/22 might not always be a good plan for a short lens - or any lens, but it is not always bad either - detail depends on image size. Subject magnification is a factor of detail (more below). Focal length magnifies the subject detail, so a longer lens can often benefit greatly from the increased depth of field of f/22 or even f/32. It's one advantage of the 105 and 200 mm macro lenses. But next is what a very short lens looks like:

14-24mm lens, at 14mm f/2.8
(aperture 5 mm diameter)

14-24mm lens, at 14mm, f/22
(aperture 0.63 mm diameter)

The definition is: f/stop number = focal length / aperture diameter. This definition causes f/4 to be the same exposure on all lenses.

f/22 on a 20 mm lens has an aperture diameter of 20/22 = 0.9 mm. That is a tiny hole, which causes trouble. f/5 is sharper.
f/22 on a 50 mm lens has an aperture diameter of 50/22 = 2.3 mm. Borderline small, but rather bearable when it helps DOF.
f/22 on a 105 mm lens has an aperture diameter of 105/22 = 4.8 mm, much more reasonable, piece of cake.
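Those aperture diameters follow directly from the f/stop definition; a trivial sketch to check them:

```python
def aperture_diameter_mm(focal_mm, fstop):
    """Physical aperture diameter, from f/stop number = focal length / diameter."""
    return focal_mm / fstop

for focal_mm in (20, 50, 105):
    print(f"{focal_mm} mm at f/22: {aperture_diameter_mm(focal_mm, 22):.1f} mm")
```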

Yes, stopping down causes greater diffraction which limits the smallest detail we can see. The larger diffraction hides the smallest details in the lens image, which might otherwise be seen... which is normally about sharp edges on details. This diffraction is a property of the lens aperture diameter, and is true regardless of pixel size (of course it was always true of film lenses too). The other regular optical problems normally reduce resolution below this theoretical diffraction limit anyway. We don't need pixels to know that, but this pixel notion is that when the Airy disk size exceeds the size of a pixel - or really two pixels (Nyquist), or really four pixels (Bayer), which is really eight pixels (Nyquist again), or really even more pixels because of the random misalignment of Airy disks on pixels - but however many pixels we decide matters, those small pixels' resolution capability is limited by the larger diffraction disk size and coarseness. The pixel is certainly not the problem though; the only problem is that the diffraction disk is large. It's too late to worry about pixels anyway; the diffraction has already occurred, it is what it is. The best job the pixels can do is to reproduce what they see. The pixel analogy is like this: if you don't wear your glasses to inspect your image, not seeing anything is not the same as improving the diffraction. :) Of course, pictures of faces or trees or mountains are larger than a pixel anyway, so this does not mean all is lost. The diffraction issue is NOT about pixels. The pixel size (hopefully small) is already the smallest possible detail, and the diffraction is already what it is.

Who can see a pixel in 24 megapixels? Diffraction may eliminate the smallest detail, but even so, of course there are still vast amounts of larger detail (the whole picture). We don't always need maximum resolution. We typically resample our images to be smaller sizes anyway. I think that comparison to the pixel size is not a helpful thing. It is outright harmful when it scares many of us away from the thought of ever considering using f/16 or f/22 or f/32, when certainly there are situations when those values can be very helpful. That is what they are for, to help when needed. Try them for yourself. Often, there will be no actual bad effect.

This is my protest about imagining pixel limited diffraction. Of course using around f/5.6 or f/8 is a fine general plan, but the warning to never use past about f/11 is not so helpful. There certainly are a few more details to say about it. One of those details is, sometimes stopping down more can obviously help very considerably.

Depth of field and diffraction are different things. One is generally a good thing, and one is a bad thing (but it happens). Situations can vary, so that sometimes, one factor can be of greater importance than the other. We don't have to compute Airy disk or pixel size, we can just try things, and look at the results. Don't believe it? Try it. You probably will like it a lot (in certain situations needing depth of field).

To explain this situation shown, next are the original images from which 100% crops are taken. D800 and D300 cameras, ISO 400, same 105 mm VR AF-S lens on both, both on the same tripod at the same spot. FX is of course the first wider view, and the DX sensor crops the lens view smaller, which makes it look closer up. The two frames are shown the same size here, so DX is seen enlarged more than FX (but both were the same lens image, from the same lens). Point is, both had the same crop box in ACR, both marked crops are about 8.6% of the frame width. Sharpening can always help a little, but there was no processing done on this page. There was a slight breeze to wiggle a few leaves between pictures.



The point of these next 100% crops (a tiny central area cropped, then shown 100% full size, actual pixels) is not just to show depth of field, because we already know what to expect of that. It is more to show there is no feared diffraction limit around f/11 or wherever. Sure, f/8 is often better (because of diffraction), and sure, diffraction does increase, but sure, you can of course use f/16 and f/22, maybe f/32, because, sure, it can often help your picture. Diffraction does continually increase as the lens is stopped down, but that is about the aperture, not about pixel size. This is a 105 mm lens, and yes, we might debate about f/32 (but it does increase depth of field). Any diffraction would be much less visible if the full image was displayed resampled to smaller size instead of shown at 100% size. But obviously there is no reason to always fear some limit at f/11, if the depth of field can help more than the diffraction hurts. You can do this test too.

The near tree and focus are 20+ feet, the light pole about 250 feet, the power wires are about 900 feet. Watch the background too.

FX D800, 105 mm lens, 100% crop, about 8.6% of the original 7360 pixel width.
Frame Width: 7360 pixels / 35.9 mm sensor = 205 pixels per mm density (pixel size).

DX D300, 105 mm lens, 100% crop, about 8.6% of the original 4288 pixel width.
Frame Width: 4288 pixels / 23.6 mm sensor = 181.7 pixels per mm density (pixel size).
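The density figures above are simply frame pixel width divided by sensor width; a quick sketch reproducing them (sensor widths as listed above):

```python
def pixels_per_mm(frame_width_px, sensor_width_mm):
    """Sampling density across the frame; its reciprocal is the pixel pitch."""
    return frame_width_px / sensor_width_mm

fx = pixels_per_mm(7360, 35.9)   # D800 FX
dx = pixels_per_mm(4288, 23.6)   # D300 DX
print(round(fx, 1), round(dx, 1))                 # about 205.0 and 181.7 pixels/mm
print(round(1000 / fx, 2), round(1000 / dx, 2))   # pitch: about 4.88 and 5.5 microns
```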

Both are the same image from the same lens at the same distance. The DX frame is simply cropped smaller, which shows a smaller view, and has to be enlarged more to view at the same size (not done yet here). In this one specific case, this larger FX sensor happens to have a greater "pixels per mm" sampling resolution, which is of course a plus, not a minus.

The big deal here, and the real point, is that we should also look at the background details. Do you want Depth of Field, or not?

D800 is 36 megapixels, and D300 is 12 megapixels, so this case is slightly larger pixels on DX, about 13% larger in this case. That is not important (not to diffraction). However, what we can see is that the smaller DX sensor cropping does require half again more enlargement to reach the same size as FX (not done above). That shows the diffraction larger too. Normally we think of DX having more depth of field than FX, however that assumes DX with same lens would stand back 1.5x farther to be able show the same image view in the smaller frame. We didn't here. Everything was the same here (except DX has to be enlarged half again more, below).
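Both ratios mentioned here are easy to verify from the sensor widths and pixel counts above; a sketch:

```python
# DX must be enlarged by the sensor-width ratio to reach the same viewing size as FX.
enlargement = 35.9 / 23.6
print(round(enlargement, 2))   # about 1.52, i.e. "half again more"

# Relative pixel size: ratio of the two sampling densities (pixels per mm).
pixel_size_ratio = (7360 / 35.9) / (4288 / 23.6)
print(round((pixel_size_ratio - 1) * 100))   # DX pixels about 13% larger
```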

Degree of enlargement is a big factor. Below are the same two f/32 images above, with the smaller DX image enlarged more so it will view at the same size as FX now. FX D800 first, then DX D300 next. Both are same 105 mm lens on the same tripod in the same spot. But the DX looks telephoto because its sensor is smaller (sees a smaller cropped view), so it needs to be enlarged more here (done below), which also enlarges the diffraction too. FX is still shown about 100%, and DX is shown larger than 100%. We would not normally view these this hugely large - the uncropped frames were 7360x4912 and 4288x2848 pixels size - so a smaller view would look better than this.

The pixel density spacing (205 pixels/mm FX, and 182 pixels/mm DX) shows this case of DX has the larger pixels (in this case). I don't see the larger pixels offering any greater diffraction advantage or limit however, so I will definitely need to see some evidence of that. The results here are rather backwards to that notion (but there are other differences too, sensor size and enlargement). I mostly see that both are f/32, and DX is enlarged more, and this FX has the greater resolution (not true of all FX/DX comparisons). The smaller lens aperture does create the diffraction, but the smaller aperture is all-important to depth of field, which can help greatly, when needed.

Now the same 100% crops at f/32 again, but now shown at same viewing size. FX first, then DX. Again, in this case, FX here has the smaller pixels, which boosts resolution and looks good. And other factors that actually do matter, the larger FX frame does not have to be enlarged as much to view at the same size.

FX f/32, 205 pixels/mm - smaller pixels, allegedly affecting f/32 diffraction more?
DX f/32, 182 pixels/mm - larger pixels, allegedly affecting f/32 diffraction less?
Enlarging DX more is the necessary hardship, but it's unfair to FX if we don't.
But Depth of Field is often very much more important than diffraction.

An old rule of thumb, considered a good trade off combining both sharpness AND depth of field, says:
To limit excessive diffraction, unless depth of field is more important than all else:

Generally don't exceed f-stop number = focal length / 4.

(Just meaning, have a reason if you do. DOF is certainly a reason.)

This is about diffraction, not about sensor pixels. For this 105 mm lens, then 105/4 is f/26, so f/22 is a good try, and f/32 is close (again, these are 100% crops). It's not that bad, when you really don't want to give up depth of field. The results above show this. But, when more heroic efforts are necessary to get even more essential depth of field, consider going farther. If important, at least try it, see if you like it.

You may have read about Ansel Adams's "Group f/64" in the 1930s (a purist photography group, promoting sharper detail). For an 8x10 inch view camera, a "normal" lens was around 300+ mm, but he also often used 600 mm and 800 mm. The /4 limits are:   800 mm, f/200.   600 mm, f/150.   300 mm, f/75.   200 mm, f/50.   100 mm, f/25.   50 mm, f/12.5.   24 mm, f/6.

Since f/stop number = focal length / aperture diameter, this rule is technically just specifying at least a 4 mm aperture diameter, so that diffraction doesn't excessively limit resolution. But this also means short focal length wide angle lenses can still be a concern, because 20 mm comes out f/5. When bright ambient light and/or high ISO forces stopping down wide lenses, you can try to stand closer to the subject. Long lenses are better because they magnify the subject, and the larger detail offsets the lower resolution. Sometimes you can do that with short lenses too; standing closer is a subject magnification too.
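Since the rule reduces to "keep the aperture at least 4 mm in diameter," the limits listed above are one line of arithmetic; a sketch:

```python
def fstop_limit(focal_mm, min_aperture_mm=4.0):
    """Rule-of-thumb maximum f/stop number: focal length / 4 (a 4 mm aperture)."""
    return focal_mm / min_aperture_mm

for focal_mm in (800, 600, 300, 200, 105, 50, 20):
    print(f"{focal_mm} mm: about f/{fstop_limit(focal_mm):g}")
```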

Diffraction: A techie note about the numbers (some may be interested)

Telescope users know that telescopes with larger diameters (larger aperture, with better resolution due to less diffraction due to smaller Airy disk diameter) can better resolve (separate) two very closely spaced stars, to be seen and distinguished as two close stars instead of as one unresolved star (blurred together). Known double star pairs are the standard measure of telescope resolution.

The Airy disk diameter inversely depends on aperture diameter (half the aperture diameter creates twice the Airy disk size). The ability to separate two close points (resolution, two touching Airy disks) also depends on focal length magnification (twice the focal length shows twice the separation distance). Wikipedia has the derivation of this minimum separation (x) to resolve two points. Nothing new, from George Airy, 1834. Green light wavelength is about 0.00055 mm (550 nm, or about 0.000022 inch), which is also near the mean wavelength of all visible light.

In this formula, the combination f/d is the f/stop number. That would say the width x of our camera diffraction blur increases directly with f/stop number, regardless of focal length. And it does, however, focal length is also the magnification of the subject detail (relative to that blur diameter), and in practice, a longer focal length supports a higher f/stop number (because of the larger aperture diameter, and the greater magnification of subject detail), hence the simple rule of thumb above.

The reciprocal 1/x of such minimum separation x (of two adjacent point sources) is clearly the theoretical maximum resolution allowed by diffraction, directly in line pairs per mm. Which applies to our camera lenses too, except we rarely photograph point sources. Measured numbers of real world complex lenses are of course less, but they can't be greater. Diffraction limited gear would be the feat of reaching those theoretical numbers, with very sharp glass.
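Sticking to the Rayleigh criterion, that minimum separation is x = 1.22 λ N (with Airy disk diameter 2.44 λ N), so 1/x gives the theoretical line pairs per mm. A sketch assuming green light at 0.00055 mm; real lenses with their other aberrations will land below these numbers:

```python
GREEN_MM = 0.00055   # wavelength of green light, about 550 nm

def airy_diameter_mm(fstop, wavelength_mm=GREEN_MM):
    """Airy disk diameter (to the first dark ring) at the focal plane."""
    return 2.44 * wavelength_mm * fstop

def diffraction_limit_lpmm(fstop, wavelength_mm=GREEN_MM):
    """Theoretical maximum resolution: 1 / (Rayleigh separation 1.22 * lambda * N)."""
    return 1.0 / (1.22 * wavelength_mm * fstop)

for n in (5.6, 11, 22, 32):
    print(f"f/{n}: Airy {airy_diameter_mm(n) * 1000:.1f} microns, "
          f"{diffraction_limit_lpmm(n):.0f} lp/mm")
```

Note how quickly the Airy disk grows past typical pixel pitches (around 5 microns) as the lens is stopped down, yet the lens image simply is what it is, whatever the pixels then record.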

This is not about pixels. Sensor pixels will have their own resolution limits, unrelated. The pixels' job is merely to try to digitally reproduce the analog lens image they see. The lens image is what it is, and the better the pixels can reproduce this image, the better (regardless of the detail that is there... a pristine image or one suffering diffraction).

More images (maybe too many) are on next page

Copyright © 2014-2017 by Wayne Fulton - All rights are reserved.
