We read on the internet how our digital cameras can become "diffraction limited" due to the sensor's pixel size. We often hear how our DSLR has an aperture limit, maybe around f/11, due to small pixel size. Sounds bad, but add a big grain of salt. Such reports never show any evidence in comparison photos; they cannot show if and how any such problem exists. Diffraction of course exists, increasing at each f/stop, but there is no specific limit where it kicks in. Their only evidence is that they compute some numbers for the Airy disk, which is the larger fuzzy diffraction circle of the tiniest point source (see more below). Then they imagine that Airy disk perfectly centered on one digital pixel, and become alarmed if it can be larger than one pixel. That might make more sense if the Airy disk were somehow always magically centered on a pixel, but I imagine it much more likely straddles two or four pixels, just because of borders. If we want to be geeky, we'd worry about the Bayer pattern geometry, and that each of the Bayer RGB pixel colors has a different Airy wavelength. And certainly we'd worry about the viewing enlargement of the sensor size, since we likely resample smaller for final use anyway. But why not instead worry when the Airy size exceeds the diameter of the depth of field CoC? That could be visible. :) Anyway, yes, of course we know stopping down causes diffraction, which reduces resolution, and the Airy calculation does represent a maximum limit on lens resolution. But it's not about pixel size.
They confuse lens resolution with sampling resolution. Pixel resolution is just a method of reproduction, not a source of detail; the lens creates all the detail. The first basics of digital sampling (Nyquist) require the sampling resolution to be at least twice the maximum lens resolution, to prevent moiré (an improvement that reduces artifacts, due entirely to the smaller pixels). Sampling wasn't twice higher until very recently, when megapixels became just sufficient (in some cases). So until now, cameras needed anti-aliasing blurring filters on the sensor, to intentionally reduce the finest lens detail to less than half of the sensor resolution (moiré is aliasing artifacts due to insufficient sampling resolution). But 2x is the absolute minimum for the sampling physics (meaning, for no false artifacts). Sampling resolution even higher (a few times more than the lens detail) is called oversampling, and is desirable to resolve the actual detail better, more clearly, more completely. 2x sampling might barely, minimally resolve that "something" is there. Then higher resolution (smaller pixels) fills in its shape to show what it is: detail of the detail that is already there, just reproduced better. The higher sampling resolution of smaller pixels is a good thing, used to better reproduce the original lens detail.
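The Airy disk arithmetic behind all this is easy to sketch. Here is a minimal Python check, assuming green light at 550 nm and a D800-like sensor (the 36 mm width and 7360-pixel count are my illustrative assumptions):

```python
# Airy disk diameter (to the first minimum): d = 2.44 * wavelength * f-number.
# Assumes green light (550 nm); sensor width and pixel count are illustrative.

WAVELENGTH_MM = 0.00055  # 550 nm green light, in mm

def airy_diameter_um(f_number):
    """Airy disk diameter in micrometers for the given f/stop."""
    return 2.44 * WAVELENGTH_MM * f_number * 1000.0  # mm -> um

pixel_pitch_um = 36.0 / 7360 * 1000.0  # ~4.89 um (36 mm FX width, 7360 pixels)

for n in (5.6, 8, 11, 16, 22):
    d = airy_diameter_um(n)
    # Nyquist: sampling should be ~2x the finest detail, i.e. about two pixel
    # widths per Airy diameter, before the pixels are the limiting factor.
    print(f"f/{n}: Airy {d:.1f} um = {d / pixel_pitch_um:.1f} pixel widths")
    # e.g. f/8: Airy ~10.7 um, about 2.2 pixel widths
```

Note that on this sensor the Airy disk already spans about two pixels at f/8, which is simply the Nyquist minimum, not a failure; the diffraction belongs to the lens either way.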
Smaller pixels are simply higher sampling resolution, to show more clearly whatever detail is already present (diffraction and all). If the diffraction is there, then trying to hide it with larger pixels and less resolution only hurts the overall reproduction. More pixels just allow us to resolve and see the lens detail better, more clearly... a better view of whatever detail is there. The higher sampling resolution of smaller pixels is always good. Well, noise can of course be an issue, but advances are moving us on.
The Airy disk is a measurement of lens diffraction, regardless of how many pixels it might cover. It's about the lens, and has the same meaning for film cameras, which have no pixels, and for telescopes, which may have no film. The pixels simply resolve existing lens detail; that is digital reproduction. Sure, diffraction is real, and can be a problem we try to prevent, but this pixel relationship does not differentiate diffraction from real detail. The pixels just try to show what's there. The advantage of smaller pixels is higher sampling resolution, to better resolve whatever detail might be there. The problem is the diffraction, not that we have sufficient resolution to show it. Higher resolution is always good, but sampling resolution does not create any detail; it just, hopefully, shows what is already there.
It also seems quite obvious that in some situations, greater depth of field can be greatly more important than diffraction. Diffraction is never a good thing, but in real life, there are of course trade-offs of different properties. Very often depth of field can help tremendously more than diffraction hurts. When it's critical, depth of field should easily win. When greater depth of field is not needed, sharpness is a good way to bet. But there can be more ways of perceptual improvement than just sharpness and resolution. It seems obvious that (in some situations) sometimes stopping down to f/22 or more can give better than f/11 results. In other situations, maybe not. The lens provides these tools to choose when we need them, when they can help us.
My goal is to point out that when you hear you should never go past about f/11 only because of some notion about a pixel diffraction limit, you can simply laugh and ignore it when and if you have a situation specifically needing more depth of field. Yes, stopping down does increase diffraction; we should be aware. But in the cases when you can see that it obviously helps so much, it seems dumb not to do it. The f/stops are provided on the lens to be used when they can help. Just try it both ways, look at the results, and decide if the depth of field helps much more than the diffraction hurts. Yes, of course, diffraction does hurt resolution and sharpness a little. You do need a good reason, but yes, depth of field can help, often a lot, especially when depth of field is limiting you. That is the purpose of those higher f/stops. If you listen to the wrong information, you might be missing out on the proper tools. Try it, see what happens. Don't just walk away without knowing.
A real world example with actual evidence: ruler markings showing 1/16 inch rulings (about 1.6 mm). The ruler length shown is about 4.2 to 5.5 inches.
I think the f/40 seriously improves a difficult problem this time. :)
Both pictures are D800 FX, 105 mm, mild macro. Only difference is aperture. Both are cropped to about 1/3 frame height, and then resampled to 1/4 size here.
(The number f/40 is possible here because macro lens f/stop numbers increase when focused up close, because focal length increases then. Typically at 1:1 macro, all marked f/stop numbers increase two full stops. This used f/40.)
Sure, f/5.6 or f/8 are always generally very desirable (speaking of DSLR class use), when they are adequate. Use them when they work.
Sure, f/40 is very extreme; certainly it's not perfect, and not always ideal.
But sometimes it's wonderful, when depth of field helps far more than diffraction hurts. It can solve serious problems. When we need more depth of field, falsely imagining that we ought to be limited to f/11 can be detrimental to results. Use the tools that are provided, when they will help.
However, f/40 does also require four stops more light and flash power than f/10. :)
But nothing blew up when we reached f/16.
Anyway, advice to "never" exceed about f/11 is obviously pretty dumb advice, because there are strong reasons to do it sometimes. So don't shy away, or be afraid to stop down when necessary. Necessary is necessary, in the special cases when it obviously can help. Certainly I don't mean routinely every time - because diffraction does exist, which we generally want to avoid, so do have a need and a reason for stopping down extremely. But needs and reasons do exist. Saying it bluntly: To minimize diffraction (speaking of DSLR lenses), sure, stay around f/5.6 or f/8 when possible, when it works, which is much of the time for anything routine. But when needed, when you need more depth of field, only an idiot would fail to see the advantage of stopping down more to help.
Don't misunderstand, certainly f/5.6 and f/8 are good places to routinely be, when possible. Back in the late 1950s, we marveled how sharp Kodachrome slides were. And they were sharp, but some of that was because Kodachrome was still ASA/ISO 10 then, requiring something like f/5.6 at 1/100 second in bright sun. That f/5.6 helped our 1950s lenses too (before computer design). But depth of field can also be a major help sometimes; results are typically poor if DOF is inadequate for the scene. When DOF is needed, there is no substitute. So try some things; try and see both choices before deciding. Don't be afraid of stopping down. Have a reason, but then that's what it's for, when it's needed, when it can help.
Common situations always needing more depth of field: Macro work always needs more depth of field, all we can get (so stop down a lot, at least f/16, and more is surely better). Landscapes with very near foreground objects need extraordinary depth of field to also include infinity (using hyperfocal focus distance). Telephoto lenses typically provide an f/32 stop, and can often make good use of it. But wide angle lenses already have much greater depth of field, and diffraction may affect them more.
A good Depth of Field calculator will show hyperfocal focus distance, which does include DOF to infinity for various situations (determined by focal length, aperture, sensor size).
The practice of simply focusing on the near side of the subject typically wastes much of the depth of field range on the empty space in front of the focus point, where there may be nothing of interest. Focusing a bit deeper into the scene centers the DOF range, which is often more useful. For moderate distance scenes (not including infinity), we hear that focusing at a point 1/3 of the way into the depth range works; that is maybe a little crude, better than knowing nothing, but situations vary from that 1/3 depth. Macro will instead be at 1/2. These are basic ideas which have been known for maybe 150 years.
Many lenses have a DOF calculator built into them, speaking of prime lenses (i.e., lenses that are not zoom lenses), which normally have f/stop marks at the distance scale showing the depth of field range at the various aperture f/stops. However, this tremendous feature is becoming a lost art today, because zoom lenses cannot mark this for their many focal lengths. Also, today's faster AF-S focusing rates can put the marks pretty close together (the 85 mm shown still gives a DOF clue). (The "dots" marked there are the focus mark correction for infrared use.)
As an example of hyperfocal distance, the photo at right (ISO 400, f/16) is a 50 mm FX lens, showing focus adjusted to place one f/22 DOF mark at the middle of the infinity mark, which then actually focuses at about 12 feet, and the other f/22 DOF mark predicts depth of field from about six feet to infinity (assuming we do stop down to f/22). The DOF calculator says this example is precisely hyperfocal at 12.25 feet (for FX, 50 mm, f/22), giving DOF from 6.1 feet to infinity. Stopping down to f/22 does cause a little more diffraction, but it can also create a lot more depth of field. Sometimes f/22 is the best idea, sometimes it is not. Other focal lengths and other sensor sizes give different numbers.
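The calculator's numbers follow from the standard hyperfocal formula. A minimal sketch in Python, assuming the common CoC of 0.03 mm for FX (calculators differ slightly in their CoC and exact stop values, which is why this gives about 12.6 feet rather than exactly 12.25):

```python
# Hyperfocal distance: H = f^2 / (N * c) + f, all distances in mm.
# Assumes CoC c = 0.03 mm for FX, the usual (but arbitrary) convention.

MM_PER_FOOT = 304.8

def hyperfocal_mm(f, N, c=0.03):
    """Hyperfocal distance in mm for focal length f (mm) at f/N."""
    return f * f / (N * c) + f

H = hyperfocal_mm(50, 22)  # 50 mm FX lens at f/22
print(f"hyperfocal = {H / MM_PER_FOOT:.1f} ft")  # about 12.6 ft
```

Focusing at the hyperfocal distance places the far DOF limit at infinity and the near limit at about half the hyperfocal distance, which is what the lens markings show.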
Or another case, not including infinity. If we instead focus this 50 mm lens at 7 feet, then the f/11 marks suggest DOF from about 5.5 to 10 feet (at f/11). The 7 feet is about 1/3 back into the DOF zone in this case. This is an FX lens, so that DOF applies to FX sensors. The idea of the markings (which only appear on prime lenses; zooms are too complex to mark) is to indicate the extents of the DOF range. Marked directly on the lens, it can be very handy and helpful. In the prime lens days, this is how it was done.
We cannot read the distance scale precisely, but it can indicate the ballpark, generally adequate to convey the DOF idea. Of course, depth of field numbers are vague anyway. Do note that any calculated depth of field and hyperfocal distances are NOT absolute numbers at all. The numbers depend on a common but arbitrary definition of acceptable blurriness (called Circle of Confusion, CoC, the diameter of the blurred point source). This CoC limit is used in DOF calculations and varies with sensor size, due to the necessary enlargement. That is because CoC specifically assumes the degree of enlargement in a standard viewing situation (an 8x10 inch print held about ten inches from the eye, a size at which that CoC spot is just at the threshold of visibility). If your enlargement and viewing situations are different, your mileage will vary... DOF is NOT an absolute number. Greater enlargement reduces perceived depth of field, and less enlargement increases it (it changes the degree of CoC our eye can see).
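One common CoC convention can be sketched: sensor diagonal divided by 1500 (the 1500 divisor is one convention among several; the familiar 0.03 mm FX and 0.02 mm DX values are the usual rounded forms of this):

```python
import math

# One common CoC convention: sensor diagonal / 1500, tied to the standard
# 8x10-inch print viewed at about ten inches. Sensor dimensions in mm.
def coc_mm(width_mm, height_mm):
    return math.hypot(width_mm, height_mm) / 1500

print(f"FX 36x24: CoC ~ {coc_mm(36, 24):.3f} mm")  # ~0.029 mm, rounded to 0.03
print(f"DX 24x16: CoC ~ {coc_mm(24, 16):.3f} mm")  # ~0.019 mm, rounded to 0.02
```

The smaller sensor gets the smaller CoC precisely because it must be enlarged more to reach that same 8x10 print, which is the enlargement point made above.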
And of course, make no mistake, the sharpest result is always at the one distance where the lens is actually focused. Focus is always gradually and continually becoming more blurry as we move away from the actual focus point, up until the DOF math computes some precise numerical value that is suddenly judged not acceptable (thought to become bad enough to be noticeable there, by the enlargement of the arbitrary CoC definition). But of course, focus is about equally blurry on either side of that distance. DOF does Not denote a sharp line where blurriness suddenly happens, it is gradual. The sharpest focus is of course only at the focused distance, but a wider range can often be good enough, within certain criteria based on how well we can see it. DOF numbers are NOT absolutes. But DOF certainly can be a useful helpful guide.
A very old rule of thumb, considered a good trade-off combining both diffraction AND depth of field, says:
To limit excessive diffraction (unless depth of field is more important):
Generally don't exceed f-stop number = focal length / 4.
(Just meaning, have a reason when you do. Depth of field is certainly a good reason.)
You may have read about Ansel Adams' Group f/64 in the 1930s (a purist photography group, promoting the art of the "clearness and definition of the photographic image", named for the f/64 DOF). For his 8x10 inch view camera, a "normal" lens was around 300+ mm, but he also used 600 mm and 800 mm often. So f/64 really wasn't much of a stretch for him (other than exposure time).
Since f/stop number = focal length / aperture diameter, this FL/4 rule is technically just specifying at least a 4 mm aperture diameter, so that diffraction doesn't excessively limit resolution.
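The FL/4 rule is simple arithmetic. A quick sketch (the focal lengths chosen are just examples):

```python
# The old FL/4 rule of thumb: generally don't exceed f-number = focal_length / 4,
# which is the same as keeping the aperture diameter at least 4 mm
# (since aperture diameter = focal_length / f-number).

def max_fstop_fl4(focal_length_mm):
    return focal_length_mm / 4.0

for fl in (20, 50, 105, 200):
    n = max_fstop_fl4(fl)
    print(f"{fl} mm lens: /4 rule suggests about f/{n:g} max "
          f"(aperture diameter {fl / n:g} mm)")
```

As expected, the aperture diameter works out to 4 mm in every case; the rule is really about the physical hole size, not the f-number itself.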
I don't mean to promote use of this old FL/4 rule. It is old, was for film, and it does not take enlargement of sensor size into consideration (and of course, neither does the Airy calculation). But it's not a bad rule, and /4 does place a 50 mm lens very near the f/11 diffraction limit we might hear about. Of course, that 50 mm is the "normal lens" for 35mm film (considered small in its day), but today there are many even smaller sensors. Compact camera automation rarely stops down past f/4, and is still diffraction limited. Today's digital sensors can be literally tiny, and any necessary greater enlargement will show both diffraction and depth of field limits larger. A DSLR sensor might be 1/10 the dimension of Ansel's 8x10 film; film was several inches then, but sensors today may be a few millimeters. :) A compact or phone camera sensor might be 1/50 that CoC dimension. Diffraction is not affected by which sensor is attached, but the necessary sensor enlargement does affect how well we see it.
Diffraction absolutely does happen, however there definitely are also times when greater depth of field can be more important.
Speaking of DSLR, it is true today that f/32 can be a pretty good match for our 200 mm lenses, when needed, when it can help. It is there to be used, provided by our lens, for when needed. Try it, don't let them scare you off. You would be missing out on a really big thing.
All of this is about lens diffraction; it is Not about sensor pixel size (pixel size also does not take enlargement into account). For a 105 mm lens (the tree samples below), 105/4 is f/26, so f/22 is a good try, and f/32 is close (again, those are 100% crops below, which is enlargement). The results below show that (on a DSLR sensor size) it's not that bad when you really don't want to give up depth of field. Lenses of 100 mm or longer typically offer f/32, because it's good stuff (at times). So when more heroic efforts are necessary to get even more essential depth of field, consider doing what you need to do to get it. If important, at least try it, see if you like it.
It does in fact imply f/11 or f/12 could be a reasonable sharpness concern for a 50 mm lens (the normal lens for DSLR class cameras). That is a concern, which we've understood almost forever. But it would not be the same situation for a 200 mm lens, or an 18 mm lens. And it is Not about pixels; diffraction exists regardless. Diffraction affects film cameras too.
Using a shorter lens, or standing back at farther distance, improves depth of field, but both also reduce the size of the subject in a wider image frame. Or simply stopping down aperture offers great improvements to depth of field which are so easy and so obvious to actually see. But any limit due to effects of f/stop diffraction reaching pixel size seems very difficult to demonstrate.
Yes, of course diffraction does increase as we stop down. But diffraction is a fairly minor effect, at least as compared to depth of field, which can be a huge effect. That is, the detail suffering from diffraction is still recognizable, but the detail suffering from lack of depth of field might not be there at all. Diffraction is serious, and I don't mean to minimize it, but there are times when the need for depth of field overwhelms any real concern about diffraction. Yes, stopping down a lot can cause some noticeable diffraction, which is less good. But greater depth of field sometimes can be a night and day result, make or break. The tools are provided for when we need to use them, when they can help.
One tool is the Smart Sharpen in Photoshop (specifically with its Lens Blur option). Diffraction is pretty much linear, the same effect in all photo areas (whereas for example, depth of field is not linear, its blur is mild close to focus but much worse far from focus). So diffraction can often be reasonably helped in that post processing sharpening (but none was done here).
My goal here is to suggest that, no matter what you have heard about diffraction limited pixel size, yes, of course you can still usefully stop down to f/16 and f/22 as they are intended to be used for the goal of greater depth of field. You wouldn't always use f/22, not routinely nor indiscriminately, but in the cases when you do need it, the overall result can be a lot better. It can be a great benefit, when you need it. Yes, stopping down so much certainly does cause diffraction losses which should be considered. But Yes, stopping down certainly can help depth of field much more than diffraction can hurt. This is why those f/stops are provided, for when they can help. When needed, if they help, they help.
When you need maximum DSLR lens sharpness, of course do think f/5.6, or maybe f/8, if that will work for you. But when you need maximum depth of field, consider f/16, or maybe f/22, or maybe even more at times. That's what it's for, and why it is there. Sure, f/8 will be a little sharper for general use, stick with it when you can, but when you need depth of field, that's hard to ignore. So when you need extra depth of field, try stopping down, that's how the basics work. Test it, see it for yourself, and don't believe everything you read on the internet. :) It's good to be able to actually see and verify that which we profess to believe.
Lens resolution certainly can be limited by diffraction. The lens image has a resolution, and the digital sampling reproduces it. Pixel resolution simply tries to reproduce the image that the lens created. This is less important if we necessarily resample much smaller anyway, for example to show a 24 megapixel image on a 2 megapixel HD video screen, or to print a 7 megapixel 8x10 inch print. Today, we typically have digital resolution to spare.
At right is a (random but typical) lens resolution test from Photozone. They have many good lens tests online, tests which actually show numbers. This one is 24 mm, and the red lines are drawn by me. Lenses do vary in degree, expensive vs. inexpensive is a factor, but in general, all lenses show about the same characteristics.
Wide open, the aperture is softer (optical aberration issues in the larger glass diameter), but resolution typically increases to a maximum peak when stopped down a couple of stops (not necessarily f/5.6, but two stops down is half the diameter, avoiding the difficult outer diameters of glass far from center). Border sharpness can be a little worse (the edges are at a larger diameter from the center of the lens).
Then resolution gradually falls off as it is stopped down more, due to increasing diffraction as the aperture becomes small. Yes, we can assume f/16 and f/22 get worse. The edge of the aperture hole bends or diffracts the light near it (paths very near the edge, causing diffraction and blurring). The clear center area is unobstructed, but a tiny hole is nearly all edge. Diffraction causes a blurring loss of the smallest detail (a loss of maximum resolution), caused by the smaller aperture diameter. The term "diffraction limited" is usually a good thing, meaning and used as: "An optical system with the ability to produce images with angular resolution as good as the instrument's theoretical limit is said to be diffraction limited" - meaning as good as it is possible to be. However stopped down lens apertures do limit resolution more, affecting the smallest detail the lens can reproduce. Still, real world is that we often have sufficient resolution to spare, to trade for depth of field. Stopping down can be a big benefit, when it is needed.
We don't need to mention pixels. And f/22 might not always be a good plan for a short lens - or any lens - but it's not always bad either; detail depends on image size. Subject magnification is a factor of detail (more below). Focal length magnifies the subject detail, so a longer lens can often benefit greatly from the increased depth of field of f/22 or even f/32. That is why macro and longer lenses normally provide f/32; it is an important and capable feature.
Next is what a very short lens looks like: (the lens is 3.75 inches or 95 mm diameter)
The definition is: f/stop number = focal length / aperture diameter. This definition causes the same f/stop number to give the same exposure on all lenses.
f/22 on a 20 mm lens has an aperture diameter of 20/22 = 0.9 mm. That is a tiny hole, which causes trouble. f/5 is sharper.
f/22 on a 50 mm lens has an aperture diameter of 50/22 = 2.3 mm. Borderline small, but rather bearable when it helps DOF.
f/22 on a 105 mm lens has an aperture diameter of 105/22 = 4.8 mm, much more reasonable, piece of cake.
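Those three diameters are just the same division from the definition above. A trivial check:

```python
# Aperture diameter = focal length / f-number, so the same f/22 marking is a
# very different physical hole on different lenses.
for fl in (20, 50, 105):
    print(f"f/22 on a {fl} mm lens: aperture diameter {fl / 22:.1f} mm")
```

The 20 mm lens ends up with a hole under a millimeter wide, which is why short lenses suffer diffraction first at a given f-number.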
Yes, stopping down causes greater diffraction, which limits the smallest detail we can see. The larger diffraction hides the smallest details in the lens image, which might otherwise be seen... normally the sharp edges on details. This diffraction is a property of the lens aperture diameter, and is true regardless of pixel size (of course it was always true of film lenses too). The other regular optical problems combined normally reduce resolution below this theoretical diffraction limit anyway. We don't need pixels to know that, but the pixel notion is that when the Airy disk size exceeds the size of a pixel - or really two pixels (Nyquist), or really four pixels (Bayer), which is really eight pixels (Nyquist again), or really even more pixels because of the random misalignment of Airy disks on pixels - however many pixels we decide matters, those small pixels' resolution capability is limited by the larger diffraction disk size and coarseness. The pixel is certainly not the problem though; the only problem is that the diffraction disk is large. It's too late to worry about pixels anyway; the diffraction has already occurred, it is what it is. The best job the pixels can do is to reproduce what they see. The pixel analogy is like not wearing your glasses to inspect your image: not seeing the problem is not the same as improving the diffraction. :) Of course, pictures of faces or trees or mountains are larger than a pixel anyway, so this does not mean all is lost. The diffraction issue is NOT about pixels. The pixel size (hopefully small) is already the smallest possible detail, and the diffraction is already what it is.
Who can see a pixel in 24 megapixels? Diffraction may eliminate the smallest detail, but even so, of course there are still vast amounts of larger detail (the whole picture). We don't always need maximum resolution, and we typically resample our images to smaller sizes anyway. I think that comparison to the pixel size is not a helpful thing. It is outright harmful when it scares many of us away from ever considering f/16 or f/22 or f/32, when certainly there are situations when those values can be very helpful. That is what they are for, to help when DOF is needed. Try them for yourself. Often, there will be little or no actual bad effect; it can be mostly all good.
This is my protest about imagining pixel-limited diffraction. Of course normally trying to use around f/5.6 or f/8 is a fine general plan (for DSLR class camera lenses), but the warning to never go past about f/11 is not so helpful. There certainly are a few more details to say about it. One of those details is that sometimes stopping down more can obviously help very considerably. Yes, diffraction certainly does increase as we stop down, that is what lenses do, but it is Not due to pixels. We can easily experiment with seriously stopped down apertures, and see that there certainly are times when the depth of field helps much More than the diffraction Hurts. You should try this. Lenses routinely offer these stops because there are special times when they can benefit us greatly.
To explain this next situation: these are the original images from which the 100% crops are taken. D800 and D300 cameras, ISO 400, same 105 mm VR AF-S lens on both, both on the same tripod at the same spot. FX is of course the first wider view, and the DX sensor crops the lens view smaller, which makes it look closer up. The two frames are shown the same size here, so DX is enlarged more than FX (but both were the same lens image, from the same lens). Point is, both had the same crop box in ACR; both marked crops are about 8.3% of the frame width. Sharpening can always help a little, but no processing was done on this page. There was a slight breeze to wiggle a few leaves. Shutter speed at f/32 got slow, around 1/40 second.
The point of these next 100% crops (a tiny central area cropped, then shown at 100% full size, actual pixels) is not just to show depth of field, because we already know what to expect of that. It is more to show there is no feared diffraction limit around f/11, or anywhere. There is no large step representing any limit at f/11, or anywhere else. Sure, f/8 is often better (because of diffraction), and sure, diffraction does increase, but sure, you can of course use f/16 and f/22, maybe f/32, because it can often help your picture. Diffraction does continually increase as the lens is stopped down, but that is about the aperture, not about pixel size. This is a 105 mm lens, and yes, we might debate f/32 (but it certainly does increase depth of field). Any diffraction would be much less visible if the full image were displayed resampled to a smaller size instead of shown at 100%. But obviously there is no reason to always fear some limit at f/11, if the depth of field can help more than the diffraction hurts. You can do this test too.
The near tree and focus are 20+ feet, so that will always be the sharpest point. The light pole is about 250 feet, the power wires are about 900 feet. These are 100% crops of a tiny area.
Both crops are the same, 8.3% of frame width.
Specifically, the FX crop is 613x599 of 7360x4912 pixels, or 1% of total full frame pixels.
The DX crop is 357x347 of 4288x2848 pixels, or 1% of total full frame pixels.
This is ENLARGEMENT. At this scale, the uncropped full frame FX would be about 6 feet wide on a monitor large enough. The full frame DX would be nearly 4 feet wide.
Both are the same image from the same lens at the same distance. This is the same crop of the frames, but the DX sensor is simply smaller, and has to be enlarged more to view at the same size (not done here yet). In this one specific case, this larger FX sensor happens to have a greater "pixels per mm" sampling resolution (smaller pixels), which is of course a plus for resolution, Not a minus for diffraction. Diffraction does increase a little at f/32, but DOF increases a LOT.
So the question is, do you want more Depth of Field, or not?
The images tell it, but here are Depth of Field values from the calculator. Subject at 20 feet, with background 880 feet behind it. So f/32 is not quite as sharp due to diffraction (again, this is an enlarged 100% view), but the DOF improvement is dramatic. Do you need that or not? In this case, even the best f/32 Depth of Field does not extend past about 42 or 31 feet, and focus remains at less than hyperfocal (DOF does not reach infinity). However at f/32, the background CoC (BKCoC, at 900 feet here) becomes only around 2x larger (FX) than the DOF CoC limit at the 42 or 31 feet (more BKCoC detail at the calculator). Not quite full DOF this time, but pretty close. We can see DOF looks pretty good, and if DOF is needed, I call that better, a lot better. Note this 100% crop is greatly enlarged here, depending on your screen size, several times larger than the DOF CoC formula has computed.
105 mm, 36x24 mm FX, subject at 20 feet:

| f/stop | DOF range | Hyperfocal | Background CoC ratio |
| --- | --- | --- | --- |
| f/5.6 | 18.3 to 22 ft | 213.5 ft | 10.6x |
| f/8 | 17.7 to 23 ft | 151 ft | 7.5x |
| f/11 | 16.9 to 24.5 ft | 106.9 ft | 5.3x |
| f/16 | 15.9 to 27.1 ft | 75.7 ft | 3.7x |
| f/22 | 14.6 to 31.7 ft | 53.6 ft | 2.7x |
| f/32 | 13.1 to 41.8 ft | 38 ft | 1.9x |
105 mm, 24x16 mm DX, subject at 20 feet:

| f/stop | DOF range | Hyperfocal | Background CoC ratio |
| --- | --- | --- | --- |
| f/5.6 | 18.8 to 21.3 ft | 320 ft | 15.9x |
| f/8 | 18.4 to 21.9 ft | 226.4 ft | 11.2x |
| f/11 | 17.8 to 22.8 ft | 160.2 ft | 8x |
| f/16 | 17 to 24.2 ft | 113.4 ft | 5.6x |
| f/22 | 16.1 to 26.5 ft | 80.3 ft | 4x |
| f/32 | 14.8 to 30.7 ft | 56.9 ft | 2.8x |
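These calculator values can be reproduced from the standard DOF formulas. A sketch assuming the common CoC conventions of 0.03 mm (FX) and 0.02 mm (DX), checking the f/32 rows:

```python
# Reproduce the f/32 rows of the tables above from the standard DOF formulas.
# Assumes the usual CoC conventions: 0.03 mm FX, 0.02 mm DX.
MM_PER_FOOT = 304.8

def dof_ft(f, N, c, subject_ft):
    """Return (near limit, far limit, hyperfocal) in feet, inputs in mm/ft."""
    s = subject_ft * MM_PER_FOOT
    H = f * f / (N * c) + f                  # hyperfocal distance, mm
    near = s * (H - f) / (H + s - 2 * f)
    far = s * (H - f) / (H - s)              # valid while subject is inside H
    return near / MM_PER_FOOT, far / MM_PER_FOOT, H / MM_PER_FOOT

for label, c in (("FX", 0.03), ("DX", 0.02)):
    near, far, H = dof_ft(105, 32, c, 20)
    print(f"{label} f/32: DOF {near:.1f} to {far:.1f} ft, hyperfocal {H:.1f} ft")
```

This matches the f/32 table rows (13.1 to 41.8 ft for FX, 14.8 to 30.7 ft for DX). The other rows match too, if the exact stop values (f/11.3, f/22.6, etc.) are used instead of the marked numbers.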
Depth of Field can be confusing on cropped sensors. We do routinely say that in usual practice, the cropped cameras see greater DOF than a larger sensor. But this example was Not usual practice. If standing in the same place, normally we would substitute a shorter lens (equivalent view of 105 mm / 1.5 crop = 70 mm) on the DX body, to capture the same view. That expected shorter lens is what helps small-sensor DOF, but we didn't do that here. Or, if using the same lens, DX has to stand back 1.5x farther to see the same field of view, and that greater distance helps DOF in usual practice. But we didn't. We didn't do anything; we just stood in the same place at 20 feet with the same lens, so DX did see a smaller cropped view (see the first uncropped image views just above). And so here, the only difference is that the smaller DX sensor still has to be enlarged 1.5x more to compare its image at the same size as FX. Greater enlargement hurts DOF, which is why sensor size is a DOF factor. So the DOF numbers are correct (for the assumed standard 8x10 inch print size).
Degree of enlargement is a big factor. The same two f/32 images above are repeated below, with the smaller DX image enlarged more so it views at the same size as FX now. FX D800 first, then DX D300. Both use the same 105 mm lens on the same tripod in the same spot. But the DX looks telephoto because its sensor is smaller (it sees a smaller cropped view), so it needs to be enlarged more here (done below), which also enlarges the diffraction. FX is still shown at about 100%, and DX is shown larger than 100%. We would not normally view these this hugely large - the uncropped frames were 7360x4912 and 4288x2848 pixels - so a smaller view would look better than this.
The FX D800 is 36 megapixels, and the DX D300 is 12 megapixels, so in this case the DX pixels are slightly larger, about 13% larger. That may hurt resolution, but it does not affect lens diffraction. However, what we can see is that the smaller DX sensor crop does require half again more enlargement to reach the same size as FX (not done above), and that shows the diffraction larger too. Normally we think of DX having more depth of field than FX, however that assumes DX with the same lens would stand back 1.5x farther to show the same image view in the smaller frame. We didn't here. Everything was the same here (except DX has to be enlarged half again more, below).
The pixel spacing (205 pixels/mm FX, and 182 pixels/mm DX) confirms that in this case, DX has the larger pixels. I don't see the larger pixels offering any greater diffraction advantage or limit however, so I will definitely need to see some evidence of that. The results here are rather backwards to that notion (but there are other differences too, sensor size and enlargement). I mostly see that both are f/32, DX is enlarged more, and this FX has the greater resolution (not true of all FX/DX comparisons). The smaller lens aperture does create the diffraction, but the smaller aperture is all-important to depth of field, which can help greatly when needed.
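Those pixel-spacing numbers come straight from the pixel counts and the published sensor widths (35.9 mm for the FX D800, 23.6 mm for the DX D300 - spec-sheet figures, an assumption here to a tenth of a millimeter):

```python
# Pixel pitch sanity check for the two cameras in this comparison.
# Sensor widths are published specs: ~35.9 mm (D800 FX), ~23.6 mm (D300 DX).

def pixels_per_mm(width_pixels, sensor_width_mm):
    return width_pixels / sensor_width_mm

fx = pixels_per_mm(7360, 35.9)   # Nikon D800, 36 MP FX
dx = pixels_per_mm(4288, 23.6)   # Nikon D300, 12 MP DX

print(f"FX: {fx:.0f} px/mm, pitch {1000/fx:.2f} microns")   # ~205 px/mm, ~4.88 µm
print(f"DX: {dx:.0f} px/mm, pitch {1000/dx:.2f} microns")   # ~182 px/mm, ~5.50 µm
print(f"DX pixels are about {100*(fx/dx - 1):.0f}% larger")  # ~13% larger
```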
Now the same 100% crops at f/32 again, but now shown at the same viewing size. FX first, then DX. Again, in this case, FX has the smaller pixels, which boosts resolution and looks good. And among factors that actually do matter, the larger FX frame does not have to be enlarged as much to view at the same size.
Again, this is the same lens on both cameras, both standing in the same spot (on the same unmoved tripod). The actual difference in this tiny 100% crop is the sensor pixel density. And of course, smaller pixels simply mean greater sampling resolution, always good to show any detail present in the lens image.
And the enlargement is different (see the same-enlargement comparison just above). Enlarging DX half again more is a necessary hardship, but that is what normal practice always has to do for smaller sensors. It's unfair to compare to FX if we don't compare the same image. But Depth of Field is often more important than diffraction.
On both of these FX and DX samples at f/32, the Airy calculation (for green light) is 35 or 40 times larger than the sensor pixel size. Pi r² with r = 20 pixels is an area of about 1250 pixels. The large Airy disk does of course limit lens resolution; it's certainly not as sharp now (but it obviously does not cut off and die at one pixel size). And while we are aware of this, we do still have quite a bit of resolution left, possibly adequate, very possibly good enough this time, for some cases and some uses. Our final use is likely resampled much smaller than this extreme 100% crop view. You could say we have sufficient sampling resolution to even show the diffraction clearly. :) Digital's job is to reproduce whatever the lens creates, and more pixels help to do that better.
But in this case, f/32 also creates awesomely better depth of field. And in various situations, we could decide that is overwhelmingly important, could make all the difference, perhaps make or break the shot. Or maybe sometimes not, and we may need to back off a bit on f/stop. If we do need really maximally sharp results at f/5.6 or f/8, then we know to set up a situation not needing extreme depth of field. It's a choice, it depends on goals, and we can use the tools we have to get the result we want. Photography is about doing what you have to do to get the result that you want. The alternative is Not getting what you want. But do realize that you have choices.
Stars in the night sky are tens to millions of light years away, and so from Earth, they appear as essentially zero-diameter point sources. However at high optical magnification in telescopes, due to diffraction, we see a star not as a point source, but as an Airy disk, which is an artifact of our telescope's diffraction. The Airy disk diameter depends inversely on aperture diameter (half the aperture diameter creates twice the Airy disk size). The ability to separate and resolve two close points (two touching Airy disks) depends on Airy disk diameter (how much they overlap each other), which depends on aperture diameter - as seen through focal length magnification (twice the focal length shows twice the separation distance).
Telescope users know that telescopes with a larger diameter aperture have better resolution due to less diffraction. That smaller Airy disk diameter can better resolve (separate) two very closely spaced stars, to be seen and distinguished as two close stars instead of as one unresolved blob (blurred together). Known double star pairs are the standard measure of telescope resolution.
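As a rough numeric sketch of that idea (assuming green light at 550 nm and the standard Rayleigh criterion, not any particular calculator), the minimum angular separation of two just-resolvable stars halves each time the aperture diameter doubles:

```python
# Rayleigh criterion: theta = 1.22 * wavelength / aperture_diameter (radians).
# Assumes green light at 550 nm; a standard textbook figure.

ARCSEC_PER_RADIAN = 206265

def resolution_arcsec(aperture_mm, wavelength_nm=550):
    """Minimum angular separation (arcseconds) of two just-resolvable stars."""
    theta = 1.22 * (wavelength_nm * 1e-9) / (aperture_mm * 1e-3)
    return theta * ARCSEC_PER_RADIAN

for d in (60, 150, 300):  # hypothetical telescope apertures, mm
    print(f"{d} mm aperture: {resolution_arcsec(d):.2f} arcseconds")
```

A 150 mm aperture resolves roughly 0.9 arcseconds; doubling the aperture to 300 mm halves that figure, which is why double-star pairs are the traditional test of telescope aperture quality.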
Wikipedia has the derivation of this minimum separation (x) needed to resolve two points:

x = 1.22 · λ · f/d

It's nothing new, it's from George Airy, 1834. Green light wavelength λ is about 0.00055 mm (550 nm), and green is about the center of the span of visible light.
In this formula, x is the minimum separation (on the sensor) needed to resolve two such points, and 1/x is called resolution. The combination f/d is the f/stop number, so the minimum separation x increases directly with f/stop number. Focal length also affects the magnification of the subject detail (relative to that blur diameter), and in practice, a longer focal length supports a higher f/stop number. The underlying diffraction is about aperture diameter (1/d); the focal length distance then magnifies the pattern on the sensor, which becomes f/d, the f/stop number - supporting facts for the simple F/4 rule of thumb above. But we also enlarge the sensor image significantly to view it.
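Evaluating that formula numerically (a sketch assuming the green-light wavelength of 0.00055 mm from above):

```python
# Minimum resolvable separation x = 1.22 * wavelength * (f/d) on the sensor,
# where f/d is the f/stop number. 0.00055 mm is green light.

WAVELENGTH_MM = 0.00055

def min_separation_mm(fstop):
    return 1.22 * WAVELENGTH_MM * fstop

for n in (4, 8, 16, 32):
    print(f"f/{n}: x = {min_separation_mm(n):.4f} mm")
# x doubles each time the f/stop number doubles
```

At f/32 this comes to about 0.0215 mm, which is the radius figure discussed next.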
This "x" is Not exactly the Airy disk diameter. It is the radius of the first minimum (the first dark ring) (see example of "resolving"). That would seem to complicate computing pixel sizes, at least consider doubling the 1.22 radius to be 2.44 diameter, but it's still not the full diameter. But more smaller pixels won't change the diffraction, they will just resolve the rings better (and sensor enlargement should aid that). If you have such a point source and want to resolve any dark rings, you will need lots more pixels of resolution. Which certainly seems a better situation than not being able to resolve a blob. :)
However, the reciprocal 1/x of this minimum separation x (of two adjacent point sources) is the theoretical maximum resolution allowed by diffraction, directly comparable to line pairs per mm. This applies to our camera lenses too, except we rarely photograph point sources. Measured resolution numbers of real-world complex lenses are of course less than this theoretical limit, but they can't be greater. A lens called "diffraction limited" would be the impressive feat of reaching those theoretical numbers, limited only by diffraction.
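One way to see that limit in line pairs per mm is to compare it against the sensor's Nyquist sampling limit (pixels/mm divided by 2). This sketch assumes green light and the ~205 pixels/mm D800 figure quoted earlier:

```python
# Theoretical diffraction-limited lens resolution 1/x, in line pairs per mm,
# compared against the sensor's Nyquist sampling limit (pixels/mm / 2).
# 205 px/mm is the FX D800 figure quoted earlier in the article.

WAVELENGTH_MM = 0.00055          # green light
SENSOR_NYQUIST_LP_MM = 205 / 2   # ~102 lp/mm for the FX D800

def diffraction_limit_lp_mm(fstop):
    return 1 / (1.22 * WAVELENGTH_MM * fstop)

for n in (5.6, 11, 22, 32):
    limit = diffraction_limit_lp_mm(n)
    side = ("lens limit exceeds sensor Nyquist" if limit > SENSOR_NYQUIST_LP_MM
            else "sensor oversamples the diffraction")
    print(f"f/{n}: {limit:.0f} lp/mm ({side})")
```

By this rough sketch, around f/22 and beyond the sensor samples finer than the diffraction-limited lens can deliver, which is consistent with the earlier remark that we have sufficient sampling resolution to show the diffraction clearly.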
This is not about pixels. Sensor pixels have their own resolution limits, unrelated. A greater number of pixels does not affect the sharpness that our lens can reproduce, but a greater number of (smaller) pixels normally means greater resolution of the detail we can resolve in that lens image. The pixels' job is merely to try to digitally reproduce the analog lens image they see. The lens image is what it is, and the better the pixels can reproduce that image, the better (regardless of the detail that is there... a pristine image or one suffering diffraction).
More images (maybe too many) are on next page