We could subtitle this: Details that no beginner wants to know. However, the point is: You'll never grasp digital images until you get it ... until you know what digital images are, what to do with them, and how to do it. These basics are pretty much about one single issue: How do I use my image? How do I make it the proper size for viewing, for printing, or for the video monitor? All this is really quite easy; digital is just a new concept. It is like learning to drive - once you learn an easy thing or two, it is a skill helpful for life. You will simply just know.
We just gotta know about pixels, and if there is any mystery about them, a short primer is here: What is a Digital Image Anyway?
Seriously, once we accept that pixels actually exist, then all this stuff is rather easy. It's all about pixels.
This page tries to be a quick summary of the digital concepts, about how things work. The answer to virtually any question about image size starts with one of these basics. To be able to use digital images well, we need this understanding. This may perhaps be written a little like an argument, refuting the dumb incorrect myths we may have heard about how digital works. The concepts below are instead what you need to know to use digital images properly. It is actually rather easy to grasp, if you get started right.
The size of an image might be, for example, 4000x3000 pixels. The math is that this is 4000x3000 = 12 megapixels. Or, 4288x2848 is also 12 megapixels (rounded off). We tend to think of this as the "resolution" of the image (I am guilty of often saying it that way too). The pixels do indicate the "fineness" of the smallest possible digital detail (a pixel). This size dimension in pixels is the first thing to know about using any digital image, because this size in pixels is what is important for any use.
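That megapixel arithmetic is just multiplying the two pixel dimensions. A quick sketch of it in Python (the function name is my own, for illustration):

```python
def megapixels(width_px, height_px):
    """Megapixels = width x height of the pixel dimensions, in millions of pixels."""
    return width_px * height_px / 1_000_000

print(megapixels(4000, 3000))         # 12.0
print(round(megapixels(4288, 2848)))  # 12 (12.2 million, rounded off)
```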
However, we should also realize that it is the camera lens that creates the image. For example, in a DX cropped sensor camera, the original is the 24x16 mm image projected onto the DX digital sensor. The image has a size there, and (once the lens resolution is captured) the image size concept compares directly to a 24x16 mm film image (APS-C size film). Then, the pixels merely digitally sample that image (something like a scanner) to try to digitally reproduce the image that the lens created. The pixels do NOT create the image, and cannot improve the image detail; the pixels merely strive to copy its detail. At best, it can hopefully be a very good reproduction. A 24 megapixel DX image and a 24 megapixel FX image are NOT equal, because the FX image is simply half again larger, and so does not have to be enlarged as much to show it.
A pixel is just numbers that represent a color, specifically, the three RGB numbers of a color specification - which represent the color that was sampled from this tiny dot of image area. When the image is viewed or printed, each little dot of image area is shown to be that corresponding color. In that way, digital images are a little like mosaic tile pictures (but in an ordered grid pattern). Each little dot is a color, and our human brain puts them together to recognize an image in all those colored dots. If it is an ordinary standard 24-bit RGB image, the pixel data is one byte for each of the Red, Green, Blue components of the pixel, which is three bytes per pixel. So if 12 megapixels, then x3 is 36 million bytes of data (24 bits). That is simply the actual data size of any 12 megapixel RGB image data, however you may see it compressed much smaller while it is in a JPG file (JPG file size is much smaller than the image data size, via JPG compression). But when that file is opened, it is full size again in computer memory, three bytes per pixel (24 bits). For other than 24-bit, and for the special interpretation of "megabytes", see more detail.
The size of that image data when opened in memory is in bytes of memory. 24-bit RGB images (8-bit color) is always three bytes of RGB data per pixel. So bytes is the "data size", but "image size" is always in pixels. Whereas, inches only refer to the paper where these pixels will be printed.
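The data size arithmetic above, as a short sketch (three bytes per pixel for standard 24-bit RGB; the one-byte case is grayscale, per the exceptions below):

```python
def rgb_data_bytes(width_px, height_px, bytes_per_pixel=3):
    """Uncompressed data size in memory: 24-bit RGB is three bytes per pixel."""
    return width_px * height_px * bytes_per_pixel

# 12 megapixels x 3 bytes = 36 million bytes when opened in memory
print(rgb_data_bytes(4000, 3000))     # 36000000
# Grayscale 8-bit is only one byte per pixel
print(rgb_data_bytes(4000, 3000, 1))  # 12000000
```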
A JPG file is compressed to be maybe 1/10 this data size (roughly, can be very variable), while in the JPG file, but 12 megapixels opens again to 36 million bytes in memory. JPG uses lossy compression, which means we can specify High JPG Quality for a larger better file, or Low JPG Quality for a smaller worse file (when file size is more important than image quality). See JPG.
A few exceptions about Data Size: (See formats). Grayscale 8-bit images are only one byte per pixel. There are less common 16-bit formats (six bytes per color pixel). JPG is always 8 bits, three bytes per color pixel. And Raw file pixels are not three RGB colors, but only one of those colors per pixel. Raw is normally 12 bits per pixel, or 1.5 bytes per pixel (uncompressed). But Raw images cannot be viewed until we convert them to standard RGB images, then three bytes per pixel. So FWIW, the Raw monitor image you see on the camera rear LCD is instead a tiny embedded JPG RGB file created just to show there, since Raw cannot be viewed.
Make no mistake though, Image size is dimensioned in pixels. It is all about pixels. Digital cameras create pixels. Inches are only about the specific piece of paper. Bytes are only about memory. Pixels are about the image.
Printers print on paper which is dimensioned in inches, but video screens are instead dimensioned in pixels (there is no concept of inches in video systems). This difference gets our attention. These devices do NOT work alike. They both show the same pixels in their way, but the basic concepts are quite different.
The video screen size is dimensioned in pixels, and the image is dimensioned in pixels, and the pixels are simply shown directly - without any concept of dpi. The video screen simply shows pixels one for one - one image pixel on one monitor pixel. So for example (one pixel of image on one pixel of screen), an image 800 pixels wide will fill exactly half the width of a 1600 pixel screen width. People telling you the image needs to say 72 dpi for the screen or web are simply just wrong. Video shows pixels, with no concept of inches or dpi. On video screens, it does not matter at all what the dpi number is. The screen shows pixels directly.
When we show a big image, larger than our viewing screen (both are dimensioned in pixels), our viewing software normally shows us a temporary, quickly resampled copy instead, small enough to fit on the screen so we can see it - for example, perhaps 1/4 actual size (this fraction is normally indicated to us, so we know it is not the full size real data). We still see the pixels of that smaller image presented directly, one for one, on the screen, which is the only way video can show images. When we edit it, we change the corresponding pixels in the original large image data, but we still see a new smaller resampled copy of those changes.
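The shrink-to-fit fraction a viewer computes is simple arithmetic. A hypothetical sketch (the function name is made up for illustration; real viewers may differ in details):

```python
def fit_scale(img_w, img_h, screen_w, screen_h):
    """Fraction by which a viewer shrinks a too-big image to fit the screen.
    Takes the tighter of the two dimensions; never enlarges (caps at 1.0)."""
    return min(screen_w / img_w, screen_h / img_h, 1.0)

# A 4000x3000 image on a 1920x1080 screen shows at about 36% of actual size
print(round(fit_scale(4000, 3000, 1920, 1080), 2))  # 0.36
# An 800x600 image on a 1600x1200 screen needs no shrinking
print(fit_scale(800, 600, 1600, 1200))              # 1.0
```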
Dpi and inches are unknown concepts (not used) in video systems, or in digital cameras.
The dpi value shown in camera images is just some clutter in the file header, merely a separate arbitrary number which has not affected the pixels in the image file in any way. Dpi is only for printing, or for scanning. Otherwise, it simply does not matter what this dpi number is, it has no use, not until the time you actually print it on paper, when you will decide an appropriate value (see Scaling below). The camera itself has no clue what size you might choose to print it later.
There is no concept of inches or dpi used in the video system. It does not matter if the monitor is a 12 inch screen or a 24 inch screen, if it is set to show 1280x1024 pixels, it will show 1280x1024 pixels. Both sizes show the SAME image, just at different sizes on the two physical screens. You might think you are showing your image to be, say 8 inches wide on your monitor, but it probably will show a little different on some other monitor of different size or different resolution setting. We don't all see the same size in video, it depends on the screen size (both pixels and inches). Especially for web images, we have no clue what monitor might view it. Yes, all of our 8x10 inch paper is the same size, but there is no concept of inches or dpi in any video system. Video shows pixels, directly. Really pretty simple (but different).
If the image dimension is 3000 pixels, and if printed at 300 pixels per inch, the image will cover 3000/300 = 10 inches on paper. The image contains pixels, but all of the inches are on the sheet of paper. Within a reasonable small range, we can print different sizes by just spacing the same pixels differently (or for a larger range, we could resample the pixels to be a different image size). The only purpose of the dpi number is to space the pixels, pixels per inch, on paper. We can change this dpi number at will, to print different sizes on paper, without changing any pixel at all (called scaling).
3000 pixels / 400 dpi = 7.5 inches of paper
3000 pixels / 300 dpi = 10 inches of paper
3000 pixels / 250 dpi = 12 inches of paper
3000 pixels / 200 dpi = 15 inches of paper
Or the other way, 3000 pixels / 11 inches = 272 dpi (scaling, next below)
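That little table is the same division both ways; a sketch of it in Python (function names are my own):

```python
def print_inches(pixels, dpi):
    """Inches of paper covered: pixels / (pixels per inch)."""
    return pixels / dpi

def scaled_dpi(pixels, inches):
    """Dpi needed to fit the given pixels into the given inches (scaling)."""
    return pixels / inches

print(print_inches(3000, 300))       # 10.0 inches
print(print_inches(3000, 200))       # 15.0 inches
print(int(scaled_dpi(3000, 11)))     # 272 dpi (truncated), as in the example above
```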
If you print the image at home, from the image editor File - Print menu, the computer will use the dpi value in the file to compute the size of the image on paper. If it is 4000 pixels and says 180 dpi, it will try to print 4000/180 = 22.2 inches size. This is the only use for dpi in camera files (printing). Some print menus offer a way you can scale the size first however, to print a different size. If you scale this image to print 10 inches (to fit the paper), then it will scale to print at 4000/10 = 400 dpi (inkjets really cannot, but they try).
If you upload the image file to be printed somewhere, they don't ask dpi, they only ask what size to print the pixels that you provided. They will scale it for you. If you upload a dimension of say 2000 pixels, and ask them to print it 10 inches, you will necessarily get 2000 pixels / 10 inches = 200 dpi result. Most online printers have 250 dpi capability, which is a good upload goal. There is no point of uploading way more pixels than they can possibly print.
Scaling is adjusting the value of the dpi number itself in order to fit the image pixels to the paper size, for printing.
Word definition: A scale is a graduated measurement, like a map scale, and scaling is creating a proportionate size or extent, in this case of pixel distribution relative to the paper dimension. Scaling is computing that 3000 pixels printed at 300 pixels per inch will scale to cover 3000/300 = 10 inches of paper. Or scaling to 200 dpi size, 3000 pixels / 200 dpi = 15 inches of paper. The dpi number scales the pixel size so the overall image dimension fits the paper (more specifically, dpi scales the image size into inches, for paper, like in a book.)
So in any existing image, the only purpose of dpi is about scaling the image size on paper, pixels per inch. And of course, that numerical dpi result should also be an acceptable printing resolution for good quality. Just saying, printing at 100 dpi will be pretty poor (but 3000 pixels will print 30 inches then). Also excessively high values like 500 dpi will be pointless, just wishful thinking (but 3000 pixels will print 6 inches then). Printer capabilities are such that we can expect best results around 250 to 300 dpi, so we supply sufficient pixels to print the size we want, for example, 2500 to 3000 pixels for 10 inches. See how easy this is?
Normally, our usual goal is that we try to print photo images at about 250 to 300 dpi. This is the capability of the printers (designed for the capability of our eye to see it). 250 to 300 dpi is good for our printers at home, and also good for printing services such as Shutterfly.com, Mpix.com, Snapfish.com, Walmart, etc. We adjust for the paper size by Scaling the image (setting the dpi number value to print that size). Or, if the image is much too large, we Resample it to be smaller, so that we can scale to around 300 dpi. We also need to crop it to the same shape as the paper. See Resize Images about Cropping and Scaling and Resampling, to fit and print the image.
If we print the image on our home printer, by selecting menu File - Print, the printer will honor the dpi number specified in the file, and will print the pixels at the size (inches) determined by the pixel dimensions and the specified pixels per inch number.
(Pixel dimension) / (paper dimension inches) = pixels / inches = pixels per inch
If we send the image out somewhere to be printed, and specify "print this 5x7 inches", they will. They will necessarily ignore our dpi number, and will rescale the image to the necessary dpi number to print the requested 5x7 inches (to cover the 5x7 inches with the provided pixels). The printer machine only has capability in the 250 to 300 dpi range. If their scaled dpi number comes out higher than 250 or 300 dpi, it won't hurt, but it cannot improve the quality. You can upload your 12 megapixel images to them, but if printing 6x4 inches, then about 1500x1000 pixels is all that can help (250 dpi). I am being ambiguous about 250 vs 300 dpi, normally it won't matter much which we use (we are at limits), but both will print slightly better than 200 dpi.
However (a major point), changing this dpi number will cause absolutely no change at all on the video screen (unless resampling is also selected). Video is not concerned with dpi or inches. Video ignores any dpi number, and simply shows the pixels directly, one for one, one image pixel on one video pixel location. No matter what number the dpi says, you will never see any effect of it on the video screen, which simply just shows the pixels directly. See an example of that.
Printing paper also has a similar shape, and the same Aspect Ratio applies. For example, 6x4 inch paper is also 3:2 aspect ratio. If we print THIS image on THIS paper, it will fit - the shapes are the same 3:2 aspect ratio (3000x2000 pixels is quite excessive though, for 4x6 inches), and really ought to be resampled to about 1800x1200 pixels first (3:2), to about 300 pixels per inch size.
However, if we want to print this image on 8x10 paper, the paper shape is different (4:5 aspect ratio) than the image (3:2), and some of the image will be lost (cropped, outside the paper edge, off the paper - the shapes are simply different). Or we could choose to fit the tightest dimension, leaving blank white borders the other way (we hate that too). We had exactly the same issues with film, not necessarily the same shape as our paper, but digital methods are a bit different. Now, we need to do Crop and Resample and Scale when printing digital images.
Video screens also have aspect ratio. Non-widescreen monitors used to all be 4:3, and HDTV wide screen TV is 16:9. This is equally important if we are trying to fill full screen, but we are more comfortable with blank space bordering our video images, than on paper.
But digital basics are all the same for all images, so after image creation, then it is a digital image, and pixels are pixels, and dpi is only used to control the size of the printed image on paper. Video screens are dimensioned in pixels too, and have no use for any dpi number.
The camera will stick in some arbitrary dummy dpi number, just so some printed size can be shown. Camera brands vary in the dpi number they make up, but this value is a meaningless arbitrary number, confusing if we try to make any sense of it. There is NO CONCEPT of inches in the camera (just pixels). The camera can have no clue what size we might eventually print it. The image is dimensioned in pixels. We will change that dummy dpi number when we decide how we want to print it.
FWIW, I am saying dpi for "pixels per inch". I am aware that nowadays, some instead prefer to say ppi for same thing, but I am also aware it has always been called dpi. Yes, I am aware that printing devices have another second use for dpi, meaning ink drops per inch, including halftone screens. If interested or confused about dpi, see more details here.
A scanned image has different creation concerns than a digital camera image. At any one setting, the camera makes images of the one size, dimensioned in pixels. The pixels are already defined and created, and it is what it is. However, the scanner requires we specify scanning resolution to create the image size our goal requires, so there is a little more to it.
The one overall rule remains: Printing is best when we have about 300 pixels per each inch of paper to be covered (300 dpi). Note again that dpi is NOT a property of the image, it is simply "just a number" inserted into the image file to establish print size (the inches are on the paper). Review the dpi section above again.
Plug your own numbers into these examples.
A ten megapixel compact camera image might be 3648x2736 pixels (3648/2736 = 1.33, which is 4:3 shape). 8x10 paper is 10/8 = 1.25, which is 5:4 shape. The aspect ratio, or shape, of the image and paper are different. The image shape will not fit the paper shape exactly, not without cropping.
Printing this image at 8x10 inches can compute either 2736 pixels / 8 inches = 342 dpi, or 3648 pixels / 10 inches = 365 dpi.
Most photo labs will choose the first option. But if you first crop the image yourself to 8x10 paper aspect ratio, you can choose what you will get. See the Image Resize page for more details.
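When the image shape and paper shape differ, the two possible dpi results above come from fitting either the short sides or the long sides. A sketch of that computation (the function name is an invention for illustration):

```python
def print_dpi_options(img_w, img_h, paper_w_in, paper_h_in):
    """Two ways to fit an image to paper of a different shape:
    fill the paper (match short sides, crop the long side excess),
    or fit entirely (match long sides, leaving white borders)."""
    fill_dpi = min(img_w, img_h) / min(paper_w_in, paper_h_in)
    fit_dpi = max(img_w, img_h) / max(paper_w_in, paper_h_in)
    return round(fill_dpi), round(fit_dpi)

# 3648x2736 pixels on 8x10 paper: fill (and crop) at 342 dpi, or fit with borders at 365 dpi
print(print_dpi_options(3648, 2736, 8, 10))  # (342, 365)
```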
A DSLR 12 megapixel image might typically be 2,848 x 4,288 pixels in size. 3:2 shape. Digital images are dimensioned only in pixels, and to use that image in any way, these dimensions are all important. And the image shape can be important, especially if printing on paper, which also has a shape.
If we upload it to an online photo lab and specify to print 8x10 inches, this implies printing at 2848 pixels / 8 inches = 356 dpi (and the long ends will be cropped). Their chemical printer is probably set to print at about 250 dpi capability, but this small excess will work OK. Possibly you cropped the image a bit smaller first anyway (cropping away excess blank space at borders improves many pictures).
But if we ordered a 4x6 inch print size, this one implies printing at 2848 pixels / 4 inches = 712 dpi. Which is absolutely outlandish, so the lab will first resample it smaller, to about 250 dpi size. No harm done, except your upload was much larger and slower than is reasonable for this goal. You could have prepared it better. See the Image Resize page for more details.
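Preparing the upload is just multiplying print size by the lab's printing resolution. A sketch, assuming a 250 dpi lab (check your own lab's number):

```python
def upload_pixels(print_w_in, print_h_in, lab_dpi=250):
    """Pixels worth uploading: print size x the lab's printing resolution.
    lab_dpi=250 is an assumption here; labs vary (typically 250 to 300)."""
    return round(print_w_in * lab_dpi), round(print_h_in * lab_dpi)

# A 4x6 inch print at 250 dpi can only use about 1000x1500 pixels
print(upload_pixels(4, 6))   # (1000, 1500)
print(upload_pixels(8, 10))  # (2000, 2500)
```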
Images for a video monitor are pretty small. An HDTV screen might be 1920x1080 pixels, or about 2 megapixels. More pixels cannot help it. Few computer monitors are larger. A large web page image might (arbitrarily) be 900x600 pixels, about half a megapixel. Presenting a 12 megapixel image there works, but is pretty lame, since the viewing software must resample it smaller (on every viewing) to a size that fits the screen, which is slow and unnecessary. Viewing our original images one time on our own monitor is one thing, but a web site intends to show the image many times, and it has no use for a 12 megapixel image (unless maybe the site offers to sell large prints). We really ought to resample it to the proper smaller goal size one time, preparation done right, instead of requiring every web visitor to download all the excessive bytes and resample it again and again (which is truly lame). Most regular image hosting sites will do this resample for you, but on your own site, YOU have to do it first.
The way things work is this: (if bothered by the numbers, just skip to the last paragraph below, Method B there).
(8 inches x 300 dpi) x (10 inches x 300 dpi) = 2400 x 3000 pixels. (Plug in your own numbers)
8 inches x 300 pixels per inch = 2400 pixels needed.
10 inches x 300 pixels per inch = 3000 pixels needed.
This is why we must create (approximately) 2400 x 3000 pixels, in order to space them 300 pixels per inch over 8x10 inches of paper. If we have different pixel dimensions, an 8x10 inch print will necessarily compute different dpi values. Hopefully it will come out near the 300 dpi range however.
This image size is always the goal (in this example) to print 8x10 inches at 300 dpi. If scanning a different input area or size, or printing a different output print size, just plug in your numbers, but still exactly the same idea.
Size need not be precisely this, but should be in this ballpark, say within +/- 20% if possible. Most online photo labs print at 250 dpi, if you give them sufficient pixels.
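The pixels-needed arithmetic, plus that +/- 20% ballpark check, as a sketch (function names are my own):

```python
def pixels_needed(paper_w_in, paper_h_in, dpi=300):
    """Pixels required to cover the paper at the given pixels per inch."""
    return paper_w_in * dpi, paper_h_in * dpi

def in_ballpark(pixels, needed, tolerance=0.20):
    """Within +/- 20% of the goal is close enough."""
    return abs(pixels - needed) / needed <= tolerance

print(pixels_needed(8, 10))       # (2400, 3000) for 8x10 inches at 300 dpi
print(in_ballpark(2848, 2400))    # True: 2848 pixels is within 20% of 2400
print(in_ballpark(1500, 2400))    # False: too few pixels for this goal
```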
Of course, you can and should plan and scan and archive one time for your largest expected future purpose. Then you can simply resample a smaller copy for using it for smaller purposes (which is no big deal). However, we ought to remain rational when considering the largest feasible size we will ever realistically need. Important commercial or historical images may be exceptions to plan for, but snapshots of the dog may not be exceptional. Only you know your usage goal, but images for printing 4x6 inches, or for large HDTV display, can only use about 2 megapixels.
You scan at the necessary scan resolution to create that goal size (pixels). Proper scan resolution depends on what size you are scanning, and what size you will print it, as follows:
Film is typically small, and must be enlarged. High scan resolution is the tool used for enlargement purposes, as follows:
Scan resolution creates the pixels for enlargement. The optimum scan resolution is always:
(print size / film size) x 300 dpi. Or, Enlargement factor x printing resolution.
For example, if you scan at 300 dpi (input, 100% scale), then also printing at 300 dpi will reproduce at original size (300 dpi input, 300 dpi output, no enlargement). This is of course the usual case for normal copy purposes (no enlargement).
If you want to print double size at 300 dpi, scan at 2x300 dpi = 600 dpi.
If you want to print half size at 300 dpi, scan at 1/2 x 300 dpi = 150 dpi.
To print slides at 9x size, scan at 9x300 dpi = 2700 dpi (input). This creates the pixels necessary.
Fill in your own numbers, for your own goal, but these are the basics which must be understood. If you are cropping substantially (smaller Input size), then of course greater enlargement and higher scan resolution is necessary. But the simple fact is that we ALWAYS need to have around 2400x3000 pixels to print 8x10 inches at 300 dpi (by definition of dpi, pixels per inch).
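The scan resolution rule above, as a one-line sketch (enlargement factor x printing resolution):

```python
def scan_dpi(print_size_in, film_size_in, print_dpi=300):
    """Optimum scan resolution = (print size / film size) x printing resolution."""
    return (print_size_in / film_size_in) * print_dpi

print(scan_dpi(10, 10))  # 300.0 dpi: copy at original size, no enlargement
print(scan_dpi(2, 1))    # 600.0 dpi: print double size at 300 dpi
print(scan_dpi(9, 1))    # 2700.0 dpi: 9x slide enlargement at 300 dpi
```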
For this 8x10 example, (8 inches / 0.92 inches) x 300 dpi = 2608 dpi (I am rounding up, to 2700 dpi, due to very slight cropping usually being necessary). 300 dpi need not be exact - we do want to print in the 300 dpi ballpark, but 240 to 360 dpi will be equally good quality. If it comes out 311 dpi, or 269 dpi, that is very fine. However, much less than 200 dpi is a lot less fine.
Note that the goal is either A: 2700 dpi 100% scale, or B: 300 dpi 900% scale. Our goal is 8x10 inches at 300 dpi, so DO NOT specify 8x10 inches at 2700 dpi. That is a very serious common first mistake - to incorrectly enter values of 2700 dpi 900% scale, which scans at about 9x2700 = 24000 dpi and 8x10 is near 2 Gigabytes then (and useless). You probably will not do this twice, but many do it the first time.
Method A. Scan at 2700 dpi 100% scale, which creates the necessary pixels, but that image file (as is) will print original film size at 2700 dpi (definitely NOT our goal). However, we understand that we will simply scale it to print 8x10 size later, at print time, very trivial (NOT a resample, but simply editing the print size to say 8x10 inches, and it will then show 300 dpi). We will have enough pixels now. Again, the simple requirement for 8x10 inches at 300 dpi is
(8 inches x 300 dpi) x (10 inches x 300 dpi) = 2400 x 3000 pixels.
Method B. Or much simpler (and allows automatic computation of all these numbers), simply just scan at 300 dpi 8x10 inches (Output), and the scanner will compute and show the corresponding Input values for you.
The scanner software knows how to create pixels for the 8x10 inches at 300 dpi specified. It is NOT scanning at 300 dpi. Actual scan resolution is the Enlargement Factor x the 300 dpi Output specified.
Input is the area on the original document being scanned. Output is the pixels created for printing. The numbers (resolution and inches) will be the same numbers only at 100% scale, for copy reproduction at same original size. They will be different numbers for size enlargement or reduction (when Not 100% scale). When the scale is Not 100%, then the Resolution number entered is NOT the input Scan Resolution, but is instead the output Printing Resolution. Still true at 100% scale, but at 100% scale, they are the same numbers.
The result is the 8x10 inches at the 300 dpi Output specified. This will then be good to go, ready to print 8x10 inches at 300 dpi. The scanner will do the computations, and will know the size area you are scanning, and how much scan resolution is necessary to do it. Just mark your Input area, and tell it 8x10 inches at 300 dpi Output. (This 35mm film example will still scan at 2700 dpi - the 9x enlargement, at 300 dpi Output). This method B still creates the same number of pixels as A above; there is no difference in the scanned image, but B is already scaled to print 8x10 inches at 300 dpi (and A is not - you still must do that yourself). The image size need not be precisely these numbers (exact pixels), but approximately so - it certainly should be in the ballpark (or else it simply is not right; 300 dpi means it covers a span of paper by printing 300 pixels per inch, which needs that many pixels).
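We can verify that Method A and Method B create the same pixels. A sketch, using the article's numbers (the 0.92 inch film height and the exact 2608.7 dpi enlargement value, before any rounding up to 2700):

```python
film_h_in = 0.92   # 35mm frame height in inches (the article's example)
out_h_in = 8       # printed height goal
out_dpi = 300      # printing resolution goal

# Method A: scan at (enlargement factor x output dpi), 100% scale
a_scan_dpi = (out_h_in / film_h_in) * out_dpi   # about 2608.7 dpi
a_pixels = round(film_h_in * a_scan_dpi)        # pixels created across the frame

# Method B: specify 8 inches at 300 dpi Output; scanner computes the rest
b_pixels = out_h_in * out_dpi

print(a_pixels, b_pixels)          # 2400 2400
print(a_pixels == b_pixels)        # True: same pixels either way
```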
One subtlety, important to some critical users: The above works quite well, of course, but this computation may call for scanning at an odd number like maybe 2642 dpi, or even 2700 dpi. The scanner sensor hardware can only scan at specific values (integer divisions of its maximum resolution specification), like perhaps 1200 dpi, 2400 dpi, 4800 dpi, 9600 dpi (these will be its menu choices). An in-between value requires the scanner to scan at one of those hardware values anyway, then resample the scan lines and step the carriage motor unevenly. Perhaps of little concern for some, but when convenient, a common technique is to scan at the next larger hardware menu value (1200, 2400, 4800, 9600 dpi), and then resample smaller to the desired size in a photo editor, at higher quality, since the editor has all the data available at resample time.
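Picking that next larger hardware value is a simple lookup. A sketch, assuming the example menu choices above (substitute your scanner's actual menu):

```python
def next_hardware_dpi(wanted_dpi, menu=(1200, 2400, 4800, 9600)):
    """Pick the smallest true hardware resolution >= the computed scan value.
    The menu tuple is an assumed example; use your scanner's actual choices."""
    for dpi in sorted(menu):
        if dpi >= wanted_dpi:
            return dpi
    return max(menu)  # computed value exceeds the hardware; scan at maximum

# Computed 2642 dpi: scan at 4800 dpi, then resample down in the photo editor
print(next_hardware_dpi(2642))  # 4800
print(next_hardware_dpi(1200))  # 1200 (already a hardware value)
```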