These basics are pretty much about one single issue: How do I use my image, how do I make it the proper size for viewing, for printing, or for the video monitor? All this is really quite easy; digital is just a new concept. It is like learning to drive - once you learn an easy thing or two, it is a skill helpful for life. When you know, you will simply just know. But yes, it does seem that we could subtitle this: Details that no beginner wants to know. However the point is: You'll never grasp digital images until you get it ... until you know what digital images are, what to do with them, and how to do it.
Seriously, once we accept that pixels actually exist, then all this stuff is rather easy. It's all about pixels.
We just gotta know about pixels, and if there is any mystery, a short primer is here: What is a Digital Image Anyway?.
This page tries to be a quick summary of the digital concepts, about how things work. The answer to virtually any question about image size starts with one of these basics. To be able to use digital images well, we need this understanding. This may perhaps be written a little like an argument, refuting the dumb incorrect myths we may have heard about how digital works. The concepts below are instead what you need to know to use digital images properly. It is actually rather easy to grasp, if you get started right.
The size of an image might be, for example, 4000x3000 pixels. That is 4000x3000 = 12 megapixels. Or, 4288x2848 is also 12 megapixels (rounded off). We tend to think of this as the "resolution" of the image. The pixels do indicate the "fineness" of the smallest possible digital detail (a pixel, which is a dot of one color). This example is borrowed from the image Resize page, to show the idea about pixels.
Pixels are how digital reproduces a scene and its colors. The digital camera merely takes many color samples (each is a pixel) of many very tiny areas, as shown here. Film uses tiny specks of silver or emulsion dyes instead of pixels, which are not digital numbers, but film does the same sampling idea (colors of many tiny areas). Film areas actually show the color, which we can see. However, digital is totally about pixels, which are numbers representing the color. For example, the reddest orchids above have RGB components of about RGB(220, 6, 136) - red is bright, green is weak, blue about mid-range - which describes that shade of red in one tiny area, a pixel. We don't have to know much, but see Wikipedia about the RGB color system.
The main concept of digital is that each pixel is just NUMBERS, binary data describing ONE RGB COLOR for one tiny area, much like one small colored tile in a mosaic tile picture. The numeric concept may be new today (called digital), but the tile concept is 5000 years old. Our brain recognizes the reproduced image in those pixels or tiles. But enlarge these enough, and all you will see is the pixels (or the individual tiles). Pixels are all there is in a digital image, and we must think of it that way. Ignoring them means Not grasping the concept. Digital will make sense when you do think of pixels.
Pixels are real, they exist, in fact, pixels are ALL that exist in digital images. There is nothing else in a digital image. We don't need to see the individual pixels, but the image Size dimension in Pixels is the First Thing To Know about using any digital image, because this size in pixels is what is important for any use. The size of a digital image is dimensioned in pixels.
Human eyes have rods and cones which are a similar sampling system of tiny areas. Cones are color sensitive, some are red cones, and some are green or blue cones. Sampling the color of tiny areas, not unlike pixels in that way. The color difference of adjacent areas is how image detail is perceived. We see a black power wire running across a blue sky because the colors are different. Color difference is the detail that we perceive (including slightest tonal shades of same color). In our digital pictures, a pixel is the smallest dot of color that can be reproduced, so we do think of more and smaller pixels as greater resolution of detail.
However, digital reproduction is a "copy" of an image. We should also realize that it is the camera lens that creates the image that we will reproduce digitally, and pixels are the detail of reproducing the lens image. For example, in a DX cropped sensor camera, the original is the image from the lens projected onto the 24x16 mm DX digital camera sensor. The image has this 24x16 mm size there, comparable to the size of an APS-C film image. Then, the camera pixels merely digitally sample that lens image (very much like any scanner samples an image, meaning taking many color samples called pixels) to try to digitally reproduce (convert to numbers) the image that the lens created. A pixel is just numbers, three binary RGB numbers representing the red, green and blue components of the color of the area of that pixel. The pixels do NOT create the image, and cannot improve the lens image detail. The pixel sampling merely strives to reproduce its detail. At best, it can hopefully be a very good reproduction. A 24 megapixel DX image and a 24 megapixel FX image are NOT equal, because the FX image is simply half again larger (36x24 mm), and so does not have to be enlarged as much to show it.
A pixel is just numbers that represent a color, specifically, the three RGB numbers of a color specification - which represent the average color that was sampled from this tiny dot of image area. When the image is viewed or printed, each little dot of image area is shown to be that corresponding color. In that way, digital images are a little like mosaic tile pictures (but in an ordered grid pattern). Each little dot is a color, and our human brain puts them together to recognize an image in all those colored dots. If it is an ordinary standard 24-bit RGB image (like JPG), the pixel data is one byte for each of the Red, Green, Blue components of the pixel, which is three bytes per pixel. So if 12 megapixels, then x3 is 36 million bytes of data (assuming the standard 24 bits). That is simply the actual data size of any 12 megapixel RGB image data, however you may see it compressed much smaller while it is in a JPG file (JPG file size is much smaller than the image data size, via JPG compression). But when that file is opened, it is full size again in computer memory, three bytes per pixel (24 bits). For other than 24-bit, and for the special interpretation of "megabytes", see more detail, and also for a calculator to convert bytes, KB, MB, GB, and TB.
The size of that image data when opened in memory is in bytes of memory. 24-bit RGB images (8-bit color) always use three bytes of RGB data per pixel. So bytes are the "data size", but "image size" is always in pixels. Whereas, inches only refer to the paper where these pixels will be printed.
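As a quick check of this arithmetic, here is a minimal sketch in Python (the function name is just for illustration, not any standard library call):

```python
def data_size_bytes(width_px, height_px, bytes_per_pixel=3):
    """Uncompressed data size of an image: pixels x bytes per pixel.
    Standard 24-bit RGB (like an opened JPG) is 3 bytes per pixel."""
    return width_px * height_px * bytes_per_pixel

# A 12 megapixel image, 4000x3000 pixels, at 24-bit color:
size = data_size_bytes(4000, 3000)
print(size)              # 36000000 bytes of data when opened in memory
print(size / 1024**2)    # about 34.3 binary megabytes
```

The JPG file on disk will be much smaller than this, but this full data size is what the opened image occupies in memory.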
A JPG file is compressed to be maybe 1/10 this data size (roughly, can be very variable), while in the JPG file, but 12 megapixels opens again to 36 million bytes in memory. JPG uses lossy compression, which means we can specify High JPG Quality for a larger better file, or Low JPG Quality for a smaller worse file (when file size is more important than image quality). See JPG.
This calculator tries to make the point that images involve four different sizes, used for different purposes. The numbers used to describe the actual size of the image is width x height, in pixels.
Data size is the uncompressed data, the actual data size - how large your uncompressed image data actually is - normally 3 bytes per pixel (usual RGB, for example JPG files). Compressed File Size in bytes is the least useful number, only of interest for internet transfer or memory card capacity. But the pixel count is the important number which determines how an image can be used.
When in editor memory, Raw is converted to 16 bit RGB. However JPG is always 8 bits per RGB channel, or 24 bit color.
The compressed file size will be smaller (variable cases, but JPG will be much smaller, file perhaps only 10% or 15% of data size).
Exif data will be added, and a few formatting bytes. Camera Raw image files also contain a Large JPG image too (this JPG is to show on the camera rear LCD, and it provides the histogram too).
If someone tells us they are sending us a 9.1 MB file, that tells us maybe how it will fill our disk, but it tells us absolutely nothing about the image, or about the image size, or about how we might use it. Images are dimensioned in pixels.
For example, for a 12 megapixel image:
A few specifics about Data Size: (See formats and megabytes). Bytes are 8-bit numbers, with values ranging from 0..255. Because 2 to the power of 8 is 256, which is the count of values (0..255) that can be stored in 8 bits. Larger numbers require multiple bytes.
The data in JPG files especially is dramatically compressed, in variable degree, typically to perhaps only 1/4 to 1/12 of Data size, but too much JPG compression can reduce image quality. The JPG file size varies widely with the JPG Quality setting. High JPG Quality is a larger file but a better image, and Low JPG Quality is a smaller file but a worse image (but who wants lower quality?) The JPG Quality number is a better quality guide than the file megabyte size. We should always favor a larger JPG file size, because smaller is counter-productive to quality. For the file to be so small, JPG is lossy compression, meaning liberties are taken, so that recovery is not perfect, and image quality can be reduced. We still get the same megapixel count back out, but the pixels you wrote into the JPG file are not necessarily quite the same (color of) pixels you see when opened to retrieve them (see JPG Artifacts). A pixel is only the color definition of a tiny spot of area, so a JPG artifact is a pattern of changed colors. Color difference is the detail we can detect and observe.
The camera menu JPG choices affecting file size are:
However, an image for a large video monitor or HDTV, or for a 4x6 inch print, needs only about 2 megapixels. If these are our only goals, and if we do want a smaller file, then for best image quality, I suggest that Small Fine is a greatly better choice than Large Basic (but Small won't print 8x10 inches as well, nor will it allow as much cropping).
The terms "Normal" and "Basic" are arguable, compression is the opposite of best image quality, and Fine is the better default (why would we want less quality?) Lossless compression (choices other than JPG) is less effective to reduce file size, because lossless has to promise to preserve and deliver the full quality of the image (no heroic shortcuts, no quality losses). Notice that lossless compression can still be impressively small, but maybe not incredibly small. The Windows file Explorer "Properties" will show file size in MB and in bytes.
The RGB image Data size is always the X by Y pixel dimensions times 3 bytes per pixel, which is simply how large your data is (for JPG and other 24 bit images). But the compressed file size varies somewhat with the individual image content in the scene (much fine detail is larger, large blank areas compress smaller). If you have a couple hundred camera JPGs in one disk folder (if all are the same size settings from the same camera, but are very varied image content), and click to sort them by file size, the largest (most detailed) JPG is probably about 2x larger than the smallest (least detailed), with the average size more in the middle.
Make no mistake though, Image size is dimensioned in pixels. It is always all about pixels. Digital cameras create pixels. Inches are only about the specific piece of paper. Bytes are only about memory. Pixels are about the image.
Continuing now with the list of Essentials to Know to USE images. This is the part that confuses people (about dpi), but it is pretty simple, and this should clarify.
This is a very big deal. Printers print on paper which is dimensioned in inches, but video screens are instead dimensioned in pixels (there is no concept of inches in video systems). This difference gets our attention. These devices do NOT work alike. They both show the same pixels in their way, but the basic concepts are quite different. Printers space the pixels on paper, at perhaps 300 pixels per inch of paper. Video monitor screens show the image pixels directly one for one on the monitor pixels.
When I say Video, I don't mean movies, instead I mean the monitor viewing screen, computer or TV. The video screen size is dimensioned in pixels, and the image is dimensioned in pixels, and the pixels are simply shown directly - without any concept of dpi. The video screen simply shows pixels one for one - one image pixel on one monitor pixel. So for example (one pixel of image on one pixel of screen), an image 800 pixels wide will fill exactly half the width of a 1600 pixel screen width. People telling you the image needs to be 72 dpi for the screen or web are simply just wrong. Video shows pixels, with no concept of inches or dpi. On video screens, it does not matter at all what the dpi number is. The screen shows pixels directly.
When we show a big image, larger than our viewing screen (both are dimensioned in pixels), our viewing software normally instead shows us a temporary quickly resampled copy, small enough to fit on the screen so we can see it, for example, perhaps 1/4 actual size (this fraction is normally indicated to us, so we know it is not the full size real data). We still see the pixels of that smaller image presented directly, one for one, on the screen, which is the only way video can show images. When we edit it, we change the corresponding pixels in the original large image data, but we still see a new smaller resampled copy of those changes.
Dpi and inches are unknown concepts (not used) in video systems, or in digital cameras.
The dpi value shown in camera images is just some clutter in the file header, merely a separate arbitrary number which has not affected the pixels in the image file in any way. Dpi is only for printing, or for scanning. The scanner does assign the scaled dpi number you choose when scanning, so that has meaning, it will print that size. But the camera just assigns some meaningless arbitrary dpi number to the image file (print size might indicate a few feet). Of course, it has no clue what size you might choose to print it later, if you even decide to print it. Otherwise, it simply does not matter what this dpi number is, it has no use, not until the time you actually print it on paper, when you will decide an appropriate value (see Scaling below).
There is no concept of inches or dpi used in the video system. It doesn't matter if the monitor is a 12 inch screen or a 72 inch HDTV screen, if it is set to show 1920x1080 pixels, it will show 1920x1080 pixels (about 2 megapixels). Both monitor sizes show the SAME 1920x1080 image pixels, just at different sizes on the two physical screens. You might think you are showing your image to be, say 8 inches wide on your computer monitor, but it probably will show a different size on some other monitor of different size or different resolution setting. In our photo editor, we would see whatever size the image actually is (in pixels), but large images are normally resampled to show a copy that fits on the screen. We don't all see the same size in video, it depends on the screen size (both pixels and inches). Especially for web images, the site has no clue what monitor might view it. Yes, all of our 8x10 inch paper is the same size, but there is no concept of inches or dpi in any video system. Video shows pixels, directly. Really pretty simple (but different).
If the image dimension is 3000 pixels, and if printed at 300 pixels per inch, the image will cover 3000/300 = 10 inches on paper. The image contains pixels, but all of the inches are on the sheet of paper. Within a reasonable small range, we can print different sizes by just spacing the same pixels differently (or for a larger range, we could resample the pixels to be a different image size). The only purpose of the dpi number is to space the pixels, pixels per inch, on paper. We can change this dpi number at will, to print different sizes on paper, without changing any pixel at all (called scaling).
3000 pixels / 400 dpi = 7.5 inches of paper
3000 pixels / 300 dpi = 10 inches of paper
3000 pixels / 250 dpi = 12 inches of paper
3000 pixels / 200 dpi = 15 inches of paper
Or the other way, 3000 pixels / 11 inches = 272 dpi (scaling, next below)
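The arithmetic above is simple division, both directions; a small Python sketch (hypothetical helper names, not part of any printing software) shows it:

```python
def print_inches(pixels, dpi):
    """Inches of paper covered when pixels are spaced at dpi (pixels per inch)."""
    return pixels / dpi

def scaled_dpi(pixels, inches):
    """The dpi needed to fit a pixel dimension into a given paper dimension."""
    return pixels / inches

print(print_inches(3000, 300))   # 10.0 inches of paper
print(print_inches(3000, 200))   # 15.0 inches of paper
print(scaled_dpi(3000, 11))      # about 272.7 dpi to fit 3000 pixels on 11 inches
```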
If you print the image at home, from the image editor File - Print menu, the computer will use the dpi value in the file to compute the size of the image on paper. If it is 4000 pixels and says 180 dpi, it will try to print 4000/180 = 22.2 inches size. This is the only use for dpi in camera files (printing). Some print menus offer a way you can scale the size first however, to print a different size. If you scale this image to print 10 inches (to fit the paper), then it will scale to print at 4000/10 = 400 dpi (inkjets really cannot, but they try).
If you upload the image file to be printed somewhere, they don't ask dpi, they only ask what size to print the pixels that you provided. They will scale it for you. If you upload a dimension of say 2000 pixels, and ask them to print it 10 inches, you will necessarily get 2000 pixels / 10 inches = 200 dpi result. Most online printers have 250 dpi capability, which is a good upload goal. There is no point in uploading way more pixels than they can possibly print.
Scaling is adjusting the value of the dpi number itself in order to fit the image pixels to the paper size, for printing.
Word definition: A scale is a graduated measurement, like a map scale, and scaling is creating a proportionate size or extent, in this case of pixel distribution relative to the paper dimension. Scaling is computing that 3000 pixels printed at 300 pixels per inch will scale to cover 3000/300 = 10 inches of paper. Or scaling to 200 dpi size, 3000 pixels / 200 dpi = 15 inches of paper. The dpi number scales the pixel size so the overall image dimension fits the paper (more specifically, dpi scales the image size into inches, for paper, like in a book.)
So in any existing image, the only purpose of dpi is about scaling the image size on paper, pixels per inch. And of course, that numerical dpi result should also be an acceptable printing resolution for good quality. Just saying, printing at 100 dpi will be pretty poor (but 3000 pixels will print 30 inches then). Also excessively high values like 500 dpi will be pointless, just wishful thinking (but 3000 pixels will print 6 inches then). Printer capabilities are such that we can expect best results around 250 to 300 dpi, so we supply sufficient pixels to print the size we want, for example, 2500 to 3000 pixels for 10 inches. See how easy this is?
Normally, our usual goal is that we try to print photo images at about 250 to 300 dpi. This is the capability of the printers (designed for the capability of our eye to see it). 250 to 300 dpi is good for our printers at home, and also good for printing services such as Shutterfly.com, Mpix.com, Snapfish.com, Walmart, etc. We adjust for the paper size by Scaling the image (setting the dpi number value to print that size). Or, if the image is much too large, we Resample it to be smaller, so that we can scale to around 300 dpi. We also need to crop it to the same shape as the paper. See Resize Images about Cropping and Scaling and Resampling, to fit and print the image.
If we print the image on our home printer, by selecting menu File - Print, the printer will honor the dpi number specified in the file, and will print the pixels at the size (inches) determined by the pixel dimensions and the specified pixels per inch number.
(Pixel dimension) / (paper dimension inches) = pixels / inches = pixels per inch
If we send the image out somewhere to be printed, and specify "print this 5x7 inches", they will. They will necessarily ignore our dpi number, and will rescale the image to the necessary dpi number to print the requested 5x7 inches (to cover the 5x7 inches with the provided pixels). The printer machine only has capability in the 250 to 300 dpi range. If their scaled dpi number comes out higher than 250 or 300 dpi, it won't hurt, but it cannot improve the quality. You can upload your 12 megapixel images to them, but if printing 6x4 inches, then about 1500x1000 pixels is all that can help (250 dpi). I am being ambiguous about 250 vs 300 dpi, normally it won't matter much which we use (we are at limits), but both will print slightly better than 200 dpi.
However (a major point), changing this dpi number will cause absolutely no change at all on the video screen (unless resampling is also selected). Video is not concerned with dpi or inches. Video ignores any dpi number, and simply shows the pixels directly, one for one, one image pixel on one video pixel location. No matter what number the dpi says, you will never see any effect of it on the video screen, which simply just shows the pixels directly. See an example of that.
Printing paper also has a similar shape, and the same Aspect Ratio applies. For example, 6x4 inch paper is also 3:2 aspect ratio. If we print THIS image on THIS paper, it will fit - the shapes are the same 3:2 aspect ratio (3000x2000 pixels is quite excessive though, for 4x6 inches), and really ought to be resampled to about 1800x1200 pixels first (3:2), to about 300 pixels per inch size.
However, if we want to print this image on 8x10 paper, the paper shape is different (4:5 aspect ratio) than the image (3:2), and some of the image will be lost (cropped, outside the paper edge, off the paper - the shapes are simply different). Or we could choose to fit the tightest dimension, leaving blank white borders the other way (we hate that too). We had exactly the same issues with film, not necessarily the same shape as our paper, but digital methods are a bit different. Now, we need to do Crop and Resample and Scale when printing digital images.
Video screens also have aspect ratio. Non-widescreen monitors used to all be 4:3, and HDTV wide screen TV is 16:9. This is equally important if we are trying to fill full screen, but we are more comfortable with blank space bordering our video images, than on paper.
But digital basics are all the same for all images, so after image creation, then it is a digital image, and pixels are pixels, and dpi is only used to control the size of the printed image on paper. Video screens are dimensioned in pixels too, and have no use for any dpi number.
The camera will stick in some arbitrary dummy dpi number, just so some believable printed size can be shown. If they didn't, Photoshop would automatically call the blank value 72 dpi, which indicates some unreal print size in feet, so the cameras do stick in a dummy dpi number, maybe 200 to 300 dpi. They don't know what size you may print it later. Camera brands vary in the dpi number they make up, but this value is a meaningless arbitrary number, confusing if we try to make any sense of it. There is NO CONCEPT of inches in the camera (just pixels). The image is dimensioned in pixels. We will change that dummy dpi number when we decide how we want to print it.
FWIW, I am saying dpi for "pixels per inch". I am aware that nowadays, some instead prefer to say ppi for same thing, but I am also aware it has always been called dpi. Yes, I am aware that printing devices have another second use for dpi, meaning ink drops per inch, including halftone screens. If interested or confused about dpi, see more details here.
A scanned image has different creation concerns than a digital camera image. At any one setting, the camera makes images of the one size, dimensioned in pixels. The pixels are already defined and created, and it is what it is. However, the scanner requires we specify scanning resolution to create the image size our goal requires, so there is a little more to it.
The one overall rule remains: Printing is best when we have about 300 pixels per each inch of paper to be covered (300 dpi). Note again that dpi is NOT a property of the camera image, it is simply "just some number" inserted into the image file to establish print size (the inches are on the paper). Review the dpi section above again.
Plug your own numbers into these examples.
A ten megapixel compact camera image might be 3648x2736 pixels (3648/2736 = 1.33, which is 4:3 shape). However 8x10 paper is 10/8 = 1.25, which is 5:4 shape. Or 4x6 paper is 6/4 = 1.5 shape. The aspect ratio, or shape, of the image and paper are different. This image shape will not fit these paper shapes exactly, not without cropping.
Printing this image at 8x10 inches can compute either 2736 pixels / 8 inches = 342 dpi, or 3648 pixels / 10 inches = 365 dpi.
Most photo labs will choose the first option. But if you first crop the image yourself to 8x10 paper aspect ratio, you can choose what you will get. See the Image Resize page for more details.
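The aspect-ratio comparison above can be sketched in a few lines of Python (the helper name is just illustrative):

```python
def aspect(w, h):
    """Aspect ratio as long side / short side."""
    return max(w, h) / min(w, h)

img_w, img_h = 3648, 2736               # 10 megapixel compact camera image
print(round(aspect(img_w, img_h), 2))   # 1.33, the 4:3 image shape
print(aspect(10, 8))                    # 1.25, the 5:4 paper shape

# The two possible dpi results when printing this image at 8x10 inches:
print(img_h / 8)     # 342.0 dpi (fits the short side; the long ends get cropped)
print(img_w / 10)    # 364.8 dpi (fits the long side; blank borders the other way)
```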
A DSLR 12 megapixel image might typically be 2,848 x 4,288 pixels in size. 3:2 shape. Digital images are dimensioned only in pixels, and to use that image in any way, these dimensions are all important. And the image shape can be important, especially if printing on paper, which also has a shape.
If we upload it to an online photo lab and specify to print 8x10 inches, this implies printing at 2848 pixels / 8 inches = 356 dpi (and the long ends will be cropped). Their chemical printer is probably set to print at about 250 dpi capability, but this small excess will work OK. Possibly you cropped the image a bit smaller first anyway (cropping away excess blank space at borders improves many pictures).
But if we ordered a 4x6 inch print size, this one implies printing at 2848 pixels / 4 inches = 712 dpi. Which is absolutely outlandish, so the lab will first resample it smaller, to about 250 dpi size. No harm done, except your upload was much larger and slower than is reasonable for this goal. You could have prepared it better. See the Image Resize page for more details.
Images for a video monitor are pretty small. An HDTV screen might be 1920x1080 pixels, or about 2 megapixels. More pixels cannot help it. Few computer monitors are larger. A Large web page image might (arbitrarily) be 900x600 pixels, about half a megapixel. Presenting a 12 megapixel image there works, but is pretty lame, since the viewing software must resample it smaller (every viewing) to appropriate size to fit the screen, which will be slow, and unnecessary. Viewing our original images one time on our monitor is one thing, but if on a web site (which intends to show it many times, and it has no use for a 12 megapixel image, unless maybe the site offers to sell large prints), we really ought to resample it to our proper smaller goal size, one time, preparation done right, instead of requiring every web visitor viewing to download all the excessive bytes and resample it again and again (which is truly lame). Most regular image hosting sites will do this resample for you, but on your own site, YOU have to do it first.
The way things work is this: (if bothered by the numbers, just skip to the last paragraph below, Method B there).
(8 inches x 300 dpi) x (10 inches x 300 dpi) = 2400 x 3000 pixels. (Plug in your own numbers)
8 inches x 300 pixels per inch = 2400 pixels needed.
10 inches x 300 pixels per inch = 3000 pixels needed.
This is why we must create (approximately) 2400 x 3000 pixels, in order to space them 300 pixels per inch over 8x10 inches of paper. If we have different pixel dimensions, an 8x10 inch print will necessarily compute different dpi values. Hopefully it will come out near the 300 dpi range however.
This image size is always the goal (in this example) to print 8x10 inches at 300 dpi. If scanning a different input area or size, or printing a different output print size, just plug in your numbers, but still exactly the same idea.
Size need not be precisely this, but should be in this ballpark, say within +/- 20% if possible. Most online photo labs print at 250 dpi, if you give them sufficient pixels.
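In code form, the pixel requirement above is just multiplication (a sketch, with an illustrative function name):

```python
def pixels_needed(inches, dpi=300):
    """Pixels needed to cover a paper dimension at a given print resolution."""
    return round(inches * dpi)

# For an 8x10 inch print at 300 dpi:
print(pixels_needed(8), pixels_needed(10))            # 2400 3000

# A lab printing at 250 dpi needs slightly fewer pixels:
print(pixels_needed(8, 250), pixels_needed(10, 250))  # 2000 2500
```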
Of course, you can and should plan and scan and archive one time for your largest expected future purpose. Then you can simply resample a smaller copy for using it for smaller purposes (which is no big deal). However, we ought to remain rational when considering the largest feasible size we will ever realistically need. Important commercial or historical images may be exceptions to plan for, but snapshots of the dog may not be exceptional. Only you know your usage goal, but images for printing 4x6 inches, or for large HDTV display, can only use about 2 megapixels.
You scan at the necessary scan resolution to create that goal size (pixels). Proper scan resolution depends on what size you are scanning, and what size you will print it, as follows:
Film is typically small, and must be enlarged. High scan resolution is the tool used for enlargement purposes, as follows:
Scan resolution creates the pixels for enlargement. The optimum scan resolution is always:
(print size / film size) x 300 dpi. Or, Enlargement factor x printing resolution.
For example, if you scan at 300 dpi (input, 100% scale), then also printing at 300 dpi will reproduce at original size (300 dpi input, 300 dpi output, no enlargement). This is of course the usual case for normal copy purposes (no enlargement).
If you want to print double size at 300 dpi, scan at 2x300 dpi = 600 dpi.
If you want to print half size at 300 dpi, scan at 1/2 x 300 dpi = 150 dpi.
To print slides at 9x size, scan at 9x300 dpi = 2700 dpi (input). This creates the pixels necessary.
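The scan resolution rule above (enlargement factor x printing resolution) can be sketched as:

```python
def scan_dpi(print_size, film_size, print_dpi=300):
    """Optimum scan resolution = (print size / film size) x printing resolution."""
    return (print_size / film_size) * print_dpi

print(scan_dpi(2, 1))   # 600.0 dpi to print double size at 300 dpi
print(scan_dpi(1, 2))   # 150.0 dpi to print half size at 300 dpi
print(scan_dpi(9, 1))   # 2700.0 dpi to print slides at 9x size
```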
Fill in your own numbers, for your own goal, but these are the basics which must be understood. If you are cropping substantially (smaller Input size), then of course greater enlargement and higher scan resolution is necessary. But the simple fact is that we ALWAYS need to have around 2400x3000 pixels to print 8x10 inches at 300 dpi (by definition of dpi, pixels per inch).
For this 8x10 example, (8 inches / 0.92 inches) x 300 dpi = 2608 dpi (I am rounding up, to 2700 dpi, due to very slight cropping usually being necessary). 300 dpi need not be exact - We do want to print around the 300 dpi ballpark, but 240 to 360 dpi will be equally good quality. If it comes out 311 dpi, or 269 dpi, that is very fine. However, much less than 200 dpi is a lot less fine.
Note that the goal is either A: 2700 dpi 100% scale, or B: 300 dpi 900% scale. Our goal is 8x10 inches at 300 dpi, so DO NOT specify 8x10 inches at 2700 dpi. That is a very serious common first mistake - to incorrectly enter values of 2700 dpi 900% scale, which scans at about 9x2700 = 24300 dpi, and the 8x10 is near 2 Gigabytes then (and useless). You probably will not do this twice, but many do it the first time.
Method A. Scan at 2700 dpi 100% scale, which creates the necessary pixels, but that image file (as is) will print original film size at 2700 dpi (definitely NOT our goal). However, we understand that we will simply scale it to print 8x10 size later, at print time, very trivial (NOT a resample, but simply edit the print size to say 8x10 inches, and it will then show 300 dpi). We will have enough pixels now. Again, the simple requirement for 8x10 inches at 300 dpi is
(8 inches x 300 dpi) x (10 inches x 300 dpi) = 2400 x 3000 pixels.
Method B. Or much simpler (and allows automatic computation of all these numbers), simply just scan at 300 dpi 8x10 inches (Output), and then the Input will show (for example):
The scanner software knows how to create pixels for the 8x10 inches at 300 dpi specified. It is NOT scanning at 300 dpi. Actual scan resolution is the Enlargement Factor x the 300 dpi Output specified.
Input is the area on the original document being scanned. Output is the pixels created for printing. The numbers (resolution and inches) will be the same numbers only at 100% scale, for copy reproduction at same original size. They will be different numbers for size enlargement or reduction (when Not 100% scale). When the scale is Not 100%, then the Resolution number entered is NOT the input Scan Resolution, but is instead the output Printing Resolution. Still true at 100% scale, but at 100% scale, they are the same numbers.
The result is the 8x10 inches at the 300 dpi Output specified. This will then be good to go, ready to print 8x10 inches at 300 dpi. The scanner will do the computations, and will know the size area you are scanning, and how much scan resolution is necessary to do it. Just mark your Input area, and tell it 8x10 inches at 300 dpi Output. (This 35 mm film example will still scan at 2700 dpi - the 9x times enlargement, at 300 dpi Output). This method B still creates the same number of pixels as A above, there is no difference in the scanned image, but B is already scaled to print 8x10 inches at 300 dpi (and A is not, you still must do that yourself.) The image size need not be precisely these numbers (exact pixels), but approximately so - certainly should be in the ballpark (or else it simply is not right. 300 dpi means it covers a span of paper by printing 300 pixels per inch, which needs that many pixels.)
One subtlety, important to some critical users: The above works quite well, of course, but this computation may result in scanning at an odd number like maybe 2642 dpi, or even 2700 dpi. The scanner sensor hardware can only scan at specific values (integer divisions of its maximum resolution specification), like perhaps 1200 dpi, 2400 dpi, 4800 dpi, 9600 dpi (these will be its menu choices). An odd requested value requires the scanner to scan at one of those values anyway, then resample the scan lines and step the carriage motor unevenly. Perhaps of little concern for some, but when convenient, a common technique is to scan at the next larger integer division menu value (1200, 2400, 4800, 9600 dpi, menu choices), and then resample smaller to desired size at higher quality with a photo editor, which has all the data available at resample time.
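That "next larger menu value" technique can be sketched as a few lines of Python (the menu list here is just the example values from above - check your own scanner's actual choices):

```python
def next_menu_dpi(needed_dpi, menu=(1200, 2400, 4800, 9600)):
    """Smallest hardware menu resolution >= the computed scan resolution.
    Scan at this value, then resample smaller in a photo editor."""
    for dpi in sorted(menu):
        if dpi >= needed_dpi:
            return dpi
    return max(menu)   # the computed value exceeds the hardware maximum

print(next_menu_dpi(2642))   # 4800: scan at 4800 dpi, then resample down
```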