Digital images are new to some, and there are basics we need to know about how they work. If you have a question, it is hopefully answered here.
There are two very different ways to use images: printing on paper, or showing on video screens.
We scan for the capability of our output device. We choose the scan resolution based strictly on the needs of the output device that will process that image. At home, that output device is normally a printer or a video monitor.
Video monitors and printers work very differently from each other, and must be discussed one at a time. All of the rules are different for images intended for these two devices. The following material details the significance of these differences. (When I say "video", I mean the video monitor screen, not movies.)
| Properties of Printed Images | Properties of Video Images |
|---|---|
| Image size is measured on paper in inches or centimeters (paper size is also measured in inches) | Image size is measured on the screen in pixels (screen size is also measured in pixels) |
| Image size seen does NOT vary with scanned resolution | Image size seen varies with scanned resolution |
| Image size is modified on paper by scaling | Image size is modified on screen by resampling |
| Image pixels are spaced on paper using the specified scaled resolution (dpi) | Image pixels are located at screen pixel locations, one for one |
| Several printer ink dots are dithered to represent the color of one image pixel | One screen pixel location contains one image pixel, which can be of any RGB value |
So because of these great and fundamental differences, when this text says "it's this way" or "it's that way", then notice that it also says "for printing" or "for video". Don't get them out of context, because the two modes are very different, with different properties and concerns.
There are generally two different goals for creating an image (in scanner or camera), either to show it on the video screen, or to print it on paper. These uses have different rules. These digital basics are summarized here. The bottom line is that we do have to know about pixels, at least that they do exist. Our digital images are dimensioned in pixels. Pixels are ALL that there is in a digital image.
The following specifically speaks of scanning and printing to make a copy of an image or a document.
The purpose of high scan resolution is to create more pixels to provide for enlargement, so an important way to think of it is this way:
Paper is dimensioned in inches (or mm). Images are dimensioned in pixels. Paper has a fixed spatial dimension; pixels do not. We can print an image at any size (by choosing dpi), or show it on any size of video screen.
We normally want to print photos at about 300 dpi on paper. 300 dpi is simply the magic number representing the detail that a good eye can normally see at close distance. Paper is dimensioned in inches, so pixels per inch is a concern.
Honest, 300 dpi is an optimum and desirable goal for printing color or grayscale photos. This is an assumed given here, not for debate. Except your one-hour photo print shop likely uses 250 dpi, and it is adequate too. Dpi is pixels per inch (which is jargon perhaps, but naysayers see below).
Line art mode: This is mostly speaking of color or grayscale images here. However an exception is that B&W documents in line art mode (without color or tonal data) can print a little better at 600 dpi, with smaller "jaggies". But 300 dpi line art is still fair (fax is 200 dpi line art). But for more complex tonal work (color or grayscale), our printer cannot print 600 dpi tonal detail. It is not designed to try, because our eye cannot resolve more than 300 dpi detail.
To help distinguish your scanner menus about this: Line art mode is one bit data, only two colors, black ink and white paper, ideal for inked line drawings or printed text documents. Only two colors allows higher resolution, and eliminates unwanted intermediate colors, like pink or blue color casts. Some scanners call it line art mode (and I do), but other scanners may call it B&W mode (meaning ink, NOT meaning photos).
| Scanners have three scan modes | Three modes about source |
|---|---|
| Color, or maybe called RGB mode | Reflective mode (prints, paper) |
| Grayscale, or B&W Photo mode | Film slide (positive) |
| Line art, or B&W mode | Film negative (inversion, and orange mask if color) |
What we really need to know first is how to use our images. How to scan what we need for our purpose, and how to print them right, how to properly make the best use of them. We have to learn that pixels exist, and that images are dimensioned in pixels. We have to think in terms of pixels. Then it all becomes easy.
Scan resolution of 300 dpi means that the scanner will create 300 pixels for every inch scanned. Scanning five inches creates 1500 pixels across it (to reproduce five inches).
Print resolution of 300 dpi means that the printer will space the available pixels at 300 pixels per inch of paper. Printing 1500 pixels at 300 dpi will cover five inches.
So, to create a copy at original size, simply scan photos at 300 dpi, and print them at 300 dpi. Then the same dpi number means the source will be copied at same original size, at optimum 300 pixels per inch.
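This scan-then-print arithmetic can be sketched in a few lines of code (a minimal sketch with hypothetical helper names, not from the source):

```python
# Scanning at some dpi creates pixels; printing spaces those pixels back
# onto paper at the printing dpi. Same dpi both ways = copy at original size.

def scan(inches: float, scan_dpi: int) -> int:
    """Pixels created by scanning a span of the original."""
    return round(inches * scan_dpi)

def print_size(pixels: int, print_dpi: int) -> float:
    """Inches of paper those pixels cover when printed."""
    return pixels / print_dpi

pixels = scan(5.0, 300)           # 1500 pixels across five inches
inches = print_size(pixels, 300)  # printed at 300 dpi -> 5.0 inches again
print(pixels, inches)
```

Scan 300 dpi in, print 300 dpi out, and the size survives the round trip.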
We imagine high resolution is about showing greater detail, and of course it is, but the only way we can see that detail is to enlarge the image to see it better. Enlargement is the important tool: we use high resolution for enlargement, because we need more pixels to show the image larger. Of course, this is NOT speaking of scaling or resampling larger, which cannot increase image detail. This enlargement comes from greater scan resolution, sampling more pixels to reproduce finer detail from the original image.
To create sufficient pixels for print enlargement, the ratio of (scanning resolution / printing resolution) is the enlargement factor. For example,
Film is small, and so needs more enlargement, but still the same rule. One example is to scan 35 mm film at 2700 dpi, and print at 300 dpi, for 2700/300 = 9X size enlargement. 9X is about 8x12 inches (about A4 size) from full frame 35 mm (about 1.4 x 0.9 inches). The ratio of (scanning resolution / printing resolution) is the print enlargement factor.
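The film example above can be computed directly (a hedged sketch; the function name is mine, and the 1.4 x 0.9 inch frame size is the approximation from the text):

```python
# Enlargement factor = scanning resolution / printing resolution.
# Example from the text: 35 mm film scanned at 2700 dpi, printed at 300 dpi.

def enlargement(scan_dpi: float, print_dpi: float) -> float:
    return scan_dpi / print_dpi

factor = enlargement(2700, 300)   # 9x
# Full-frame 35 mm is about 1.4 x 0.9 inches; 9x gives roughly 12.6 x 8.1 in,
# close to the 8x12 inch (about A4) print mentioned above.
width_in = 1.4 * factor
height_in = 0.9 * factor
print(factor, width_in, height_in)
```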
Film is designed to be enlarged, with fine grain, so it enlarges well. Frankly, prints are only of adequate quality at original size, and they don't enlarge well (attempts will be somewhat degraded). 2x might still be acceptable for some uses, but it seems likely you won't like 3x (detail resolution will be low, spread out, not sharp). Prints are already enlarged, and are assumed to be the final purpose; the paper emulsion has coarser grain, not designed to be enlarged. In contrast, the film is the original master source, with fine detail designed to be enlarged. Or digital cameras provide the original source file, even better (depending on adequate image size). Either way, it can be better quality to go back to the original source than to plan to enlarge a print copy. Copying prints at original size should work out well though.
Note that to scan a full page size at high resolution like 2700 dpi is nonsense, surely a misunderstanding. You're not planning a 9x enlargement and will have no use for such a large image. The concept is about desired enlargement, a goal for a purpose. A full page size is already large. When you just want to print a copy of a photo print or a document page at original size, just automatically scan and print at 300 dpi. One exception, if and only if the scan is line art mode, then maybe scanning and printing line art at 600 dpi can be a little better (no tonal colors to dither, and 600/600 is still original size).
Again, if scanning something small, like a 35 mm film frame, and wanting to print it large, like full page size, then the idea is to scan it at maybe 2700 dpi at 100%, or 300 dpi at 900%, which is the same result, either of which will create enough pixels so that then printing it at 300 dpi will enlarge it to 2700/300 = 9x, like to A4 or 8x12 inch size.
But if wanting to scan something and print it as same original size, then scan at 300 dpi, and print at 300 dpi, and you will reproduce original size at optimum quality.
This is very easy, and very important to know, and this little bit to know may answer most questions.
Again, scanning 6x4 inches of paper at 100 dpi will produce
(6 inches x 100 dpi) x (4 inches x 100 dpi) = 600x400 pixels of image size.
Plug in the appropriate numbers to get the size image you want (in pixels) from what you are scanning (inches). dpi = pixels per inch.
That dpi concept is true for printing goals too, meaning that if you plan to print 8x10 inches at 300 dpi, then of course in preparation, you need to create in the ballpark of
(8 inches x 300 dpi) x (10 inches x 300 dpi) = 2400x3000 pixels.
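The "pixels needed" planning step is just inches times dpi per dimension, which could be written as (hypothetical helper, not from the source):

```python
# Pixels needed for a print = (inches x dpi) in each dimension.

def pixels_needed(width_in: float, height_in: float, dpi: int) -> tuple[int, int]:
    return round(width_in * dpi), round(height_in * dpi)

print(pixels_needed(8, 10, 300))  # (2400, 3000) for an 8x10 inch print at 300 dpi
print(pixels_needed(6, 4, 100))   # (600, 400), the earlier scanning example
```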
The scanner software tool will compute this math for you. The concept is INPUT from scanner, and OUTPUT to printer. The typical scanning menu shows, in advance, the size of the INPUT area to be scanned, the resolution and any enlargement factor, and the size of the OUTPUT image, in pixels and/or inches. Output for video screens is concerned only with output size in pixels; ignore output inches then. You can adjust the numbers until they show your output goal.
The scanner tool converts an INPUT scan area to an OUTPUT print area, using dpi and enlargement, as described here. Unless at 100% scale (1x enlargement), the dpi value entered is assumed to be the desired OUTPUT printing resolution, and the actual scanning resolution is computed from the enlargement, which not many scanners will show. The two dpi values are equal only at 100% scale, when the dpi seen is the printing resolution goal (see this page). Otherwise, the scanning resolution will be Enlargement x Printing dpi, which you should know even if it is not shown. The scanner may also have an automated beginners menu that hides much more of this detail.
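The scanning-resolution rule can be sketched as follows (a minimal sketch; the function name is mine, treating scale as the percentage shown in scanner menus):

```python
# Actual scanning resolution = printing dpi x enlargement.
# Scanner menus often express enlargement as a scale percentage (100% = 1x).

def scanning_resolution(print_dpi: float, scale_percent: float) -> float:
    return print_dpi * (scale_percent / 100.0)

print(scanning_resolution(300, 900))  # 2700 dpi actual scan for 9x (900%)
print(scanning_resolution(300, 100))  # 300 dpi; the two dpi are equal at 100%
```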
Repeating the concept: if you scan a slide at 300 dpi with 9x enlargement, the output file will be marked 300 dpi, to print at 9x size, as is.
If you scan it at 2700 dpi 100%, you get exactly the same pixels, but the output file is marked 2700 dpi, to print original size (at excessive specified resolution). We of course scale this file to 300 dpi later, before we print it (which simply changes the dpi number in the file, so the two scans are equivalent in that way).
In practice, the computed scan resolution number might come out as something like 1037 dpi. That will work, but good practice for critical work is to increase it to the next value the scanner menu offers: when 150, 300, 600, 1200, 2400, 4800 dpi are offered, choose the next larger one, 1200 dpi in this case. The final scan can then be cropped and resampled as necessary (see this page). It makes little actual difference today, but the even menu value can give a more critically precise scan (the scanner carriage motor moves in certain steps, and the scan sensor also has certain intervals, which these default values match; the photo editor can then better resample the larger full image).
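That round-up-to-the-menu step is easy to automate (a hypothetical sketch; the menu list is the one mentioned above, and real scanners vary):

```python
# Round a computed scan resolution up to the next value the scanner menu offers.

MENU_DPI = [150, 300, 600, 1200, 2400, 4800]  # example menu from the text

def next_menu_dpi(computed: float, menu=MENU_DPI) -> int:
    for value in menu:
        if value >= computed:
            return value
    return menu[-1]  # clamp at the scanner's maximum optical setting

print(next_menu_dpi(1037))  # 1200, as in the example above
```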
There is a qualification about printing. There are two methods that should be mentioned:
Any time when "fitting the paper", your job is to provide sufficient pixels to match your desired actual print size. Optimum is about 300 pixels per inch for best quality photos (or 250 dpi works well too). If you want 4x6 inches, that's 1200x1800 pixels. If you want 8x10 inches, that's 2400x3000 pixels. (Large wall posters probably have too few pixels for their size, and must be less dpi, but they are viewed at greater distances.)
There is another procedure often necessary, because camera images and print paper are often Not the same SHAPE, and so cannot be fitted to the paper as is. For example, a 4x6 image will not fit 4x5 paper. The print is not going to work out well unless they are the same shape (see this about cropping to fit). But the most essential basic fact is that images are dimensioned in pixels, and that is how we must think of it. That makes it easier, not harder, and it is very important to plan your pixels.
Sufficient pixels to print at 240 to 300 dpi is an optimum goal for printing photo images. This is true whether images are sent to an online printing service or to your own inkjet printer. More pixels really cannot help the printer, but very much less is detrimental to quality. This is very simple, but it is essential to know and keep track of.
There is a larger dpi calculator that knows about scanning, printing, and enlargement.
It's important to realize that an area scanned at 300 dpi will create the pixels necessary to also print the same size at 300 dpi. The concept either way is pixels per inch. 300 dpi is likely what you want for a photo copy job (but a line art mode scan of black text or line drawings can use 600 dpi well).
This printing dpi number does NOT need to be exact at all; ±10% of 300 dpi won't matter much. But planning size to have sufficient pixels to be somewhere near this ballpark (250 to 300 pixels per inch) is a very good thing for printing.
The role of scan resolution for the video screen is to create the appropriate image size, the dimensions of image width x height in pixels. Scanning dpi and enlargement do affect the final video image size (pixels), but on the screen, only the output image size in pixels has any importance (not inches). Video screens and printing on paper are just simply very different concepts. Video is easier actually, but we have to understand the difference. It should all be obvious, and second nature, once we know.
(6 inches x 100 dpi) x (4 inches x 100 dpi) = 600x400 pixels
It is a fundamental basic. Printing spaces pixels the same way (pixels per inch).
Inches (or mm) do exist on paper or film. Paper is dimensioned in inches (or mm).
Scanners and printers use paper. Video systems do not.
So the image pixels are pretty much directly displayed one for one on the screen, one image pixel on one screen pixel, so your target goal is the size of image that you want to see. Inches are a factor on paper, but inches are not used on the screen. The physical size you see will depend on how large a screen you bought, so size will vary on different screens. The screen is dimensioned in pixels. Images are dimensioned in pixels. A 600x400 pixel image will fill 600x400 pixels of the larger video screen. It simply does not matter to the video system if that image file is claiming 72 dpi or 300 dpi or any other value, video simply doesn't look at dpi. The 600x400 pixel image will be shown in 600x400 pixels on the video screen. This is of course the easiest way it could be.
Your digital camera does not use paper either. Film was dimensioned in mm like paper, but a digital sensor directly creates an image dimensioned in pixels. Yes, the sensor is dimensioned in mm (like film), but the "scanning" job is complete now, and what we get out of the camera is pixels. However, your camera will also stick in some arbitrary dpi number, maybe 180 or 240 or 300 dpi. At best it is just a bad guess, because of course, the camera has no clue what size you might print it, if at all. This might indicate a huge print size of a couple of feet? But that has no meaning there, don't worry about it yet. You will determine a useful print size and dpi number when you are ready to print. This dpi number does not affect the pixels that the camera creates. Dpi is just a separate lone number stored someplace in the image file to later tell the printer how to space the pixels on paper, and this number can simply be changed at will (called scaling, to match the paper size).
Digital home movies were 640x480 pixels in the past, and today are 1920x1080 or 1280x720 pixels, to match HDTV screens which are also one of the same two sizes. Digital is about pixels.
Ifs and buts ... (confusion factors)
But otherwise, video screens normally show pixels as pixels, one image pixel on one screen pixel. That does make converting paper documents dimensioned in inches be a problem on the screen, which has no concept of inches or dpi. See method about video "logical inches" (on its 2nd page).
The digital camera can work very well to copy photos, but (like any other photo), copy work depends on your procedures.
The scanner is carefully designed to control the lighting, the position of the light, and the color and white balance of the light. It holds the media square and flat. These are big jobs. The scanner is designed to do the copy job well, and automatically.
However, all of this is left up to the camera operator, who needs to have some understanding of how to do that part well too. This camera copy setup is Not automatic, and the details of what you do are important. To use a camera for scanning documents, search Google for the standard copy procedures. It will be a closeup, so you surely will need to use a macro lens.
Cameras don't use dpi; they create a fixed image size in pixels. The computed scan dpi is pixels shown / inches shown. So copying a 6x4 inch photo into 6000x4000 pixels (a 24 megapixel camera) gives 6000 pixels / 6 inches = 1000 dpi, which is the scan resolution. This 6000x4000 pixel image printed at 300 dpi would be 6000 pixels / 300 dpi = 20 inches wide on paper. You'll likely want to resample it smaller to 1800x1200 pixels to print a copy at the original 6x4 inch size at 300 dpi.
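The camera-copy arithmetic above can be sketched as (hypothetical helper names, not from the source):

```python
# Camera copy math: effective scan dpi, as-is print width, and the
# resample target for an original-size copy at 300 dpi.

def effective_scan_dpi(pixels: int, inches: float) -> float:
    return pixels / inches

def print_width_inches(pixels: int, print_dpi: int) -> float:
    return pixels / print_dpi

def resample_target(width_in: float, height_in: float, dpi: int) -> tuple[int, int]:
    return round(width_in * dpi), round(height_in * dpi)

print(effective_scan_dpi(6000, 6))    # 1000 dpi "scan" of a 6x4 inch print
print(print_width_inches(6000, 300))  # 20 inches if printed as-is at 300 dpi
print(resample_target(6, 4, 300))     # (1800, 1200) for a 6x4 copy at 300 dpi
```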
The next page will start with "What is a pixel?", but before we get started, a note relating to context usage of "dpi":
So yes, inkjet printer rating dpi is something entirely different, referring to inkjet printer ink drops instead of image pixels. Ink drops are about print quality (dithering quality), and are NOT about printing resolution. The inkjet printer has only three or four colors of ink (or maybe double that in some cases), nowhere near the 16.7 million color possibilities of 24-bit color, any one of which a pixel might be. The concept of a pixel is in fact a colored dot, and the printer's goal is to reproduce the color of that dot. But the limited inkjet printer cannot directly reproduce the color of a pixel; it just tries to approximate it with multiple ink drops chosen from maybe six colors of ink. This method of simulating colors by combining multiple individual ink drops is called dithering. If it is, say, a 250 dpi pixel (size 1/250 inch), the printer must make several ink dots of its few CMYK ink colors, located on the perhaps 1200 or 1440 dpi spacing within that pixel's 1/250 inch space (not very exactly, see the Printer Basics section). Any color error is carried over to adjacent neighbor pixels, which are then intentionally tinted the opposite way to compensate. The printer is trying the best it possibly can to reproduce the pixel's color, but inkjets cannot reproduce colored pixels directly. The point is, image pixels and inkjet printer ink drops are NOT the same thing at all. Yet the spacing of both is called dpi, with very different meanings, understood in context of use. If it is about ink drops, it is about ink drops. If it is about pixels, it is about pixels. Image files do not contain any ink drops, so any discussion about images is about pixels. Seems simple.
However, continuous tone printers (dye-subs, and the chemical photo paper printers) don't print discrete ink dots of three colors like inkjet printers must - instead they mix the color of the pixel directly, and they print pixels (called continuous tone). There are no dithered ink dots then, just pixels. But these printer ratings still refer to the spacing of those image pixels with the term dpi, simply because dpi has always been the name for "pixels per inch".
Scanner ratings also always call it dpi, also referring to pixels of course (there are no ink drops used in scanners; scanners create pixels). You have never heard of a 4800 ppi scanner.
The formal technical specifications at the very heart of our digital imaging definitions use dpi.
Dwelling on the obvious, but all of the above is good enough for me. There are no ink drops in image files. No ink drops in scanners either. Those fundamental and elite specification documents written by the experts do not use ppi one time - dpi has simply always been the name of it. I always say dpi too, for same reason, simply because that's always been the name for pixel resolution.
Yes, sure, the term dpi does have multiple meanings, same as most English words do. Now we have many newbies getting into digital, and this term confuses some of them who dream up their own imagined restrictions, demanding the term dpi ought to be reserved only for printer ink drops (simply their notion, about how they wish things were instead). They apparently don't know the real world term for image resolution has always been dpi. (Sorry, yes, I am going too far here, just for them)
Also, search Google for the phrases "72 dpi" and "72 ppi" (using the quotes to search as a phrase). We all know "72 dpi" is about video, and has absolutely nothing to do with ink drops, but look at the count of the hits... in 2017, I see 137 million for "72 dpi", and only 163K for "72 ppi". That's 840 to 1, nearly three orders of magnitude, an overwhelming difference in popular usage. I am just teasing the troops, but some of them should "get over it". Dpi is the name of it, always has been, and we might as well get used to it.
Ppi is a relatively new term, we never saw it until recent years, but we are seeing ppi used some now, and it seems a perfectly fine name too, since dpi with respect to images does mean "pixels per inch". It might even have been a good plan, but it was not the plan. It may sound a bit silly to pronounce ppi, but recent photo editor software often does say ppi, while scanner software generally says dpi (but we see some exceptions to both). But usage makes either term correct now, even if the long established name for image resolution has always been dpi, for many years. Nothing makes ppi mandatory.
I am Not arguing we must use the term dpi. I don't care which you use, I understand it either way. I am just saying you should understand both too, because that is what you will see. I learned it as dpi (not that my choice is important, but that's what it's always been called). Mainly, my rant is simply explaining why you need to expect to see it used both ways. It's OK with me if you want to use your own preference, since both terms mean pixels per inch. We will all understand it either way, and you should too. BUT - again - regardless of your own preference, you definitely will often see both dpi or ppi used, so for your benefit, in the real world, you MUST understand what you read both ways. It should be second nature for you, not confusing. If about images, it can only be about pixels. If about printers, it may be about ink drops. Both can be dots. That's simply how things are (and you already know that). Think of this as training to understand what you will see elsewhere.
I am not arguing which is better, or how things could have been, or ought to have been. It simply wasn't, and I am arguing what actually is. Facts are about what is, which is both ways today. The important thing to me (and my big peeve) is that it certainly does the beginner no favor for the "holier than thou" to stand up and incorrectly shout that dpi is wrong, and that everything we read everywhere is therefore wrong. That is confusion indeed, harmful, not helpful, since it does not promote understanding of the real world that actually is. That's my fight. The only reasonable action is to instead simply state that both terms are used, they mean the same thing if about image resolution. (To those so-called "advisers" - remember, when everyone except you is wrong, you probably don't understand the situation.)
The only proper instruction is that both terms are used, interchangeably. Expect to see either. To understand what we read, we must understand it either way.
There is really no problem understanding the two uses of the word dpi if you know the basics, and realize the context. It always means the ONLY thing it can possibly mean in context. This should be no big deal, the English language thrives on multiple context definitions. If it is about ink drops, it is about ink drops. If it is about pixels, it is about pixels. If it is about the images themselves, it is not about ink drops.
If the usage context pertains to images or printing pixels (and it almost always does), then dpi always means "pixels per inch". So does ppi, same thing exactly. It cannot mean anything else, printing is about spacing pixels on paper. The two terms are fully interchangeable, use either according to your whim (but we gotta understand it both ways). If we have a 300 dpi image, both terms mean it will print at 300 pixels per inch (pixel spacing on paper), so that 300 pixels will cover one inch.
If the usage context pertains to inkjet printer ink dot ratings, dpi means "ink drops per inch" (but since the ink drops are actually larger than their spacing, the rating is more specifically about carriage and paper motor stepping intervals). If the printer rating is 1440 dpi, it means its motors can space 1440 ink dots per inch while trying to simulate the color of the pixels in that 300 dpi image. The pixels still remain 300 dpi size (as best as they can be reproduced). Most printer drivers have renamed this now anyway, as Quality, offering Good, Better, Best Quality, or maybe Fast, Standard, High Quality. This ink drop spacing is a quality parameter, about reproducing those pixels to the degree possible - it is not an image resolution parameter.