A few scanning tips

www.scantips.com

What is a digital image anyway?

Any image from a scanner, or from a digital camera, or in a computer, is a digital image. Computer images have been "digitized", a process which converts the real-world color picture into numeric computer data, consisting of rows and columns of millions of color samples measured from the original image.

How does a camera make an image? How is it able to tell the little girl from the tree or from the pickup truck? It simply cannot do that; the camera is indescribably dumb about the scene, compared to human brains. All the camera can see is a blob of light, which it attempts to reproduce, whatever it is (it has no clue what it is).

The way a film camera works is that the lens focuses an image onto the film surface. The color film has three layers of emulsion, each layer sensitive to a different color, and the (slide) film records, at each tiny spot, the same color as the image projected onto it, just as the lens saw it. This is an analog image, the same kind our eyes can see, so we can hold the developed film up and look at it. This is not hard to imagine. However, digital is something very different.

The way a digital camera creates this copy of a color picture is with a CCD or CMOS chip behind the lens. The lens focuses the same analog image onto that digital sensor, which is constructed with a grid of many tiny light-sensitive cells, or sensors, arranged to divide the total picture area into rows and columns of a huge number of very tiny subareas. A 12 megapixel camera sensor has a grid of maybe 4288x2848 sensors (12 million of them). Each sensor measures and remembers the color (three components, red, green, and blue, called RGB) of the tiny area it covers on the sensor frame. The way digital remembers the colors is by digitizing the analog color, converting it into three numeric (digital) values representing that color. This organization of numeric data is called pixels. A pixel contains the digital numeric RGB color data (numbers) of one tiny surface area. The camera creates a row and column array of pixels, perhaps of size 4288x2848 pixels (12 megapixels).

Speaking of the output JPG image (skipping some details), what we get out of the camera is that each image pixel consists of three RGB numeric components (which is perhaps comparable to the three layers of color film, but in a very different way). The pixel's color will be one of 16.7 million colors (256 shades of Red, 256 of Green, 256 of Blue, so 256x256x256 = 16.7 million possible combinations). But any and every pixel is always ONLY ONE COLOR, whatever color it saw in that tiny area of the sensor. Again, a pixel is simply a "color", sampled from the frame area. A pixel is generally too tiny for humans to see individually, but arrange those pixels in the same order on photo print paper, or on a computer video screen, and the human eye puts them together to recognize the original scene. Printers and video screens are digital devices too; their only purpose in life is to display pixels. We should realize that pixels are all there is in an image file. Also, computer video systems only show pixels. Digital is about pixels, period.
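
If it helps to see this concretely, here is a minimal sketch in Python (purely illustrative, not how any camera actually processes its data) of an image as rows and columns of RGB numbers:

    # A tiny illustrative "image": a grid of rows and columns where each
    # cell holds one RGB color. Real images simply have millions of these.
    width, height = 4, 3                  # a 4x3 pixel image
    sky_blue = (135, 206, 235)            # one RGB color (Red, Green, Blue)

    # Build the grid: every pixel is only one color.
    image = [[sky_blue for _ in range(width)] for _ in range(height)]

    print(image[0][2])                    # (135, 206, 235) -- the pixel at row 0, column 2
    print(len(image), "rows of", len(image[0]), "pixels each")   # 3 rows of 4 pixels each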

A scanner has a one-row array of similar cells, and a carriage motor moves this row of sensors down the page, building up row after row to form the full image grid. The final images are pretty much the same (scanner or camera); both are composed of pixels. Digital is all about the pixels. All there is, is pixels.

In either case (scanner or camera), the color and brightness of each tiny area seen by a sensor is "sampled", meaning the color value of each area is measured and recorded as a numeric value which represents the color there. This process is called digitizing the image. The data is organized into the same rows and columns to retain the location of each actual tiny picture area.

Each one of these sampled numeric color data values is called a pixel. Pixel is a computer word formed from PIcture ELement, because a pixel is the smallest element of the digital image. Pixels are a new concept, initially mysterious to beginners, but the good news is that the concept of pixels is easy to understand and use (and that is our goal here).

I wish there were magic words that could easily convince novices that the absolute first fundamental basic they must realize is that digital images are composed of pixels, and that digital images are therefore dimensioned in pixels (not inches, but instead pixels). Our video monitor and printer display these pixels. This is simply how things work, and you won't make much progress until you accept that digital images are dimensioned in pixels.

Accepting this concept of pixels is absolutely essential to be able to use digital images, because pixels are all that exists in digital images. It is easy. We don't need to understand most of the details about pixels — only that they exist. In your photo editor program, zoom an image to about 500% size on the screen, and you will see the pixels. The fundamental thing to understand about digital images is that they consist of pixels, and are dimensioned in pixels. If we don't know the dimension of our image (the image size is some number of pixels wide and some number of pixels tall), then we don't know the first thing about using that image. The image dimension in pixels is the most important thing to know, and then the rest should be nearly obvious.

It may help to realize that a picture constructed of colored mosaic tile chips on a wall or floor is a somewhat similar concept, being composed of many tiny tile areas, each tile being a sample of one color. From a reasonable viewing distance, we do not notice the individual small tiles; our brain just sees the overall picture represented by them. The concept of pixels is similar, except that these pixels (digitized color sample values) are extremely small, and are aligned in perfect rows and columns of tiny squares, to compose the rectangular total image. A pixel is the remembered color value of each one of these color samples representing tiny square areas. The size of the image is dimensioned in pixels, X columns wide and Y rows tall.

When all of this image data (millions of numbers representing tiny color sample values, each called a pixel) is recombined and reproduced in correct row and column order on printed paper or a computer screen, our human brain recognizes the original image again. The complex work is done automatically by the computer, and we can overlook most of it. What we do need to know is 1) pixels exist, and 2) digital images are dimensioned in pixels, and 3) how to determine and supply the sufficient dimension in pixels for our usage goal (following chapters). Primarily this means that we must think of that image as pixels, simply because that is what it is, and how things work.

The tiny image at left is an early Ulead PhotoImpact icon, including the Windows Shortcut arrow. Icons are graphic images; this one is 32x32 pixels in size. This image was neither photographed nor scanned; instead it was created by hand in a graphics computer program. But any image is composed of pixels, and I selected an icon as an example because it is a small and manageable image.

If we blow up the icon image about a dozen times larger, we see the individual pixels of the image.

Each small square we see is an individual pixel in the original image. Pixel is a computer term for "picture element". The ideal is that each pixel is only one color, and color is the detail in the image (a pixel is the smallest element of detail). Icons are just small, low resolution images, usually graphic instead of photographic, and are often composed of 32 rows and 32 columns of pixels. Otherwise, icons are just like any other image (composed of pixels).

I've added a few lines on the image to aid seeing the rows and columns of data. Each square in this grid is a pixel. All images are always rectangular, regardless of whether the background pixels have been made transparent (as here).

The scanner creates the pixels by sampling the color of the original photograph. A pixel is one color value sampled from a small area of the original (at say 100 dpi, meaning every 1/100 inch), and each pixel records the one color of that small area. The size (in inches) of the original photograph is as important to image size as is the resolution in dpi. A 6 inch photograph scanned at 100 dpi will produce 600 pixels across that dimension of the image. Or, a one inch section of photograph or film scanned at 600 dpi will also create an image with a dimension of 600 pixels.

The digital camera creates the pixels by sampling the color in the image that the lens projects onto the digital sensor. Again, a pixel is one color sample, representing a tiny area of the picture. This is how the picture is reproduced. Dpi has no meaning yet in the camera; the size in inches at which we might print it is still unknown.

What's a pixel?

Specifically, numbers. Conceptually, one color, representing a tiny little area of the picture. A digital color image pixel is just numbers representing an RGB data value (Red, Green, Blue). Each pixel's color sample has three numerical RGB components to represent the color of that tiny pixel area. These three RGB components are three 8-bit numbers for each pixel. Three 8-bit bytes (one byte for each of RGB) is called 24 bit color. Each 8 bit RGB component can have 256 possible values, ranging from 0 to 255. For example, the three values (250, 165, 0), meaning Red=250, Green=165, Blue=0, denote one orange pixel. Photo editor programs have an EyeDropper tool to show the three RGB color components of any image pixel.
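
To make that concrete, here is a small sketch of the eyedropper idea using the Pillow library in Python (assuming Pillow is installed; the image here is created in code, but opening a real photo file works the same way):

    from PIL import Image

    orange = (250, 165, 0)                 # Red=250, Green=165, Blue=0
    im = Image.new("RGB", (4, 3), orange)  # a tiny 4x3 pixel image, all orange

    r, g, b = im.getpixel((2, 1))          # "eyedropper": read the pixel at column 2, row 1
    print(r, g, b)                         # 250 165 0 -- one orange pixel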

In the base 2 binary system, an 8 bit byte can contain one of 256 numeric values ranging from 0 to 255, because 2 to the 8th power is 256, as seen in the sequence 2, 4, 8, 16, 32, 64, 128, 256 (the 8th of these is 256). It is the same concept as in base 10, where 3 decimal digits can store one of 1000 values, 0 to 999: 10 to the 3rd power is 1000, the same idea as 2 to the 8th power being 256.

Yeah, right, but the only point here is that 255 is the maximum possible number that can be stored in an 8 bit byte. Larger numbers require multiple bytes; for example, two bytes (16 bits) can hold up to 256x256 = 65536 unique values. 24 bit RGB color images use 3 bytes per pixel, and can have 256 shades of red, 256 shades of green, and 256 shades of blue. This is 256x256x256 = 16.7 million possible combinations, or colors, for 24 bit RGB color images. The pixel's RGB data values show "how much" Red, Green, and Blue, and those three intensities combine to form the color at that pixel location.
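
The arithmetic is easy to verify, for example in Python:

    print(2 ** 8)       # 256 values per 8-bit byte (0 to 255)
    print(2 ** 16)      # 65536 values in two bytes
    print(256 ** 3)     # 16777216 -- about 16.7 million possible 24 bit RGB colors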

The composite of the three RGB values creates the final color for that one pixel area. In the RGB system, we know Red and Green make Yellow. So, (255, 255, 0) means Red and Green, each fully saturated (255 is as bright as 8 bits can be), with no Blue (zero), with the resulting color being Yellow.

Black is an RGB value of (0, 0, 0) and White is (255, 255, 255). Gray is interesting too, because it has the property of having equal RGB values. So (220, 220, 220) is a light gray (near white), and (40, 40, 40) is a dark gray (near black). Gray has no color cast.

Since gray has equal values in RGB, Black & White grayscale images only use one byte of 8 bit data per pixel instead of three. The byte still holds values 0 to 255, to represent 256 shades of gray.

Line art pixels are represented by only one binary bit, with values 0 or 1, used to denote Black or White (2 colors, no gray). Line art data is stored packed, eight 1-bit pixels into each 8-bit byte.
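
Here is a small sketch of that packing idea in Python (illustrative only; real line art file formats add their own headers and compression):

    bits = [1, 1, 0, 0, 1, 0, 1, 1]       # eight line art pixels: 0 = black, 1 = white

    byte = 0
    for bit in bits:                      # pack most significant bit first
        byte = (byte << 1) | bit

    print(byte)                           # 203
    print(format(byte, "08b"))            # '11001011' -- the same eight pixels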

What's in an image file?

Those numbers. The image file contains three RGB color values for every pixel, or location, in the image grid of rows and columns. The data is also organized in the file in rows and columns. File formats vary, but the beginning of the file contains numbers specifying the number of rows and columns (which is the image size, like 800x600 pixels), and this is followed by huge strings of data representing the RGB color of every pixel. The viewing software then knows how many rows and columns there are, and therefore how to separate and arrange the following RGB pixel values back into those rows and columns.
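
The plain-text PPM image format is a nice illustration of exactly that layout: the image size comes first, then every pixel's RGB numbers. A minimal sketch in Python (the filename is just an example):

    width, height = 2, 2
    orange = (250, 165, 0)
    pixels = [orange] * (width * height)  # four identical orange pixels

    with open("tiny.ppm", "w") as f:      # "tiny.ppm" is just an example filename
        f.write(f"P3\n{width} {height}\n255\n")   # the image size comes first...
        for r, g, b in pixels:                    # ...followed by every pixel's RGB values
            f.write(f"{r} {g} {b}\n")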

Every location on one of the rows and one of the columns is a color sample, which is called a pixel. If the image size were say 1000x750 pixels (written as width x height by convention), then there would be 1000 columns and 750 rows of data values, or 1000x750 = 750,000 pixels total. For 24 bit color, each pixel's data contains three 8-bit RGB byte values, or 750,000 x 3 = 2,250,000 bytes. Every pixel is the same size, because a pixel is simply the color of the area between the grid lines, and that area will be colored by the one corresponding RGB data value. Larger areas of a single color are just many identical pixels, and the blank background (for example the blue sky below) is many more pixels too. The image data is just a series of RGB numeric color values in a grid of rows and columns.
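
The same arithmetic in Python:

    width, height = 1000, 750
    pixels = width * height               # 750000 pixels total
    size_bytes = pixels * 3               # 3 RGB bytes per pixel, uncompressed
    print(pixels, size_bytes)             # 750000 2250000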

The image itself is an abstract thing. When we display that color data on the screen, then our human brain makes an image out of it from the appearance of all of these RGB data values.

Icons are usually "graphic" images, built of discrete pixels, instead of having continuous tones like photographs. Some graphic artist has worked very carefully on the previous icon, one pixel at a time. But a photograph is more blended, and adjacent pixels often have similar colors (called continuous tone). The blue sky is many slightly different colors of blue; we can see that here. In a graphic image, the sky would be exactly one color of blue. And scanned photographs are typically very much larger than 32x32 pixels.

Let's talk about real photographic images (which are exactly the same thing).

The center of the hot-air balloon was enlarged below to show that photographic images are also composed of pixels in the same way as the graphic icon. Increased enlargement beyond what makes the pixels visible cannot show any increased detail. It will only make the pixels appear larger.

In any digital image, regardless how sharp and clear it is, blow it up to about ten times actual size, and you'll have only pixels visible. Each pixel is simply one numeric RGB color value in the image file, as sampled by the scanner or digital camera.


So how do we use these pixels?

This really is an easy concept, but beginners unfortunately do often seem reluctant to acknowledge that these mysterious pixels exist. And then (simple as it really is) digital images remain mysterious to them, until the day (hopefully early) that they decide to actually consider that maybe digital images do consist of pixels, and are in fact dimensioned in pixels. That is the day the light bulb comes on, and it mostly becomes nearly obvious after that point.

The image size in pixels determines what we can do with this image — how it can be used, and if it is appropriate size for the intended use. There are two fundamental uses which cover almost every application: printing the image on paper (print a photo or in a book, etc), or showing the image on a video screen (snapshots or web pages, etc). These two situations are rather different with different concerns. But either way, we must create the image size (dimension in pixels) to be suitable for the way we will use it. The next sections will elaborate on the details of these two uses.

But first, in the briefest possible way — if we show a digital image on a video screen of size say 1024x768 pixels, then for sure we don't need an image larger than that video screen size (1024x768 pixels). Video screens are dimensioned in pixels, and images are dimensioned in pixels. Inches are no factor at all on the video screen. Chapter 5 is about using images on the video screen.

Or for the other use, when we print digital images on paper, the paper is dimensioned in inches, but digital images are dimensioned in pixels. We print the image on paper at some printing resolution, which is specified in pixels per inch (ppi), which is simply a spacing of pixels on paper. The image size in pixels determines the size we can print it in inches on paper. For example, if we print 1800 pixels width at 300 ppi, then those 1800 pixels will cover 6 inches of paper, simply because 1800 pixels / 300 ppi = 6 inches. That is how it works, but chapters 6 and 7 will elaborate.
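
That division is all there is to it; a hypothetical helper in Python, just to show the arithmetic:

    def print_size_inches(pixels, ppi):   # hypothetical helper, named only for illustration
        return pixels / ppi

    print(print_size_inches(1800, 300))   # 6.0 inches of paper at 300 ppi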

Digital camera images and scanner images are the same in all respects about showing or printing the image. Both images are dimensioned in pixels. One difference at creation is that the camera's image size is set by the fixed sensor chip size; for example, a 3 megapixel CCD chip creates an image size of about 2048x1536 pixels (see page 87). The camera image is this same size (in pixels), but the camera menu also offers a couple of other smaller image sizes, for example half of those dimensions.

The scanner scanning resolution (pixels per inch) and the size of the area being scanned (inches) determine the image size (pixels) created from the inches scanned. If we scan 8x10 inch paper at 300 dpi, we will create (8 inches x 300 ppi) x (10 inches x 300 ppi) = 2400x3000 pixels. The scanner bed and the paper we scan are dimensioned in inches, but the image created is dimensioned in pixels.
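
And the multiplication for scanning, again just to show the arithmetic (the function name is made up for illustration):

    def scanned_pixels(inches, dpi):      # hypothetical helper for the arithmetic above
        return round(inches * dpi)

    print(scanned_pixels(8, 300), "x", scanned_pixels(10, 300))   # 2400 x 3000 pixels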


Copyright © 1997-2010 by Wayne Fulton - All rights are reserved.
