Resampling is very different from scaling.
Resampling changes the image size in pixels. It does not (normally) change image resolution (which at this point is just a number used for printing). Resampling is the only tool we have to change the video size of an image, but resampling is not typically used to affect the size of images to be printed.
Resampling is a drastic procedure that actually recalculates all of the image's pixel data values to produce a different size of image. For example, resampling may resize a 400x400 pixel image to instead be 300x300 pixels. The same picture will fit into only ¾ the number of pixels of width or height. That means in any row of pixels, any feature of detail (a face, an eye, a mountain) now has only ¾ as many pixels across it as before. It's simply a smaller image, and it's smaller on the video screen too. But to place fewer pixels across the same image, resampling has to recalculate the RGB color and location of every pixel, creating NEW pixels on a NEW dot grid pattern in an attempt to preserve the detail in the original image. Each row of the image grid is reduced to only ¾ the number of original pixels, and each column has only ¾ the number of original pixels, so the total area has only 9/16 (about 56%) of the original number of pixels (in this case). The two images contain 400x400 = 160,000 pixels and 300x300 = 90,000 pixels. So resampling can change the image size and the file size radically, which is of course the only purpose for doing it.
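The arithmetic above can be checked in a few lines (a minimal sketch; the variable names are mine, the numbers are from the text):

```python
# Pixel counts before and after resampling a 400x400 image to 300x300,
# illustrating the 3/4 linear and 9/16 area reductions described above.
old_w, old_h = 400, 400
new_w, new_h = 300, 300

old_pixels = old_w * old_h            # 160,000 pixels
new_pixels = new_w * new_h            # 90,000 pixels

linear_ratio = new_w / old_w          # 0.75, i.e. 3/4 as many pixels per row
area_ratio = new_pixels / old_pixels  # 0.5625, i.e. 9/16, about 56%

print(old_pixels, new_pixels, linear_ratio, area_ratio)
```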
Resampling is just interpolation (remember calculating intermediate values from trig and log tables?). We normally reserve that word for resampling to a larger image; however, it's the same recalculation process either way, to a different grid spacing. The only difference is that reducing image size discards data and detail (replacing many dots with a few, sometimes called downsampling), while increasing size to a larger image must fabricate additional data (replacing a few dots with many, sometimes called upsampling). The image is simply larger, but no additional detail is possible without another scan of course.
This is the image we are resampling; it's a polar bear in a snow storm. It's a little fuzzy, but work with me on this. It's divided into 4 black intervals and 3 red intervals, to suggest the old 400x400 grid and the new 300x300 grid on the same image. The 400x400 image is actually 1/3 larger physically; they are NOT the same size, but the abstract concept here is that the pictorial image is the same: the polar bear's head looks the same in both picture frames, and in particular, on both grids.
Basically, what resampling does is this: to create an RGB color sample for every dot position in the new 300x300 grid, the software goes to the corresponding location in the old 400x400 grid data and "resamples" or reads the RGB color there. Say the new pixel is the one blue pixel (don't ask!), located 67% over from the left edge and 33% down from the top; the software goes to the 67% X and 33% Y position of the old image. The old 400x400 grid probably has no pixel exactly at that precise location, because obviously the two grids cannot be aligned, but there are nearby real neighbors of that imaginary point from which to sample the color value.
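That coordinate lookup can be sketched as simple proportional scaling (the function name and the exact mapping convention are my assumptions; real software handles pixel-center offsets and edges more carefully):

```python
# For each pixel position in the new 300x300 grid, find the corresponding
# (usually fractional) position in the old 400x400 grid.
def old_position(new_x, new_y, new_size=300, old_size=400):
    scale = old_size / new_size  # 4/3 here
    return new_x * scale, new_y * scale

# The pixel 67% across and 33% down the new grid (x=201, y=99 of 300)
# lands at the same 67%/33% spot in the old grid, between old pixels.
x, y = old_position(201, 99)
print(x, y)  # 268.0 132.0
```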
Not all pixels in the larger old image will necessarily be sampled, because downsampling means that many old pixels are discarded; the limited number of new pixels has no need to look at all of them. Or when upsampling, some pixels in the smaller old image get sampled more than once when fabricating new pixels that are more densely populated, meaning much data is simply repeated in the new larger image. It would of course be better to go back and resample the original photograph (scan it again), but it must not be available now (or we would).
Some programs do this resampling calculation better than others. Adobe Photoshop offers these three resampling choices:
1) Nearest Neighbor creates the new pixel simply to be the same color as the one closest old pixel (fastest, and usually best for hard-edged graphics, but too crude for photo images).
2) Bilinear creates the new pixels to be the color interpolated by linearly weighting the values and distances of the old pixels on either side of the new pixel on the same row. "Bi" repeats this vertically, creating new rows using those new pixels.
Or 3) Bicubic creates the new pixels from the colors of the two pixels in each direction, using cubic equations to "best fit" the new point within the four existing points. "Bi" repeats this vertically, creating new rows using the new pixels. Calculating millions of pixels is slow work, but our computers are much faster today, and the best methods are not such difficult feats anymore. Bicubic mode is more accurate, which is important if resampling larger, but it is still interpolation. Calculating new pixels from old data is NOT the same as actually sampling real new data from the original.
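The first two methods can be sketched for a single row of grayscale values (a simplified one-dimensional illustration; the function name is mine, and real editors like Photoshop work in two dimensions with more care at the edges):

```python
# Nearest-neighbor vs (bi)linear resampling of one row of pixel values.
def resample_row(row, new_len, method="linear"):
    old_len = len(row)
    out = []
    for i in range(new_len):
        # Position of the new pixel expressed on the old grid.
        pos = i * (old_len - 1) / (new_len - 1) if new_len > 1 else 0
        lo = int(pos)
        if method == "nearest":
            # Just copy the closest old pixel.
            out.append(row[round(pos)])
        else:
            # Blend the two old neighbors, weighted by distance.
            hi = min(lo + 1, old_len - 1)
            frac = pos - lo
            out.append(row[lo] * (1 - frac) + row[hi] * frac)
    return out

row = [0, 100, 200, 300]                 # a 4-pixel row
print(resample_row(row, 3))              # blended intermediate values
print(resample_row(row, 3, "nearest"))   # some old values simply dropped
```

Notice how the linear result invents intermediate values that never existed in the original row, which is exactly why nearest neighbor stays sharper on hard graphic edges.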
People often assume that resampling images to an integer divisor (like to 1/2 or 1/3 size) simply uses only every second or every third sample (nearest neighbor), but that's not often true. This was common years ago when computing power was primitive and it was all the hardware could manage then. It is still the best technique for resampling graphics, because otherwise resampling by blending two pixel values together creates a new intermediate value which blurs any sharp edges.
Continuous tone photo images are anti-aliased anyway, and are better resampled by using all existing samples. They already exist, available for free. The excess or "discarded" samples can then still have an effect on the final image. If one of those pixels was a black speck, like maybe a very distant bird in the sky, at least maybe we have a gray spot left. The algorithm to resample to 150 ppi or to 153 ppi is normally one and the same method.
However, it is still true that the results can be a little sharper if resampling to an even fraction of the original, when the old grid and new grid are aligned, so a 150 ppi choice may in fact be better than 153 ppi (see next page).
A 300 dpi scanner has 300 dpi CCD cells, and when we scan at 130 dpi, it must resample the 300 dpi scan line to 130 dpi. Some scanners use bilinear and some use nearest neighbor, to resample the scan line horizontally. All scanners must use nearest neighbor vertically, because the carriage motor only stops to sample lines at every 1/130 inch in this case.
Some people claim it is better to always scan at full 300 dpi optical resolution and then resample back to 130 dpi in an external program. Their point is that the program like Photoshop has a better resample technique than the scanner, and your computer has much more memory and processor power than the scanner. Should we do this with a 1200 dpi scanner too? Gracious, then don't buy one of those. <grin> That would be a very large image.
Along those same lines, some also claim that we can scan at less than full optical resolution, but that we should scan only at values of full optical resolution divided by integers (1, 2, 3, 4, etc.). So for a 300 ppi scanner, the idea is that we should scan only at 300 or 150 or 100 or 75 ppi, instead of values like 80 or 130 ppi. Many scanners only provide these integer choices. The idea is that an integer divisor makes resampling easier, with better results, because the new grid and old grid are always aligned. We would scan at the next higher integer resolution, and then downsample slightly to the desired size (externally). For example, scan at 150 ppi and resample to 130 ppi size.
Note that 600 ppi scanners have additional integer divisor values of 200 ppi and 120 ppi not available to a 300 ppi scanner. 1200 ppi scanners add 400 and 240 ppi. Even divisors of 2, 4, 8 are likely better than odd divisors like 3 or 5, but any integer divisor is probably better than other values, like 58%. There is indeed sometimes a slight improvement using integer divisors, and you should be aware of the choices available to you. Your results and choice may be affected by how well your image program performs resampling in comparison to the scanner. You should experiment and decide for yourself in your situation. See next page for a sample of these techniques.
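The integer-divisor resolutions described above are easy to enumerate for any optical resolution (a small sketch; the function name and the divisor cutoff are my choices):

```python
# Scan resolutions that are integer divisors of the optical resolution:
# optical/1, optical/2, optical/3, ... where the division comes out even.
def integer_divisor_resolutions(optical, max_divisor=8):
    return [optical // d for d in range(1, max_divisor + 1)
            if optical % d == 0]

print(integer_divisor_resolutions(300))  # 300, 150, 100, 75, ...
print(integer_divisor_resolutions(600))  # adds 200 and 120, etc.
```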
But I don't want to stray away from the original point, that resampling is a very drastic change. Every single pixel is torn down and rebuilt; actually, it's replaced with an approximation computed from others nearby. The point is that conversely, scaling does not affect the image pixels at all; it is not even visible until we print the image. Then it affects only the spacing of the original pixels on the printed paper. The original pixels are not otherwise affected. This is a rather important distinction.
Printed versions of the image can show the effect of scaling, and the difference and relative desirability of scaling vs resampling, but images on the video screen are limited in that capability, since scaling does not affect video images. Images are printed at some number of pixels per inch of paper. Video images show pixels directly, with no concept of inches.
Scaling is like stretching the image on a rubber sheet. The same rubber molecules are still painted the same color; we didn't change them, but they are farther from their neighbors now. The idea is that we stretch the rubber sheet until we get the optimum pixel spacing for our printer's dithering capability. The pixels won't have space between them; instead, the printed pixels get larger as we space them wider with reduced scaled resolution. But so long as our resulting printed image resolution number ends up roughly equal to the printer's image resolution capability, it still looks fine. <significant pause>
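Since scaling changes only the stored ppi number and never the pixels, the printed size follows directly from pixels divided by ppi. A sketch of that arithmetic (the function name and the example numbers are mine):

```python
# Printed dimensions in inches for a given pixel size and scaled ppi.
def printed_size_inches(pixels_wide, pixels_high, ppi):
    return pixels_wide / ppi, pixels_high / ppi

# The same 600x400-pixel image, scaled two different ways:
print(printed_size_inches(600, 400, 300))  # 2.0 x 1.33 inches
print(printed_size_inches(600, 400, 150))  # 4.0 x 2.67 inches
```

Both cases print the identical 600x400 pixels; only the spacing of those pixels on the paper changes.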
Resampling is the only option to change image size for the video monitor, but I hope it is obvious that for printing, scaling is the desirable option when we have enough pixels, and that any need for resampling (for whatever reason) should perhaps be contemplated a little first. I am not at all saying resampling is bad. It is a standard basic operation, and short of rescanning, there is no other way to do what it does. I am just saying it is not a trivial operation, and it may not always be necessary.