Large images consume large amounts of memory and can make our computers struggle. The memory cost for an image is computed from the image size.
For a 6x4 inch image printed at 300 dpi, the image size is calculated as:
(6 inches × 300 dpi) × (4 inches × 300 dpi) = 1800 × 1200 pixels
That is 1800 × 1200 = 2,160,000 pixels (about 2 megapixels).
The memory cost for this RGB color image is:
1800 × 1200 × 3 = 6.48 million bytes.
The final "× 3" accounts for the 3 bytes of RGB color information per pixel in 24-bit color (one 8-bit byte for each of the R, G, and B values, which totals 24 bits per pixel).
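The arithmetic above can be sketched in a few lines (a minimal illustration; the 6×4 inch and 300 dpi figures are the example values from the text):

```python
# Memory cost of an uncompressed 24-bit RGB image, from print size and dpi.
width_in, height_in = 6, 4   # print size in inches (example from the text)
dpi = 300                    # scan/print resolution

width_px = width_in * dpi    # 1800 pixels
height_px = height_in * dpi  # 1200 pixels
pixels = width_px * height_px  # 2,160,000 pixels (~2 megapixels)
bytes_rgb = pixels * 3         # 3 bytes per pixel for 24-bit RGB

print(pixels)     # 2160000
print(bytes_rgb)  # 6480000
```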
The compressed JPG file will be smaller (maybe 10% of that size), depending on our choice of JPG Quality, but the smaller the file, the worse the image quality; the larger it is, the better the image quality. If uncompressed, it is three bytes per pixel.
Different color modes have different size values, as shown below:
| Image Type | Bytes per pixel |
| --- | --- |
| 1-bit Line art | 1/8 byte per pixel (1 bit per pixel, 8 bits per byte) |
| 8-bit Grayscale | 1 byte per pixel |
| 16-bit Grayscale | 2 bytes per pixel |
| 24-bit RGB | 3 bytes per pixel (most common for photos, for example JPG) |
| 32-bit CMYK | 4 bytes per pixel, for prepress |
| 48-bit RGB | 6 bytes per pixel |
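The table of bytes per pixel can be expressed as a small lookup (a sketch; the mode names and values are taken from the table above):

```python
# Bytes per pixel for each color mode, per the table above.
BYTES_PER_PIXEL = {
    "1-bit Line art":   1 / 8,  # 1 bit per pixel, 8 bits per byte
    "8-bit Grayscale":  1,
    "16-bit Grayscale": 2,
    "24-bit RGB":       3,      # most common for photos (e.g. JPG)
    "32-bit CMYK":      4,      # prepress
    "48-bit RGB":       6,
}

def image_bytes(width_px, height_px, mode):
    """Uncompressed memory cost in bytes for the given pixel size and mode."""
    return int(width_px * height_px * BYTES_PER_PIXEL[mode])

print(image_bytes(1800, 1200, "24-bit RGB"))      # 6480000
print(image_bytes(1800, 1200, "1-bit Line art"))  # 270000
```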
File compression techniques can make this data smaller while stored in the file, but it comes back out of the file uncompressed, with the original number of bytes, when open in memory. JPG artifacts (from lossy compression) just mean the pixels may not all be the same original color, but there is the same number of pixels when uncompressed.
The memory size of images is often shown in megabytes. You may notice a little discrepancy from the number you calculate with WxHx3 bytes. This is because (as regarding memory sizes) "megabytes" and "millions of bytes" are not quite the same units.
Memory sizes in terms like KB, MB, GB, and TB count in powers of 1024 bytes, whereas humans count thousands and millions in powers of 1000.
A million bytes is 1000 × 1000 = 1,000,000 bytes, powers of 10, or 10⁶. But binary units are normally used for memory sizes, powers of 2, where one kilobyte is 1024 bytes, and one megabyte is 1024 × 1024 = 1,048,576 bytes, or 2²⁰.
So, a number like 10 million bytes is 10,000,000 / (1024x1024) = 9.54 megabytes. One megabyte holds nearly 5% more bytes than one million, so there are about 5% fewer megabytes.
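That conversion is simple to verify (a minimal sketch using the numbers above):

```python
# Convert a decimal byte count to binary megabytes (1 MB = 1024 x 1024 bytes).
def bytes_to_mb(n_bytes):
    return n_bytes / (1024 * 1024)

print(round(bytes_to_mb(10_000_000), 2))  # 9.54

# One binary megabyte holds about 4.86% more bytes than one decimal million:
print(round((1024 * 1024) / 1_000_000 - 1, 4))  # 0.0486
```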
There are technical reasons that memory chip addressing uses binary units, so data size in memory often does too. 10 bits of address lines address 2¹⁰ or 1024 bytes. We cannot manufacture a chip of just 1000 bytes; anything useful has to be 1024 bytes. That difference was important in the old days, when computer chips had only 1 KB of memory, which would hold 1024 bytes of data. We counted every byte then, and matched size to memory very carefully... we needed to know the actual number was 1024 bytes. But it is totally unimportant today (to humans), when we have 4 to 12 GB of memory. Humans count in decimal, and there is no reason to count bytes so carefully today. The old reasons really no longer matter, but we still just do it.
Megapixels in digital cameras and disk drive megabytes are both correctly advertised as decimal millions (1000 × 1000), and a gigabyte as 1000 × 1000 × 1000. Humans do count in powers of 10. However, after formatting the disk, the computer operating system will count it in binary GB. The manufacturer did advertise it correctly, and formatting did NOT make the disk smaller; the units just changed (in computer lingo, 1K became counted as 1024 bytes instead of 1000 bytes).
Memory chips and file sizes are described in binary kilobytes (1024) or megabytes (1024x1024) or gigabytes (1024x1024x1024). There are good reasons for memory chips, because each address bit is a power of two. There is no good reason today for files, but counting in 1024 units is still done on them.
The definition of the unit prefix "Mega" has always meant millions, of course (decimal factors of 1000) - it still does; it does NOT mean 1024. However, memory chips are necessarily dimensioned in binary units (factors of 1024), and they simply appropriated the term mega incorrectly. Since then, to preserve the actual decimal meanings of Mega and Kilo, new binary prefixes Ki, Mi, and Gi (IEC units for the powers of 1024) were defined, but they seem ignored; they have not caught on. Maybe we won't buy a 4 Gi memory chip. So this just complicates things today. Memory chips are binary of course, but there is absolutely no reason why our computer operating systems still do this regarding file sizes. Humans count in decimal.
This is why we buy a 500 GB disk drive (sold in 1000s, the actual real count, the decimal way humans count), and then we format it to 465 gigabytes of binary file space (a number using 1024). All precisely correct: 500 GB / (1.024 × 1.024 × 1.024) = 465 GB. But users who don't understand this assume the disk manufacturer cheated them somehow. Instead, the disk just counted in decimal, the same as we do. No crime in that; mega does mean million. It was the operating system that confuses us, when one company's units are different than another's units. GB is large enough that it adds up.
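The 500 GB drive example works out like this (a sketch using the numbers from the text):

```python
# A "500 GB" drive is 500,000,000,000 bytes (decimal, as advertised).
advertised_bytes = 500 * 1000**3

# The operating system reports it in binary gigabytes (1 GB = 1024^3 bytes).
binary_gb = advertised_bytes / 1024**3

print(round(binary_gb, 1))  # 465.7
```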
Note that you will see different numbers in different units for the same file size dimension:
Example: for a 4000 × 2500 pixel image (24-bit RGB is most common):
4000 × 2500 = 10,000,000 pixels (10 megapixels)
10,000,000 pixels × 3 = 30 million bytes (if 24-bit RGB)
30,000,000 bytes / (1024 × 1024) = 28.61 megabytes (MB)
This is simply how large the data is - for ANY 24-bit 10 megapixel image - but JPG files compress it smaller (only while in the file).
57.220 MB if 48-bit RGB (6 bytes RGB per pixel)
38.147 MB if 32-bit CMYK (4 bytes CMYK per pixel)
28.610 MB if 24-bit RGB (3 bytes RGB per pixel)
19.073 MB if 16-bit GrayScale (2 bytes per pixel)
9.537 MB if 8-bit GrayScale (1 byte per pixel)
1.192 MB if 1-bit Line Art (1 bit per pixel)
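These per-mode sizes follow directly from the bytes-per-pixel values (a sketch reproducing the list above):

```python
# Memory size in binary MB for a 4000 x 2500 pixel image in each color mode.
pixels = 4000 * 2500
MB = 1024 * 1024  # binary megabyte

modes = [
    ("48-bit RGB",       6),
    ("32-bit CMYK",      4),
    ("24-bit RGB",       3),
    ("16-bit Grayscale", 2),
    ("8-bit Grayscale",  1),
    ("1-bit Line Art",   1 / 8),
]

for name, bpp in modes:
    print(f"{pixels * bpp / MB:.3f} MB  {name}")
```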
Notice that when you increase resolution, the size formula above multiplies the memory cost by that resolution number twice, in both width and height. The memory cost for an image increases as the square of the resolution. The square of, say, 300 dpi is a pretty large number (90,000, which is more than double the square of 200 dpi, 40,000).
When people ask how to fix memory errors when scanning at 9600 dpi, the answer is to use 300 dpi instead, if you don't have 8 gigabytes of memory (or a 9600 dpi scanner).
Scanning any 6x4 inch photo will consume the amounts of memory shown in the table below. I hope you realize that it rapidly becomes impossible.
Memory size in bytes:

| 24 bit RGB Color | 8 bit Grayscale | Line art |
| --- | --- | --- |
When we double the scan resolution, memory cost goes up 4 times. Multiply resolution by 3, and memory cost increases 9 times, etc. So this is a very clear argument to use only the amount of resolution we actually need to improve the image results for the job purpose. More than that is waste. It's often even painful. Well, virtual pain. <grin>
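The square-law scaling can be illustrated directly (a sketch for a 6×4 inch 24-bit RGB scan, the example size used in the text):

```python
# Memory cost of scanning a 6 x 4 inch photo in 24-bit RGB at various dpi.
# Doubling the resolution multiplies the pixel count (and memory) by 4.
width_in, height_in = 6, 4

def scan_bytes(dpi):
    return (width_in * dpi) * (height_in * dpi) * 3  # 3 bytes/pixel (24-bit RGB)

for dpi in (300, 600, 1200, 2400):
    print(dpi, scan_bytes(dpi))

print(scan_bytes(600) / scan_bytes(300))  # 4.0 -- double the dpi, 4x the memory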