Large photo images consume a lot of memory and can make our computers struggle. The memory cost for an image is computed from the image size.
For a 6x4 inch image printed at 300 dpi, the image size is calculated as:
(6 inches × 300 dpi) × (4 inches × 300 dpi) = 1800 × 1200 pixels
That is 1800 × 1200 = 2,160,000 pixels (about 2 megapixels).
The memory cost for this RGB color image is:
1800 × 1200 × 3 = 6.48 million bytes.
The final "× 3" is for the 3 bytes of RGB color information per pixel in 24-bit color: one 8-bit byte for each of the Red, Green, and Blue values, which totals 24 bits per pixel.
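The arithmetic above can be sketched in a few lines (the function name here is my own, for illustration):

```python
def image_memory_bytes(width_in, height_in, dpi, bytes_per_pixel=3):
    """Uncompressed memory cost of an image scanned or printed at a given dpi."""
    wide_px = width_in * dpi    # 6 inches x 300 dpi = 1800 pixels
    high_px = height_in * dpi   # 4 inches x 300 dpi = 1200 pixels
    return wide_px * high_px * bytes_per_pixel

# The 6x4 inch photo at 300 dpi, 24-bit RGB color:
print(image_memory_bytes(6, 4, 300))   # 6480000 bytes (6.48 million)
```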
The compressed JPG file will be smaller (maybe 10% of that size), depending on our choice of JPG Quality, but the smaller the file, the worse the image quality, and the larger the file, the better the image quality. If uncompressed, it is three bytes per pixel.
Different color modes have different size data values, as shown.
| Image Type | Bytes per pixel | Possible colors | File Types |
|---|---|---|---|
| 1-bit Line art | 1/8 byte per pixel (1 bit per pixel) | Black or white | |
| 8-bit Indexed Color | Up to 1 byte per pixel | 256 colors maximum. Graphics use today | TIF, PNG, GIF |
| 8-bit Grayscale | 1 byte per pixel | 256 shades of gray | JPG, TIF, PNG |
| 16-bit Grayscale | 2 bytes per pixel | 65,536 shades of gray | TIF, PNG |
| 24-bit RGB | 3 bytes per pixel (one byte for each of R, G, B) | 16.77 million colors. The "norm" for photos, e.g., JPG | JPG, TIF, PNG |
| 32-bit CMYK | 4 bytes per pixel, for prepress | Varies, but low. One guess has been approx 160,000 colors | TIF |
| 48-bit RGB | 6 bytes per pixel | 2.81 trillion colors | TIF, PNG |
| File Type Property | JPG | TIF | PNG | GIF |
|---|---|---|---|---|
| 8-bit color (24-bit data) | X | X | X | |
| 16-bit color option | | X | X | |
| CMYK or LAB color | | X | | |
| Indexed color option | | X | X | X |
RGB (Red, Green, Blue) examples: (0,0,0) is Black, (128,128,128) is Middle Gray, (255,255,255) is White.
This is a summary of the very basic definitions of these topics; digital basics covers more in other areas, such as resolution.
JPG files are 24-bit color in an 8-bit mode. These are two terms with two different meanings. RGB 24-bit "color" has three channels, and its 8-bit "mode" means each Red, Green, and Blue channel is 8 bits (three channels totaling 24-bit "color"). The 8-bit "mode" means 256 values or shades in each channel, values 0..255 (2^8 = 256), for each of Red, Green, and Blue. This can create 256 × 256 × 256 = 16.77 million possible color combinations, called 24-bit "color". Today, the vast majority of our monitors, video cards, and printers are 8-bit devices, meaning 24-bit color, meaning they can show 16.77 million colors.
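The channel arithmetic is simple enough to verify directly:

```python
bits_per_channel = 8
values_per_channel = 2 ** bits_per_channel   # 256 values, 0..255
total_colors = values_per_channel ** 3       # three channels: R, G, B
print(total_colors)                          # 16777216, the "16.77 million"
```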
This 16.77 million is a calculated number of possibilities (most of these possibilities will not actually exist in our data), and our eye cannot differentiate all of them. It is said that tone values must be about 1% different for our eye to be able to separate them (Google "Just Noticeable Difference"). That's about 100 exponential steps, but RGB tones have 256 linear steps. For example, in the green samples at value 85 at bottom, a step of 2 might be detected as different if you try hard enough. The difference is 2.4%, and you might at least detect an edge between them. The color is dark, but that helps the percentage; doubled at 170, the percentage for the same step is half. Color is wonderful and offers much, but differences in adjacent numerical values can be very subtle. And our eyes and brain do strange things to us: I can sometimes imagine the solid middle gray background appears a bit lighter near the Black box and a bit darker near the White box, yet the background is exactly the same everywhere. The gradient strips show most of the 256 single RGB colors.
RGB colors are combinations of the three R, G, B primary colors. For example, in additive RGB light, red and green make yellow, and white is all three colors combined. Or RGB(238, 187, 68) means Red is 238 (of 255), Green is 187 (of 255), and Blue is 68 (of 255). There are 16.77 million possible combinations. That does Not mean any one image uses most of them, or that our eye can uniquely distinguish all of them, but they are the choices. There's a good color picker from Mozilla to play with to match the colors to the numbers. The Photoshop Color Picker is a similar tool.
Our human eye does not perceive all colors as equally bright. Of the three, we see Green as the brightest color, Red is next, and Blue is least bright (to us). Their perceived brightness (luminosity) is judged to be: Green 0.59, Red 0.3, Blue 0.11. That's the NTSC television formula for converting RGB to grayscale: RGB Luminance = 0.3 R + 0.59 G + 0.11 B (the three coefficients adding to 1.0). Luminance is the equivalent brightness as seen in grayscale.
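The NTSC formula above is easy to apply; here it is as a small sketch, using the RGB(238, 187, 68) example color from earlier:

```python
def luminance(r, g, b):
    """Grayscale brightness of an RGB color, per the NTSC weights in the text."""
    return 0.3 * r + 0.59 * g + 0.11 * b

print(round(luminance(238, 187, 68)))   # 189 -- the equivalent gray shade
print(round(luminance(255, 255, 255)))  # 255 -- white stays full brightness
```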
Grayscale mode is one channel of color, 256 shades of gray between black and white.
RGB(180, 180, 180) is neutral gray
RGB(188, 180, 180) has a pink cast
Middle Gray and Gamma: Many things have multiple definitions, interpreted in context, and Middle Gray is one of the most confusing. We call RGB(128,128,128) Middle Gray simply because it is mid-range in our [0..255] histogram (which contains gamma data values). However, in real-world linear scenes, our human eye is believed to perceive 18% linear as Middle Gray. 18% linear is value 46 (18% of 255), but that becomes value 117 in the gamma 2.2 data in our histogram. And one stop down from 255 is 50%, which is 128 linear, but 186 in gamma data (more or less 186, about 3/4 of full scale, but the camera is also doing other things too, like white balance, color profiles, and contrast). So, not realizing the difference, we often imagine 18% Middle Gray is somehow instead the 128 midpoint of the histogram. Gamma 117 is only about 1/8 EV different from 128, so gamma does make it fairly close, but for an entirely different concept and reason (not because 18% is the middle of anything digital).
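The 117 and 186 values above follow from simple gamma 2.2 encoding; here is a minimal sketch of that conversion (the function name is mine):

```python
def gamma_encode(linear_fraction, gamma=2.2):
    """Map a linear scene fraction (0..1) to its 8-bit gamma-encoded value."""
    return round(255 * linear_fraction ** (1 / gamma))

print(gamma_encode(0.18))  # 117 -- 18% linear "middle gray" in gamma data
print(gamma_encode(0.50))  # 186 -- one stop below 255 (50% linear)
```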
Grayscale images: RGB(110,110,110) becomes the single channel value 110 in grayscale. The single-channel one byte is 1/3 the size of color files with three bytes of RGB, and has no color cast. We can print this with only one black ink with no risk of color casts, but then it is a struggle to simulate shades of gray (dithering combines sparse black ink dots with blank white paper "dots" to create an averaged appearance of gray over the area of one pixel). Or we can print grayscale with CMYK inks combined into shades of gray. The few best printers for grayscale photos also offer a couple of lighter gray inks to improve dithering. To print color, inkjet color printers also have to dither their four colors of ink to simulate 16.7 million possible colors. Some may also offer lighter shades of cyan and magenta to improve dithering.
Indexed color: As is common practice, there is another definition of 8-bit "mode" (two different meanings):
8-bit "color mode" of three channel RGB is 24-bit "color" data, three 8-bit channels, 3 bytes per pixel, and up to 16.7 million possible color combinations (256 x 256 x 256). Good for photos.
8-bit Indexed "data mode" is 8-bit data, one 8-bit byte per pixel, which is an index value of 0..255, which selects one of 256 possible colors. Good for most graphics, but it can cause banding in color photo images.
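The indexed scheme can be sketched with a tiny hypothetical palette (these particular entries are mine, for illustration only):

```python
# A hypothetical 3-entry palette; a real indexed image can have up to 256 entries.
palette = [(255, 255, 255), (0, 0, 0), (238, 187, 68)]

# Indexed data: one byte per pixel, each byte an index into the palette.
pixels = bytes([0, 2, 2, 1])

# Decoding expands each 1-byte index into its full 3-byte RGB color.
rgb_pixels = [palette[i] for i in pixels]
print(rgb_pixels)
```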
Image size is always dimensioned in pixels.
Data size is dimensioned in bytes.
Data compression varies the stored data size too much for file size to have meaning about image size. Instead we say the size of our 24-megapixel image is 6000×4000 pixels. This "dimension in pixels" is the important parameter that tells us how we can use that image. The uncompressed file size may be 72 MB, or 13 MB, or other numbers if a JPG file, but that doesn't tell us anything about the image, only about storage space or internet speed. Most commonly, it is a 24-bit color photo image with 3 bytes of RGB data per pixel. That means any 24-megapixel camera takes RGB images of data size 72 million bytes (which is 68.7 binary MB, the actual uncompressed data size).
However, data compression techniques can make this data smaller while stored in the file, in some cases drastically smaller: maybe 72 MB goes into a 6 to 15 MB file with JPG compression. We can't state any exact size numbers, because when creating the JPG file (in camera or in editor), we can select different JPG Quality settings. Larger JPG files have better image quality, and smaller JPG files have less image quality. JPG files made too small are certainly not a plus; larger is better, and surely we want our camera images to be the best they can be. This compressed file size naturally varies some with image content too: images containing much fine detail everywhere (a tree full of small leaves) will be a little larger, and images with much blank content (sky or walls, etc.) will be noticeably smaller (better compressed). But JPG files are typically 1/4 to 1/12 of the image data size, though other extremes, both larger and smaller, are possible as an optional choice.
Then when the file is opened, the image data comes back out of the file uncompressed, at original size, with the original number of bytes and pixels in computer memory. Still the same counts, but JPG Quality differences affect the color accuracy of some of the pixels (detail is shown by the pixel colors), and heavy compression can add visible JPG artifacts, which we can learn to see.
Photo programs differ in how they describe JPG Quality. The software has a few options about how it is done, and Quality 100 is arbitrary (Not a percentage of anything); it NEVER means 100% quality. It is always JPG. But maximum JPG Quality at 100, and even Quality 90 (or 9 on a ten scale), should be pretty decent. I usually use Adobe Quality 9 for JPG pictures to be printed, as "good enough". Web pictures usually use less quality, because file size is so important on the web, and they are only glanced at once.
13 MB JPG from 68.7 MB data would be 19% original size (~1/5), and we'd expect fine quality (not exactly perfect, but extremely adequate, hard to fault).
6 MB JPG from 68.7 MB would be compression to 8% size (~1/12), and we would Not expect best quality. Perhaps acceptable for some casual uses, but anything smaller would likely be bad news.
Compromising down toward 1/8 size (12.5%) might be a typical and reasonable file size for JPG; however, a) we might prefer better results, and b) images with much blank area like sky or walls can compress exceptionally well, which is not an issue in itself. File size is not the final criterion; we have to judge how the picture looks. We can learn to see and judge JPG artifacts, and we would prefer not to see any of them in our images.
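The compression ratios in the two examples above work out like this (a quick check of the arithmetic, not an output of any JPG encoder):

```python
uncompressed_mb = 68.7          # 24-megapixel RGB image, binary MB

for jpg_mb in (13, 6):
    fraction = jpg_mb / uncompressed_mb
    print(f"{jpg_mb} MB JPG = {fraction:.1%} of {uncompressed_mb} MB "
          f"(about 1/{round(1 / fraction)})")
# 13 MB is about 18.9% (~1/5); 6 MB is about 8.7% (~1/11 to 1/12)
```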
But there are downsides with JPG, which is lossy compression: image quality can be lost (not recoverable). Selecting higher JPG Quality gives better image quality but a larger file; lower JPG Quality gives a smaller file but lower image quality. Don't cut off your nose to spite your face: large is Good regarding JPG, and the large one is still small. File size may matter when the file is stored, but image quality is important when we look at the image. Lower JPG quality causes JPG artifacts (lossy compression), which means the pixels may not all still be the original color (image quality suffers from visible artifacts). There are the same original number of bytes and pixels when opened, but the original image quality may not be retained if JPG compression was too great. Most other types of file compression (including PNG, GIF, and TIF LZW) are lossless, never any issue, but while impressive, they are not as dramatically effective (both vary greatly, perhaps 70% size instead of 10% size).
Note that 24-bit RGB data (like JPG) is three bytes per pixel, regardless of image size. See more detail.
The memory size of images is often shown in megabytes. You may notice a little discrepancy from the number you calculate with WxHx3 bytes. This is because (as regarding memory sizes) "megabytes" and "millions of bytes" are not quite the same units.
Memory sizes in terms like KB, MB, GB, and TB count in units of 1024 bytes for one K, whereas humans count thousands in units of 1000.
A million is 1000×1000 = 1,000,000, a power of 10, or 10^6. But binary units are normally used for memory sizes, powers of 2, where one kilobyte is 1024 bytes, and one megabyte is 1024×1024 = 1,048,576 bytes, or 2^20. So a number like 10 million bytes is 10,000,000 / (1024×1024) = 9.54 megabytes. One binary megabyte holds nearly 5% more bytes than one million, so there are about 5% fewer megabytes.
Each memory unit (KB, MB, GB, TB) is 1024 times the one below it. That is binary, and is how memory computes byte addresses. However, humans normally use units of 1000 for their stuff. Specifically, megapixels and the GB or TB disk drives we buy do correctly use 1000 units (until we format them, when Windows shows 1024). Memory chips (including SSD) necessarily use 1024. File sizes do not need 1024 units, but it is normal practice. Windows may show file size either way, depending on location (File Explorer normally shows KB, but DIR shows actual bytes).
We also see units of Mb as a rate of bandwidth. Small b is bits, as in Mb/second of bandwidth. Capital B is bytes of data, as in MB size. Eight bits per byte, so Mb = MB x 8.
Binary 1024 units are necessarily used for memory chips, but computer operating systems also like to arbitrarily use it for file sizes. All else (megapixels, purchased disk size, etc) use normal 1000 units.
Specifications for megapixels in digital cameras and disk drive size in gigabytes are both correctly advertised as multiples of decimal thousands: millions are 1000×1000, and giga is 1000×1000×1000. That is a smaller unit, and therefore a larger number, than MB or GB counted in units of 1024. This is NOT cheating; it's the same number of bytes either way, just different units. That is simply how humans count (in powers of 10), and million IS THE DEFINITION of Mega.
However, after formatting the disk, the computer operating system counts it in binary GB. The device manufacturer advertised it correctly, and formatting did NOT make the disk smaller; the units just changed (in computer lingo, 1K became counted as 1024 bytes instead of 1000 bytes). This is why we buy a 500 GB disk drive (sold in 1000s, the actual real count, the decimal way humans count), and it does mean 500,000,000,000 bytes, and we do get them all. But then we format it, and we see about 465 gigabytes of binary file space (using 1024). All precisely correct: 500 GB / (1.024 × 1.024 × 1.024) = 465.661 GB. But users who don't understand this switch assume the disk manufacturer somehow cheated them. Instead, no, the disk is just counted in decimal, the same way we humans count. No crime in that: mega does mean million, and we do count in decimal (powers of 10 instead of 2). It is the operating system that confuses us by calling it something different.
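The 500 GB / 465 GB "shrinkage" above is purely a unit conversion, which a couple of lines can confirm:

```python
advertised_bytes = 500 * 1000**3        # 500 GB as sold: decimal billions of bytes
binary_gb = advertised_bytes / 1024**3  # what the OS reports after formatting

print(round(binary_gb, 3))              # 465.661 -- same bytes, different units
```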
However, memory chips (also including SSD, Compact Flash, SD cards, and USB flash sticks, which are all memory chips) are different: construction requires the use of binary kilobytes (1024), megabytes (1024×1024), or gigabytes (1024×1024×1024). This is because each added address line exactly doubles the size: 8 bits is 256, 9 bits is 512, 10 bits is 1024. But also (for no good reason) file sizes are usually stated in binary 1024 units. Doing this for file sizes is debatable, but there are good and necessary technical reasons for memory chips to use binary numbers, because each address bit is a power of two. The sequence 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024... makes it extremely impractical (simply unthinkable) to build a 1000-byte memory chip; it simply would not come out even. The binary address lines count 0 to 1023, so it is necessary to add the other 24 bytes to fill it up. However, there is no good technical reason today for file sizes in binary; it is just an unnecessary complication, but counting files in binary 1024 units is still done. Mindless convention, perhaps. If we have a file of actual size exactly 20,002 bytes (base 10), the computer operating system will call it 19.533 KB (base 2).
Binary: In base 10, we know the largest numeric value we can represent in 3 digits is 999. That's 9 + 90 + 900 = 999. When we count by tens, 1000 requires 4 digits: 10^3 = 1000, which is one more than three digits can hold. Binary base 2 works the same way: the largest number possible in 8 bits is 255, because 2^8 = 256 requires 9 bits. So 1 + 2 + 4 + 8 + 16 + 32 + 64 + 128 = 255. And 16 bits can contain addresses 0..65535; 2^16 = 65536 is one more than 16 bits can address.
You can see that 2^10 is 1024, which humans see as close to 1000 decimal. Memory chips have address lines which select the memory address of one memory byte. Ten lines address 1024 bytes of memory, 0..1023. We cannot stop building the memory at 1000 bytes, because valid 10-bit addresses do access 1000..1023, which would fault (no result) if we had stopped there. We might instead stop at nine lines, which address values 0..511.
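These bit-count limits are easy to verify directly:

```python
# Largest value representable in n bits is 2**n - 1.
print(2**8 - 1)                      # 255: the 8-bit maximum
print(sum(2**i for i in range(8)))   # 255 again: 1+2+4+8+16+32+64+128
print(2**10)                         # 1024: ten address lines, addresses 0..1023
print(2**16 - 1)                     # 65535: the 16-bit address maximum
```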
Units of 1000 are extremely handy for humans; we can convert KB and MB and GB in our head by simply moving the decimal point. Units of 1024 are not so easy, but they came about in the early computer days, back when 1024 bytes was a pretty large chip. We had to count bytes precisely to ensure the data would fit into the chip, and the 1024 number was quite important for programmers. That's not still true today; chips are huge, and exact counts are unimportant now. Hard drives dimension size in units of 1000, but our operating systems still like to convert file sizes to 1024 units. There is no valid reason why today...
As a computer programmer, back in the day decades ago, I had the job of modifying a boot loader in a 256 byte PROM. It was used in 8080 test stations that booted from a console cassette tape, and I had to add booting from a central computer disk if it was present. I added the code, but it was too large. After my best tries, it was still 257 bytes, simply one byte too large to fit. It took some time and dirty tricks to make it fit. Memory size was very important then, but today, our computers have several GB of memory and disk, and the precise data sizes really matter little. Interesting color, at least for me. :)
The definition of the unit prefix "Mega" absolutely has always meant millions (decimal factors of 1000×1000), and of course it still means million; it does NOT mean 1024×1024. However, memory chips are necessarily dimensioned in binary units (factors of 1024), and they simply (incorrectly) appropriated the terms kilo and mega years ago... so that's special, but we do use it that way. In the early days, when memory chips were tiny, it was useful to think of file sizes in binary, when they had to fit. Since then, though, chips have become huge, and we don't sweat a few bytes now.
And with the goal of preserving the actual decimal meanings of Mega and Kilo, new binary prefixes Ki, Mi, and Gi (defined by IEC, not SI) were created for the binary powers of 1024, but they seem ignored; they have not caught on. So this still complicates things today. Memory chips are binary of course, but there is no real reason why our computer operating systems still do this for file sizes. Humans count in decimal powers of 10.
Note that you will see different numbers in different units for the same file size dimension:
The number we need to know is the image size in pixels. The image size in bytes is then (width in pixels) × (height in pixels) × 3 (for 3 bytes per pixel in normal 24-bit color). That is the real decimal data size in bytes. For binary numbers, divide the bytes by 1024 once for KB, or twice for MB. To convert binary units back to the real decimal count, multiply by 1.024 once for KB, twice for MB, or three times for GB.
Scanning any 6x4 inch photo will occupy the amounts of memory shown in the table below. I hope you realize that extreme resolution rapidly becomes impractical.
When people ask how to fix memory errors when scanning photos or documents at 9600 dpi, the answer is "don't do that", unless you have 8 gigabytes of memory, an actual 9600 dpi scanner, and a special reason. It is normally correct to scan at 300 dpi to reprint at original size (600 dpi can help line art scans, but normally not color or grayscale photos).
Notice that when you increase resolution, the size formula above multiplies the memory cost by that resolution number twice, in both width and height. The memory cost for an image increases as the square of the resolution. The square of, say, 300 dpi is a pretty large number (more than double the square of 200).
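The square law is easy to demonstrate with the same size formula (the function name is mine):

```python
def scan_bytes(width_in, height_in, dpi, bytes_per_pixel=3):
    """Uncompressed data size of a scan: resolution enters twice."""
    return (width_in * dpi) * (height_in * dpi) * bytes_per_pixel

# Doubling the dpi quadruples the memory cost:
ratio = scan_bytes(6, 4, 600) / scan_bytes(6, 4, 300)
print(ratio)   # 4.0
```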
Scan resolution and print resolution are two very different things. The idea is that we might scan about 1x1 inch of film at say 2400 dpi, and then print it 8x size at 300 dpi at 8x8 inches. We always want to print photos at about 300 dpi, greater scan resolution is only for enlargement purposes.
The enlargement factor is Scanning resolution / printing resolution. A scan at 600 dpi will print 2x size at 300 dpi.
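The enlargement-factor rule above, as a one-line sketch:

```python
def enlargement_factor(scan_dpi, print_dpi=300):
    """How much larger the scan prints, at the usual 300 dpi print resolution."""
    return scan_dpi / print_dpi

print(enlargement_factor(600))    # 2.0 -- a 600 dpi scan prints 2x size
print(enlargement_factor(2400))   # 8.0 -- 1 inch of film prints at 8 inches
```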
Emphasizing, unless it is small film to be enlarged, you do not want a high resolution scan of letter size paper. You may want a 300 dpi scan to reprint it at original size.
When we double the scan resolution, memory cost goes up 4 times. Multiply resolution by 3 and the memory cost increases 9 times, etc. So this seems a very clear argument to use only the amount of resolution we actually need to improve the image results for the job purpose. More than that is waste. It's often even painful. Well, virtual pain. <grin>