
Why are hard drives never as large as advertised?


From all the hard drives I have bought, they never seem to be as large as the advertised size; from 320 GB down to 290 GB, from 500 GB down to 450 GB, etc. Is there a technical reason for this?

Asked by: Guest | Views: 292
Total answers/comments: 5
Guest [Entry]

"The technical reason is that hard drive manufacturers sell capacity in decimal (metric) units, so 1 GB = 1,000,000,000 bytes. Computers, however, measure drive size in powers of 2: 1 GiB = 1,024 MiB, 1 MiB = 1,024 KiB, and so on. This means 1 GiB = 1,073,741,824 bytes, a difference of 73,741,824 bytes (roughly 7.4%).

So when you install a 1 GB drive (for the sake of example), the OS sees only about 0.93 GiB, and this is the cause of the discrepancy.

(If you've never seen the abbreviation GiB before, it's a newer notation adopted to denote powers of 1,024 as opposed to powers of 1,000. However, most operating systems report GiB as GB, confusing the issue even further.)"
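The conversion described above is easy to check. Below is a minimal Python sketch (the function name is my own, not from any library) that converts an advertised decimal capacity into the binary-prefix figure an operating system would report:

```python
# Convert an advertised (decimal) drive capacity into the binary-prefix
# figure that most operating systems display. Plain arithmetic only.

def advertised_to_reported_gib(advertised_gb: float) -> float:
    """Size in GiB for a drive sold as `advertised_gb` decimal GB."""
    total_bytes = advertised_gb * 10**9   # manufacturer: 1 GB = 10^9 bytes
    return total_bytes / 2**30            # OS: 1 GiB = 2^30 bytes

for size in (320, 500, 1000):
    print(f"{size} GB advertised -> {advertised_to_reported_gib(size):.2f} GiB reported")
# 320 GB advertised -> 298.02 GiB reported
# 500 GB advertised -> 465.66 GiB reported
# 1000 GB advertised -> 931.32 GiB reported
```

Note that the roughly 7% gap at the gigabyte scale grows with each prefix step, which is why the shortfall looks worse on larger drives.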
Guest [Entry]

"When a drive manufacturer creates a 500 GB capacity drive, it does have a capacity of 500,000,000,000 bytes, and they are sure going to advertise it as such. Computers, being binary devices, prefer powers of two, with a different set of prefixes, so that is what they use for storage space measurement:

1 kibibyte = 2^10 bytes, 1 mebibyte = 2^20 bytes, 1 gibibyte = 2^30 bytes, etc.

For instance, I have a 300 GB drive attached to this machine and Windows displays the following for the capacity:

Capacity:    300,082,855,936 bytes    279 GB

300,082,855,936 / 2^30 = ~279. What it is actually showing you is the drive's size in gibibytes, not gigabytes. So, it should read:

Capacity:    300,082,855,936 bytes    279 GiB

One might say this is a flaw in Windows, but there is no universally followed standard for storage-capacity prefixes (the IEC binary prefixes exist, but adoption is patchy). There is lots more good info, including a section on "Consumer confusion", in this Wikipedia article."
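The division in the answer above can be reproduced directly. Here is a quick Python check using the byte count quoted from the Properties dialog:

```python
# Reproduce the Windows capacity display from the byte count above:
# the same number of bytes yields ~300 decimal GB but ~279 GiB.

capacity_bytes = 300_082_855_936       # value shown in the Properties dialog

gib = capacity_bytes / 2**30           # binary gibibytes (what Windows computes)
gb = capacity_bytes / 10**9            # decimal gigabytes (what the box advertises)

print(f"{gib:.2f} GiB")  # 279.47 GiB -> Windows truncates this and labels it "279 GB"
print(f"{gb:.2f} GB")    # 300.08 GB
```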
Guest [Entry]

"They actually usually are as large as advertised, but:

They always (as far as I know) use 1,000 instead of 1,024 when converting bytes to KB and so on.
A small amount of space is used by the file system to keep track of everything.

There may be other reasons too, but those are the major ones I know about."
Guest [Entry]

"In the old days of computing, every calculation was expensive (in the performance sense). Programmers used all kinds of shortcuts to do as few calculations as possible. One of those tricks was to store the year part of a date as only two digits, which ultimately led to the Y2K problem.
Another trick was to define 1k (kilo) to mean not 1,000, as everyone else in the civilised world does, but 1,024 instead. Because 1,024 is a power of two, this let them cut a few corners when doing size calculations.
That habit stuck and is still in use today, even though computation has become so much cheaper.

The hardware manufacturer is giving you the proper size, where K = 1,000, M = 1,000,000 and G = 1,000,000,000. It's the software that's giving you false values.

Software manufacturers are changing their habits nowadays. OS X, for example, now shows the proper size."
Guest [Entry]

"This should clear up other comments that assume there is a standard metric equivalent when referring to hard drive sizes.

No, we do not use the metric system for data, exactly. I would think of it as “meta-metric” — units that are “next to” actual metric units.

Metric prefixes WERE borrowed to express data sizes: kilo-, mega-, giga-, tera-, peta-, etc.

However, SI has no unit for “bit” or “byte”.

And the smaller prefixes, milli-, micro-, and nano-, were also borrowed, though applied not to data but to “processors”. (“Minicomputers” were smaller computers compared to mainframes; “microprocessors” and “microcomputers” were much smaller than minicomputers. In neither case was the 1000:1 ratio implied.)"