Some people use decimal prefixes but mean the binary values: kilo is 1000, but in computing it is most often used to mean 1024. HDD manufacturers express size in decimal (1 TB = 10^12 bytes), so people who think in multiples of 1024 will calculate fewer gigabytes than they expected.
HDD manufacturers often report a storage capacity that is larger than the effective capacity, since some capacity is consumed by formatting information or error correction. The same situation exists with protocols (like gigabit Ethernet).
I think modern HDD and SSD manufacturers have been pretty honest about the actual capacity of their drives. They have no way of knowing what kind of filesystem you will be putting on the drive and how much overhead it has, or whether you might be putting it in a RAID 1 array (mirroring, which divides the effective capacity per drive by the number of drives in the array).
The number of bytes they specify is the actual number of bytes available to your operating system. It seems like a pretty fair measurement.
The 1000 vs. 1024 thing is an unfortunate point of confusion. Terms like "kibibyte" were an attempt to solve it, but they never took off. (If you asked me which number a kilobyte or kibibyte was, I would have to go look it up!)
Even if we did use the kilo vs. kibi terms, HDD and SSD manufacturers are the ones who are getting it right. As the article you linked notes:
> 1 kibibyte (KiB) = 2^10 bytes = 1024 bytes
> The kibibyte is closely related to the kilobyte. The latter term is often used in some contexts as a synonym for kibibyte, but formally refers to 10^3 bytes = 1000 bytes, as the prefix kilo is defined in the International System of Units.
The same applies as you go up in the units. One mega-anything is 1,000,000 of those things, giga- is 1,000,000,000, and tera- is 1,000,000,000,000 things.
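To make the difference concrete, here is a minimal Python sketch (my own illustration, not part of any standard) that prints each SI prefix alongside its IEC counterpart and the percentage gap between them:

```python
# Compare SI (decimal) prefixes with their IEC (binary) counterparts.
SI = {"kilo (kB)": 10**3, "mega (MB)": 10**6,
      "giga (GB)": 10**9, "tera (TB)": 10**12}
IEC = {"kibi (KiB)": 2**10, "mebi (MiB)": 2**20,
       "gibi (GiB)": 2**30, "tebi (TiB)": 2**40}

for (si_name, si), (iec_name, iec) in zip(SI.items(), IEC.items()):
    # The gap widens at each step: about 2.4% at kilo, about 10% at tera.
    print(f"{si_name} = {si:>18,}  vs  {iec_name} = {iec:>19,}  "
          f"({iec / si - 1:+.2%})")
```

Note how the discrepancy compounds: a harmless 2.4% at the kilo level grows to nearly 10% by the time you reach tera.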
So a one-terabyte HDD or SSD should have a true capacity of 1,000,000,000,000 bytes, before any operating system or RAID overhead. Of course its actual physical capacity has to be higher, to support remapping of failing sectors or flash blocks and such. But that's all hidden by the drive controller.
I think it's the memory people who got this wrong, by co-opting "kilobyte" to mean 1024 bytes, contrary to the standard definition. It was a handy coincidence of terminology at the time (memory sizes naturally fall on powers of two), but the error was amplified as we moved up into larger multiples of that size.
And then the operating system and utility people (or many of them) completely messed up by using the power-of-two definitions for disk/flash storage instead of the correct power-of-ten definitions.
This is why, for example, every single Amazon listing of a 1TB drive (HDD, SSD, flash card) which honestly provides the correct 1,000,000,000,000 bytes of storage to the OS will have at least one review complaining:
> Claims to have 1TB but only has 931GB as reported by my operating system.
The "missing" 69GB isn't due to formatting or any misdoing on the part of the drive manufacturer, it's because the OS is using the wrong units.
The upshot is that hard drive manufacturers need to explicitly print their definition of a "gigabyte". IMHO, it's not a question of right or wrong, just a difference in convention.
Your second line is right; your first isn't really. Formatting only reserves about 100MB, and error-correction bits are completely hidden and not marketed at all. It's the roughly 10% difference between the decimal "TB" and the binary "TB" (i.e., a TiB) that really matters.