
For the HDD route, I'd probably prefer lower density storage, heaps of redundancy, and a good spread of HDD manufacturers.

A quick Google of HDD MTBF suggests that 1 million hours (over a century) is wildly optimistic, and that the typical failure rate is 2-4% per year, possibly as high as 13%. If e is the annual failure rate (as a fraction of 1), and assuming it is constant over time and independent between drives, then the chance that any one drive survives 50 years is:

    (1 - e)^50
So the chance that any one drive will fail within 50 years is:

    1 - (1 - e)^50
With n mirrors (assuming a reliable checksum to verify the data if only a single mirror survives), the chance of all of them failing, f, is:

    f = (1 - (1 - e)^50)^n

    log(f) = n log(1 - (1 - e)^50)

    n = log(f) / log(1 - (1 - e)^50)
So, for a reliability of 99.999% over 50 years, and hoping you can keep the individual yearly failure rate at 3% (so f = 0.00001 and e = 0.03), n would need to be at least 47.
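If you want to play with the numbers, here's a quick Python sketch of the same calculation (the function name and the rounding up with math.ceil are mine; the inputs are the figures above):

    import math

    def mirrors_needed(e, years, f):
        # Chance that a single drive fails at some point during the span,
        # assuming a constant, independent annual failure rate e.
        p_fail = 1 - (1 - e) ** years
        # Smallest n with p_fail**n <= f, i.e. n >= log(f) / log(p_fail).
        return math.ceil(math.log(f) / math.log(p_fail))

    print(mirrors_needed(e=0.03, years=50, f=0.00001))  # -> 47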


I think you're making the improper assumption that the drives would need to be left on during this entire time. I don't believe they would.

I don't know whether the bits eventually lose their magnetism over time. If they do, you may need to spin up the drives every so often and copy data between them to make sure it's still "fresh", but I seriously doubt they'd need to be left on and spinning for the entire 50-year span.


I'm not assuming that the disks are spinning all the time; I don't know the failure rates of drives left unpowered, so I used the powered-on rate instead. A Google search suggests that the failure rate for unpowered hard drives is high (sticking heads, etc.), and since drives are not designed for long unpowered storage, it's probably higher than for powered drives.


This is also what I've read. Anecdotally, I've had a disproportionate number of drives fail on power-up.



