
One VM using far more disk space than it's supposed to can potentially corrupt data in all the other VMs on that host. If you're just spinning VMs up and down for testing, you probably won't run into that issue, but on a production system it could cause massive downtime.


Virtual machine disk space (e.g. Xen, Linode, AWS EC2, or similar) does not work this way. Each VM gets a dedicated amount of disk space allocated to it; they don't all share a pool of free space.


Yes they do, with the "dynamic allocation" the parent comment mentions. If a VMware datastore has 1 TB total and you put VMs in it with dynamically expanding disks, they all share the same 1 TB of free space, and they will fill it if you've overprovisioned their maximum sizes and they all try to use them at the same time.

And if you haven't overprovisioned their maximum sizes, you may as well skip dynamic allocation and use fixed-size disks.
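The arithmetic of overprovisioning is easy to sketch. A toy model (the disk sizes and usage numbers below are made up for illustration, not taken from any real datastore):

```python
# Toy model of thin-provisioned disks sharing one datastore.
datastore_gb = 1024  # 1 TB of real free space

# Three VMs, each promised a 512 GB dynamically expanding disk:
# 1536 GB promised against 1024 GB of real space -> overprovisioned.
provisioned_gb = [512, 512, 512]
used_gb = [200, 200, 200]  # current actual usage is fine: 600 GB

overcommitted = sum(provisioned_gb) > datastore_gb
headroom_gb = datastore_gb - sum(used_gb)

print(overcommitted)  # True: the guests collectively believe they have
print(headroom_gb)    # more space than actually exists; only 424 GB is real
```

If every guest grows toward its promised maximum, the datastore fills while each guest still thinks it has free space, which is exactly the failure mode described above.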

Even then, snapshots will grow forever and fill the space, and then you'd better hope you have a "spacer.img" file you can delete from the datastore, because you can't remove snapshots when the disk is full, and you're stuck. It's the same problem, at a lower level.


I see, a VMware feature, thanks for clarifying. I suppose it's a nice idea in theory, but you'd have to be crazy to use that in production, or for any workload that you care about. It would just be a ticking time bomb.


Hyper-V can do that too, and so can you under Linux: it's called thin provisioning, implemented via sparse files or the dm-thin device-mapper target. Professional SANs also let you overallocate the total size of the iSCSI volumes they offer.
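Sparse files are easy to see for yourself. A minimal Python sketch (assumes a filesystem that supports sparse files, e.g. ext4 or XFS; the 1 GiB size is arbitrary):

```python
import os
import tempfile

# Create a "1 GiB" sparse file: seek far past EOF and write a single byte.
# Filesystems with sparse-file support only allocate blocks actually written,
# so the apparent size and the on-disk size diverge wildly.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.seek(1024**3 - 1)  # apparent size: 1 GiB
    f.write(b"\0")

st = os.stat(path)
apparent = st.st_size          # 1 GiB as far as readers are concerned
actual = st.st_blocks * 512    # real blocks allocated: a few KiB at most
print(apparent, actual)
os.remove(path)
```

This is the same promise-more-than-you-have trick as a dynamically expanding VM disk: the file claims 1 GiB, the filesystem has committed almost nothing, and the gap only becomes a problem when someone actually writes the data.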

Yes, I've seen that time bomb go off on multiple occasions. Never on my watch though.



