Hacker News

I can't possibly hope to change your mind, but stability issues with the union filesystem drivers in Docker (part of which wasn't even Docker's problem) and Kubernetes persistent volumes are two very different things. Cassandra crashing while running standalone on a host is no different from Cassandra crashing while running in a container backed by a PV.

Moreover, most Linux distros have switched to overlay2 as the default storage driver. If you are running the latest version of RHEL/CentOS/Fedora/Ubuntu, that is most likely the driver you are using.



Don't get me wrong, I know it's not a bug in Kubernetes, it's a bug in the filesystem. Kubernetes is only as stable as its weakest part, and the weakest part is the container engine (Docker and everything underneath it).

Containers require volumes/filesystems to run, and some implementations are buggy as fuck.

Docker abandoned CentOS 6 many years ago. Whether or not they ever stated it officially, the last Docker package and the kernel/drivers there are unstable. It's a similar story on some other distributions.

It wasn't production-ready at all back then, and it's still not a good idea to containerize databases now. Besides bugs that come and go, there are other challenges around lifecycle, performance, and permissions that are not trivial to deal with.


>"I can't possibly hope to change your mind but stability issues with union filesystem driver in docker(part of it was not even docker's problem)"

Can you outline what those stability issues are/were? Was the non-Docker part of the problem kernel related? Genuinely curious.


See RHEL and Debian sections: https://thehftguy.com/2017/02/23/docker-in-production-an-upd...

The filesystem drivers are buggy as fuck. You would experience kernel panics on Debian Jessie (overlayFS), or containers and the Docker daemon hanging on CentOS 6 (devicemapper). The fix in both cases was a reboot.

You might not notice it if you barely use Docker, but it becomes very noticeable at scale. I was briefly consulting at a major web company that deployed its web services to 5-20 nodes, daily. On every service deployment, up to 3 nodes would die.


It certainly is a very different thing. Local SSD or remote drive? That matters a lot for Cassandra.


Kubernetes supports local volumes. With GKE you get local SSDs.
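For anyone unfamiliar, a local volume in Kubernetes is a PersistentVolume pinned to one node via nodeAffinity, so the pod is scheduled to the machine that actually owns the disk. A minimal sketch (the name, path, hostname, and storage class below are illustrative, not from this thread; the 375Gi figure matches the fixed size of a GKE local SSD):

```yaml
# Hypothetical example: a PersistentVolume backed by a local SSD on one node.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cassandra-local-pv
spec:
  capacity:
    storage: 375Gi            # GKE local SSDs come in fixed 375 GiB units
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-ssd
  local:
    path: /mnt/disks/ssd0     # where the local SSD is mounted on the node
  nodeAffinity:               # required for local volumes: pins the PV to its node
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - gke-node-1
```

The nodeAffinity section is what makes this "local": the scheduler will only place a pod claiming this PV onto that specific node, which is also why the data doesn't survive the node going away.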


It doesn't make sense to use GKE for this. Eventually you will just have a bunch of VMs that run only your DB (since you need to avoid interference from other workloads), and there is no support for multi-DC mode... And what are the benefits? Restarting SQL or Cassandra is not a cheap operation and can cause large data migrations.



