
History has shown that tons of Linux networking scalability and performance contributions have been rejected by the gatekeepers/maintainers. The upstream kernel remains unsuitable for datacenter use, and all the major operators bypass or patch it.


All the major operators sometimes bypass or patch it for some use cases. For others they use it as is. For others still, they laugh at you for taking the kind of drugs that make one think any CPU is sufficient to handle networking in code.

Networking isn't a one size fits all thing - different networks have different needs, and different systems in any network will have different needs.

Userland networking is great until you start needing to deal with weird flows or unexpected traffic. Then you either need something more robust, and your performance starts dropping because you added a bunch of branches to your code, or you switch over to a kernel implementation that handles those cases. I've seen a few cases of userland networking being slower than just using the kernel, and being kept anyway, because sometimes what you care about is control over the packet lifecycle more than raw throughput.
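To make that concrete, here's a minimal sketch (assuming Linux and CAP_NET_RAW) of pulling raw frames off the wire with an AF_PACKET socket. Real bypass stacks like DPDK or AF_XDP go much further than this toy, but it shows where all those extra branches end up living once you own the frames yourself:

    /* Toy sketch: receive raw Ethernet frames via AF_PACKET on Linux.
       Not DPDK/AF_XDP; just illustrates "owning the packet lifecycle". */
    #include <stdio.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <linux/if_ether.h>

    int main(void) {
        /* ETH_P_ALL: hand us frames of every protocol. The kernel still
           sees them, but our process gets the raw bytes to deal with. */
        int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
        if (fd < 0) { perror("socket (needs CAP_NET_RAW)"); return 1; }

        unsigned char frame[2048];
        for (int i = 0; i < 10; i++) {            /* grab a few frames */
            ssize_t n = recv(fd, frame, sizeof frame, 0);
            if (n < 0) { perror("recv"); break; }
            /* Every parsing decision from here on is ours; this is
               where the branches for weird flows start accumulating. */
            printf("frame of %zd bytes, ethertype 0x%04x\n",
                   n, (frame[12] << 8) | frame[13]);
        }
        close(fd);
        return 0;
    }

Everything after recv() is on you: ARP, fragmentation, TCP state, the lot. That robustness is exactly what the kernel stack is selling.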

Kernels prioritize robust network stacks that handle a lot of cases well enough. Different implementations handle different scenarios better; there's plenty of very high-performance networking done with vanilla Linux and vanilla FreeBSD.


Do you have links on this? I've not heard anything about it.


I believe they're paraphrasing the Snap paper, and also that they're extrapolating too far from it.


Look who you're arguing with.



