
What an absurdly whataboutism-filled response. Meanwhile, Windows has been doing it the correct way for 20 years or more and never has to kill a random process just to keep functioning.


So you're saying the correct way to support fork() is to... not support it? This seems pretty wasteful in the majority of scenarios.

For example, it's a common pattern in many languages and frameworks to preload and fully initialize one worker process and then just fork that as often as required. The assumption there is that, while most of the memory is theoretically writable, practically, much of it is written exactly once and can then be shared across all workers. This both saves memory and the time needed to uselessly copy it for every worker instance (or alternatively to re-initialize the worker every single time, which can be costly if many of its data structures are dynamically computed and not just read from disk).
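
To make that concrete, here is a minimal C sketch of the preload-then-fork idea (the names and sizes are my own, hypothetical ones): the parent pays the initialization cost once, and each forked worker reads the resulting structure through copy-on-write pages without ever duplicating it. Real preforking servers also handle signals and restart dead workers, but the memory behaviour is just this: pages stay shared until somebody writes to them.

    /* Hypothetical prefork sketch: initialize once, fork workers that only read. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define NWORKERS   4
    #define TABLE_SIZE (1 << 20)

    static double *table;   /* large, written once by the parent, only read by workers */

    static void init_table(void) {
        table = malloc(TABLE_SIZE * sizeof *table);
        for (size_t i = 0; i < TABLE_SIZE; i++)
            table[i] = (double)i * 0.5;          /* stand-in for expensive setup */
    }

    static void worker(int id) {
        double sum = 0;
        for (size_t i = 0; i < TABLE_SIZE; i++)
            sum += table[i];                     /* reads shared CoW pages, no copy made */
        printf("worker %d: sum = %f\n", id, sum);
        _exit(0);
    }

    int main(void) {
        init_table();                            /* pay the initialization cost once */
        for (int i = 0; i < NWORKERS; i++) {
            pid_t pid = fork();
            if (pid == 0)
                worker(i);                       /* child inherits table without copying */
            if (pid < 0) {
                perror("fork");
                return 1;
            }
        }
        for (int i = 0; i < NWORKERS; i++)
            wait(NULL);                          /* reap the workers */
        return 0;
    }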

How do you do that without fork()/overprovisioning?

I'm also not sure whether "giving other examples" fits the bill of "whataboutism", as I'm not listing other examples of bad things to detract from a bad thing under discussion – I'm claiming that all of these things are (mostly) good and useful :)


> How do you do that without fork()/overprovisioning?

You use threads. The part that fork() would have kept shared is still shared; the part that would have diverged is allocated inside each thread independently.

And if you find dealing with locking undesirable, you can minimize it with some sort of message system, like Qt signals.
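
For comparison, here is a rough pthreads sketch of the same setup as above (again with hypothetical names): the table stays shared simply because all threads live in one address space, only the per-worker result is private, and since nothing writes to the table, no locking is needed for it. Build with cc -pthread.

    /* Hypothetical thread-based sketch: shared read-only table, private per-thread state. */
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define NTHREADS   4
    #define TABLE_SIZE (1 << 20)

    static double *table;                        /* shared, read-only after init */

    struct worker_arg {
        int    id;
        double result;                           /* per-thread private state */
    };

    static void *worker(void *p) {
        struct worker_arg *arg = p;
        double sum = 0;
        for (size_t i = 0; i < TABLE_SIZE; i++)
            sum += table[i];                     /* shared data: no copy and no lock,
                                                    because it is never written */
        arg->result = sum;
        return NULL;
    }

    int main(void) {
        table = malloc(TABLE_SIZE * sizeof *table);
        for (size_t i = 0; i < TABLE_SIZE; i++)
            table[i] = (double)i * 0.5;          /* initialize once, before starting threads */

        pthread_t tids[NTHREADS];
        struct worker_arg args[NTHREADS];
        for (int i = 0; i < NTHREADS; i++) {
            args[i].id = i;
            pthread_create(&tids[i], NULL, worker, &args[i]);
        }
        for (int i = 0; i < NTHREADS; i++) {
            pthread_join(tids[i], NULL);         /* joining replaces wait()/reaping */
            printf("worker %d: sum = %f\n", args[i].id, args[i].result);
        }
        return 0;
    }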


> the part that would have diverged is allocated inside each thread independently

That’s exactly my criticism of that approach: It’s conceptually trickier (fork is opt-in for sharing; threads are opt-out/require explicit copying) and requires duplicating all that memory, whether threads end up ever writing to it or not.

Threads have their merits, but so do subprocesses and fork(). Why force developers to use one over the other?


> Threads have their merits, but so do subprocesses and fork(). Why force developers to use one over the other?

I used to agree with you, but fork() seems to have definitely been left behind. It has too many issues.

* fork() is slow. This automatically makes it troublesome for small background tasks.

* Passing data is inconvenient. You have to futz around with signals, return codes, socketpair, or shared memory, and it's a pain to set up (the sketch at the end of this comment gives a taste). Most of what you want to send is messages, but what UNIX gives you is streams.

* Managing it is annoying. You have to deal with signals, reaping, and doing a bunch of state housekeeping to keep track of what's what. A signal handler behaves like an annoying, really horribly designed thread.

* Stuff leaks across easily. A badly designed child can accidentally feed junk into your shared file handles.

* It's awful for libraries. If a library wants to use fork() internally, that'll easily conflict with your own usage.

* It's not portable. Using fork() automatically makes your stuff UNIX only, even if otherwise nothing stops it from working on Windows.

I think the library one is a big one -- we need concurrency more than ever, but under the fork model, different parts of the code that are unaware of each other will step on each other's toes.
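
To illustrate the "passing data" point with a sketch of my own (not taken from any particular codebase): even the simplest parent/child exchange over a socketpair means layering manual message framing on top of the stream, and then calling waitpid() to reap the child.

    /* Hypothetical sketch of fork() + socketpair() message passing. */
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        int sv[2];
        if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0) {
            perror("socketpair");
            return 1;
        }

        pid_t pid = fork();
        if (pid == 0) {                          /* child: send one framed message */
            close(sv[0]);
            const char msg[] = "result from worker";
            uint32_t len = sizeof msg;
            write(sv[1], &len, sizeof len);      /* manual framing: length prefix... */
            write(sv[1], msg, len);              /* ...then the payload */
            close(sv[1]);
            _exit(0);
        }

        close(sv[1]);                            /* parent: read the frame back */
        uint32_t len;
        char buf[256];
        read(sv[0], &len, sizeof len);           /* short-read handling omitted for brevity */
        read(sv[0], buf, len);
        printf("parent got: %s\n", buf);
        close(sv[0]);
        waitpid(pid, NULL, 0);                   /* reaping: yet more housekeeping */
        return 0;
    }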



