
Async is just modern cooperative multitasking, and just like in the 90s, it's easy to accidentally lock up the whole system.
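A minimal sketch (Node.js; the names are mine, not from the thread) of the lockup being described: one task that never yields starves everything else on a cooperative scheduler.

```javascript
let heartbeats = 0;
const timer = setInterval(() => { heartbeats++; }, 10);

// Synchronous busy loop: never awaits, never returns to the event loop.
function busyWork(ms) {
  const end = Date.now() + ms;
  while (Date.now() < end) { /* spin */ }
}

busyWork(200);

// 200 ms have passed, yet the 10 ms interval fired zero times:
// its callback can only run once we yield back to the event loop.
console.log(`heartbeats: ${heartbeats}`); // prints "heartbeats: 0"

clearInterval(timer);
```

The same shape in a request handler blocks every other in-flight request for the duration, which is the cooperative-multitasking failure mode the comment is pointing at.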


Yeah, I remember just how nice it was going from cooperative to preemptive multitasking -- the general view was that anything that only did cooperative MT was just a toy.

I'd bet that the orders of magnitude of speed from Moore's law did in CMT by making PMT doable without a huge speed hit.

Nothing I've seen from async is cleaner, easier to maintain, or better from a cognitive load POV. It's just more efficient for certain types of loads because you're being a consenting adult and not breaking things.
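"Being a consenting adult" here means voluntarily handing control back to the scheduler. A sketch of that discipline in Node (the function name and the 1000-item chunk size are arbitrary choices of mine, not anything from the thread):

```javascript
// Process a large array without monopolizing the event loop: every
// 1000 items, yield via setImmediate so pending timers and I/O get a turn.
async function processAll(items, handle) {
  for (let i = 0; i < items.length; i++) {
    handle(items[i]);
    if (i % 1000 === 0) {
      // Cooperative yield: resume on the next event-loop turn.
      await new Promise((resolve) => setImmediate(resolve));
    }
  }
}
```

Nothing enforces this; forget the yield and you're back to the locked-up system above, which is exactly the cognitive-load complaint.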


CMT for concurrency + DLP for parallelism is a hell of a lot more scalable in LOC than the unsafe hammer of preemption. We keep unsafe parallelism to only tiny tiny tiny GPU code snippets.

Node's concurrency and parallelism problems don't require deep language changes, just runtime ones and some ergonomics to get closer to Go -- and it's not far off, IMO, because of the last few years of work on async, safe (fresh-environment) eval, and JSON/buffer messages.

More exciting would be something like Apache Arrow / Berkeley's 'Plasma', but that stuff is still more exploratory.


For folks looking for a good primer on the terminology and history of cooperative vs preemptive multitasking, I paused reading this thread to look for an article:

https://dev.to/nestedsoftware/is-cooperative-concurrency-her...

This article gives a good overview.


What language were you using that had preemptive multitasking?


We are no longer in the 90s. The code has increased in volume a hundredfold and it comes from literally everywhere. You can no longer trust everything on your machine or your network to be bug-free or otherwise non-hostile.

Creating a system in the 21st century that tries to follow ideals from the 90s gives us the kind of idiocy that we can witness here.


I think you may have misinterpreted earthboundkid; the claim isn't that it worked in the 90s, the claim is that it was already broken in the 90s.

You are otherwise on the right track, though Node does technically have one advantage, which is that it is a cooperatively-scheduled island in a preemptively-scheduled overall OS. In the 1990s, when the cooperatively-scheduled program was not cooperative, you locked the machine, not the process [1]. There is a reason why Apple went very aggressive with the OS X rewrite; the previous Systems had basically written themselves into a corner where they had to use cooperative multitasking because so much code made use of the implicit promises it provides, yet they could no longer afford to compete with Microsoft if they didn't get off of it, because the complexity just kept going up, up, up and the problem was going to continue getting exponentially worse.

For a Node program, you only have to account for the Node program itself, not everything running on the computer. Still, you're in the same exponentially-growing-complexity trap (with a very initially-safe-seeming low exponent, but it still gets you in the end), you just reset yourself back to a point earlier on the curve.

[1] There are various details, caveats, interrupts, etc, the picture is more complicated than one sentence can convey, but the principle still held and it was still possible to wedge the machine fairly badly for varying periods of time with simple bad code.


If I thought my bank was running thousands of node containers in parallel to handle transactions, I think I'd look for a new bank.


I mean, if you write slow code in a high throughput environment you'll just kill the CPU from context switching between threads instead.

It doesn't matter if it's in an event loop or thread per request. Architect things correctly.



