
"slower" is just one piece of the calculation. On the server end one goal of async/await is that you can run 10k instances of your 3 lines of code concurrently - inside a single thread. And while this might not use parallelism to make an individual operation faster, it might use less resources overall.

The other use-case was running multi-step operations that involve waiting on the UI thread of an application, which wouldn't have worked with blocking waits (those would prevent redraws). For that use-case, "speed" also isn't the highest priority.



Like I said, this is a theoretical benefit that is realised only if the load is sufficiently high for the reduced overhead of async programming to provide a noticeable benefit.

For naive async code, there is a surprisingly narrow range of loads where this is true: only something like 80-99% load. Any higher and latencies start to go towards the stratosphere, or memory usage grows exponentially.

Of course, this is fixable with the appropriate use of backpressure and timeout cancellations, but I've never seen this implemented correctly and consistently anywhere. Almost all web apps in the wild fall over when load goes from 100% to 101%. They don't become 1% slower! Instead they take 30s to return a page or just start spewing 5xx errors.

For a point of comparison, Java is abandoning the complex and fragile async approach in favour of user-mode scheduled lightweight threads (virtual threads, from Project Loom), which are roughly comparable in efficiency but much easier for programmers to reason about. They're also compatible with traditional threaded code.



