Overall this article is accurate and well-researched. Thanks to Daroc Alden for due diligence. Here are a couple of minor corrections:
> When using an Io.Threaded instance, the async() function doesn't actually do anything asynchronously — it just runs the provided function right away.
While this is a legal implementation strategy, this is not what std.Io.Threaded does. By default, it will use a configurably sized thread pool to dispatch async tasks. It can, however, be statically initialized with init_single_threaded in which case it does have the behavior described in the article.
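To make the distinction concrete, here is a rough Python analogy (not the actual Zig API; class and method names are invented for illustration): the default mode dispatches a task to a configurably sized thread pool, while the single-threaded mode just runs the function inline and hands back an already-completed result.

```python
# Hypothetical analogy for the two Io.Threaded behaviors described above.
from concurrent.futures import ThreadPoolExecutor


class PooledIo:
    """Analogous to the default: dispatch tasks to a sized thread pool."""

    def __init__(self, workers=4):
        self.pool = ThreadPoolExecutor(max_workers=workers)

    def async_(self, fn, *args):
        # May run concurrently with the caller.
        return self.pool.submit(fn, *args)


class SingleThreadedIo:
    """Analogous to init_single_threaded: run the function right away."""

    class _Done:
        def __init__(self, value):
            self._value = value

        def result(self):
            return self._value

    def async_(self, fn, *args):
        # Runs inline on the calling thread, then returns a finished "future".
        return self._Done(fn(*args))


io = PooledIo()
print(io.async_(lambda x: x * 2, 21).result())        # 42
print(SingleThreadedIo().async_(lambda x: x * 2, 21).result())  # 42
```

Both variants produce the same answer for a pure computation; the difference only bites when the task blocks waiting on something the caller has not done yet, as discussed further down the thread.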
The only other issue I spotted is:
> For that use case, the Io interface provides a separate function, asyncConcurrent() that explicitly asks for the provided function to be run in parallel.
There was a brief moment where we had asyncConcurrent() but it has since been renamed more simply to concurrent().
Daroc here — I've gone ahead and applied two corrections to the article based on this comment. If you want to be sure that feedback or corrections reach us in the future (and not, as in this case, because I'm reading HN when I should be getting ready for bed), you're welcome to email lwn@lwn.net.
Thanks for the corrections, and for your work on Zig!
Hey Andrew, a question for you about something the article lightly touches on but doesn't really discuss further:
> If the programmer uses async() where they should have used asyncConcurrent(), that is a bug. Zig's new model does not (and cannot) prevent programmers from writing incorrect code, so there are still some subtleties to keep in mind when adapting existing Zig code to use the new interface.
What class of bug occurs if the wrong function is called? Is it "UB" depending on the IO model provided, a logic issue, or something else?
For example: the function is called immediately rather than being run in a separate thread, and blocks forever on accept() because the connect() comes after the call to async().
If concurrent() is used instead, the I/O implementation will either spawn a new thread for the function, so that the accept() is handled by that thread, or return error.ConcurrencyUnavailable.
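The accept()/connect() deadlock is easy to sketch in Python (again an analogy, not Zig): if the server function were simply called inline, accept() would block forever because the connect() below it is never reached; spawning a thread, as concurrent() guarantees, avoids that.

```python
# Hypothetical sketch of the ordering bug described above.
import socket
import threading

listener = socket.socket()
listener.bind(("127.0.0.1", 0))  # OS picks a free port
listener.listen(1)
addr = listener.getsockname()


def server():
    conn, _ = listener.accept()  # blocks until a client connects
    conn.sendall(b"hi")
    conn.close()


# If server() were called directly here (the async()-runs-inline case),
# accept() would block forever: the connect() below is never reached.
# A guaranteed separate thread lets accept() wait safely:
t = threading.Thread(target=server)
t.start()

client = socket.create_connection(addr)
data = client.recv(2)
print(data)  # b'hi'
client.close()
t.join()
listener.close()
```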
What I really like about concurrent() is that it improves readability and expressiveness, making it clear both when writing and when reading that "this code MUST run in parallel".
> > When using an Io.Threaded instance, the async() function doesn't actually do anything asynchronously — it just runs the provided function right away.
> [...]
Well, yeah, but even if you spin up a thread to run "the provided function right away", it will still only be for some value of "right away" that is not instantaneous. Creating a thread and getting it up and running is often an asynchronous operation -- it doesn't have to be, in that the OS could always simply transfer the caller's time quantum, on-CPU state, and priority to the new thread, taking the caller off the CPU if need be. APIs like POSIX just do not make that part of their semantics. And even if they did, the caller would then be waiting to get back on the CPU, so thread creation is fundamentally an async operation.