At first glance this looks great: almost a 20% speedup!
But genuine question: this looks to be how almost every new tech begins. Initially it's FAST because it keeps things as barebones as possible, and everything other than the specific module can be ignored.
But no feature works in isolation, so as more features get added – scheduling constraints, safety quirks, or random features that cut across the language – the 20% can erode to nothing.
I did go through the blog, but how can one be sure that this 20% gap is a guaranteed design advantage, and not just a temporary deferral of everything else?
fwiw Zig is taking its role as a low-level, barebones "C replacement" as a core design constraint. For instance, Zig has eschewed even fairly tame features like operator overloading because they're not considered in line with its identity as a minimal, explicit language.
So feature creep is always a concern, but from what I understand Zig has a better chance at avoiding this than other projects due to the language's guiding philosophy.
> how can one be sure that this 20% gap is a guaranteed design advantage
It's no guarantee, but in my experience, Zig is a language that lends itself very well to composability. I imagine that instead of adding features to stdlib, anyone who needs "extra features" in their threading system will find it quite easy to implement their own as a library outside of stdlib, one that can easily be swapped in and out in A/B tests by the library consumer. If some feature or another is REALLY wanted in the stdlib for some reason, it would probably be easy to put it behind a comptime flag as part of a type constructor function, which won't compromise runtime performance at all – just a teeny tiny hit to comptime performance.
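To make the comptime-flag idea concrete, here's a minimal sketch (names like `ThreadPool`, `with_priorities`, and `schedule` are all hypothetical, not stdlib API): a type constructor that only gives the struct a priority field, and the branch that touches it, when the flag is set. With the flag off, both compile away, so the bare variant carries zero extra state or work.

```zig
const std = @import("std");

// Hypothetical sketch of gating an optional feature (per-task priorities)
// behind a comptime flag in a type constructor. When with_priorities is
// false, the field has type void (zero size) and the branch in schedule()
// is statically eliminated, so the bare variant pays no runtime cost.
pub fn ThreadPool(comptime with_priorities: bool) type {
    return struct {
        const Self = @This();

        task_count: usize = 0,
        // This field only occupies space when the feature is requested.
        priorities: if (with_priorities) [8]u8 else void =
            if (with_priorities) [_]u8{0} ** 8 else {},

        pub fn schedule(self: *Self, priority: u8) void {
            if (with_priorities) {
                // Comptime-known condition: this whole branch disappears
                // from the false instantiation.
                self.priorities[self.task_count % 8] = priority;
            }
            self.task_count += 1;
        }
    };
}

test "comptime flag adds no state to the bare variant" {
    var bare = ThreadPool(false){};
    bare.schedule(0);
    try std.testing.expectEqual(@as(usize, 1), bare.task_count);
    // The bare variant's priority "storage" is void-typed: zero bytes.
    try std.testing.expectEqual(@sizeOf(void), @sizeOf(@TypeOf(bare.priorities)));

    var rich = ThreadPool(true){};
    rich.schedule(5);
    try std.testing.expectEqual(@as(u8, 5), rich.priorities[0]);
}
```

Since both variants come out of the same function, an A/B test is just instantiating `ThreadPool(true)` vs `ThreadPool(false)` and benchmarking each.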
> this looks to be how almost every new tech begins
On second thought, do you think this is true? The level of detail shown in this post is fairly extraordinary for an event loop that's just getting started.
> how can one be sure that this 20% gap is a guaranteed design advantage
I would say because the design is solid and recognizes principles of mechanical sympathy that are fundamental and unlikely to change for quite some time.
Nobody can predict the future, but one thing that might help is that Protty is in the Zig core dev team and, as the post shows, he really cares about efficiency.
It seems like Zig's design philosophy is such that it's not going for feature creep/bloat: it's keeping the language very simple and eschewing unexpected complexity.