Everything about signals makes more sense if you understand that they're modeled on hardware interrupts. They're the same mechanism, but for userspace code instead of kernel code. Preempting and taking over the stack, coalescing (without which a flood arriving faster than it could be handled would consume unbounded memory) - all the same upsides and downsides. IOW, they're not message passing.
(Which makes sense, considering they date from the 70s (research UNIX era) - that's not much later than the first modern code for handling hardware interrupts, which dates from roughly the Apollo era.)
And similar workarounds for those limitations: threaded interrupts (although in practice, we just punt to workqueue context in the kernel) vs. sigaltstack or signalfd with a dedicated thread.
Not as much code has to deal with them directly anymore; they're really only used in the interfaces that date way back, but the concept of edge- vs. level-triggered interrupts is still worth understanding.
What I don't understand is: the kernel has trauma from its rough childhood, but it chooses to perpetuate the cycle of abuse by imposing IRQs onto userspace when it could, like, not? The model for signals makes perfect sense, but the choice to keep doing things this way remains as mystifying as ever.
You have to understand:
- rolling out new APIs tends to be _quite_ the hassle, you need to get it signed off by a lot of people, and rightly so
- rolling out new APIs that involve _design_ work is doubly a hassle, and changing the semantics of signals would be quite the change
and since signals are only for an ancient part of the API that really isn't used for anything new (new code, yes, but people aren't doing anything new conceptually with that stuff) - people are going to take a "why bother" approach. A lot of work for debatable gain.
I'm working on new syscalls for exposing and using filehandles (because we can't rely on st_ino for inode number uniqueness anymore, 64 bits isn't sufficient for uniqueness with subvolumes, or for stacking filesystems), and some related work for exposing and traversing subvolumes (and mountpoints) in a clean standardized way - and they're probably going to be 6 month projects (granted, not my main project right now). It's just a _lot_ of work to do this stuff right when it involves fundamental APIs.