After the recent "save 30% power in datacenters" thing¹, I've grown a bit wary of the University of Waterloo, and "things get warm in the sun"² isn't helping.
¹ it was an IRQ polling change in the Linux kernel that only helps network-heavy applications under high load with good NIC offload capabilities. I'll accept it might save 30% in the best degenerate case, but it'll be nowhere close to that in general. https://news.ycombinator.com/item?id=42841981
² yes, I'm exaggerating. The problem is that I have no clue about this heat thing, but I do have an understanding of Linux NAPI/IRQ behavior, and that gave me good reason to call the earlier claim a massive exaggeration. So what am I to do when I see something from this same source on a topic I don't know about?
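For context on the mechanism footnote ¹ describes: NAPI already lets you trade interrupts for polling on a per-device basis. A minimal sketch, assuming the standard sysfs knob names in recent kernels and `eth0` as a stand-in device name; the key point is that IRQs are only suppressed while the queue stays busy, which is why any power savings depend on sustained high load.

```shell
# Defer re-enabling hardware IRQs until the NAPI poll loop has come up
# empty this many times (0 = re-enable immediately, the default):
echo 2 | sudo tee /sys/class/net/eth0/napi_defer_hard_irqs

# While IRQs are deferred, keep polling off a timer for up to this many
# nanoseconds (here 200us) before falling back to interrupt-driven mode:
echo 200000 | sudo tee /sys/class/net/eth0/gro_flush_timeout
```

Under light or bursty traffic the poll loop goes idle quickly, IRQs come back on, and the tuning buys you little; the "30%" scenario needs the NIC queues busy enough that interrupts stay suppressed most of the time.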