On a practical note, when I need to write code for a platform (I usually write for Apple platforms), there are usually a dozen ways to do something, but only one or two ways to do it right.
I just went through this today, as I was selecting the best way to trap and pipe stdout.
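For anyone curious, one common way to do this on Apple platforms is to point fd 1 at a Pipe via dup2 and read from the pipe's other end. A minimal sketch (illustrative only, not necessarily what the parent settled on):

```swift
import Foundation

// Minimal sketch: redirect this process's stdout into a Pipe and read it back.
// dup/dup2/fflush are the usual POSIX calls, available via Foundation on Apple platforms.
let pipe = Pipe()
let savedStdout = dup(STDOUT_FILENO)   // keep the original descriptor so we can restore it
dup2(pipe.fileHandleForWriting.fileDescriptor, STDOUT_FILENO)

pipe.fileHandleForReading.readabilityHandler = { handle in
    let data = handle.availableData
    if let text = String(data: data, encoding: .utf8), !text.isEmpty {
        // Do something with the captured output, e.g. append it to a log view.
        FileHandle(fileDescriptor: savedStdout).write(Data("captured: \(text)".utf8))
    }
}

print("hello")     // now goes through the pipe instead of the terminal
fflush(stdout)     // stdio buffers; flush before restoring

// In real code you'd wait for the handler to drain before restoring or exiting.
dup2(savedStdout, STDOUT_FILENO)
```

Even here there are compromises to pick: the handler runs on a background queue, and stdio buffering means output may not show up until a flush.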
My experience is that with software there's very often no single correct way to do it. It seems to me that most of engineering is simply deciding which compromises are acceptable to you today, or picking something and shoring up the leaks yourself.
Various impossibility theorems say otherwise, e.g. the CAP theorem for distributed computing, the scalability trilemma for blockchains, and Arrow's impossibility theorem for voting systems. Often you have mutually incompatible constraints that are all desirable, and the best you can do is pick a point on the continuum that's satisfactory to the particular subset of users you wish to serve.
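To make the CAP flavor of this concrete: when a replica suspects a partition, it has to choose between refusing to answer (keeping consistency) and answering from possibly stale local state (keeping availability). A toy sketch with entirely hypothetical names, just to show the decision point:

```swift
// Toy illustration of the CAP tradeoff (hypothetical API, not a real system).
enum PartitionPolicy {
    case preferConsistency   // CP-ish: fail the request rather than risk a stale read
    case preferAvailability  // AP-ish: serve local state now, reconcile later
}

struct Replica {
    var localStore: [String: String] = [:]
    var canReachQuorum = true
    var policy: PartitionPolicy

    enum ReadError: Error { case unavailable }

    func read(_ key: String) throws -> String? {
        if canReachQuorum {
            return localStore[key]        // no partition observed: no tradeoff to make
        }
        switch policy {
        case .preferConsistency:
            throw ReadError.unavailable   // give up availability to avoid stale data
        case .preferAvailability:
            return localStore[key]        // give up consistency to keep answering
        }
    }
}

// let r = Replica(localStore: ["k": "v"], canReachQuorum: false, policy: .preferAvailability)
// try r.read("k")   // "v", possibly stale; with .preferConsistency it throws instead
```

Which branch is "right" depends entirely on which users you're serving, which is the point.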
The (highly cost-ineffective) Right Thing is to invest enough in network infrastructure that there are no network partitions.
I don't know about the other examples off the top of my head, but they're probably similar (and similarly impractical).
Anyone can design a bridge that won't fall down; engineering is the process of figuring out which corners you can cut and still have the bridge just barely not fall down.
It's still a distributed system if clients connect to the closest server, speed-of-light latency between servers exists, and the system has to reach consensus between servers, even if the design assumes network partitions never happen.
My favorite one of these is metastability[1]. Any time you have a clock domain crossing, the fact that your ASIC works is based on statistics, not a hard guarantee.
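For the curious, those statistics usually get summarized as a mean time between failures for the synchronizer. One commonly quoted approximation (a sketch; the symbols and values vary by vendor and process) is:

```latex
% Commonly quoted synchronizer MTBF approximation (notation varies by datasheet):
%   t_r     : resolution time allowed before the next stage samples
%   \tau    : the flip-flop's metastability resolution time constant
%   T_w     : the flip-flop's metastability capture window
%   f_{clk} : sampling clock frequency,  f_{data} : data toggle rate
\[
\mathrm{MTBF} = \frac{e^{\,t_r/\tau}}{T_w \cdot f_{clk} \cdot f_{data}}
\]
```

Adding a second (or third) synchronizer flop buys you more resolution time, which pushes the exponent up; it never makes the failure probability zero, just astronomically small.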
The issue I happen to be working on right now is that the operating system returns an incorrect value for a particular function call, in many cases.
I'm reminded of an article I once read about determining whether a large number is prime. There are probabilistic algorithms that are much faster than actually checking every possible factor, and surprisingly accurate. Also, any algorithm that runs long enough is susceptible to bit flips (from a cosmic ray, bad RAM, or whatever). Put together, this means that beyond some point the deterministic check runs so long that the chance of a hardware fault corrupting it exceeds the probabilistic test's error bound, so you're more likely to get the right answer from the probabilistic test than from the "perfect" algorithm.
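The probabilistic test in question is presumably something along the lines of Miller–Rabin. A self-contained sketch (my reconstruction, not from the article): each round that fails to find a witness leaves at most a 1-in-4 chance that a composite slipped through, so k rounds bound the error by 4^-k, which very quickly drops below the odds of a stray bit flip.

```swift
/// (a * b) mod m without 64-bit overflow, via the full 128-bit product.
func mulmod(_ a: UInt64, _ b: UInt64, _ m: UInt64) -> UInt64 {
    m.dividingFullWidth(a.multipliedFullWidth(by: b)).remainder
}

/// a^e mod m by square-and-multiply.
func powmod(_ a: UInt64, _ e: UInt64, _ m: UInt64) -> UInt64 {
    var base = a % m, exp = e, result: UInt64 = 1
    while exp > 0 {
        if exp & 1 == 1 { result = mulmod(result, base, m) }
        base = mulmod(base, base, m)
        exp >>= 1
    }
    return result
}

/// Miller–Rabin with `rounds` random bases; a composite survives all rounds
/// with probability at most 4^(-rounds).
func isProbablyPrime(_ n: UInt64, rounds: Int = 20) -> Bool {
    if n < 2 { return false }
    for p: UInt64 in [2, 3, 5, 7, 11, 13] {
        if n == p { return true }
        if n % p == 0 { return false }
    }
    // Write n - 1 = d * 2^r with d odd.
    var d = n - 1, r = 0
    while d & 1 == 0 { d >>= 1; r += 1 }

    for _ in 0..<rounds {
        let a = UInt64.random(in: 2...(n - 2))
        var x = powmod(a, d, n)
        if x == 1 || x == n - 1 { continue }
        var isComposite = true
        for _ in 0..<(r - 1) {
            x = mulmod(x, x, n)
            if x == n - 1 { isComposite = false; break }
        }
        if isComposite { return false }   // this base is a witness that n is composite
    }
    return true   // no witness found: n is prime with overwhelming probability
}

// isProbablyPrime(2_147_483_647)   // true (2^31 - 1, a Mersenne prime)
```

At 20 rounds the nominal error bound is 4^-20, roughly one in a trillion; that's the sense in which the hardware becomes the weakest link before the math does.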
What's the "one right way" to deal with a bug in the OS? Sure, in the worst case, I could reimplement that entire subsystem, if I had infinite time and money ... but it's a large enough subsystem, odds are I'd introduce even more bugs in any replacement.
That's the tricky side of engineering. Sometimes "not cost-effective" is so expensive it defeats the entire purpose of whatever you're building. You very quickly get into massless elephants and frictionless inclined planes, and that doesn't help anyone.
I don't believe in "one right way" except in the simplest cases. As Harvey Pekar said, "Ordinary life is pretty complex stuff."
> The issue I happen to be working on right now is that the operating system returns an incorrect value for a particular function call, in many cases.
Yes! Great example. So the right way is to fix the bug.
The bug is not in your source code, and fixing a bug in someone else's source code that you rely on is not cost-effective because, in the words of Bob Dylan, "everything is broken."