I'm still confused why static linking isn't a more common solution to versioning issues. Software developers normally have no problem using an order of magnitude more resources to solve organizational problems. Is there any technical advantage to dynamic linking other than smaller binaries and maybe slightly faster load times from disk?
Static linking essentially freezes not only the ABI to the kernel, but many implementation details of the linked libraries as well: how library client code talks to daemons, the formats of files the libraries read directly, and so on. The timezone configuration would be one instance, or anything related to NSS.
It's really not viable in a lot of cases, unless you like rebuilding (or at least relinking) with every system software update.
And then there's of course the memory savings. macOS and iOS for example have giant "shared caches" which are mapped into all processes and contain all the system libraries. (Other OSs often do this at the individual shared library level.) With static linking, you'd instead have many copies of lots of potentially-but-not-necessarily identical library code pages in DRAM.
Seems it's rare to hear a defense of dynamic linking (aside from vague allusions to "resource use"), especially in light of "successor languages" seemingly moving away from this approach. Thanks for this.
IIRC the underlying implementation may be different on other systems; I think DNS resolution in particular.
Linux is the only system where static linking all the way really makes any sense. For most systems, you don’t get a stable syscall ABI. Instead, you get a stable ABI to the library which does syscalls for you… Windows has kernel32.dll, macOS has libSystem.
Note that on Linux, the vDSO is dynamically linked.
Compilation speed is a big plus. For large projects, linking time can easily dominate the time needed for incremental rebuilds.
Due to tooling issues, PIE is a lot easier with dynamic linking, and PIE is what gives you effective ASLR. The issues are solvable, but if you want static PIE, you need to compile all your static libs as PIE: doable, but you don't get it out of the box.
1) glibc doesn't really support static linking (NSS still pulls in shared modules at runtime), and musl requires you to understand how musl differs from glibc (DNS resolution being a favorite), so you usually end up with at least that one dynamic dependency, at which point you might as well have more dynamic dependencies
2) static binaries take significantly more time to build, and engineers really hate waiting - more than they care about wasted resources :)
3) static linking means re-shipping your entire app/binary whenever a dependency needs patching. And I'm not sure how many tools are smart enough to detect vulnerable versions of statically linked dependencies inside a binary, versus those that just scan hashes in /usr/lib and so on. If your tool is tiny this doesn't matter, but if it's not, you end up in a lot of pain
4) licensing of dependencies that are statically linked in is sometimes legally interesting or different versus dynamic linking, but I'm not sure how many people actually think about that one
I've also personally had all kinds of weird issues trying to build static binaries of various tools/libraries; since it's not as common a need, expect to have to put in a bunch of effort on edge cases.
Resource usage _does_ come up - a great example of this is how Apple handled Swift for a while: every Swift application had to bundle the full runtime, effectively shipping a static build, and a number of organizations rejected Swift entirely because it led to downloads large enough that Apple would push them to Wi-Fi, or users would complain. :)
In addition to the problems others mention, dynamic object dependencies are only one slice of the dependency pie; if you can config-manage your way out of all of the others, then maybe dynamic libs aren't really that much of a problem? I think this is why container images became so popular: they close over a lot more dependencies than just dynamic libs.
And the scope of the solution isn't very wide. In the Go community it's common to distribute statically-linked binaries because it solves so many problems--but it just kind of moves them to installation or configuration time because you have to pick a platform and platform version and so forth to find the binary you need, if you want your tool to work on more than one of them.