There is also pkgsrc, which from memory can be built to be separate from the underlying OS. FreeBSD's ports is similar. Slackware's packaging is based on simple tar files.
It does seem that this software build/deploy is a key problem that needs solving, and the decentralised, easy-to-grok nature of Docker is key to the software delivery system. Nix is a really clever bit of engineering and design, but is also hard to grasp. Could it be made simpler? I suspect its 'functional' nature is the hard part.
The nature of containers being essentially immutable, at least from a base-software stance, with packages not being upgraded so much as newly installed, avoids the problem of upgrading running services. Most (all?) software would run as its own user, so no root-level daemons.
Configuration files are built from service discovery (e.g. via Kelsey's confd, in lieu of the apps themselves deriving config), so even config need not be preserved if we roll back to a point prior to the package layering.
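As a concrete sketch of that confd pattern (the key paths, filenames, and app name here are made up for illustration), a template resource plus template might look like:

```
# /etc/confd/conf.d/app.toml -- tells confd what to render, from where, to where
[template]
src  = "app.conf.tmpl"
dest = "/etc/app/app.conf"
keys = [
  "/app/db/host",
]

# /etc/confd/templates/app.conf.tmpl -- values come from the discovery backend
db_host = {{getv "/app/db/host"}}
```

confd watches the backend (etcd, consul, etc.) and regenerates the file when the key changes, so the rendered config is always derivable state rather than something that needs preserving across a rollback.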
Just some thoughts, but I agree there is a need to better manage dependencies. Heck, why not build statically linked binaries?
Thanks for the pointers -- I'll dig in to those to get more ideas.
Also agree that config-as-package is part of this too.
As for statically linked binaries -- this solves some of it but not all. Still hard to figure out which version of openssl is actually running in production. Also falls down in the world of dynamic languages where your app is a bunch of rb/php/py files.
Which version of library X is in use can be introspected via what a binary is linked against, rather than via which package is installed; inspecting the actuality, rather than a metadata wrapper in the form of a package, may be preferable.
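That introspection can be sketched as a thin wrapper around ldd (assumes a Linux host with ldd on PATH; the parsing is deliberately naive, for illustration only):

```python
import subprocess

def linked_libraries(binary_path):
    """List the shared libraries a dynamically linked binary pulls in,
    by parsing ldd output. Linux-only; naive line parse for illustration."""
    result = subprocess.run(
        ["ldd", binary_path], capture_output=True, text=True, check=True
    )
    libs = []
    for line in result.stdout.splitlines():
        parts = line.split()
        if not parts:
            continue
        # ldd lines look like "libc.so.6 => /lib/... (0x...)"
        if parts[0].startswith("lib"):
            libs.append(parts[0])
    return libs
```

Pointing this at a running service's binary tells you which libssl.so.X it actually loads, regardless of what any package database claims.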
As for static vs dynamic binaries, is dynamic less memory intensive even in containers? That is, do library versions get shared across containers in RAM, or are they separate? If not shared, static may be much easier to manage in general, though there may be cases where static binaries can't be provided. Upgrading the app involves a recompile, but that's a container-building exercise. Versioning can then become tied explicitly to the container version, or be discovered by inspecting the static binary.
Agreed -- but how do you figure out which packages are in an image without cracking that image?
Quick -- what version of OpenSSL is in the golang Docker images? (https://registry.hub.docker.com/_/golang/). Short of downloading them and poking around the file system, I can't tell.
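Today, "poking around" amounts to running the image and asking it directly. A minimal sketch (assumes the image actually ships an openssl binary on PATH; the runner parameter is only there so the logic can be exercised without a Docker daemon):

```python
import subprocess

def image_openssl_version(image, runner=subprocess.run):
    """Report the OpenSSL version inside a Docker image by running
    `openssl version` in a throwaway container. `runner` is injectable
    so this can be tested without Docker installed."""
    cmd = ["docker", "run", "--rm", image, "openssl", "version"]
    result = runner(cmd, capture_output=True, text=True, check=True)
    return result.stdout.strip()
```

The point stands: this requires pulling and booting the image, which is exactly the "cracking it open" step that better package metadata on images would make unnecessary.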
That's a fair point, but the alternative of always compiling packages at that point in time negates it, at the risk of regressions and other 'new version' bugs. That risk may be endemic here anyway: if the author of the golang container built it at a point in time against version X, then went away and left the container alone, upgrading versions may become risky in any case.
Those kinds of risks, and perhaps the larger container risks, look like CI pipeline issues: how is the container tested against new versions? As an adjunct, how do we do container component integration testing? Is that part of this packaging & building system?
We're not using Docker but zones as our container solution here, and what we do is bake images containing: pkgsrc + static config files + small scripts.
To provision we take the image + metadata containing dynamic configuration values (network details, keys & certs, etc.) and execute that.
This allows us to make very stable releases containing all our software.
Pkgsrc is the most important part of this.
It already contains very recent versions of packages as it is released quarterly. But sometimes we need to run a very specific version in production, or maybe add a patch to fix some bugs. This is super easy with pkgsrc.