The beauty of Docker is that it reflects how much someone cares about deployments: do you care about being efficient? You can use `scratch` or `X-alpine`. Do you simply not care and just want things to work? Always go for `ubuntu` and you're good to go!
You can have a full and extensive API backend in Go with a total image size of 5-6 MB.
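For anyone curious how that's done: the usual trick is a multi-stage build where only the static binary ends up in the final image. A minimal sketch, assuming a main package under `./cmd/server` (path hypothetical):

```dockerfile
# Build stage: compile a fully static Go binary
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
# CGO_ENABLED=0 avoids libc linkage so the binary runs on scratch;
# -s -w strips debug info to shave a few MB
RUN CGO_ENABLED=0 go build -ldflags="-s -w" -o /app ./cmd/server

# Final stage: nothing but the binary (plus CA certs if you make TLS calls)
FROM scratch
COPY --from=build /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```

The final image is just the binary and the cert bundle, which is how you land in the single-digit-MB range.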
I've done both: tiny scratch-based images with a single Go binary, all the way to full-fat Ubuntu-based things.
What is killing me at the moment is deploying Docker based AI applications.
The CUDA base images come in at several GB to start with, then typically a whole host of python dependencies will be added with things like pytorch adding almost a GB of binaries.
Typically the application code is tiny, as it's usually just Python, but then you have the ML model itself. These can be many GB too, so you need to decide whether to bake it into the image or mount it as a volume; either way, it needs to make its way onto the deployment target.
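The two options look something like this in practice; a sketch, with the model path and image name entirely hypothetical:

```shell
# Option 1: bake the model into the image (image grows by the model's size)
#   COPY models/model.bin /models/     <- line in the Dockerfile

# Option 2: keep the image small and mount the model read-only at run time
docker run --gpus all \
  -v /srv/models:/models:ro \
  my-ai-app:latest
```

The volume mount keeps the image smaller and lets you swap models without rebuilding, but now the model has to be distributed to every host some other way.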
I'm currently delivering double-digit-GB Docker images to different parts of my organisation, which raises eyebrows. I'm not sure there's a way around it, though; it's less a Docker problem and more an AI/CUDA issue.
Docker fits current workflows but I can't help feeling having custom VM images for this type of thing would be more efficient.
Yep, and then I have some projects with PyTorch dependencies that use its own bundled CUDA, and non-PyTorch dependencies that use a CUDA in the usual system-wide include path.
So CUDA gets packaged up in the container twice unless I start building everything from source or messing about with RPATHs!
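You can actually see the duplication: pip-installed PyTorch wheels ship their own CUDA libraries under `torch/lib`, alongside whatever the CUDA base image installed system-wide. A quick way to check (paths are the common defaults and may differ on your setup):

```shell
# CUDA libs bundled inside the PyTorch wheel
ls "$(python -c 'import torch, os; print(os.path.dirname(torch.__file__))')/lib" | grep -i cu

# CUDA libs installed system-wide by the CUDA base image
ls /usr/local/cuda/lib64 | grep -i cudart
```

If both commands turn up copies of the same libraries, that's GBs counted twice in the image.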
> You can have a full and extensive api backend in golang, having a total image size of 5-6MB.
So people are building Docker "binaries" that depend on Docker being installed on the host, to run a container inside a container on the host, or even better, on a non-Linux host, where all of that then runs in a VM on the host... just... to run a Go application that is... already compiled to a binary?
Sure, but a Docker setup is more than just running the binary: you get setup configs, env vars, and external dependencies, all executed the same way everywhere.
Of course you can do it directly on the machine but maybe you don't need containers then.
In the same vein: people put stuff in a box, which is then put in another, bigger box, inside a metal container, on top of another floating container. Why? Well, for some that's convenient.
Docker / containers are more than just that though. Using it allows your golang process to be isolated and integrated into the rest of your tooling, deployment pipelines, etc.
It's Go; that could trivially be done with a script.
Heck, you can even cross-compile Go code from any architecture to another (even for different OSes), and Docker would be useless there unless it has mechanisms to bind qemu-$ARCH to containers via binfmt.
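For reference, cross-compiling in Go really is just environment variables, and Docker does in fact have a binfmt/qemu mechanism for running foreign-arch containers. A sketch (binary name hypothetical):

```shell
# Build a linux/arm64 binary from any host; no Docker or qemu needed to build
GOOS=linux GOARCH=arm64 CGO_ENABLED=0 go build -o server-linux-arm64 .

# Docker can run foreign-arch images by registering qemu via binfmt_misc,
# e.g. with the tonistiigi/binfmt helper image:
docker run --privileged --rm tonistiigi/binfmt --install arm64
```

So the cross-compile story doesn't need Docker at all, though Docker can piggyback on binfmt if you want to run the result in a container on a different architecture.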
I'd argue that having it in a Docker container is much easier to integrate with the rest of many people's infra. On ECS, K8s, or similar, Docker is such an easy layer to slap on, and it fits right in.
Are you running on bare servers? Sure, a Go binary and a script is fine.
Yep, it's using docker as a means of delivery really. Especially in larger organisations this is just the done thing now.
I understand what the OP is saying, but I'm not sure they get this context.
If I were still working in that world I might have that single binary and a script, but I'm old school and would probably make an RPM package and add a systemd unit file and some logrotate configs too!