https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/...
Moreover, if you are keeping pace with kubeadm upgrades at all (minor releases are quarterly, and patches are more frequent), then as of the most recent minor release, Kubernetes 1.17, certificate renewal is enabled by default as an automated part of the upgrade process. You would have to do at least one cluster upgrade per year to avoid expired certs. tl;dr: this cert expiration thing isn't a problem anymore, but you do have to maintain your clusters.
(Unless you are using a managed k8s service, that is...)
https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/...
The fact also remains that this is the very first entry under "Administration with Kubeadm", so if you did use kubeadm and didn't find it, I'm going to have to guess that either the docs have improved since your experience, or you really weren't looking to administer anything at all.
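For reference, a routine kubeadm minor-version upgrade is roughly the sketch below. The v1.17.0 target is just an example, it assumes the kubeadm package on the node has already been updated to that version, and the certs subcommand still carried the "alpha" prefix in releases of that era:
    # Preview the upgrade: target versions and any manual steps required
    sudo kubeadm upgrade plan
    # Apply the upgrade on the control plane node; as part of this step,
    # kubeadm re-issues the control plane certificates
    sudo kubeadm upgrade apply v1.17.0
    # Optionally confirm the certificates were refreshed
    sudo kubeadm alpha certs check-expiration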
I appreciate the links, but for my home stuff I'll be ripping Kubernetes out.
The notion that one has to keep pace with Kubernetes upgrades is exactly the kind of thing that works fine if you have a full-time professional on the job, and works very poorly if it's a sideline for people trying to get actual productive work done.
Which is fine; not everything has to scale down. But it very strongly suggests that there's a minimum scale at which Kubernetes makes sense.
Or, that there is a minimum scale/experience gradient below which you are better served by a decent managed Kubernetes, if you're not prepared to manage it yourself. Most cloud providers have done a fairly good job of making it affordable.
I think it's fair to say that the landscape of Kubernetes proper (the open source package) has already reached a more evolved state than the landscape of managed Kubernetes service providers, and that's potentially problematic, especially for newcomers. It's hard enough to pick between the myriad choices available; harder still when you must justify your choice to a hostile collaborator who doesn't agree with part or all of it.
IMO, the people who complain the loudest about the learning curve of Kubernetes are those who have spent a decade or more learning how to administer one or more Linux distributions, who have made the transition from SysV init to systemd, and who in many cases are now neck-deep in highly specialized AWS services. In many cases they have used those services successfully to extricate themselves from the nightmare-scape where one team called "System Admins" is responsible for broadly everything that runs or can run on any Linux server (or otherwise): databases, vendor applications, monitoring systems, new service development, platforming apps that were developed in-house, you name it...
I basically don't agree that there is a minimum scale for Kubernetes, and I'll assert confidently that declarative system state management is a good technology that is here to stay. But I respect your choice, and I understand that not everyone shares the particular experiences that led me to be comfortable using Kubernetes for everything from personal hobby projects to my own underground skunkworks at work.
In fact, "how do devs/admins/people at large get into k8s" is a broadly interesting area of study for me, since the learning curve is so steep and this has all happened so fast. There is so much to unpack before one can start to feel comfortable that there isn't really much more complexity buried underneath that you haven't already deeply explored and understood.
It sounds like we both agree there's a minimum scale for running your own Kubernetes setup, or you wouldn't be recommending managed Kubernetes.
But a managed Kubernetes approach only makes sense if you want all your stuff to run in that vendor's context. As I said, I started with home and personal projects. I'd be a fool to put my home lighting infrastructure or my other in-home services in somebody's cloud. And a number of my personal projects make better economic sense running on hardware I own. If there's a managed Kubernetes setup that will manage my various NUCs and my colocated physical server, I'm not aware of it.
> there's a minimum scale for running your own Kubernetes setup
I would say there is a minimum scale at which control plane ownership makes sense, yes. Barring other strong reasons you might have to own and manage your own control plane, like "it's for my home automation, which should absolutely continue to function if the internet is down"...
I will concede that you don't need K8s for this use case. Even if you like containers and want to use them, if you don't have much prior experience with K8s, then from a starting position of "no knowledge" you will probably have a better time with Compose and Swarm, as sketched below. There is a lot for a newcomer to learn about K8s, but the more you have already learned, the less likely I would be to recommend Swarm, or any other control plane (or anything else).
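To illustrate the lower barrier to entry, this is roughly all it takes to run a containerized stack on a single-node Swarm, assuming Docker is installed and you already have a docker-compose.yml for your services (the stack name "homelab" is arbitrary):
    # Turn this host into a single-node swarm (it becomes its own manager)
    docker swarm init
    # Deploy the services described in an existing compose file as a stack
    docker stack deploy -c docker-compose.yml homelab
    # Check that the services came up
    docker service ls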
This is where I feel the point I mentioned, that the managed k8s ecosystem is not as evolved as it will likely soon become, is relevant. You may be right that no managed Kubernetes setup will handle your physical servers today, but I think the truth is somewhere between: they're coming / they're already here but most are not quite ready for production / they are here, but I don't know what to recommend strongly.
I'm leaning toward the latter (I think that if you wanted a good managed bare metal K8s, you could definitely find it). I know of some solutions that will manage bare metal nodes, but this is not a space I'm intimately familiar with.
The solutions that I do know of are in an early enough state of development that I hesitate to mention them. It won't be long before this gets much better. The bare metal Cluster API provider is really something, and there are some really amazing solutions being built on top of it. If you want to know where I think this is going, check this out:
WKS and the "firekube" demo, a GitOps approach to managing your cluster (yes, even for bare metal nodes)
I personally don't use this yet; I run kubeadm on a single bare metal node and don't worry about scaling, or the state of the host system, or whether it gets corrupted by sysadmin error, or much else really. The abstraction of the Kubernetes API is extremely convenient when you don't have to learn it from scratch anymore, and doubly so if you don't have to worry about managing your cluster. One way to make sure you don't have to worry is to practice disaster recovery until you get really good at it.
If my workloads are containerized, then I have them in a git repo, and they are disposable (and I can be sure of that, because they are regularly disposed of as part of the lifecycle). Make tearing your cluster down and standing it back up a regular part of your maintenance cycles, until you're ready to do it in an emergency situation with people watching. It's much easier than it sounds, and starting over is definitely easier than debugging configuration issues.
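A rough sketch of what such a drill can look like on a single kubeadm node follows. The pod network CIDR, the CNI manifest placeholder, and the manifests/ path are stand-ins for whatever your own cluster and git repo actually use, and it assumes the kubelet and container runtime stay installed on the host:
    # Tear the cluster down: wipe the kubeadm-generated state on this node
    sudo kubeadm reset -f
    # Stand it back up
    sudo kubeadm init --pod-network-cidr=10.244.0.0/16
    # Point kubectl at the new cluster
    mkdir -p $HOME/.kube
    sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    # Reinstall a CNI plugin, then reapply the workloads from git
    kubectl apply -f <your-cni-manifest.yaml>
    kubectl apply -f manifests/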
The alternative that I would recommend for production right now, if you don't like any managed Kubernetes, is to become familiar with the kubeadm manual. It's probably quicker to read it and study for the CKA than it would be to canvass the entire landscape of managed providers for the right one.
I'm sure it was painful debugging that certificate issue; I have run up against that particular issue myself. It was after a full year or more of never upgrading my cluster (shame on me). I had refused to learn RBAC, kept my version pinned at 1.5.2, and at some point, after running "kubeadm init" and "kubeadm reset" over and over again, it became stable enough (I stopped breaking it) that I didn't need to tear it down anymore, for a whole year.
And then a year later certs expired, and I could no longer issue any commands or queries to the control plane, just like yours.
Once I realized what was happening, I tried for a few minutes to renew the certs. I honestly didn't know enough to look up the certificate renewal docs, and I couldn't figure out how to do it on my own... I still haven't read all the kubeadm docs. But I knew I had practiced disaster recovery well over a dozen times, and I could repeat the workloads on a new cluster with barely any effort (and I'd wind up with new certs). So I blew the configuration away and started the cluster over (kubeadm reset), reinstalled the workloads, and was back in business less than 30 minutes later.
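For anyone hitting the same wall today, the manual renewal path is documented now and looks roughly like this (the subcommands lived under "kubeadm alpha certs" before being promoted in later releases, and the control plane components need to be restarted afterwards to pick up the new certs):
    # See which certificates kubeadm manages and when each one expires
    sudo kubeadm alpha certs check-expiration
    # Renew them all, then restart the control plane static pods
    # so the components pick up the new certificates
    sudo kubeadm alpha certs renew all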
I don't know how I could convince you that it's worth your time to practice disaster recovery like this, and that's OK (it's not important to me, and if I'm right, in 6 months to a year it won't even really matter anymore; you won't need it). WKS looks really promising, though admittedly it's still bleeding edge right now. But as it improves and stabilizes, I will likely use it instead, and soon after that forget everything I ever knew about building kubeadm clusters by hand.