The majority of what you posted reiterates the post I responded to, and it doesn't address the complexity of those features or their implementation. Additionally, I challenge your assertion that "real production environments" need automatic scaling.
You missed my point. I was contrasting Kubernetes with the alternative: critics often highlight Kubernetes' complexity while forgetting (or ignoring) that replicating its functionality is also complex, and the result is often neither composable nor transferable to new projects/clusters. It's hard to design a good, flexible Puppet (or whatever) configuration that grows with a company, can be maintained across teams, handles redundancy, and so on.
Not all environments need automatic scaling, but they do need redundancy, and from a Kubernetes perspective those are two sides of the same coin: the same control loop that restores a desired replica count after a failure is the one that raises it under load. A classical setup where a new node automatically starts up to take over from a dysfunctional or dead one isn't trivial to build.
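To make that concrete, here's a minimal sketch (names and image are placeholders, not from any real project): in Kubernetes, both redundancy and scaling fall out of declaring a replica count and letting the control plane reconcile it.

```yaml
# Hypothetical Deployment: the control plane keeps 3 replicas running,
# rescheduling pods from a dead node onto healthy ones automatically.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3            # redundancy: desired state, continuously reconciled
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example.com/web:1.0   # placeholder image
```

Autoscaling then just adjusts that same replica count, e.g. `kubectl autoscale deployment web --min=3 --max=10 --cpu-percent=80`.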
Much of Kubernetes' operational complexity also melts away if you choose a managed offering such as DigitalOcean, Azure, or Google Cloud Platform. I can speak from experience: I've set up Kubernetes from scratch on AWS (a fun challenge, but not one I'd want to repeat often), and I'm also administering several clusters on Google Cloud.
The latter requires almost no classical "system administration". Most of the concerns are "hoisted" up to the Kubernetes layer. If something is wrong, it's almost never related to a node or hardware; it's all pod orchestration and application configuration, with some occasional bits relating to DNS, load balancing, and persistent disks.
And if I start a new project, I can just boot up a cluster (literally a single command) and have my operational platform ready to serve apps, much like the "one click deploy" promise of, say, Heroku or Zeit, except that I have almost complete control of the platform.
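On Google Cloud, for instance, that single command looks roughly like this (cluster name and zone are placeholders; exact flags may vary by gcloud version):

```shell
# Creates a managed Kubernetes cluster; GKE provisions the control plane
# and nodes, and kubectl can deploy apps to it immediately afterwards.
gcloud container clusters create my-cluster --zone us-central1-a --num-nodes 3
```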
In my opinion, Kubernetes beats everything else even on a single node.