Ask HN: Best resource for learning devops as an engineer (js/ruby)?
14 points by mmanfrin on April 27, 2017 | 4 comments
In my career I've been spoiled with terrific devops coworkers who have generally handled everything from code to deploy. As a result, I feel I lack expertise in devops that I should have as an engineer. Side projects of mine end up going on PaaS services like Heroku or Zeit, and I don't really learn what it means to set up a server.

For instance, I could tell you that you likely have a load balancer (like nginx) which you can use to point routes to different services, and I know that the services are going to be things like Node instances running my code; but everything I know is fairly superficial and lacking in depth. What is the best way to share secrets? Is there a standard way of having your load balancer know which services to route to? How does a load balancer that defines routes work with a Rails or Express instance that also defines routes? Should all my services be built on Vagrant, or has Docker made that less useful?

I've been having trouble filling in these gaps in deep understanding because it seems like every setup is different, so cribbing from convention is not really possible. Then I read a litany of stories about setups that are wrong in some way, and that feeds into this paralysis of not wanting to do things wrong.

I tried setting up a Kubernetes cluster along with Deis, but there are so many layers of different/new services that I don't know how to conceptually tie them back to a broader understanding. For instance, K8s has a load balancer called 'Ingress': does this negate the need for nginx? But then Deis installed its own load balancer, so now I have two load balancers for zero services. It's just confusing because the big picture is obscured by the novelty of everyone's setup.

Thank you in advance for your help.



The best way to learn is to start low-level. It sounds like you are interested in k8s specifically (which I HIGHLY recommend; it's the future). Don't use a layer on top of k8s like Deis; just use k8s itself.

I would recommend you start with minikube to run a cluster easily on your laptop. If you REALLY want to get nitty gritty you can try installing it from scratch, but I think that's overkill with all the tools available for setup.
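As a sketch of what that first minikube session might look like (assuming minikube and kubectl are already installed; the deployment name below is made up):

```shell
minikube start                   # boots a single-node cluster in a local VM
kubectl get nodes                # should show one node named "minikube"
kubectl run hello --image=nginx  # schedule a throwaway nginx deployment
kubectl get pods                 # watch the pod come up
minikube dashboard               # open the cluster's web UI in a browser
```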

Read, read, read. Try things. Break things. K8s has a mountain of concepts and terminology to learn, so start with their docs.

To answer your questions:

Ingress is not a load balancer. It is basically a reverse proxy into your cluster that you can manage with manifests. It is paired with an ingress controller, which can be whatever you want: nginx, Traefik, etc. The default k8s ingress controller is nginx under the covers.

Services are your load balancers. You can also use a service mesh like Linkerd for even more functionality.
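For example, a minimal Service manifest might look like this (the name, label, and ports are made up for illustration):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web          # load balances across every pod labeled app=web
  ports:
  - port: 80          # port the Service exposes inside the cluster
    targetPort: 3000  # port your app's container actually listens on
```

Apply it with `kubectl apply -f service.yaml` and the cluster gives it a stable virtual IP in front of whatever pods match the selector.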

Good luck!


Thank you! A couple of follow-up questions: if I have a box at home that I intend to install Ubuntu on to run random services, should I use minikube since it's just one node/box, or should I give it the full K8s treatment?

Also, what is the difference between a reverse-proxy and a load balancer? In the K8s docs, they mention load-balancing in the 'what is ingress' section. Is it that Google made ingress both things (reverse proxy and load balancer) or that those two things are conceptually similar?

If I wanted to make my local home-network machine accessible to the outside, would I want to have a reverse proxy within my network that leads to k8s/ingress, or could/should I use k8s as the exposed entrypoint to the outside world?


If you are using Ubuntu as a base you are probably better off installing k8s directly with a tool such as https://kubernetes.io/docs/getting-started-guides/kubeadm/

On ingress:

You can use ingress to load balance incoming traffic across nodes, but it's not a service load balancer. That's what services are for. Your ingress points to one or many services, which then load balance across the pods matching their label selector.

Ingress IS, however, used in place of cloud-level load balancers like ELB on AWS. When you choose a service type of "LoadBalancer" on a cloud platform, it provisions an ELB that external traffic can go through.
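As a sketch, the only difference from a plain Service is the `type` field (names and ports here are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer  # on AWS this asks the cloud provider for an ELB
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 3000
```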

On-prem k8s obviously does not have that, so before ingress the only way to expose ports was with service type "NodePort". This exposes the service on a dedicated port, but it's a pain to manage because the ports are random-ish.

Ingress lets you just point at port 80, from there you can easily route to services as they come and go.
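A hypothetical Ingress doing that kind of routing (the hostname and service name are made up; `extensions/v1beta1` was the Ingress API version at the time):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: home
spec:
  rules:
  - host: app.example.com      # match on the Host header
    http:
      paths:
      - path: /
        backend:
          serviceName: web     # hand traffic to the "web" Service
          servicePort: 80
```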

For your home cluster, you can expose port 80 of your cluster to your router. Then you set up an ingress that matches the hostname. This will route to your services. It's what I do with my home cluster: https://hackernoon.com/diy-kubernetes-cluster-with-x86-stick...


If you are interested in learning K8s and Rails, for example, I wrote a book on that: http://www.apress.com/us/book/9781484224144

I would recommend starting with Docker. Learn how to set up a Ruby development environment. Then try to deploy that same environment using only Docker. Then, when you understand the workflow, you can try K8s.
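As a sketch of that Docker step, a minimal Dockerfile for a Rails-style app might look like this (the base image tag, port, and commands are assumptions, not a recipe from the book):

```dockerfile
# Start from an official Ruby image (2.4 was current around 2017)
FROM ruby:2.4
WORKDIR /app
# Install gems first so this layer is cached between code changes
COPY Gemfile Gemfile.lock ./
RUN bundle install
COPY . .
EXPOSE 3000
CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0"]
```

Build and run it with `docker build -t myapp .` then `docker run -p 3000:3000 myapp`; once that works, the same image is what you'd point a k8s deployment at.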



