Kubernetes: DNS server, Ingress controller and MetalLB communications


I'm unable to wrap my head around how DNS, the Ingress controller, MetalLB and kube-proxy interconnect.

I know what each of these resources/tools/services is for and understand them individually, but I'm unable to form a picture of them working in tandem.

For example, in a bare-metal setup a client accesses my site, https://mytestsite.com. I still have doubts about how the request effectively lands on the right pod, and where the above-mentioned services/resources/tools come into the picture, and at what stage. For instance: how does DNS relate to MetalLB if the client reaches the MetalLB-assigned address hosting my application, how does that load balancer in turn talk to the ingress controller, and finally where does kube-proxy come into play here?

I went through the K8s official documentation and a few others as well, but I'm still kind of stumped. The following article is really good, but I'm unable to stitch the pieces together: https://www.disasterproject.com/kubernetes-with-external-dns/

Kindly redirect me to the correct forum if this is not the right place. Thanks.

CodePudding user response:

The ingress controller creates a Service of type LoadBalancer that serves as the entry point into the cluster. Such a service behaves like one of type NodePort, but it additionally gets an external IP. In a public cloud environment, the cloud provider creates the counterpart, e.g. an ELB on AWS, and sets the service's external IP to that load balancer's address.
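To make that concrete, here is a minimal sketch of such a service. The name, namespace, labels and ports are placeholders; in practice the ingress controller's install manifests or Helm chart create this service for you:

apiVersion: v1
kind: Service
metadata:
  name: my-ingress-controller        # illustrative name
  namespace: ingress-system          # illustrative namespace
spec:
  type: LoadBalancer                 # request an external entry point
  selector:
    app: my-ingress-controller       # matches the ingress controller pods
  ports:
    - name: http
      port: 80                       # port on the external/cluster IP
      targetPort: 8080               # port the controller pods listen on
    - name: https
      port: 443
      targetPort: 8443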

In a bare-metal environment no external load balancer is created, so the external IP would stay in <pending> state forever. Here, for example, the service of the Istio ingress gateway:

$ kubectl get svc istio-ingressgateway -n istio-system
NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)
istio-ingressgateway   LoadBalancer   192.12.129.119   <pending>   [...],80:32123/TCP,443:30994/TCP,[...]

In that state you would need to call http://<node-ip>:32123 to reach port 80 of the ingress controller service, which then forwards the traffic to your pod (more on that in a bit).
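For illustration, assuming one of your worker nodes is reachable at the placeholder address 192.0.2.10 and using the node ports from the output above, you could test this directly:

$ curl -v http://192.0.2.10:32123/      # http port 80 of the ingress controller service
$ curl -kv https://192.0.2.10:30994/    # https port 443 (-k because the hostname won't match yet)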

When you're using MetalLB, it will update the service with an external IP so you can call http://<ip> instead. MetalLB will also announce that IP, e.g. via ARP in layer-2 mode or via BGP, so the surrounding network knows where to send traffic when someone calls that IP.
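As a sketch, with the CRD-based configuration of recent MetalLB releases (v0.13 and later), an address pool plus an announcement could look like this. The pool name and the address range are assumptions; use addresses from your own network:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: example-pool                 # illustrative name
  namespace: metallb-system
spec:
  addresses:
    - 192.0.2.240-192.0.2.250        # placeholder range reserved for MetalLB
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement                # announce via ARP; use BGPAdvertisement + BGPPeer for BGP mode
metadata:
  name: example-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - example-pool

MetalLB then picks a free address from the pool (say 192.0.2.240), sets it as the external IP of the ingress controller's LoadBalancer service and announces it.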

I haven't used ExternalDNS and only scanned the article, but I guess you can use it to also have a DNS record created, so your service can be reached by its domain name and not only by its IP, i.e. you can call http://example.com instead.

This is basically why you run MetalLB and how it interacts with your ingress controller: the ingress controller creates the entry point into the cluster, and MetalLB gives it an address and attracts traffic to it.
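The piece that tells the ingress controller which backend a hostname belongs to is an Ingress resource. A minimal sketch (hostname, ingress class and service name are placeholders; with Istio from the example above you would use its Gateway/VirtualService resources instead, but the idea is the same):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: nginx            # whichever ingress controller you run
  rules:
    - host: example.com              # the hostname your DNS record points at
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app         # your application's ClusterIP service
                port:
                  number: 80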

Up to this point a call to http://example.com can reach your cluster, but it still needs to reach the actual application running in a pod inside the cluster. That's kube-proxy's job.

You read a lot about services of different types and all that kind of stuff, but in the end it mostly boils down to iptables rules: kube-proxy creates a bunch of them, and together they form a chain.

SSH into any Kubernetes worker, run iptables-save | less and search for the external IP that MetalLB configured on your ingress controller's service. You'll find a chain with your external IP as destination that leads from the external IP, over the service IP (with load-balancing rules), to a pod IP.
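For example, assuming MetalLB assigned the placeholder external IP 192.0.2.240 to the ingress controller service, you could trace the rules roughly like this:

$ sudo iptables-save | grep 192.0.2.240
# The matching rules jump into chains maintained by kube-proxy, typically named
# KUBE-SERVICES (entry point), KUBE-EXT-*/KUBE-SVC-* (per service, picking one
# endpoint via random/statistic rules) and KUBE-SEP-* (per endpoint, DNAT to a pod IP).
$ sudo iptables-save | grep 'KUBE-SVC-' | less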

In the end the whole chain would look something like this:

http://example.com  
  -> http://<some-ip> (domain translated to ip)
    -> http://<node-ip>:<node-port> (ingress-controller service)
---
      -> http://<cluster-internal-ip>:<some-port> (service of your application)
        -> http://<other-cluster-internal-ip>:<some-port> (ip of one of n pods)

where the --- line marks the switch from cluster-external to cluster-internal traffic. The cluster-internal IP will be from the configured service CIDR and the other cluster-internal IP from the configured pod CIDR.

Note that there are different ways to configure cluster-internal traffic routing and to run kube-proxy (e.g. in IPVS mode instead of iptables mode), and some parts here are a bit simplified, but this should give you a good enough understanding of the overall concept.

Also see this answer to the question 'What is a Kubernetes LoadBalancer On-Prem', which might provide additional input.
