How do pods get their IP addresses?


I would like to know how exactly pods get an IP address, and how pods are distributed across the agent and master nodes.

I have 1 master node and 2 agent nodes. All my pods are running well, but I am curious how the pods get their IP addresses.

Some pods have a cluster IP, while others have the node's Ethernet IP address. I run Nginx and MetalLB for the load balancer, with Traefik and Klipper disabled.

As you can see below, agent-03 has pods running on 2 different IP addresses:

root:/# kubectl get pods -A -o wide
NAMESPACE        NAME                                                   READY   STATUS    RESTARTS   AGE   IP            NODE       NOMINATED NODE   READINESS GATES
ingress          nginx-dep-fdcd8sdfs-gj5gff                             1/1     Running   0          46h   10.42.0.80    master     <none>           <none>
ingress          nginx-dep-fdcd8sdfs-dn80n                              1/1     Running   0          46h   10.42.0.79    master     <none>           <none>
ingress          nginx-doc-7cc85c5899-sdh55                             1/1     Running   0          44h   10.42.0.82    master     <none>           <none>
ingress          nginx-doc-7cc85c5899-gjghs                             1/1     Running   0          44h   10.42.0.83    master     <none>           <none>
prometheus       prometheus-node-exporter-6tl8t                         1/1     Running   0          47h   192.168.1.3   agent-03   <none>           <none>
ingress          ingress-controller-nginx-ingress-controller-rqs8n     1/1     Running   5          47h   192.168.1.3   agent-03   <none>           <none>
prometheus       prometheus-kube-prometheus-operator-68fbcb6d67-8qsnf  1/1     Running   1          46h   10.42.2.52    agent-03   <none>           <none>
ingress          nginx-doc-7cc85c5899-b77j6                             1/1     Running   0          43h   10.42.2.57    agent-03   <none>           <none>
metallb-system   speaker-sk4pz                                          1/1     Running   1          47h   192.168.1.3   agent-03   <none>           <none>

On agent-03, the Nginx-doc pod uses a cluster IP while the MetalLB speaker uses the node's Ethernet IP. Does it depend on what service is running in the pods?

ingress          nginx-doc-7cc85c5899-b77j6                             1/1     Running   0          43h   10.42.2.57    agent-03   <none>           <none>
metallb-system   speaker-sk4pz                                          1/1     Running   1          47h   192.168.1.3   agent-03   <none>           <none>

I can also see that the master has 2 Nginx-doc pods running, which means that when I deploy 3 Nginx-doc replicas, one agent does not get any because the master has taken two of them; the pods are not divided equally.

If I have misconfigured something, which part do I need to fix?

CodePudding user response:

A pod's IP address is provided by the CNI plugin from the range that was specified when the cluster was created (e.g. with kubeadm's --pod-network-cidr flag).
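As a quick check (a sketch, assuming your CNI records the range in the node spec, as the default k3s Flannel setup does), you can list the pod CIDR assigned to each node:

# List each node's pod CIDR; pod IPs like 10.42.0.80 and 10.42.2.57
# in your output come from these per-node ranges.
kubectl get nodes -o custom-columns=NAME:.metadata.name,POD_CIDR:.spec.podCIDR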

Some CNI implementations add extra behavior on top of this.

In your particular case, I believe the pods in question are started with hostNetwork: true in their PodSpec, which gives them direct access to the host's network. That is why they show the node's Ethernet IP (192.168.1.3) instead of a cluster IP.
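A minimal sketch of such a PodSpec (the pod name and image are placeholders, not taken from your cluster):

apiVersion: v1
kind: Pod
metadata:
  name: host-network-example   # hypothetical name, for illustration only
spec:
  hostNetwork: true            # pod shares the node's network namespace,
                               # so it reports the node's IP (e.g. 192.168.1.3)
  containers:
  - name: app
    image: nginx               # placeholder image

You can confirm this on one of your pods with kubectl get pod speaker-sk4pz -n metallb-system -o jsonpath='{.spec.hostNetwork}'; it should print true for host-networked pods.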

CodePudding user response:

Your pods get their IPs from the network plugin (CNI); these are mostly internal, cluster-only IPs.

There are different network plugins; you can choose a CNI as needed: https://kubernetes.io/docs/concepts/cluster-administration/networking/

Pods are exposed by Services, and there are different types of Services: ClusterIP, NodePort, and LoadBalancer. https://kubernetes.io/docs/concepts/services-networking/service/

On agent-03, the Nginx-doc pod uses a cluster IP while the MetalLB speaker uses the node's Ethernet IP. Does it depend on what service is running in the pods?

This could be because of the service type you are using; that is why the IP is different and uses the Ethernet address.

If your service type is LoadBalancer using MetalLB, the service is exposed on an external IP rather than the internal IPs that pods usually have.
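A hedged sketch of such a Service (the name and selector label are assumptions; MetalLB assigns the external IP from its configured address pool):

apiVersion: v1
kind: Service
metadata:
  name: nginx-doc-lb           # hypothetical name
spec:
  type: LoadBalancer           # MetalLB assigns an external IP from its pool
  selector:
    app: nginx-doc             # assumed label on your Nginx-doc pods
  ports:
  - port: 80
    targetPort: 80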

Run kubectl get svc -n <namespace name> and check the TYPE and EXTERNAL-IP columns.

I can also see that the master has 2 Nginx-doc pods running, which means that when I deploy 3 Nginx-doc replicas, one agent does not get any because the master has taken two of them; the pods are not divided equally.

There is no guarantee of an even split; Kubernetes assigns pods to nodes based on a scheduling score.

You can read more about scoring here: https://kubernetes.io/docs/concepts/scheduling-eviction/kube-scheduler/
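If you want the replicas spread more evenly across nodes, one option is topologySpreadConstraints, stable since Kubernetes v1.19. A sketch (the deployment name and labels are assumptions, not taken from your cluster):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-doc              # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-doc
  template:
    metadata:
      labels:
        app: nginx-doc
    spec:
      topologySpreadConstraints:
      - maxSkew: 1                           # at most 1 pod difference between nodes
        topologyKey: kubernetes.io/hostname  # spread across individual nodes
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: nginx-doc
      containers:
      - name: nginx
        image: nginx           # placeholder image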

If you want to pin a pod to a specific node, for example a pod that needs a GPU and must run on the node that has one, you can use a node selector or node affinity; see the sketch after the link below.

https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/
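A minimal nodeSelector sketch (the label key and value are assumptions; you would label the GPU node first with kubectl label node <node-name> hardware=gpu):

apiVersion: v1
kind: Pod
metadata:
  name: gpu-workload           # hypothetical name
spec:
  nodeSelector:
    hardware: gpu              # pod only schedules onto nodes carrying this label
  containers:
  - name: app
    image: nginx               # placeholder image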
