Network policy behavior for multi-node cluster


I have a multi-node cluster with Kubernetes network policies defined for its pods. I can reach a service or pod via its clusterIP/podIP only from the node where the pod resides. For services backed by multiple pods, I cannot reach the service from a node at all (I assume it only works when the service happens to route the traffic to a pod running on the same node I am calling from).

Is this the expected behavior? Is it a Kubernetes limitation or a security feature? For debugging and similar tasks we may need to access the services from a node. How can I achieve that?

CodePudding user response:

No, this is not the expected behavior for Kubernetes. Pods should be reachable from all nodes in the same cluster through their internal IPs. A ClusterIP service exposes the service on a cluster-internal IP, making it reachable from anywhere within the cluster; ClusterIP is also the default service type, as stated in the Kubernetes documentation.
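
As a minimal sketch (the names and ports are hypothetical), a Service without an explicit type defaults to ClusterIP and should be reachable from any node or pod in the cluster:

```yaml
# Hypothetical Service; "type" is omitted, so it defaults to ClusterIP.
apiVersion: v1
kind: Service
metadata:
  name: my-app
  namespace: default
spec:
  selector:
    app: my-app          # matches the pods backing the service
  ports:
    - port: 80           # cluster-internal port used together with the clusterIP
      targetPort: 8080   # port the pod containers actually listen on
```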

Services are not node-specific; a service can point to a pod regardless of which node it runs on at any given moment. Also make sure you are using the cluster-internal port (the service's port:, not the targetPort:) when trying to reach the service. If you can still connect to a pod only from the node where it is running, check whether something is wrong with your networking - e.g. whether UDP ports used by the overlay network are blocked between nodes.

EDIT: Concerning network policies - by default, a pod is non-isolated for both ingress and egress, i.e. if no NetworkPolicy resource selects the pod, all traffic to/from that pod is allowed - the so-called default-allow behavior. In other words, without network policies all pods can communicate with all other pods/services in the same cluster, as described above. If one or more NetworkPolicies apply to a particular pod (meaning a NetworkPolicy that both selects the pod and lists "Ingress"/"Egress" in its policyTypes), the pod will reject all traffic that is not explicitly allowed by those policies - the default-deny behavior.
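
For illustration, a sketch of how that switch is usually made (namespace and labels are hypothetical): an empty podSelector with both directions in policyTypes selects every pod in the namespace and allows nothing, turning default-allow into default-deny:

```yaml
# Hypothetical default-deny policy: selects all pods in the namespace,
# lists both directions in policyTypes, and defines no allow rules.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: default
spec:
  podSelector: {}        # empty selector = all pods in this namespace
  policyTypes:
    - Ingress
    - Egress
```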

What is more:

Network policies do not conflict; they are additive. If any policy or policies apply to a given pod for a given direction, the connections allowed in that direction from that pod is the union of what the applicable policies allow.

So yes, this is expected behavior for Kubernetes NetworkPolicy - when a pod is isolated for ingress/egress, the only allowed connections into/from the pod are those from the pod's own node and those allowed by the ingress/egress rules of the applicable NetworkPolicies. To stay compatible with this, Calico network policy follows the same behavior for Kubernetes pods. A NetworkPolicy selects pods in its own namespace; traffic from pods in the same or a different namespace can be allowed with the help of the selectors.
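
A sketch of such a selector-based allow rule (all labels are hypothetical): it admits ingress to the selected pods from pods in any namespace labelled team: debug, and because policies are additive it simply combines with a default-deny policy like the one above - the allowed connections are the union of what all applicable policies allow:

```yaml
# Hypothetical allow rule layered on top of a default-deny policy.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-debug-namespace
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: my-app          # hypothetical label of the target pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              team: debug  # hypothetical label on the allowed namespace
```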

As for node-specific policies - nodes can't be targeted by their Kubernetes identities; instead, CIDR notation must be used in the form of an ipBlock in the NetworkPolicy, selecting particular IP ranges to allow as ingress sources or egress destinations for the pods.
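
A sketch of such an ipBlock rule, assuming a hypothetical 10.0.0.0/24 range that covers the node IPs; it allows ingress to the selected pods from that range, e.g. so the pods can be reached directly from the nodes for debugging:

```yaml
# Hypothetical policy allowing ingress from the node IP range.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-node-cidr
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: my-app            # hypothetical label of the target pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - ipBlock:
            cidr: 10.0.0.0/24   # hypothetical CIDR covering the node IPs
```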

Whitelisting the Calico IP addresses for each node might be a valid option in this case; please have a look at the similar issue described here.
