Kubernetes Network Policy Egress to pod via service


I have some pods that talk to each other through Kubernetes Services rather than pod IPs, and now I want to lock things down using Network Policies, but I can't seem to get the egress right.

In this scenario I have two pods:

  • sleeper, the client
  • frontend, the server, sitting behind a Service called frontend-svc which forwards port 8080 to the pod's port 80 (see the sketch below)

Both running in the same namespace: ns
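
For reference, a Service matching that description would look roughly like this (a sketch; the real manifest may differ in details such as labels):

apiVersion: v1
kind: Service
metadata:
  name: frontend-svc
  namespace: ns
spec:
  selector:
    app: frontend      # assumed to match the label on the frontend pods
  ports:
  - port: 8080         # Service port
    targetPort: 80     # pod port
    protocol: TCP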

In the sleeper pod I simply wget a ping endpoint in the frontend pod:

wget -qO- http://frontend-svc.ns:8080/api/Ping

Here's my egress policy:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-to-frontend-egress
  namespace: ns
spec:
  podSelector:
    matchLabels:
      app: sleeper
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: frontend

As you can see, nothing special; no ports, no namespace selector, just a single label selector for each pod.

Unfortunately, this breaks my ping:
wget: bad address 'frontend-svc.ns:8080'

However, if I retrieve the pod's IP (using kubectl get po -o wide) and talk to the frontend directly, I do get a response:
wget -qO- 10.x.x.x:80/api/Ping (with x replaced by the actual values)
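
As an aside, the frontend pod's IP can be pulled out in one go (a sketch, assuming the frontend pods carry the app: frontend label used in the policy above):

kubectl -n ns get pod -l app=frontend -o jsonpath='{.items[0].status.podIP}'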

My intuition was that the sleeper pod also needs egress to kube-dns for name resolution, so I added another egress policy:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-kube-system
  namespace: ns
spec:
  podSelector: {}
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: "kube-system"
      podSelector: {}
  policyTypes:
  - Egress

For now I don't want to bother with the exact pod and port, so I allow all pods in the ns namespace to egress to any pod in kube-system.

However, this didn't help at all. Even worse, it also broke the communication via pod IP.
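
One way to narrow this down (a sketch, assuming the client pod is literally named sleeper and its image ships nslookup, as busybox-based images do) is to test name resolution on its own:

# If this times out while the wget against the pod IP succeeds,
# egress to kube-dns on port 53 is what's being blocked.
kubectl -n ns exec sleeper -- nslookup frontend-svc.ns.svc.cluster.local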

I'm running on Azure Kubernetes Service (AKS) with Calico network policies.

Any clue what the issue might be? I'm out of ideas.


After getting it up and running, here's a more locked-down version of the DNS egress policy:

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-all-pods-dns-egress
  namespace: ns
spec:
  policyTypes:
  - Egress
  podSelector: {} 
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          # The kubernetes.io/metadata.name label is set automatically on every namespace from Kubernetes 1.21 onwards; on older clusters, label the kube-system namespace yourself.
          kubernetes.io/metadata.name: "kube-system"  
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:
    - port: 53
      protocol: UDP
    - port: 53
      protocol: TCP
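
In the same spirit, the frontend egress policy from the top could be tightened to the pod port the Service targets (a sketch; port 80/TCP is assumed from the setup described above):

kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-to-frontend-egress
  namespace: ns
spec:
  podSelector:
    matchLabels:
      app: sleeper
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - port: 80        # the pod port (the Service's targetPort), not the Service port 8080
      protocol: TCP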

CodePudding user response:

I recreated your deployment and the final NetworkPolicy (egress to kube-system for DNS resolution) solves it for me. Make sure that after applying the last network policy you test the connection against the Service's port (8080) again; you had switched to port 80 in your wget command when accessing the pod directly.
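
For example, something along these lines (assuming the client pod is reachable under the name sleeper; a Deployment-generated name will differ):

# Test via the Service again: Service port 8080, which forwards to the pod's port 80.
kubectl -n ns exec sleeper -- wget -qO- http://frontend-svc.ns:8080/api/Ping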

Since network policies are a drag to manage, my team and I wanted to automate their creation, and we open-sourced a tool you might be interested in: https://docs.otterize.com/quick-tutorials/k8s-network-policies.

It's a way to manage network policies where you declare your access requirements in a separate, human-readable resource and the labeling is done for you on-the-fly.
