I was testing Kubernetes network policies first before using them for a production requirement, but unfortunately I could not make them work yet and am looking for a solution.
My test environment is a Kind Kubernetes cluster on WSL.
I am trying everything in the namespace "networkpolicy":
→ kubectl -n networkpolicy get ns networkpolicy
NAME STATUS AGE
networkpolicy Active 174m
Two pods running in that namespace:
→ kubectl -n networkpolicy get pods --show-labels -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS
np-busybox 1/1 Running 0 151m 10.244.0.11 selfie-control-plane <none> <none> app=client
np-nginx 1/1 Running 0 9m52s 10.244.0.12 selfie-control-plane <none> <none> app=nginx
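(For completeness, the pod definitions are not shown above; they could have been created with something like the commands below. The images are not given in the question, so the client image here is just an example that bundles curl.)
→ kubectl -n networkpolicy run np-nginx --image=nginx --labels=app=nginx
→ kubectl -n networkpolicy run np-busybox --image=curlimages/curl --labels=app=client --command -- sleep infinity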
You can see the pod "np-nginx" has the label "app=nginx".
A network policy was created with the podSelector "app: nginx":
→ kubectl -n networkpolicy describe networkpolicy
Name: my-networkpolicy
Namespace: networkpolicy
Created on: 2022-10-08 21:49:16 +0530 IST
Labels: <none>
Annotations: <none>
Spec:
PodSelector: app=nginx
Allowing ingress traffic:
<none> (Selected pods are isolated for ingress connectivity)
Allowing egress traffic:
<none> (Selected pods are isolated for egress connectivity)
Policy Types: Ingress, Egress
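For reference, the manifest itself is not shown above; a minimal NetworkPolicy that matches this describe output would look roughly like this (only the name, namespace, podSelector and policyTypes are taken from the output):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: my-networkpolicy
  namespace: networkpolicy
spec:
  podSelector:
    matchLabels:
      app: nginx
  policyTypes:
  - Ingress
  - Egress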
So I think specifying the policy types Ingress and Egress without explicitly specifying any rules under them means all connections are denied by default. Is that correct?
I tried to curl the nginx pod IP from the busybox client pod, and it is able to connect fine even though the network policy is in place.
→ kubectl -n networkpolicy exec np-busybox -- curl -s 10.244.0.12 | html2text
****** Welcome to nginx! ******
If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.
For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.
Thank you for using nginx.
Is there something wrong with what I tried?
CodePudding user response:
OK, I figured out the solution now.
Kind ships with a simple networking implementation, kindnet, which does not support NetworkPolicy.
You can change the CNI on your Kind cluster to Calico (which does support NetworkPolicy) as follows.
You can see that kindnet is present and Calico is not:
~ → kubectl -n kube-system get all | grep calico
~ →
~ → kubectl -n kube-system get all | grep kindnet
pod/kindnet-mmlgj 1/1 Running 4 (2d1h ago) 2d21h
daemonset.apps/kindnet 1 1 1 1 1 <none> 2d21h
Get into the Docker container of the control-plane node:
~ → docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1beac63b6221 kindest/node:v1.25.2 "/usr/local/bin/entr…" 2 days ago Up 2 days 127.0.0.1:34235->6443/tcp selfie-control-plane
~ → docker exec -it 1beac63b6221 bash
root@selfie-control-plane:/#
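While inside the node container, one way to see which CNI is configured is to list the standard CNI config directory (only a kindnet config should be present at this point):
root@selfie-control-plane:/# ls /etc/cni/net.d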
Create the following YAML file with the option "disableDefaultCNI" set to true, to disable the Kind cluster's default kindnet CNI:
root@selfie-control-plane:/# cat <<EOF >/etc/kubernetes/manifests/kind-calico.yaml
kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
networking:
  disableDefaultCNI: true # disable kindnet
EOF
root@selfie-control-plane:/# exit
exit
Exit from the container, then stop and start the Kind cluster's Docker container:
~ → docker stop selfie-control-plane
selfie-control-plane
~ → docker start selfie-control-plane
selfie-control-plane
~ → docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1beac63b6221 kindest/node:v1.25.2 "/usr/local/bin/entr…" 2 days ago Up 7 seconds 127.0.0.1:34235->6443/tcp selfie-control-plane
~ →
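As an aside, if you are creating the cluster from scratch, the same option can be passed to kind at cluster-creation time instead of editing the running node; a sketch based on the kind documentation (newer kind releases use the config apiVersion kind.x-k8s.io/v1alpha4, and the cluster name here is just an example):
~ → cat <<EOF >kind-calico.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  disableDefaultCNI: true # disable kindnet
EOF
~ → kind create cluster --name selfie --config kind-calico.yaml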
Install the Calico CNI plugin now:
~ → kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
poddisruptionbudget.policy/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created
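Before re-testing, it may be worth waiting for the Calico pods to become Ready; the label below is the one used by the standard calico.yaml manifest:
~ → kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=calico-node --timeout=120s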
Now the curl can no longer connect and simply times out after a long wait (note that the nginx pod has a different IP now, after the node restart and CNI change):
→ kubectl -n networkpolicy exec np-busybox -- curl -s 10.244.100.66
.
.
.
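To avoid waiting on the blocked connection indefinitely, the re-test can be bounded with curl's standard --max-time option (curl exits with code 28 when the timeout is hit):
→ kubectl -n networkpolicy exec np-busybox -- curl -s --max-time 5 10.244.100.66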