I have a Kubernetes cluster with the following:
- A deployment of some demo web server
- A ClusterIP service that exposes this deployment's pods
Now, I have the cluster IP of the service:
NAME            TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes      ClusterIP   10.96.0.1      <none>        443/TCP   5d3h
svc-clusterip   ClusterIP   10.98.148.55   <none>        80/TCP    16m
Now I can see that I can access this service from the host itself (!), not from within a Pod or anything:
$ curl 10.98.148.55
Hello world ! Version 1
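For comparison, one can also test from inside the cluster with a throwaway pod (the pod name and image here are arbitrary choices):
$ kubectl run curl-test -it --rm --restart=Never --image=curlimages/curl -- curl 10.98.148.55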
The thing is that I'm not sure if this capability is part of the definition of the ClusterIP service - i.e. is it guaranteed to work this way no matter what network plugin I use, or is this plugin-dependent.
The Kubernetes docs state that:
ClusterIP: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default ServiceType
It's not clear what is meant by "within the cluster" - does that mean within a container (pod) in the cluster? or even from the nodes themselves as in the example above?
CodePudding user response:
It depends on the cluster setup. If you are using a GKE cluster and it is set up as VPC-native, then you will be able to reach the ClusterIP service from a host in the same VPC.
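For reference, a VPC-native GKE cluster is one created with alias IPs enabled, e.g. (the cluster name here is just an example):
$ gcloud container clusters create example-cluster --enable-ip-alias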
CodePudding user response:
The definition from the Kubernetes documentation says it's reachable from within the cluster:
ClusterIP: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default ServiceType.
The question is, how do we understand "the cluster"?
Well, the Kubernetes cluster is:
A Kubernetes cluster is a set of node machines for running containerized applications.
So answering your question:
It's not clear what is meant by "within the cluster" - does that mean within a container (pod) in the cluster? or even from the nodes themselves as in the example above?
Yes, it means both the nodes and the pods that are running on those nodes, so your behaviour is normal and expected. This is because kube-proxy runs on every node and programs the Service routing rules (iptables or IPVS) into the node's own network stack, so the ClusterIP is reachable from the node itself, not only from pods.
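You can check this on a node: with kube-proxy in its default iptables mode, the Service rules show up in the nat table (the IP is the one from your example):
$ sudo iptables -t nat -L KUBE-SERVICES -n | grep 10.98.148.55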
Another question:
The thing is that I'm not sure if this capability is part of the definition of the ClusterIP service - i.e. is it guaranteed to work this way no matter what network plugin I use, or is this plugin-dependent.
Basically, the CNI plugin is responsible for the pods' network connectivity:
A CNI plugin is responsible for inserting a network interface into the container network namespace (e.g. one end of a veth pair) and making any necessary changes on the host (e.g. attaching the other end of the veth into a bridge). It should then assign the IP to the interface and setup the routes consistent with the IP Address Management section by invoking appropriate IPAM plugin.
I did a quick test with a fresh kubeadm cluster set up without a CNI plugin. I created a sample Nginx deployment and then a service for it.
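The setup was roughly the following (the deployment name is illustrative; only the service name matches the output below):
$ kubectl create deployment my-nginx --image=nginx
$ kubectl expose deployment my-nginx --name=my-service --port=80
What did I observe?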
- the service is created properly and has an IP address assigned, however...
user@example-ubuntu-kubeadm-template-clear-1:~$ kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   10m
my-service   ClusterIP   10.109.245.27   <none>        80/TCP    6m52s
- the pods from the deployment did not start because the node has a taint:
1 node(s) had taint {node.kubernetes.io/not-ready: }
In the output of kubectl describe node {node-name} I can find:
container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
So yeah, Services are independent of the CNI plugin, but without a CNI plugin you cannot start any pods, so the services are useless.
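To make a cluster like this functional, a CNI plugin has to be installed, for example Flannel (the manifest URL is the one currently documented in the Flannel repo and may change between releases):
$ kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
After that the node becomes Ready, the taint is removed, and the pods behind the service can start.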