I am running a 3-node Kubernetes cluster with Flannel as the CNI. I used kubeadm to set up the cluster, and the version is 1.23.
My Pods need to talk to external hosts using DNS names, but there is no DNS server for those hosts. For that, I have added their entries to /etc/hosts on each node in the cluster. The nodes can resolve these hosts, but the Pods cannot.
I searched for this problem on the internet, and there are suggestions to use hostAliases or to update the /etc/hosts file inside the container. My problem is that the list of hosts is large, and it's not feasible to maintain that list in every YAML file.
I also looked for a built-in Kubernetes flag to make Pods look up entries in the node's /etc/hosts, but couldn't find one.
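For context, the hostAliases approach mentioned above looks roughly like this (all names and IPs here are placeholders); the list has to be repeated in every Pod spec, which is why it does not scale to a large host list:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  hostAliases:                      # merged into the container's /etc/hosts at startup
    - ip: "10.0.0.10"               # placeholder IP
      hostnames:
        - "app1.internal.example"   # placeholder hostname
    - ip: "10.0.0.11"
      hostnames:
        - "app2.internal.example"
  containers:
    - name: demo
      image: busybox
      command: ["sleep", "3600"]
```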
So my questions are:
- Why can't the Pods running on a node resolve hosts present in that node's /etc/hosts file?
- Is there a way to set up a local DNS server and have all Pods query it for specific host resolutions?
Any other suggestions or workarounds are also welcome.
CodePudding user response:
A container's environment is isolated from other containers and from its host machine, and that includes /etc/hosts. This is why entries added to the node's /etc/hosts are not visible to Pods.
If you are using CoreDNS (the default cluster DNS), you can easily add extra host entries by modifying its ConfigMap.
Open the ConfigMap with kubectl edit configmap coredns -n kube-system and edit it so that the Corefile includes a hosts section:
    apiVersion: v1
    data:
      Corefile: |
        .:53 {
            ...
            kubernetes cluster.local in-addr.arpa ip6.arpa {
                pods insecure
                fallthrough in-addr.arpa ip6.arpa
                ttl 30
            }
            ### Add the following section ###
            hosts {
                {ip1} {hostname1}
                {ip2} {hostname2}
                ...
                fallthrough
            }
            prometheus :9153
            ...
        }
The new setting will be loaded within a few minutes (the default Corefile includes the reload plugin, which picks up ConfigMap changes), and then all Pods can resolve the hosts described in the ConfigMap.
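As for the second question: if you would rather run a separate local DNS server for these hosts, CoreDNS can forward queries for a specific domain to it instead of holding the entries itself. A minimal sketch, assuming a hypothetical server at 10.0.0.53 serving the zone internal.example, added as an extra server block in the same Corefile:

```
internal.example:53 {
    errors
    cache 30
    forward . 10.0.0.53   # send all queries for internal.example to the local DNS server
}
```

With this in place, Pods keep using the cluster DNS as usual, and only lookups under internal.example are forwarded to your server, so nothing has to be configured per Pod.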