I wrote a service to retrieve some information from the Kubernetes cluster. Below is a snippet from the kubernetes_service.py
file that works perfectly when I run it on my local machine.
from kubernetes.client.rest import ApiException
from kubernetes import client, config
from exceptions.logs_not_found_exceptions import LogsNotFound
import logging

log = logging.getLogger("services/kubernetes_service.py")


class KubernetesService:
    def __init__(self):
        super().__init__()
        # Load the local kubeconfig (this is what works on my machine)
        config.load_kube_config()
        self.api_instance = client.CoreV1Api()

    def get_pods(self, body):
        try:
            api_response = self.api_instance.list_namespaced_pod(namespace=body['namespace'])
            dict_response = api_response.to_dict()
            pods = []
            for item in dict_response['items']:
                pods.append(item['metadata']['name'])
            log.info(f"Retrieved the pods: {pods}")
            return pods
        except ApiException:
            raise

    def get_logs(self, body):
        try:
            api_response = self.api_instance.read_namespaced_pod_log(name=body['pod_name'], namespace=body['namespace'])
            # Keep only the last 16000 characters of the log output
            tail_logs = api_response[-16000:]
            log.info(f"Retrieved the logs: {tail_logs}")
            return tail_logs
        except ApiException:
            raise LogsNotFound(body['namespace'], body['pod_name'])
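For context, the service is called with a plain dict, roughly like this (the namespace and pod name here are just illustrative; the real caller is presumably bot.py, going by the ENTRYPOINT below):

# Rough usage sketch, not the actual caller
service = KubernetesService()
pods = service.get_pods({"namespace": "ind-iv"})
if pods:
    logs = service.get_logs({"namespace": "ind-iv", "pod_name": pods[0]})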
When building the Docker image, I also installed kubectl in it. Below is my Dockerfile.
FROM python:3.8-alpine
RUN mkdir /app
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt && rm requirements.txt
RUN apk add curl openssl bash --no-cache
RUN curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl" \
&& chmod +x ./kubectl \
&& mv ./kubectl /usr/local/bin/kubectl
COPY . .
EXPOSE 8087
ENTRYPOINT [ "python", "bot.py"]
To grant the container permission to run the command kubectl get pods, I added the role to the deployment.yml file:
apiVersion: v1
kind: Service
metadata:
  name: pyhelper
spec:
  selector:
    app: pyhelper
  ports:
    - protocol: "TCP"
      port: 8087
      targetPort: 8087
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pyhelper
spec:
  selector:
    matchLabels:
      app: pyhelper
  replicas: 1
  template:
    metadata:
      labels:
        app: pyhelper
    spec:
      serviceAccountName: k8s-101-role
      containers:
        - name: pyhelper
          image: **********
          imagePullPolicy: Always
          ports:
            - containerPort: 8087
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: k8s-101-role
subjects:
  - kind: ServiceAccount
    name: k8s-101-role
    namespace: ind-iv
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: k8s-101-role
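For reference, my understanding is that the ServiceAccount shows up inside the pod as mounted credential files (token and CA certificate) rather than as a kubeconfig; a quick check along these lines, using the standard mount path, shows they are present in the container:

# Sanity check for the in-cluster ServiceAccount credentials
# (standard mount path used by Kubernetes)
from pathlib import Path

sa_dir = Path("/var/run/secrets/kubernetes.io/serviceaccount")
for name in ("token", "ca.crt", "namespace"):
    print(name, "exists:", (sa_dir / name).exists())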
At start-up the container returns the error kubernetes.config.config_exception.ConfigException: Invalid kube-config file. No configuration found at the line config.load_kube_config() in the kubernetes_service.py file. I checked the config file by running the command kubectl config view, and the file is indeed empty. What am I doing wrong here?
Empty config file:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
I also tried running the command kubectl get pods in a shell inside the container, and it successfully returned the pods.
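A quick check from inside the container along these lines also shows that the default kubeconfig path that load_kube_config() looks for simply does not exist:

import os

# load_kube_config() defaults to ~/.kube/config, which was never created in the image
kube_config_path = os.path.expanduser("~/.kube/config")
print(kube_config_path, "exists:", os.path.exists(kube_config_path))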
CodePudding user response:
I believe you'll want kubernetes.config.load_config, which differs from the load_kube_config you're currently using: the package-level load_config first looks for a $HOME/.kube/config as you expected, but then falls back to the in-cluster config that the ServiceAccount usage expects:
from kubernetes.config import load_config

class KubernetesService:
    def __init__(self):
        super().__init__()
        load_config()
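If your client version doesn't have load_config, a rough equivalent (assuming ConfigException is importable from kubernetes.config, as it is in recent client releases) is to fall back explicitly:

from kubernetes import config
from kubernetes.config import ConfigException

def load_any_config():
    # Prefer a local kubeconfig (developer machine), then fall back to the
    # in-cluster ServiceAccount credentials when running inside a pod.
    try:
        config.load_kube_config()
    except ConfigException:
        config.load_incluster_config()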