How does Gitlab runner with Kubernetes executor create pods when it is a pod itself?


Hey, I'm new to CI/CD with GitLab and I am a bit confused.

I have a Kubernetes cluster connected to a GitLab instance to run CI/CD pipelines. There is a GitLab runner with the Kubernetes executor; from what I understand, that means there is a pod which runs the pipelines.

A look with kubectl get pods -n gitlab-runner supports that (right now there is some other issue, but normally it is 1/1 Running):

NAMESPACE        NAME                                           READY   STATUS    RESTARTS   AGE
gitlab-runner    gitlab-runner-gitlab-runner-6b7bf4d766-9t4k6   0/1     Running   248        29d

The CI/CD pipeline calls commands like kubectl apply -f [...] to create new deployments and pods. But why does that work? If the pipeline commands run inside the pod, modifications to the host cluster's configuration should be impossible, right? I thought the whole point of containerization is that guests can't modify the host.

Where is the flaw in my logic?

CodePudding user response:

"I thought the whole point of containerization is that guests can't modify the host."

You are overlooking the ServiceAccount token that is injected into every Pod by default (it can be disabled per Pod). Those ServiceAccount objects can be bound to Role or ClusterRole objects to grant them privileges to operate against the Kubernetes API, which is exposed in-cluster at the well-known DNS address https://kubernetes.default.svc.cluster.local.
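
As a minimal sketch, such a binding might look like the manifests below. The names and the exact resource/verb lists are illustrative, not taken from your cluster; the GitLab Runner Helm chart normally creates an equivalent ServiceAccount and binding for you:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: gitlab-runner              # hypothetical name; match your install
  namespace: gitlab-runner
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: gitlab-runner-manage-pods  # hypothetical name
  namespace: gitlab-runner
rules:
  # The runner needs to create and clean up the build pods it launches
  - apiGroups: [""]
    resources: ["pods", "pods/exec", "secrets", "configmaps"]
    verbs: ["get", "list", "watch", "create", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: gitlab-runner-manage-pods
  namespace: gitlab-runner
subjects:
  - kind: ServiceAccount
    name: gitlab-runner
    namespace: gitlab-runner
roleRef:
  kind: Role
  name: gitlab-runner-manage-pods
  apiGroup: rbac.authorization.k8s.io

Note that a Role/RoleBinding pair only grants rights inside one namespace; if your pipeline runs kubectl apply against other namespaces, the ServiceAccount it uses needs a ClusterRole/ClusterRoleBinding instead.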

So, yes, the container mostly can't modify the host, but Kubernetes is an orchestration engine: the GitLab runner can ask the API server to spin up a new Pod within the cluster, and as long as the runner presents valid Kubernetes credentials, that request is taken just as seriously as if a user had issued it through kubectl.
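
You can see this mechanism from inside any pod that has a ServiceAccount token mounted; the paths and the API address below are the standard in-cluster defaults, and the namespace in the URL is just an example. This is essentially what kubectl (or the runner's client library) does under the hood:

# Read the credentials Kubernetes mounted into the pod
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
CACERT=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt

# Call the in-cluster API server directly, e.g. list pods in a namespace
curl --cacert "$CACERT" \
     -H "Authorization: Bearer $TOKEN" \
     https://kubernetes.default.svc.cluster.local/api/v1/namespaces/gitlab-runner/pods

Whether that request succeeds or returns 403 Forbidden depends entirely on what the bound Role/ClusterRole allows; the container boundary itself is not what's being enforced.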

Another way to look at this: you would have equal success running gitlab-runner outside of Kubernetes entirely, so long as you provided it with credentials to the cluster. You'd just then have the problem of running another VM outside of your existing cluster infrastructure, with all the maintenance burden that entails.
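
As a rough sketch of what that external setup could look like in the runner's config.toml, assuming the documented [runners.kubernetes] settings (the URL, token, and address values here are placeholders, not real credentials):

concurrent = 1

[[runners]]
  name = "external-runner"             # placeholder name
  url = "https://gitlab.example.com"   # placeholder GitLab URL
  token = "REDACTED"                   # runner authentication token
  executor = "kubernetes"
  [runners.kubernetes]
    host = "https://203.0.113.10:6443" # cluster API server, placeholder
    bearer_token = "REDACTED"          # ServiceAccount token with pod rights
    namespace = "gitlab-runner"        # where build pods get created
    image = "alpine:latest"            # default job image

When the runner instead runs inside the cluster, you simply omit host and bearer_token and it auto-discovers the API server via the injected ServiceAccount, which is exactly the situation in your question.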
