Unable to run kubectl inside dockerized jenkins


I have installed Jenkins using Docker like this:

docker network create jenkins

docker volume create jenkins-docker-certs
docker volume create jenkins-data

docker image pull docker:dind

docker image pull jenkinsci/blueocean

docker container run --name jenkins-docker \
  --restart unless-stopped \
  --detach \
  --privileged --network jenkins \
  --network-alias docker \
  --env DOCKER_TLS_CERTDIR=/certs \
  --volume jenkins-docker-certs:/certs/client \
  --volume jenkins-data:/var/jenkins_home \
  --publish 2376:2376 \
  docker:dind

docker container run --name jenkins-blueocean \
  --restart unless-stopped \
  --detach \
  --network jenkins \
  --env DOCKER_HOST=tcp://docker:2376 \
  --env DOCKER_CERT_PATH=/certs/client \
  --env DOCKER_TLS_VERIFY=1 \
  --volume jenkins-data:/var/jenkins_home \
  --volume jenkins-docker-certs:/certs/client:ro \
  --publish 8080:8080 \
  --publish 50000:50000 \
  jenkinsci/blueocean
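
A quick sanity check at this point (not part of the original setup, and assuming the jenkinsci/blueocean image still bundles the docker CLI) is to confirm the Blue Ocean container can reach the DinD daemon over TLS:

# run the bundled docker CLI inside the Jenkins container against the dind daemon
docker exec jenkins-blueocean docker version

# on the first start, the container log also contains the initial admin password
docker logs jenkins-blueocean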

Now I want to use kubectl inside Jenkins, so I added the kubernetes-cli plugin and installed kubectl as mentioned here. My Jenkinsfile is as follows (kubecreds is the Jenkins credential holding my kubeconfig file):

pipeline {
    agent any
    stages {
        stage('Cloning Repo') {
            steps {
                checkout([$class: 'GitSCM', branches: [[name: '${branch}']], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: 'githubcreds', url: '<repo url>']]])
            }
        }
        stage('List pods') {
            steps {
                withKubeConfig([credentialsId: 'kubecreds']) {
                    sh 'curl -LO "https://storage.googleapis.com/kubernetes-release/release/v1.20.5/bin/linux/amd64/kubectl"'
                    sh 'chmod u+x ./kubectl'
                    sh './kubectl get pods -n stage'
                }
            }
        }
    }
}
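
For reference, withKubeConfig (from the kubernetes-cli plugin) writes the kubecreds kubeconfig to a temporary file and points the KUBECONFIG environment variable at it for the nested steps; the "kubectl configuration cleaned up" line in the logs below is that file being removed. A tiny illustrative check that could be added inside the block:

sh 'echo "$KUBECONFIG"'   // prints the path of the temporary kubeconfig injected by withKubeConfig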

But running this Jenkinsfile throws an error:

+ ./kubectl get pods -n stage
Unable to connect to the server: getting credentials: exec: executable aws not found

It looks like you are trying to use a client-go credential plugin that is not installed.

To learn more about this feature, consult the documentation available at:
      https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins
[Pipeline] }
[kubernetes-cli] kubectl configuration cleaned up
[Pipeline] // withKubeConfig
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 1
Finished: FAILURE
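
For context: a kubeconfig generated for an EKS cluster normally authenticates through a client-go exec credential plugin that shells out to the aws CLI (aws eks get-token), which is why kubectl fails as soon as aws is missing from PATH. A diagnostic sketch for inside the withKubeConfig block (<cluster-name> is a placeholder, and the second line only works once the aws CLI is installed):

sh 'kubectl config view --minify'                      // look for a users[].user.exec entry that runs "aws"
sh 'aws eks get-token --cluster-name <cluster-name>'   // the command the exec plugin would run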

I added the AWS SDK plugin but still got the same error. So I thought of installing the AWS CLI manually and tried:

pipeline {
    agent any
    stages {
        stage('Cloning Repo') {
            steps {
                checkout([$class: 'GitSCM', branches: [[name: '${branch}']], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: 'githubcreds', url: '<repo url>']]])
            }
        }
        stage('List pods') {
            steps {
                withKubeConfig([credentialsId: 'kubecreds']) {
                    sh 'curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"'
                    sh 'unzip awscliv2.zip'
                    sh './aws/install --update -i . -b .'
                    sh './aws --version'
                    sh 'curl -LO "https://storage.googleapis.com/kubernetes-release/release/v1.20.5/bin/linux/amd64/kubectl"'
                    sh 'chmod u+x ./kubectl'
                    sh './kubectl get pods -n stage'
                }
            }
        }
    }
}

but got this error:

+ ./aws/install --update -i . -b .
Found same AWS CLI version: ./v2/2.3.2. Skipping install.
[Pipeline] sh
+ ./aws --version
/var/jenkins_home/workspace/callbreak-deploy-job@tmp/durable-b3361486/script.sh: line 1: ./aws: Permission denied
[Pipeline] }
[kubernetes-cli] kubectl configuration cleaned up
[Pipeline] // withKubeConfig
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 126
Finished: FAILURE

Any ideas on how I can make it work, i.e. run kubectl successfully inside a Jenkins job?

Thanks

------ EDIT 1

As recommended in the answer below, I tried making aws executable:

pipeline {
    agent any
    stages {
        stage('Cloning Repo') {
            steps {
                checkout([$class: 'GitSCM', branches: [[name: '${branch}']], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: 'githubcreds', url: '<repo url>']]])
            }
        }
        stage('List pods') {
            steps {
                withKubeConfig([credentialsId: 'kubecreds']) {
                    sh 'rm awscliv2.zip'
                    sh 'rm -rf aws'
                    sh 'curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"'
                    sh 'unzip awscliv2.zip'
                    sh './aws/install --update -i . -b .'
                    sh 'chmod u+x ./aws'
                    sh './aws --version'
                    sh 'curl -LO "https://storage.googleapis.com/kubernetes-release/release/v1.20.5/bin/linux/amd64/kubectl"'
                    sh 'chmod u+x ./kubectl'
                    sh './kubectl get pods -n stage'
                }
            }
        }
    }
}

but still got the same error:

+ ./aws/install --update -i . -b .
Found same AWS CLI version: ./v2/2.3.2. Skipping install.
[Pipeline] sh
+ chmod u+x ./aws
[Pipeline] sh
+ ./aws --version
/var/jenkins_home/workspace/callbreak-deploy-job@tmp/durable-cf2a75a8/script.sh: line 1: ./aws: Permission denied
[Pipeline] }
[kubernetes-cli] kubectl configuration cleaned up
[Pipeline] // withKubeConfig
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 126
Finished: FAILURE
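
A quick diagnostic at this point (illustrative, not part of the original pipeline) is to check what ./aws actually is; if it is a directory rather than a binary, no chmod will make it runnable:

sh 'ls -ld ./aws'   // a leading "d" in the output means ./aws is the unzipped installer directory, not the CLI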

CodePudding user response:

The error is from the AWS CLI; it is a permission problem.

You should run

chmod +x /usr/bin/aws*

or

sh 'chmod u+x ./aws'

or give the AWS CLI binary execute permission wherever it is installed,

just as you are already doing for kubectl:

sh 'chmod u+x ./kubectl'
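
If the chmod alone does not help, a likely cause worth checking is a name clash: the AWS CLI v2 zip unpacks its installer into ./aws/, so installing with -b . cannot place the aws launcher at ./aws, and running ./aws then tries to execute that directory (hence exit code 126). A sketch that keeps the install and bin directories separate from the unzipped installer:

sh 'curl -sSL "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o awscliv2.zip'
sh 'unzip -q awscliv2.zip'                            // unpacks the installer into ./aws/
sh './aws/install -i "$PWD/aws-cli" -b "$PWD/bin"'    // keep install and bin dirs away from ./aws
sh './bin/aws --version'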

CodePudding user response:

I was finally able to run kubectl in Jenkins after creating a custom Jenkins Docker image (with kubectl and the AWS CLI already installed) and using the AWS plugins.

My Dockerfile:

FROM jenkins/jenkins:2.303.2-jdk11
USER root
RUN apt-get update && apt-get install -y apt-transport-https \
       ca-certificates curl gnupg2 \
       software-properties-common
RUN curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
RUN apt-key fingerprint 0EBFCD88
RUN add-apt-repository \
       "deb [arch=amd64] https://download.docker.com/linux/debian \
       $(lsb_release -cs) stable"
RUN apt-get update && apt-get install -y docker-ce-cli
RUN curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
RUN unzip awscliv2.zip
RUN ./aws/install
RUN curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
RUN chmod +x kubectl
RUN mv ./kubectl /usr/local/bin/kubectl
USER jenkins
RUN aws --version
RUN kubectl version --client
RUN jenkins-plugin-cli --plugins "blueocean:1.25.0 docker-workflow:1.26"
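
To use this image, build and tag it, then start the Jenkins container from that tag instead of jenkinsci/blueocean in the run command from the question (the tag below is just an example):

docker build -t myjenkins-aws-kubectl:1.0 .
# then repeat the jenkins-blueocean "docker container run" command with this tag as the image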

and my new Jenkinsfile:

pipeline {
    agent any
    stages {
        stage('Cloning Repo') {
            steps {
                checkout([$class: 'GitSCM', branches: [[name: '${branch}']], doGenerateSubmoduleConfigurations: false, extensions: [], submoduleCfg: [], userRemoteConfigs: [[credentialsId: 'githubcreds', url: '<repo url>']]])
            }
        }
        stage('List pods') {
            steps {
                withAWS([credentials: 'awscreds']) {
                    sh 'aws eks --region ap-south-1 update-kubeconfig --name <name>'
                    sh 'kubectl apply -f deploy/stage/$service.yaml'
                }
            }
        }
    }
}
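
Note that withAWS comes from the Pipeline: AWS Steps plugin, so that plugin needs to be installed as well. An optional sanity check inside the withAWS block, before calling update-kubeconfig, could be:

sh 'aws sts get-caller-identity'   // confirms the awscreds credentials are being picked up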