Deploying on EKS via Travis CI fails

Time: 05-05

I am trying to build a CI/CD pipeline: GitHub -> Travis CI -> AWS EKS. Everything works fine, the images are pushed to Docker Hub and so on, but when Travis executes `kubectl apply -f "the files"` it throws an error:

error: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1"

(There is nothing wrong with the source code/deployment/service files; I deployed them manually on AWS EKS and they worked fine.)


#-----------------travis.yml-------------
sudo: required
services:
  - docker
env:
  global:
    - SHA=$(git rev-parse HEAD)
before_install:
# Install kubectl
  - curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
  - chmod +x ./kubectl
  - sudo mv ./kubectl /usr/local/bin/kubectl

 # Install AWS CLI
  - if ! [ -x "$(command -v aws)" ]; then curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" ; unzip awscliv2.zip ; sudo ./aws/install ; fi
  # export environment variables for AWS CLI (using Travis environment variables)
  - export AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
  - export AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
  - export AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION}
  # Setup kubectl config to use the desired AWS EKS cluster
  - aws eks update-kubeconfig --region ${AWS_DEFAULT_REGION} --name ${AWS_EKS_CLUSTER_NAME}
  
  - echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin
  - docker build -t akifboi/multi-client -f ./client/Dockerfile.dev ./client
 # - aws s3 ls

script:
  - docker run -e CI=true akifboi/multi-client npm test

deploy:
  provider: script
  script: bash ./deploy.sh
  on:
    branch: master
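
Since the `before_install` step above always downloads whatever `stable.txt` points at, the kubectl version can change underneath the build. A safer sketch of that step, pinning the version reported working later in this thread (v1.23.6), looks like:

```yaml
before_install:
  # Pin kubectl instead of following stable.txt, so a new release
  # (v1.24 dropped the v1alpha1 exec API) cannot break the build.
  - curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.23.6/bin/linux/amd64/kubectl
  - chmod +x ./kubectl
  - sudo mv ./kubectl /usr/local/bin/kubectl
```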
#----deploy.sh--------
# docker build -t akifboi/multi-client:latest -t akifboi/multi-client:$SHA -f ./client/Dockerfile ./client
# docker build -t akifboi/multi-server:latest -t akifboi/multi-server:$SHA -f ./server/Dockerfile ./server
# docker build -t akifboi/multi-worker:latest -t akifboi/multi-worker:$SHA -f ./worker/Dockerfile ./worker

# docker push akifboi/multi-client:latest
# docker push akifboi/multi-server:latest
# docker push akifboi/multi-worker:latest

# docker push akifboi/multi-client:$SHA
# docker push akifboi/multi-server:$SHA
# docker push akifboi/multi-worker:$SHA

echo "starting"
aws eks --region ap-south-1 describe-cluster --name test001 --query cluster.status # the problem happens here!
echo "applying k8 files"
kubectl apply -f ./k8s/
# kubectl set image deployments/server-deployment server=akifboi/multi-server:$SHA
# kubectl set image deployments/client-deployment client=akifboi/multi-client:$SHA
# kubectl set image deployments/worker-deployment worker=akifboi/multi-worker:$SHA

echo "done"
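
Worth noting from the log below: the build exits 0 even though `kubectl apply` fails, because deploy.sh has no error handling, so Travis reports the deploy as successful. A minimal sketch of a guard (this line is a suggestion, not in the original script):

```shell
#!/bin/bash
# Suggested first line of deploy.sh: abort on the first failing command
# so a kubectl error fails the Travis deploy stage instead of exiting 0.
set -e
```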
#------travis logs----------
Last few lines:

starting

"ACTIVE"

applying k8 files

error: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1"

done

Already up to date.

HEAD detached at c1858f7

Untracked files:

  (use "git add <file>..." to include in what will be committed)

    aws/

    awscliv2.zip

nothing added to commit but untracked files present (use "git add" to track)

Dropped refs/stash@{0} (3b51f951e824689d6c35fc40dadf6fb8881ae225)

Done. Your build exited with 0.

CodePudding user response:

We were installing the latest version of kubectl in CI and hit this error today. After pinning to a previous version (1.18) the error was resolved.

The last working version was v1.23.6; we saw errors with v1.24.
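
The underlying issue is that kubectl v1.24 removed support for the `client.authentication.k8s.io/v1alpha1` exec-credential API, while the kubeconfig written by an older AWS CLI's `aws eks update-kubeconfig` still requests it. A minimal sketch for checking which API version a kubeconfig pins (`check_kubeconfig_exec_api` is a hypothetical helper, not part of any tool):

```shell
# check_kubeconfig_exec_api FILE — hypothetical helper: report whether FILE
# still pins the v1alpha1 exec-credential API that kubectl v1.24 removed.
check_kubeconfig_exec_api() {
  if grep -q 'client\.authentication\.k8s\.io/v1alpha1' "$1" 2>/dev/null; then
    echo "v1alpha1 found: pin kubectl <= v1.23.x or upgrade the AWS CLI"
    return 1
  fi
  return 0
}
```

Upgrading the AWS CLI and re-running `aws eks update-kubeconfig` rewrites the exec section to a newer API version that current kubectl accepts, so either side of the mismatch can be fixed.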

CodePudding user response:

I can confirm it is working with version v1.22.0.

If anyone is looking for a CircleCI solution, they can try the code below:

    steps:
      - checkout
      - kubernetes/install-kubectl:
          kubectl-version: v1.22.0