How to identify/Fail the Azure Devops pipeline when the pods aren't successfully started

Time:08-05

I am using Azure DevOps to deploy services to AKS. In a few instances, even though the pods weren't started (they were in a crashed state), the pipeline still showed success. I want the task/pipeline to fail if the deployment isn't successfully rolled out. I tried to use

kubectl rollout status deployment name --namespace nsName

but it just reports the status. Even when there is an error in the deployment.yaml, the task simply says there is an error while the rollout status still reports a successful rollout. Is there a way to make the pipeline fail when there is an error in the deployment or the pods aren't created?

My Yaml Task

- task: AzureCLI@2
  inputs:
    azureSubscription: ${{ parameters.svcconn }}
    scriptType: 'pscore'
    scriptLocation: 'inlineScript'
    inlineScript: |
      az aks get-credentials -n $(clusterName) -g $(clusterRG)
      kubectl apply -f '$(Pipeline.Workspace)/Manifest/deployment.yaml' --namespace=${{ parameters.NS }}
      kubectl rollout status deployment ***svc-deployment --namespace=${{ parameters.NS }}

CodePudding user response:

Your kubectl commands are applied successfully to the cluster, which means the YAMLs are updated in the kube cluster, so you will not see an error from the apply or rollout commands in your Azure pipeline.

It's your Kubernetes controller manager that is not able to apply the changes from the updated YAML.

In your inline script, you can add some sleep time and then read the pod running status.

You can get a pod name if you have added an app label in your deployment:

POD=$(kubectl get pod -l app=my-app -o jsonpath="{.items[0].metadata.name}")

For that pod you can then pull the running status:

STATUS=$(kubectl get pods -n default $POD -o jsonpath="{.status.phase}")

You can then check the STATUS variable and set the pipeline result accordingly.
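Putting the pieces together, the check might look like the sketch below (bash form; check_phase is a hypothetical helper, and the sleep, the app=my-app label, and the default namespace are carried over from the snippets above):

```shell
# check_phase: succeed only when the reported pod phase is "Running",
# otherwise print a message and return nonzero so the pipeline step fails.
check_phase() {
  if [ "$1" != "Running" ]; then
    echo "pod phase is '$1'; failing the pipeline"
    return 1
  fi
  return 0
}

# In the pipeline this would be fed by kubectl, e.g.:
#   sleep 30   # give the controller manager time to start the pod
#   POD=$(kubectl get pod -l app=my-app -o jsonpath="{.items[0].metadata.name}")
#   STATUS=$(kubectl get pods -n default "$POD" -o jsonpath="{.status.phase}")
#   check_phase "$STATUS" || exit 1
check_phase "Running" && echo "pod is healthy"
```

A nonzero exit code from the inline script is what actually marks the Azure DevOps task as failed.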

CodePudding user response:

Based on the approach from @DevUtkarsh, I am able to fail my pipeline when the pods are not in the Running state. Thanks to https://kubernetes.io/docs/reference/kubectl/cheatsheet/. Basically, I am doing two things: 1. validate that the right image version is deployed; 2. validate that the pods are in the Running state.

inlineScript: |
  $imageVersion = kubectl get deployment <deployment_name> --namespace ${{ parameters.kubernetesNS }} -o=jsonpath='{$.spec.template.spec.containers[:1].image}'
  # validate whether the right image version is deployed
  if ( "$imageVersion" -ne "$(expectedImageVersion)" )
  {
      "invalid image version"
      exit 1
  }
  # validate whether the minimum number of pods are running
  # --no-headers keeps kubectl's header row out of the count
  $containerCount = (kubectl get pods --selector=app=<appname> --field-selector=status.phase=Running --no-headers --namespace ${{ parameters.kubernetesNS }}).count
  $containerCount
  if ( $containerCount -lt 4 )
  {
      "pods are not in running state"
      exit 1
  }

Note that the comparison must be -lt (fail when fewer than the required 4 pods are Running), and that without --no-headers the header row is included in the count, which is why 3 running pods were originally reported as 4.
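The running-pod threshold check can also be sketched in bash form; enough_pods is a hypothetical helper, and --no-headers keeps kubectl's header row out of the count (without it, 3 running pods are counted as 4):

```shell
# enough_pods: succeed only when at least the required minimum number
# of pods is running; otherwise print a message and return nonzero.
enough_pods() {
  local count=$1 minimum=$2
  if [ "$count" -lt "$minimum" ]; then
    echo "only $count of $minimum required pods are running"
    return 1
  fi
  return 0
}

# In the pipeline the count would come from kubectl, e.g.:
#   count=$(kubectl get pods --selector=app=<appname> \
#     --field-selector=status.phase=Running --no-headers | wc -l)
#   enough_pods "$count" 4 || exit 1
enough_pods 4 4 && echo "rollout ok"
```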
