I have been using the following snippet to manage Kubernetes autoscaling with Terraform:
resource "helm_release" "cluster-autoscaler" {
depends_on = [
module.eks
]
name = "cluster-autoscaler"
namespace = local.k8s_service_account_namespace
repository = "https://kubernetes.github.io/autoscaler"
chart = "cluster-autoscaler"
version = "9.10.7"
create_namespace = false
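The Helm provider itself is not shown in the snippet above; it authenticates against the EKS cluster through an exec credential plugin, roughly like the sketch below, which is where the apiVersion in the error comes from. This is only a sketch based on the update further down; the data source name "cluster" is an assumption.

# Hedged sketch of the provider wiring implied by the update below;
# the data source name "cluster" is an assumption.
data "aws_eks_cluster" "cluster" {
  name = module.eks.cluster_id
}

provider "helm" {
  kubernetes {
    host                   = data.aws_eks_cluster.cluster.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)

    exec {
      # This is the apiVersion the error rejects; Helm v3.9 and the newer
      # Kubernetes client libraries it bundles expect
      # client.authentication.k8s.io/v1beta1 instead.
      api_version = "client.authentication.k8s.io/v1alpha1"
      command     = "aws"
      args        = ["eks", "get-token", "--cluster-name", var.eks_cluster_name]
    }
  }
}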
While all of this has been working for months (GitLab CI/CD), it has suddenly stopped working and now throws the following error.
module.review_vpc.helm_release.cluster-autoscaler: Refreshing state... [id=cluster-autoscaler]
╷
│ Error: Kubernetes cluster unreachable: exec plugin: invalid apiVersion "client.authentication.k8s.io/v1alpha1"
│
│ with module.review_vpc.helm_release.cluster-autoscaler,
│ on ..\..\modules\aws\eks.tf line 319, in resource "helm_release" "cluster-autoscaler":
│ 319: resource "helm_release" "cluster-autoscaler" {
I am using AWS EKS with Kubernetes version 1.21.
The Terraform providers are as follows:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = "1.14.0"
    }
  }
}
UPDATE 1
Here is the module for eks
module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "17.24.0"
CodePudding user response:
This looks like a Helm v3.9 issue.
Check which version you are using; if it is v3.9, downgrade to v3.8.
Don't forget to confirm that you are also using kubectl v1.21 and AWS CLI v2.7.
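Note that with helm_release the Helm version in play is the one embedded in the Terraform helm provider rather than a standalone helm binary, so the Terraform-side equivalent of this downgrade is pinning the provider to an older release. A minimal sketch, with the exact version constraint as an assumption:

terraform {
  required_providers {
    helm = {
      source = "hashicorp/helm"
      # Assumed pin: pick a provider release that predates the embedded
      # Helm/client-go upgrade which dropped the v1alpha1 exec apiVersion.
      version = "~> 2.4"
    }
  }
}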
CodePudding user response:
I had to make a couple of changes to the Terraform scripts (not sure why they were not required earlier).
Added helm to the required_providers section:

helm = {
  source  = "hashicorp/helm"
  version = "2.3.0"
}
Replaced the token generation from

exec {
  api_version = "client.authentication.k8s.io/v1alpha1"
  args        = ["eks", "get-token", "--cluster-name", var.eks_cluster_name]
  command     = "aws"
}

to

token = data.aws_eks_cluster_auth.cluster.token
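Put together, the provider configuration after this change looks roughly like the sketch below; the full provider block is not part of the question, so the data source names are assumptions.

# Hedged sketch of the token-based provider wiring; the data source
# name "cluster" is an assumption.
data "aws_eks_cluster" "cluster" {
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks.cluster_id
}

provider "helm" {
  kubernetes {
    host                   = data.aws_eks_cluster.cluster.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
    # A token from the AWS provider instead of the exec plugin, so no
    # aws or kubectl binary is needed inside the runner image.
    token                  = data.aws_eks_cluster_auth.cluster.token
  }
}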
Note that I am using the hashicorp/terraform:1.0.11
image on the GitLab runner to execute the Terraform code, so manually installing kubectl or the AWS CLI is not an option in my case.