How to limit disk usage on kubernetes cluster using terraform

Time:12-13

Background

I have been playing around with the GCP free tier, trying to learn how to apply GitOps with IaC. To that end, I am creating the infrastructure for a Kubernetes cluster with Terraform, using Google as the cloud provider.

I managed to configure GitHub Actions to apply the changes whenever a push is made. However, I get the following error:

│ Error: googleapi: Error 403: Insufficient regional quota to satisfy request: resource "SSD_TOTAL_GB": request requires '300.0' and is short '50.0'. project has a quota of '250.0' with '250.0' available. View and manage quotas at https://console.cloud.google.com/iam-admin/quotas?usage=USED&project=swift-casing-370717., forbidden
│ 
│   with google_container_cluster.primary,
│   on main.tf line 26, in resource "google_container_cluster" "primary":
│   26: resource "google_container_cluster" "primary" {

Configuration

The Terraform configuration file mentioned above is the following:

# https://registry.terraform.io/providers/hashicorp/google/latest/docs
provider "google" {
  project = "redacted"
  region  = "europe-west9"
}

# https://www.terraform.io/language/settings/backends/gcs
terraform {
  backend "gcs" {
    bucket = "redacted"
    prefix = "terraform/state"
  }
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 4.0"
    }
  }
}

resource "google_service_account" "default" {
  account_id   = "service-account-id"
  display_name = "We still use master"
}

resource "google_container_cluster" "primary" {
  name     = "k8s-cluster"
  location = "europe-west9"

  # We can't create a cluster with no node pool defined, but we want to only use
  # separately managed node pools. So we create the smallest possible default
  # node pool and immediately delete it.
  remove_default_node_pool = true
  initial_node_count       = 1
}

resource "google_container_node_pool" "primary_preemptible_nodes" {
  name       = "k8s-node-pool"
  location   = "europe-west9"
  cluster    = google_container_cluster.primary.name
  node_count = 1

  node_config {
    preemptible  = true
    machine_type = "e2-small"

    # Google recommends custom service accounts that have cloud-platform scope and permissions granted via IAM Roles.
    service_account = google_service_account.default.email
    oauth_scopes = [
      "https://www.googleapis.com/auth/cloud-platform"
    ]
  }
}

Problem

It seems that I need to limit the requested disk space so it uses at most 250 GB. How can I do that?

What I have tried

Reducing node_pool size.

According to the documentation, the default disk size is 100 GB, so I changed it to 50 GB as follows:

resource "google_container_node_pool" "primary_preemptible_nodes" {
  name       = "k8s-node-pool"
  location   = "europe-west9"
  cluster    = google_container_cluster.primary.name
  node_count = 1

  node_config {
    preemptible  = true
    machine_type = "e2-small"
    disk_size_gb = 50

    # Google recommends custom service accounts that have cloud-platform scope and permissions granted via IAM Roles.
    service_account = google_service_account.default.email
    oauth_scopes = [
      "https://www.googleapis.com/auth/cloud-platform"
    ]
  }
}

Despite reducing the node pool's disk size, the error message did not change at all.

Answer

The error occurs while creating the cluster itself, before your separately managed node pool exists: `google_container_cluster` creates a temporary default node pool whose disks use the 100 GB default. Since a regional cluster in `europe-west9` places one node in each of three zones, that is 3 × 100 GB = 300 GB of SSD, which matches the 300 GB in the error and exceeds your 250 GB quota. The `google_container_cluster` resource also accepts a `node_config` block, so you can cap the default pool's disk size there. Update the configuration as follows:

resource "google_container_cluster" "primary" {
  name     = "k8s-cluster"
  location = "europe-west9"

  # We can't create a cluster with no node pool defined, but we want to only use
  # separately managed node pools. So we create the smallest possible default
  # node pool and immediately delete it.
  remove_default_node_pool = true
  initial_node_count       = 1

  node_config {
    disk_size_gb = 50
  }

}
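Putting the two changes together, a sketch of both resources with the disk sizes capped (the 50 GB values are illustrative; any sizes whose per-zone total stays under the 250 GB quota should work) might look like:

# Cap the temporary default node pool created with the cluster,
# since the quota check happens before that pool is deleted.
resource "google_container_cluster" "primary" {
  name     = "k8s-cluster"
  location = "europe-west9"

  remove_default_node_pool = true
  initial_node_count       = 1

  node_config {
    disk_size_gb = 50
  }
}

# Cap the separately managed node pool as well.
resource "google_container_node_pool" "primary_preemptible_nodes" {
  name       = "k8s-node-pool"
  location   = "europe-west9"
  cluster    = google_container_cluster.primary.name
  node_count = 1

  node_config {
    preemptible  = true
    machine_type = "e2-small"
    disk_size_gb = 50

    service_account = google_service_account.default.email
    oauth_scopes = [
      "https://www.googleapis.com/auth/cloud-platform"
    ]
  }
}

Note that changing `node_config` on an existing cluster forces recreation, so this is best applied before the cluster is first created.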
