Run Job in GKE from a cloud function


So I was able to connect to a GKE cluster from a Java project and run a job using this: https://github.com/fabric8io/kubernetes-client/blob/master/kubernetes-examples/src/main/java/io/fabric8/kubernetes/examples/JobExample.java

All I needed was to configure my machine's local kubectl to point to the GKE cluster.

Now I want to ask whether it is possible to trigger a job inside a GKE cluster from a Google Cloud Function, which means using the same library https://github.com/fabric8io/kubernetes-client but from a serverless environment. I have tried to run it, but obviously kubectl is not installed on the machine where the Cloud Function runs. I have seen something like this work with an AWS Lambda function that uses https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/ecs/AmazonECS.html to run jobs in an ECS cluster. We're basically trying to migrate from that to GCP, so we're open to any suggestion for triggering the jobs in the cluster from some kind of code hosted on GCP, in case a Cloud Function can't do it.
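
For reference, a minimal sketch of the local setup, modeled on the linked JobExample (the job name, namespace, and image are placeholders, and the exact builder/create calls may differ slightly depending on the fabric8 client version):

```java
import io.fabric8.kubernetes.api.model.batch.v1.Job;
import io.fabric8.kubernetes.api.model.batch.v1.JobBuilder;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;

public class JobLauncher {
  public static void main(String[] args) {
    // The client is auto-configured from the local kubeconfig,
    // i.e. whatever cluster kubectl currently points to.
    try (KubernetesClient client = new KubernetesClientBuilder().build()) {
      Job job = new JobBuilder()
          .withNewMetadata()
            .withName("example-job")
            .withNamespace("default")
          .endMetadata()
          .withNewSpec()
            .withNewTemplate()
              .withNewSpec()
                .addNewContainer()
                  .withName("example")
                  .withImage("busybox")
                  .withCommand("echo", "hello")
                .endContainer()
                .withRestartPolicy("Never")
              .endSpec()
            .endTemplate()
          .endSpec()
          .build();

      // Create the Job in the cluster.
      client.batch().v1().jobs().inNamespace("default").resource(job).create();
    }
  }
}
```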

CodePudding user response:

Yes, you can trigger a job in your GKE cluster from a serverless environment, but you have to be aware of some edge cases.

With Cloud Functions, you don't manage the runtime environment, so you can't control what is installed in the container. In particular, you can't install kubectl on it.

You have 2 solutions:

  • Kubernetes exposes an API. From your Cloud Function you can simply call that API; kubectl is just an API call wrapper, nothing more! Of course, it requires more effort, but if you want to stay on Cloud Functions you don't have any other choice (see the sketch after this list).
  • You can switch to Cloud Run. With Cloud Run you define your own container, so you can install kubectl in it alongside your web server (you have to wrap your function in a web server with Cloud Run, but it's pretty easy ;) ).
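
To illustrate the first option, here is a sketch (not a drop-in implementation) of configuring the fabric8 client explicitly from a Cloud Function, with no kubeconfig or kubectl involved. The cluster endpoint and CA certificate are placeholders you'd take from your cluster description, and it assumes the function's service account has the IAM/RBAC permissions needed to create Jobs:

```java
import io.fabric8.kubernetes.client.Config;
import io.fabric8.kubernetes.client.ConfigBuilder;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClientBuilder;

import com.google.auth.oauth2.GoogleCredentials;

public class GkeJobFromFunction {
  public static void main(String[] args) throws Exception {
    // Placeholders: the cluster endpoint and base64-encoded cluster CA certificate,
    // e.g. copied from `gcloud container clusters describe`.
    String masterUrl = "https://<CLUSTER_ENDPOINT>";
    String caCertData = "<BASE64_CLUSTER_CA_CERT>";

    // Use the Cloud Function's service account to obtain an OAuth2 access token;
    // GKE accepts it as a bearer token if the account has the right permissions.
    GoogleCredentials credentials = GoogleCredentials.getApplicationDefault()
        .createScoped("https://www.googleapis.com/auth/cloud-platform");
    credentials.refreshIfExpired();
    String token = credentials.getAccessToken().getTokenValue();

    // Build the client configuration explicitly instead of reading a kubeconfig.
    Config config = new ConfigBuilder()
        .withMasterUrl(masterUrl)
        .withCaCertData(caCertData)
        .withOauthToken(token)
        .build();

    try (KubernetesClient client = new KubernetesClientBuilder().withConfig(config).build()) {
      // From here, build and create the Job exactly as in the local example above.
      System.out.println("Connected to " + client.getMasterUrl());
    }
  }
}
```

Note that the access token is short-lived, so fetch a fresh one on each invocation. The same explicit configuration also works from Cloud Run if you go with the second option.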

Whatever solution you choose, you also have to be aware of how the GKE control plane is exposed. If it is publicly exposed (generally not recommended), there is no issue. But if you have a private GKE cluster, the control plane is only reachable from the internal network. To solve that with a serverless product, you have to create a Serverless VPC Access connector to bridge the Google-managed serverless VPC with the VPC your GKE control plane lives in.
