Private GKE cluster behind firewall getting calls from external IP

We are seeing audit logs showing calls to the Kubernetes API from an external IP, even though our cluster is private and sits behind a GCP firewall rule that blocks all ingress except IAP IP ranges (and ICMP). What am I missing?

"protoPayload":{
   "@type":"type.googleapis.com/google.cloud.audit.AuditLog"
   "authenticationInfo":{
      "principalEmail":"system:anonymous"
   }
   "authorizationInfo":["0":{2}]
   "methodName":"io.k8s.post"
   "requestMetadata":{
      "callerIp":"45.*.*.*"
      "callerSuppliedUserAgent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36"
   }
   "resourceName":"Autodiscover/Autodiscover.xml"
   "serviceName":"k8s.io"
   "status":{
      "code":"7"
      "message":"Forbidden"
   }
}

CodePudding user response:

Private GKE clusters have both a private control plane endpoint and a public control plane endpoint, and you can choose to disable the public endpoint, which is the most restrictive level of access. You then manage the cluster through the private endpoint's internal IP address with tools like kubectl, and any VM in the same subnet as your cluster can also reach the private endpoint. However, it is important to note that even if you disable public endpoint access, Google can still use the control plane public endpoint for cluster management purposes, such as scheduled maintenance and automatic control plane upgrades. For more information on creating a private cluster with the public endpoint disabled, consult the public GKE documentation on private clusters.
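
As a rough sketch (YOUR_CLUSTER_NAME and YOUR_ZONE are placeholders, and this assumes the cluster was already created as a private cluster), disabling the public endpoint on an existing cluster looks like this:

gcloud container clusters update YOUR_CLUSTER_NAME \
    --zone YOUR_ZONE \
    --enable-private-endpoint

Alternatively, if you want to keep the public endpoint but limit who can reach it, you can restrict it to specific CIDR ranges with the --enable-master-authorized-networks and --master-authorized-networks flags on the same update command.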

You can review your public endpoints with the following command:

gcloud container clusters describe YOUR_CLUSTER_NAME
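
To narrow the output to just the private cluster settings, something like the following should work (YOUR_ZONE is a placeholder; use --region for regional clusters). The privateClusterConfig block contains enablePrivateNodes, enablePrivateEndpoint, and the private and public endpoint addresses:

gcloud container clusters describe YOUR_CLUSTER_NAME \
    --zone YOUR_ZONE \
    --format "yaml(privateClusterConfig)"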

Also, you can verify that your cluster's nodes do not have external IP addresses with the following command:

kubectl get nodes --output wide
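
If the nodes are private, the EXTERNAL-IP column in that output should show <none> for every node.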