Connect to gRPC service via Kubernetes API server proxy?

Let's say we have a Kubernetes service which serves both a RESTful HTTP API and a gRPC API:

apiVersion: v1
kind: Service
metadata:
  namespace: mynamespace
  name: myservice
spec:
  type: ClusterIP
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  - port: 8080
    targetPort: 8080
    protocol: TCP
    name: grpc

We want to be able to reach those service endpoints externally, for example from another Kubernetes cluster.

This could be achieved by changing the service type from ClusterIP to LoadBalancer. However, let's assume that this is not desirable, for example because it requires additional public IP addresses.

An alternative approach would be to use the apiserver proxy, which "connects a user outside of the cluster to cluster IPs which otherwise might not be reachable".

This works with the http endpoint. For example, if the http API exposes an endpoint /api/foo, it can be reached like this:

http://myapiserver/api/v1/namespaces/mynamespace/services/myservice:http/proxy/api/foo
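As a rough sketch of how such a proxied request can be made from outside the cluster (the token and CA file paths are hypothetical, and the apiserver address is the placeholder from the question; your cluster will require its own credentials):

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
	"strings"
)

func main() {
	// Hypothetical paths: a service account token and the cluster CA certificate
	// obtained out of band (e.g. from a kubeconfig or a mounted secret).
	token, err := os.ReadFile("/path/to/token")
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := os.ReadFile("/path/to/ca.crt")
	if err != nil {
		log.Fatal(err)
	}

	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caCert)

	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
	}

	// Call the service's HTTP endpoint through the apiserver proxy path.
	url := "https://myapiserver/api/v1/namespaces/mynamespace/services/myservice:http/proxy/api/foo"
	req, err := http.NewRequest("GET", url, nil)
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Authorization", "Bearer "+strings.TrimSpace(string(token)))

	resp, err := client.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}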

Is it somehow possible to also reach the gRPC service via the apiserver proxy? It would seem that, since gRPC uses HTTP/2, the apiserver proxy won't support it out of the box. For example, doing something like this on the client side...

grpc.Dial("myapiserver/api/v1/namespaces/mynamespace/services/myservice:grpc/proxy")

... won't work.

Is there a way to connect to a gRPC service via the apiserver proxy?

If not, is there a different way to connect to the gRPC service from outside the cluster, without using a LoadBalancer service?

CodePudding user response:

You can use a NodePort service. Each of your k8s workers will start listening on a high port, you can connect to any of the workers, and your traffic will be routed to the target service.
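For example, a sketch of the Service from the question switched to NodePort (the nodePort values are illustrative; if omitted, Kubernetes assigns one from the 30000-32767 range by default):

apiVersion: v1
kind: Service
metadata:
  namespace: mynamespace
  name: myservice
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080   # illustrative; omit to let Kubernetes choose
    protocol: TCP
    name: http
  - port: 8080
    targetPort: 8080
    nodePort: 30081   # illustrative
    protocol: TCP
    name: grpc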

The apiserver proxy solution looks like a workaround to me and is far from a production-grade solution. You shouldn't route traffic to your services through the k8s API servers (even though it's technically possible). The control plane should be doing only control-plane things, not data-plane work (traffic routing, running workloads, ...).

A LoadBalancer service can typically be configured to create an internal LB (with an internal IP from your VPC) instead of an external LB. This is frankly the only 'correct' solution.
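How the internal LB is requested is cloud-specific and usually driven by an annotation on the Service. As a sketch, the AWS annotation is shown below; GKE, Azure, and other providers use their own annotations:

apiVersion: v1
kind: Service
metadata:
  namespace: mynamespace
  name: myservice
  annotations:
    # Cloud-specific: AWS annotation for an internal load balancer.
    # Other clouds use different annotations.
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  - port: 8080
    targetPort: 8080
    protocol: TCP
    name: grpc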

CodePudding user response:

...not to require an additional public IP

A NodePort is not bound to a public IP. That is, your worker nodes can sit in a private network and remain reachable at <node-private-IP>:<nodePort>. In the meantime, you can use kubectl port-forward --namespace mynamespace service/myservice 8080:8080 and connect through localhost.
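A minimal client-side sketch, assuming the port-forward above is running and the gRPC port is served without TLS; the generated stub names in the comments are hypothetical:

package main

import (
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Dial the locally forwarded port instead of the cluster IP.
	conn, err := grpc.Dial("localhost:8080",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	// Use conn with your generated client, e.g.:
	// client := mypb.NewMyServiceClient(conn)
	// resp, err := client.Foo(context.Background(), &mypb.FooRequest{})
}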
