In the Istio ingress gateway, how does Istio Proxy figure out the service port that was used?


My question is related to, and essentially the same as, How are the various Istio Ports used?, but I will try to simplify it.

Consider that I have the following istio operator config:

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
...
spec:
...
  components:
...
    ingressGateways:
...
      k8s:
        service:
          ports:
          - port: 80
            targetPort: 8080
            nodePort: 30080
            name: http2
            protocol: TCP

And I already have an ingress that forwards traffic to port 80.

My question is about the targetPort here. The request will go through the ingress on, e.g., port 80, then through the node's nodePort and the service port, and finally to the targetPort 8080, where it lands on the Istio proxy of the ingress gateway.
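For reference, the operator entry above renders into a Kubernetes Service roughly like this sketch (name, namespace and selector are assumed from a default istioctl install):

apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-system
spec:
  type: NodePort             # or LoadBalancer, depending on the install
  selector:
    istio: ingressgateway
  ports:
  - name: http2
    port: 80          # the service port, which the Gateway below refers to
    targetPort: 8080  # the port Envoy actually binds inside the pod
    nodePort: 30080
    protocol: TCP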

The right Gateway configuration to route this request into the mesh will use port 80, not 8080, something like:

apiVersion: networking.istio.io/v1beta1
kind: Gateway
...
spec: 
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80

The question is: given that the ingress gateway receives the traffic on port 8080, how does it figure out that the service port originally used was 80? Note that this example works fine without any issue.

Does the proxy query the kube-api server to find out which service port corresponds to the port the request arrived on, and based on that decide which Gateway this request belongs to?

CodePudding user response:

My question is about the targetPort here. The request will go through the ingress on, e.g., port 80, then through the node's nodePort and the service port, and finally to the targetPort 8080, where it lands on the Istio proxy of the ingress gateway.

The incoming requests coming from outside the cluster are routed to the ingress gateway pod. The port and targetPort here belong to the ingress gateway service and the ingress gateway pod respectively. Please run the command below to check the ingress gateway service:

kubectl get svc istio-ingressgateway -n istio-system
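To print just the port chain (service name and namespace assumed from a default install):

kubectl get svc istio-ingressgateway -n istio-system \
  -o jsonpath='{range .spec.ports[*]}{.name}{": port="}{.port}{" targetPort="}{.targetPort}{" nodePort="}{.nodePort}{"\n"}{end}'

For the http2 entry from the question this prints port=80, targetPort=8080, nodePort=30080.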

The question is: given that the ingress gateway receives the traffic on port 8080, how does it figure out that the service port originally used was 80? Note that this example works fine without any issue.

When an application-specific Gateway and a VirtualService are created with particular hosts in them, it is the VirtualService's responsibility to forward traffic arriving at a particular host or gateway port. More details about Gateway and VirtualService are in the documentation.

CodePudding user response:

In my answer to the question you mentioned, I described how Kubernetes takes the k8s Service definition to set up routing in the cluster.

Your question focuses more on the Istio part, so let's go deeper into that here, but you might want to read the other answer first.

So you send a request to port 80, which is internally routed to the ingress gateway pod on port 8080. But the request is still "for port 80", so you need to configure the ingressgateway application to listen on that port. Note that the proxy does not look anything up at request time: istiod resolves the Gateway's port 80 against the gateway Service's targetPort when it generates the Envoy configuration, as you can observe below.

After installing Istio on a vanilla minikube you can get the initial config of the ingressgateway pod and save it for later:

istioctl pc all pod/istio-ingressgateway-<randomstring>.istio-system -o json > /tmp/init

Now add a simple Gateway like

$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gateway
  namespace: default
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "nginx.example.com"
EOF
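
To spot-check only the listeners instead of diffing full dumps, you can also run (same pod placeholder as above):

istioctl pc listeners pod/istio-ingressgateway-<randomstring>.istio-system

It should now list a listener on 0.0.0.0:8080, i.e. the targetPort, not the service port 80.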

Get the config again and compare it with the /tmp/init file:

istioctl pc all pod/istio-ingressgateway-<randomstring>.istio-system -o json > /tmp/with-gw
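
A plain diff of the two JSON dumps is enough to spot the changes:

diff /tmp/init /tmp/with-gw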

You'll see that a listener was created at an address with port_value 8080, together with the default filter_chains (which I removed because they're quite long), and also a "blackhole" route:

       dynamic_listeners: [
         {
           name: "0.0.0.0_8080"
           active_state: {
             listener: {
               @type: "type.googleapis.com/envoy.config.listener.v3.Listener"
               name: "0.0.0.0_8080"
               address: {
                 socket_address: {
                   address: "0.0.0.0"
                   port_value: 8080
                 }
               }
[...]
       dynamic_route_configs: [
         {
           route_config: {
             @type: "type.googleapis.com/envoy.config.route.v3.RouteConfiguration"
             name: "http.8080"
             virtual_hosts: [
               {
                 name: "blackhole:80"
                 domains: [
                   "*"
                 ]
               }
             ]
             validate_clusters: false
           }
         }
       ]
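
At this point the Gateway has only opened the port; no route is attached yet, so requests hit the blackhole virtual host. As a quick sanity check, assuming a minikube setup where the gateway is reachable via the nodePort 30080 from the question:

curl -i -H "Host: nginx.example.com" http://$(minikube ip):30080/
# expect a 404, since no VirtualService is bound yet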

Now add a VirtualService with some simple config like:

$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: vs-nginx
  namespace: default
spec:
  gateways:
  - gateway
  hosts:
  - "nginx.example.com"
  http:
  - match:
    - uri:
        prefix: /
    headers:
      response:
        add:
          My-Custom-Header1: "abc-123"
    route:
    - destination:
        port:
          number: 8080
        host: nginx-test.default.svc.cluster.local
EOF
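
To inspect only the affected route config instead of the full dump, the --name filter helps (same pod placeholder as above):

istioctl pc routes pod/istio-ingressgateway-<randomstring>.istio-system --name http.8080 -o json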

Get the config once more and diff it with the /tmp/with-gw file:

istioctl pc all pod/istio-ingressgateway-<randomstring>.istio-system -o json > /tmp/with-vs

Now the route_config has changed: it contains an added route to nginx-test.default.svc.cluster.local (the FQDN of my nginx application running in the default namespace), the instructions to add the header, the retry policy, and the added domain nginx.example.com.

           route_config: {
             virtual_hosts: [
               {
                 routes: [
                   {
                     match: {
                       prefix: "/"
                       case_sensitive: true
                     }
                     route: {
                       cluster: "outbound|8080||nginx-test.default.svc.cluster.local"
                       timeout: "0s"
                       retry_policy: {
                         retry_on: "connect-failure,refused-stream,unavailable,cancelled,retriable-status-codes"
                         num_retries: 2
                         retry_host_predicate: [
                           {
                             name: "envoy.retry_host_predicates.previous_hosts"
                           }
                         ]
                         host_selection_retry_max_attempts: "5"
                         retriable_status_codes: [
                           503
                         ]
                       }
                       max_stream_duration: {
                         max_stream_duration: "0s"
                         grpc_timeout_header_max: "0s"
                       }
                     }
                     metadata: {
                       filter_metadata: {
                         istio: {
                           config: "/apis/networking.istio.io/v1alpha3/namespaces/default/virtual-service/vs-nginx"
                         }
                       }
                     }
                     decorator: {
                       operation: "nginx-test.default.svc.cluster.local:8080/*"
                     }
                     response_headers_to_add: [
                       {
                         header: {
                           key: "My-Custom-Header1"
                           value: "abc-123"
                         }
                         append: true
                       }
                     ]
                   }
                 ]
                 include_request_attempt_count: true
-                name: "blackhole:80"
                 name: "nginx.example.com:80"
                 domains: [
-                  "*"
                   "nginx.example.com"
                   "nginx.example.com:*"
                 ]
               }
             ]
           }
         }
       ]
     }
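
With the route in place, an end-to-end test should now work (again assuming minikube, the nodePort 30080, and an nginx-test service in the default namespace listening on port 8080):

curl -i -H "Host: nginx.example.com" http://$(minikube ip):30080/
# expect a 200 from nginx, with My-Custom-Header1: abc-123 in the response headers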

For more on how to debug the Envoy config, check out this YouTube session by solo.io.
