Exposing a Kubernetes service outside the cluster

I am trying to expose a simple Grafana service outside the cluster. Here is what I have done so far in terms of troubleshooting and research before posting, along with some details of my setup.

Grafana deployment YAML

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: grafana
  name: grafana
spec:
  replicas: 2
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
    spec:
      securityContext:
        fsGroup: 472
        supplementalGroups:
          - 0
      containers:
        - name: grafana
          image: 'grafana/grafana:8.0.4'
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 3000
              name: http-grafana
              protocol: TCP
          env:
            - name: GF_DATABASE_CA_CERT_PATH
              value: /etc/grafana/BaltimoreCyberTrustRoot.crt.pem
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /robots.txt
              port: 3000
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 30
            successThreshold: 1
            timeoutSeconds: 2
          livenessProbe:
            failureThreshold: 3
            initialDelaySeconds: 30
            periodSeconds: 10
            successThreshold: 1
            tcpSocket:
              port: 3000
            timeoutSeconds: 1
          resources:
            requests:
              cpu: 250m
              memory: 750Mi
          volumeMounts:
            - name: grafana-configmap-pv
              mountPath: /etc/grafana/grafana.ini
              subPath: grafana.ini
            - name: grafana-pv
              mountPath: /var/lib/grafana
            - name: cacert
              mountPath: /etc/grafana/BaltimoreCyberTrustRoot.crt.pem
              subPath: BaltimoreCyberTrustRoot.crt.pem
      volumes:
        - name: grafana-pv
          persistentVolumeClaim:
            claimName: grafana-pvc
        - name: grafana-configmap-pv
          configMap:
            name: grafana-config
        - name: cacert
          configMap:
            name: mysql-cacert

Grafana Service YAML

apiVersion: v1
kind: Service
metadata:
  name: grafana
spec:
  type: ClusterIP
  ports:
    - port: 3000
      protocol: TCP
      targetPort: http-grafana
  clusterIP: 10.7.2.57
  selector:
    app: grafana
  sessionAffinity: None

I have nginx installed as the Ingress controller. Here is the YAML for the nginx controller Service.

kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/version: 1.0.0
    helm.sh/chart: ingress-nginx-4.0.1
spec:
  ports:
    - name: http
      protocol: TCP
      appProtocol: http
      port: 80
      targetPort: http
      nodePort: 32665
    - name: https
      protocol: TCP
      appProtocol: https
      port: 443
      targetPort: https
      nodePort: 32057
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  clusterIP: 10.7.2.203
  clusterIPs:
    - 10.7.2.203
  type: NodePort
  sessionAffinity: None
  externalTrafficPolicy: Cluster
status:
  loadBalancer: {}

Ingress resource YAML

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grafana-ingress
spec:
  rules:
  - host: test.grafana.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: grafana
            port:
              number: 3000

The Ingress IP 10.7.0.5 is not accessible at all. I have tried redeploying the resources several times. The Grafana Pod IPs are accessible on port 3000 and I am able to log in, but I have not been able to reach Grafana through the nginx load balancer. What am I missing?

EDITED:

Results of kubectl get services

NAME          TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
grafana       ClusterIP   10.7.2.55    <none>        3000/TCP   2d14h
hello-world   ClusterIP   10.7.2.140   <none>        80/TCP     42m
kubernetes    ClusterIP   10.7.2.1     <none>        443/TCP    9d

Results of kubectl get ingress

NAME              CLASS    HOSTS   ADDRESS    PORTS   AGE
grafana-ingress   <none>   *       10.7.0.5   80      2d2h

Results of kubectl get pods --all-namespaces

NAMESPACE       NAME                                        READY   STATUS      RESTARTS   AGE
default         grafana-85cdb8c656-6zgkg                    1/1     Running     0          2d21h
default         grafana-85cdb8c656-7n626                    1/1     Running     0          2d21h
default         hello-world-78796d6bfd-fwb98                1/1     Running     0          2d12h
ingress-nginx   ingress-nginx-controller-57ffff5864-rw57w   1/1     Running     0          2d12h

CodePudding user response:

Your ingress controller’s Service is of type NodePort, which means it does not have a public IP address. The Service’s ClusterIP (10.7.2.203) is only useful in the cluster’s internal network.

If your cluster’s nodes have public IP addresses, you can use those to connect to the ingress controller. Since its Service is of type NodePort, it listens on specific ports on all of your cluster’s nodes. Based on the Service spec you provided, these ports are 32665 for HTTP and 32057 for HTTPS.
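
For example, assuming one of your nodes is reachable at a (hypothetical) address such as 203.0.113.10 (you can list node IPs with kubectl get nodes -o wide), you can test the path through the ingress controller with curl, passing the host from your Ingress rule:

curl -H "Host: test.grafana.com" http://203.0.113.10:32665/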

If you want your ingress controller to have a dedicated IP address, you can change its Service’s type to LoadBalancer. Your Kubernetes service provider will assign a public IP address to your Service. You can then use that IP address to connect to your ingress controller.
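
As a minimal sketch, you can switch the existing Service in place with kubectl patch (the Service name and namespace are taken from the manifest above); note that a later helm upgrade may revert this, so setting the Service type in your Helm values is the durable option:

kubectl -n ingress-nginx patch svc ingress-nginx-controller -p '{"spec": {"type": "LoadBalancer"}}'

Once the load balancer has been provisioned, kubectl get svc -n ingress-nginx should show the assigned address in the EXTERNAL-IP column.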

This only works if you are using a managed Kubernetes service. If you are self-managing, you need to set up a load balancer that listens on a public IP address and routes traffic to your cluster’s nodes.

CodePudding user response:

Your service is still only exposed on the Kubernetes internal network.

  • one solution is to use "hostNetwork: true" in your Pod definition, so your Pod is exposed on the host's network instead of the Kubernetes internal network (keep in mind that this significantly widens your security exposure).

  • the other way is to use a load balancer service from your cloud provider, or to deploy an on-premises load balancer implementation such as MetalLB (see the sketch after this list).

  • or you can manually deploy a proxy such as nginx or HAProxy on one or more nodes to proxy traffic between the outside world and the Kubernetes internal network.
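
As a rough illustration of the MetalLB option, here is a minimal Layer 2 sketch using the MetalLB v0.13+ CRDs; the pool name and address range are placeholders you would replace with values routable in your network. With this applied, switching the ingress-nginx Service to type LoadBalancer lets MetalLB assign it an external IP:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: ingress-pool              # hypothetical name
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250 # placeholder range from your LAN
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: ingress-l2                # hypothetical name
  namespace: metallb-system
spec:
  ipAddressPools:
    - ingress-pool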
