Do we still use API endpoint health checks in Kubernetes?


I want to use API endpoint health checks like /healthz, /livez, or /readyz. I recently heard about them and tried to search, but I couldn't find anything that explains how to use them. I also see that /healthz is deprecated. Do I have to declare them in the pod definition file, or implement them in my code? I have a basic hello world app. Can someone help me understand where they fit in?

import time
import os

import redis
from flask import Flask

from flask_basicauth import BasicAuth


app = Flask(__name__)
cache = redis.Redis(host=os.environ['REDIS_HOST'], port=os.environ['REDIS_PORT'])

app.config['BASIC_AUTH_USERNAME'] = os.environ['USERNAME']
app.config['BASIC_AUTH_PASSWORD'] = os.environ['PASSWORD']

basic_auth = BasicAuth(app)

def get_hit_count():
    retries = 5
    while True:
        try:
            return cache.incr('hits')
        except redis.exceptions.ConnectionError as exc:
            if retries == 0:
                raise exc
            retries -= 1
            time.sleep(0.5)


@app.route('/')
def hello():
    count = get_hit_count()
    return 'Hello World! I have been seen {} times.\n'.format(count)

@app.route('/supersecret')
@basic_auth.required
def secret_view():
    count = get_hit_count()
    return os.environ['THEBIGSECRET'] + '\n'

I already tried something like the below and it seems to work, but if I use /supersecret instead of / it does not work:

        readinessProbe:
          httpGet:
            path: /
            port: 5000
          initialDelaySeconds: 20
          periodSeconds: 5
        livenessProbe:
        httpGet:
          path: /
          port: 5000
        intialDelaySeconds: 30
        periodSeconds: 5

CodePudding user response:

You can refer to the Kubernetes documentation on liveness, readiness and startup probes for an explanation of how "health checks", or probes as they're called in Kubernetes, work. For your question you may be particularly interested in the section on HTTP probes.

You can pick any HTTP path of your application for these probes. /healthz, /livez and /readyz are indeed common choices, and happen to be the ones used by the Kubernetes API server itself (with, as you found, /healthz deprecated in favour of the more specific /livez and /readyz).

So, to implement health probes, simply add a GET route for each probe you'd like to the application, and make sure it returns a 2xx or 3xx status code to signal a healthy state, and any other status code when Kubernetes should consider the application unhealthy.

Any code greater than or equal to 200 and less than 400 indicates success. Any other code indicates failure.

In my opinion, it makes sense to return HTTP 200 for success and 500 or 503 for failure.
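To make that concrete for your hello-world app, here is a minimal sketch of two such routes. The backend_is_ready helper is a placeholder I made up; in your app it could, for example, call cache.ping() against Redis:

```python
from flask import Flask

app = Flask(__name__)

def backend_is_ready():
    # Hypothetical dependency check; replace with e.g. cache.ping()
    # against your Redis instance.
    return True

@app.route('/livez')
def livez():
    # Liveness: the process is up and able to serve requests.
    return 'ok', 200

@app.route('/readyz')
def readyz():
    # Readiness: verify dependencies; a 503 tells Kubernetes to stop
    # routing traffic here until the check succeeds again.
    if not backend_is_ready():
        return 'unavailable', 503
    return 'ok', 200
```

Returning 200 on success and 503 on failure matches the convention mentioned above.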

Once your application is set up to respond accordingly to the prober, you should instruct Kubernetes to use it by adding something along the following lines to the container exposing your API in the PodSpec:

    livenessProbe:
      httpGet:
        path: /your-liveness-probe-path
        port: 8080

If I understood your question regarding the probe failing for the /supersecret path correctly: setting aside the issues in your YAML definition (the fields under livenessProbe need to be indented one level deeper, and intialDelaySeconds is a typo for initialDelaySeconds), the probe most likely fails because that route requires authentication, and the probes you have defined do not supply any credentials. For this, you may use httpHeaders:

        readinessProbe:
          httpGet:
            path: /
            port: 5000
            httpHeaders:
            - name: Authorization
              value: "Basic <base64 of username:password>"
          initialDelaySeconds: 20
          periodSeconds: 5
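Note that flask_basicauth implements HTTP Basic authentication, so the Authorization header value is the word Basic followed by the base64 encoding of username:password (not a Bearer token). Using the hypothetical credentials user/pass, you can compute it like this:

```python
import base64

# Hypothetical credentials; use the values of BASIC_AUTH_USERNAME
# and BASIC_AUTH_PASSWORD from your environment instead.
username = "user"
password = "pass"

token = base64.b64encode(f"{username}:{password}".encode()).decode()
header_value = f"Basic {token}"
print(header_value)  # Basic dXNlcjpwYXNz
```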

However, in my experience it is more common not to require authentication for probe routes, and instead not to expose them to the public internet (or even to serve them on a separate port). How to accomplish that is a whole different story and depends on how you expose your application in the first place, but one way could be an ingress with appropriate "ingress rules".
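As a sketch of the separate-port idea (the port numbers and image name are hypothetical): the container serves traffic on 5000 and unauthenticated health endpoints on a second listener on 9000; only 5000 is exposed via the Service, while the probes target 9000 directly:

```yaml
containers:
- name: hello
  image: myapp:latest        # placeholder image
  ports:
  - containerPort: 5000      # exposed via the Service / ingress
  livenessProbe:
    httpGet:
      path: /livez
      port: 9000             # probe-only port, not exposed externally
  readinessProbe:
    httpGet:
      path: /readyz
      port: 9000
```

The kubelet reaches the pod's ports directly, so a probe port does not need to appear in any Service.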

On a more general note about troubleshooting failing probes: running kubectl get events is a good starting point.
