Hangfire dashboard url setup on kubernetes


I have the Hangfire dashboard working properly in my local environment. Now I'm trying to set it up as a container (pod) inside my cluster, deployed to Azure, so I can access the Hangfire dashboard through its URL. However, I'm having trouble accessing it.

Below is my setup:

        [UsedImplicitly]
        public void Configure(IApplicationBuilder app)
        {
            var hangFireServerOptions = new BackgroundJobServerOptions
            {
                Activator = new ContainerJobActivator(app.ApplicationServices)
            };

            app.UseHealthChecks("/liveness");
            app.UseHangfireServer(hangFireServerOptions);

            app.UseHangfireDashboard("/hangfire", new DashboardOptions()
            {
                AppPath = null,
                DashboardTitle = "Hangfire Dashboard",
                Authorization = new[]
                {
                    new HangfireCustomBasicAuthenticationFilter
                    {
                        User = Configuration.GetSection("HangfireCredentials:UserName").Value,
                        Pass = Configuration.GetSection("HangfireCredentials:Password").Value
                    }
                }
            });

            app.UseHttpsRedirection();
            app.UseRouting();
            app.UseAuthorization();
            app.UseEndpoints(endpoints =>
            {
                endpoints.MapControllers();
            });

            HangfireJobScheduler.ScheduleJobs(app.ApplicationServices.GetServices<IScheduledTask>());
        }
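For context, the basic-auth filter above can be smoke-tested locally with curl; the port and credentials below are placeholders, not the real configured values:

```shell
# Placeholder port and credentials -- substitute your own.
# Without credentials, the basic-auth filter should challenge with 401:
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:5000/hangfire

# With valid credentials, the dashboard should respond with 200:
curl -s -o /dev/null -w "%{http_code}\n" -u admin:password http://localhost:5000/hangfire
```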

Service.yml

apiVersion: v1
kind: Service
metadata:
  name: task-scheduler-api
spec:
  ports:
  - port: 80
    targetPort: 80
    name: http
  - port: 443
    targetPort: 443
    name: https
  selector:
    app: task-scheduler-api     

Deployment.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: task-scheduler
spec:
  selector:
    matchLabels:
      app: task-scheduler  
  template:
    metadata:
      labels:
        app: task-scheduler
    spec:
      containers:
      - name: task-scheduler
        image: <%image-name%>
        # Resources and limit
        resources:
          requests:
            cpu: <%cpu_request%>
            memory: <%memory_request%>
          limits:
            cpu: <%cpu_limit%>
            memory: <%memory_limit%>
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443        
        readinessProbe:
          httpGet:
            path: /liveness
            port: 80
          initialDelaySeconds: 3
          periodSeconds: 5
          timeoutSeconds: 30
        livenessProbe:
          httpGet:
            path: /liveness
            port: 80
          initialDelaySeconds: 15
          periodSeconds: 10
          timeoutSeconds: 5
          failureThreshold: 7

Ingress.yml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"    
    cert-manager.io/cluster-issuer: letsencrypt
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: task-scheulder-api-ingress
  namespace: default
spec:
  rules:
    - host: example.com
      http:
        paths:
        - path: /          
          pathType: Prefix
          backend:
            service:
              name: task-scheduler-api
              port:
                number: 80
  tls:
    - hosts:
        - example.com
      secretName: task-scheduler-tls-production        

I'm trying to access the dashboard at example.com/hangfire, but I get 503 Service Temporarily Unavailable.

I checked the logs on the pod; everything seems fine:

...
...
Content root path: /data
Now listening on: http://0.0.0.0:80
Now listening on: https://0.0.0.0:443
Application started. Press Ctrl+C to shut down.
....

Would anyone know what I'm missing and how to resolve it? Thank you.

CodePudding user response:

This could be related to the ingress class, because it moved from an annotation to its own field (spec.ingressClassName) in networking.k8s.io/v1:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: task-scheulder-api-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - "example.com"
      secretName: task-scheduler-tls-production
  rules:
    - host: "example.com"
      http:
        paths:
          - path: /hangfire
            pathType: ImplementationSpecific
            backend:
              service:
                name: task-scheduler-api
                port:
                  number: 8080

You also do not need to expose both ports 80 and 443 on the Service, since the ingress controller is responsible for terminating TLS:

apiVersion: v1
kind: Service
metadata:
  name: task-scheduler-api
spec:
  ports:
  - port: 8080
    targetPort: http
    protocol: TCP
    name: http
  selector:
    app: task-scheduler-api

For consistency, you should also declare the protocol on the container port in the deployment (the http port name is what the Service's targetPort refers to):

- name: http
  containerPort: 80
  protocol: TCP
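Whether the cluster actually has an ingress class named nginx can be verified directly; a quick sketch (output depends on which controller is installed):

```shell
# List the ingress classes registered in the cluster; one of these names
# must match spec.ingressClassName in the Ingress manifest.
kubectl get ingressclass

# After applying the Ingress, the Events section at the bottom shows
# whether the nginx controller actually picked it up.
kubectl describe ingress task-scheulder-api-ingress
```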

CodePudding user response:

I have figured out the issue: the app selector values in deployment.yml and service.yml did not match. Running kubectl get ep showed that the task-scheduler service had no endpoints assigned, meaning no pods were actually backing it.

As soon as I updated the values in deployment.yml and service.yml, the URL became accessible.
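If the labels don't line up, the Service has no endpoints and the ingress returns 503. A way to confirm this (the resource names are taken from the manifests below):

```shell
# With mismatched selectors, the ENDPOINTS column shows <none>:
kubectl get endpoints task-scheduler-api

# The selector the Service is actually using:
kubectl get svc task-scheduler-api -o jsonpath='{.spec.selector}'

# The labels actually present on the pods:
kubectl get pods --show-labels | grep task-scheduler
```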

service.yml

apiVersion: v1
kind: Service
metadata:
  name: task-scheduler-api
spec:
  ports:
  - port: 80
    targetPort: 80
    name: http
  - port: 443
    targetPort: 443
    name: https
  selector:
    app: task-scheduler  # This needs to match the pod labels inside the deployment

deployment.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: task-scheduler
spec:
  selector:
    matchLabels:
      app: task-scheduler  # This needs to match with value in service
  template:
    metadata:
      labels:
        app: task-scheduler # This needs to match as well
    spec:
      containers:
      - name: task-scheduler
        image: <%image-name%>
        # Resources and limit
        resources:
          requests:
            cpu: <%cpu_request%>
            memory: <%memory_request%>
          limits:
            cpu: <%cpu_limit%>
            memory: <%memory_limit%>
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443        
        readinessProbe:
          httpGet:
            path: /liveness
            port: 80
          initialDelaySeconds: 3
          periodSeconds: 5
          timeoutSeconds: 30
        livenessProbe:
          httpGet:
            path: /liveness
            port: 80
          initialDelaySeconds: 15
          periodSeconds: 10
          timeoutSeconds: 5
          failureThreshold: 7

Hopefully someone will find this useful. Thank you.
