I have two services that I am trying to deploy through a Helm chart:
- Frontend Service (accessible through the host, using NodePort)
- Backend Service (only accessible inside the cluster, using ClusterIP)
I am facing an issue with the Ingress of the deployment: I am using an AWS ALB, and it throws a 404 Not Found error when accessing the Frontend Service.
ingress.yaml:
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "metaflow-ui.fullname" . -}}
{{- $svcPort := .Values.service.port -}}
{{- $fullNameStatic := include "metaflow-ui.fullname-static" . -}}
{{- $svcPortStatic := .Values.serviceStatic.port -}}
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: metaflow-ui
  name: {{ $fullName }}
  labels:
    {{- include "metaflow-ui.labels" . | nindent 4 }}
  {{- with .Values.ingress.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
    alb.ingress.kubernetes.io/healthcheck-path: "/api"
    alb.ingress.kubernetes.io/success-codes: "200"
  {{- end }}
spec:
  rules:
    - host: {{ .Values.externalDNS }}
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              serviceName: {{ $fullName }}
              servicePort: {{ $svcPort }}
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: metaflow-ui
  name: {{ $fullName }}
  labels:
    {{- include "metaflow-ui.labels" . | nindent 4 }}
  {{- with .Values.ingress.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
    alb.ingress.kubernetes.io/healthcheck-path: "/"
    alb.ingress.kubernetes.io/success-codes: "200"
  {{- end }}
spec:
  rules:
    - host: {{ .Values.externalDNS }}
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              serviceName: {{ $fullNameStatic }}
              servicePort: {{ $svcPortStatic }}
---
{{ end }}
These are the annotations for the Ingress under values.yaml:
ingress:
  enabled: true
  className: ""
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internal
    alb.ingress.kubernetes.io/group.name: metaflow-ui
    alb.ingress.kubernetes.io/security-groups: # removed
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/certificate-arn: # removed
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
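As an aside, the `className` field above is never used by the templates, which rely on the deprecated `kubernetes.io/ingress.class` annotation instead. If the chart moves to `networking.k8s.io/v1`, it could be wired to `spec.ingressClassName`; a minimal sketch:

```yaml
spec:
  {{- with .Values.ingress.className }}
  ingressClassName: {{ . }}
  {{- end }}
```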
I read that attaching a group.name was the fix to enable a single AWS ALB to be shared across multiple Ingresses, but it didn't fix the issue. If I remove the second Ingress, the entire site is deployed (but without the backend service).
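(For anyone keeping the two-Ingress layout: when several Ingresses share a group, the AWS Load Balancer Controller also supports an `alb.ingress.kubernetes.io/group.order` annotation that sets rule priority within the group, so the narrower `/api` Ingress can be evaluated before the catch-all `/` one. A sketch, with illustrative order values:

```yaml
# On the backend (/api) Ingress - lower order is evaluated first
alb.ingress.kubernetes.io/group.name: metaflow-ui
alb.ingress.kubernetes.io/group.order: "10"

# On the frontend (/) Ingress - evaluated after the /api rules
alb.ingress.kubernetes.io/group.name: metaflow-ui
alb.ingress.kubernetes.io/group.order: "20"
```

)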
EDIT:
I found an article that covers this exact use case, "How do I achieve path-based routing on an Application Load Balancer?", and will try it out.
CodePudding user response:
I managed to get it to work with the following Ingress setup. Instead of having a separate Ingress per service, I ended up using a single Ingress, but kept the group.name.
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "metaflow-ui.fullname" . -}}
{{- $svcPort := .Values.service.port -}}
{{- $fullNameStatic := include "metaflow-ui.fullname-static" . -}}
{{- $svcPortStatic := .Values.serviceStatic.port -}}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ $fullName }}
  labels:
    {{- include "metaflow-ui.labels" . | nindent 4 }}
  {{- with .Values.ingress.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
    alb.ingress.kubernetes.io/healthcheck-path: "/"
    alb.ingress.kubernetes.io/success-codes: "200"
  {{- end }}
spec:
  rules:
    - host: {{ .Values.externalDNS }}
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: {{ $fullName }}
                port:
                  number: {{ $svcPort }}
          - path: /
            pathType: Prefix
            backend:
              service:
                name: {{ $fullNameStatic }}
                port:
                  number: {{ $svcPortStatic }}
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ssl-redirect
                port:
                  name: use-annotation
          - path: /static
            pathType: Prefix
            backend:
              service:
                name: {{ $fullNameStatic }}
                port:
                  number: {{ $svcPortStatic }}
{{ end }}
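One thing worth double-checking in this single-Ingress version: the ALB controller creates listener rules in the order the paths appear, and AWS's own redirect examples list the `use-annotation` ssl-redirect path before any service paths, so that plain-HTTP traffic hits the redirect before the catch-all `/` rule can match. A sketch of that ordering (same chart variables as above):

```yaml
paths:
  # Redirect rule first, so HTTP traffic is upgraded to HTTPS
  # before any service rule can match
  - path: /
    pathType: Prefix
    backend:
      service:
        name: ssl-redirect
        port:
          name: use-annotation
  # ...followed by the /api, /static and / service rules as above
```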