I have deployed the AWS ALB Controller and I create listeners with Ingress resources in an EKS cluster.
The steps I followed are the following:
- I had an Ingress for a service named `first-test-api` and all was fine.
- I deployed a new Helm release [`first`], just renaming the chart from `test-api` to `main-api`. So the service is now `first-main-api`.
- Nothing seems to break in terms of k8s resources, but...
- the `test-api.mydomain.com` listener in the AWS ALB is stuck on the old service.
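One way to confirm what that listener is actually forwarding to is to inspect it from the AWS side; a minimal sketch with the AWS CLI, where the ARNs are placeholders taken from the previous command's output:

```sh
# Find the ALB the controller created for the ingress:
aws elbv2 describe-load-balancers \
  --query 'LoadBalancers[].{Name:LoadBalancerName,Arn:LoadBalancerArn}'

# List the listeners on that ALB, then the rules of the stuck one:
aws elbv2 describe-listeners --load-balancer-arn <alb-arn>
aws elbv2 describe-rules --listener-arn <listener-arn>
```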
Has anyone encountered such a thing before?
I could delete the listener manually, but I don't want to. I'd like to know what is happening and why it didn't happen automatically :)
EDIT:
The ingress had an ALB annotation that enabled deletion protection.
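For reference, such an annotation looks roughly like the sketch below; the ingress name is a placeholder, and `alb.ingress.kubernetes.io/load-balancer-attributes` is the AWS Load Balancer Controller's annotation for setting ALB attributes:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: first-main-api   # placeholder
  annotations:
    kubernetes.io/ingress.class: alb
    # Deletion protection stops the ALB from being deleted,
    # whether by the controller or manually:
    alb.ingress.kubernetes.io/load-balancer-attributes: deletion_protection.enabled=true
```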
CodePudding user response:
I will provide some generic advice on things I would look at, but it might be better if you could detail a small example.
Yes, the ALB controller should automatically pick up changes to the backend.
I would suggest ignoring the Helm chart and looking at the actual objects:
- Does `kubectl get ing -n <namespace>` show the ingress you are expecting?
- Does `kubectl get ing -n <ns> <name of ingress> -o yaml` point to the correct/new service? (See the sketch after this list.)
- Does `kubectl get svc -n <ns> <name of new svc>` show the new service?
- Does `kubectl get endpoints -n <ns> <name of new svc>` show the pods you are expecting?
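A minimal sketch of what that `-o yaml` output should contain once the rename has landed; the service name and port are assumptions based on the question:

```yaml
spec:
  rules:
    - host: test-api.mydomain.com    # host from the question
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: first-main-api # must reference the NEW service
                port:
                  number: 80         # assumed port
```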
And then, gut feeling:
- Check that the labels in your new service are different from the labels in the old service, if you expect the two services to serve different things.
- Get the logs of the ALB controller. You will see registering/deregistering activity, and sometimes errors, especially if the node role or service account doesn't have the proper IAM permissions. (See the commands after this list.)
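A couple of commands for those two checks; the namespace and deployment name assume a default `aws-load-balancer-controller` install:

```sh
# Compare the selectors of the old and new services (if the old one
# still exists); they must differ to target different sets of pods:
kubectl get svc first-test-api first-main-api -n <ns> \
  -o custom-columns='NAME:.metadata.name,SELECTOR:.spec.selector'

# Follow the controller logs while re-applying the ingress and watch
# for reconcile or IAM permission errors:
kubectl logs -n kube-system deployment/aws-load-balancer-controller --tail=100 -f
```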
Happy to modify the answer if you expand the question with more details.