We upgraded our Kubernetes cluster from v1.21 to v1.22. Afterwards, we discovered that the pods of our nginx-ingress-controller deployment fail to start with the following error:
pkg/mod/k8s.io/[email protected]/tools/cache/reflector.go:125: Failed to list *v1beta1.Ingress: the server could not find the requested resource
We found that this issue is tracked here: https://github.com/bitnami/charts/issues/7264
Because Azure doesn't allow downgrading the cluster back to 1.21, could you please help us fix the nginx-ingress-controller deployment? Please be specific about what should be done and from where (local machine, Azure CLI, etc.), as we are not very familiar with Helm.
This is our deployment's current YAML:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nginx-ingress-controller
  namespace: ingress
  uid: 575c7699-1fd5-413e-a81d-b183f8822324
  resourceVersion: '166482672'
  generation: 16
  creationTimestamp: '2020-10-10T10:20:07Z'
  labels:
    app: nginx-ingress
    app.kubernetes.io/component: controller
    app.kubernetes.io/managed-by: Helm
    chart: nginx-ingress-1.41.1
    heritage: Helm
    release: nginx-ingress
  annotations:
    deployment.kubernetes.io/revision: '2'
    meta.helm.sh/release-name: nginx-ingress
    meta.helm.sh/release-namespace: ingress
  managedFields:
    - manager: kube-controller-manager
      operation: Update
      apiVersion: apps/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:spec:
          f:replicas: {}
      subresource: scale
    - manager: Go-http-client
      operation: Update
      apiVersion: apps/v1
      time: '2020-10-10T10:20:07Z'
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:annotations:
            .: {}
            f:meta.helm.sh/release-name: {}
            f:meta.helm.sh/release-namespace: {}
          f:labels:
            .: {}
            f:app: {}
            f:app.kubernetes.io/component: {}
            f:app.kubernetes.io/managed-by: {}
            f:chart: {}
            f:heritage: {}
            f:release: {}
        f:spec:
          f:progressDeadlineSeconds: {}
          f:revisionHistoryLimit: {}
          f:selector: {}
          f:strategy:
            f:rollingUpdate:
              .: {}
              f:maxSurge: {}
              f:maxUnavailable: {}
            f:type: {}
          f:template:
            f:metadata:
              f:labels:
                .: {}
                f:app: {}
                f:app.kubernetes.io/component: {}
                f:component: {}
                f:release: {}
            f:spec:
              f:containers:
                k:{"name":"nginx-ingress-controller"}:
                  .: {}
                  f:args: {}
                  f:env:
                    .: {}
                    k:{"name":"POD_NAME"}:
                      .: {}
                      f:name: {}
                      f:valueFrom:
                        .: {}
                        f:fieldRef: {}
                    k:{"name":"POD_NAMESPACE"}:
                      .: {}
                      f:name: {}
                      f:valueFrom:
                        .: {}
                        f:fieldRef: {}
                  f:image: {}
                  f:imagePullPolicy: {}
                  f:livenessProbe:
                    .: {}
                    f:failureThreshold: {}
                    f:httpGet:
                      .: {}
                      f:path: {}
                      f:port: {}
                      f:scheme: {}
                    f:initialDelaySeconds: {}
                    f:periodSeconds: {}
                    f:successThreshold: {}
                    f:timeoutSeconds: {}
                  f:name: {}
                  f:ports:
                    .: {}
                    k:{"containerPort":80,"protocol":"TCP"}:
                      .: {}
                      f:containerPort: {}
                      f:name: {}
                      f:protocol: {}
                    k:{"containerPort":443,"protocol":"TCP"}:
                      .: {}
                      f:containerPort: {}
                      f:name: {}
                      f:protocol: {}
                  f:readinessProbe:
                    .: {}
                    f:failureThreshold: {}
                    f:httpGet:
                      .: {}
                      f:path: {}
                      f:port: {}
                      f:scheme: {}
                    f:initialDelaySeconds: {}
                    f:periodSeconds: {}
                    f:successThreshold: {}
                    f:timeoutSeconds: {}
                  f:resources:
                    .: {}
                    f:limits: {}
                    f:requests: {}
                  f:securityContext:
                    .: {}
                    f:allowPrivilegeEscalation: {}
                    f:capabilities:
                      .: {}
                      f:add: {}
                      f:drop: {}
                    f:runAsUser: {}
                  f:terminationMessagePath: {}
                  f:terminationMessagePolicy: {}
              f:dnsPolicy: {}
              f:restartPolicy: {}
              f:schedulerName: {}
              f:securityContext: {}
              f:serviceAccount: {}
              f:serviceAccountName: {}
              f:terminationGracePeriodSeconds: {}
    - manager: kube-controller-manager
      operation: Update
      apiVersion: apps/v1
      time: '2022-01-24T01:23:22Z'
      fieldsType: FieldsV1
      fieldsV1:
        f:status:
          f:conditions:
            .: {}
            k:{"type":"Available"}:
              .: {}
              f:type: {}
            k:{"type":"Progressing"}:
              .: {}
              f:type: {}
    - manager: Mozilla
      operation: Update
      apiVersion: apps/v1
      time: '2022-01-28T23:18:41Z'
      fieldsType: FieldsV1
      fieldsV1:
        f:spec:
          f:template:
            f:spec:
              f:containers:
                k:{"name":"nginx-ingress-controller"}:
                  f:resources:
                    f:limits:
                      f:cpu: {}
                      f:memory: {}
                    f:requests:
                      f:cpu: {}
                      f:memory: {}
    - manager: kube-controller-manager
      operation: Update
      apiVersion: apps/v1
      time: '2022-01-28T23:29:49Z'
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:annotations:
            f:deployment.kubernetes.io/revision: {}
        f:status:
          f:conditions:
            k:{"type":"Available"}:
              f:lastTransitionTime: {}
              f:lastUpdateTime: {}
              f:message: {}
              f:reason: {}
              f:status: {}
            k:{"type":"Progressing"}:
              f:lastTransitionTime: {}
              f:lastUpdateTime: {}
              f:message: {}
              f:reason: {}
              f:status: {}
          f:observedGeneration: {}
          f:replicas: {}
          f:unavailableReplicas: {}
          f:updatedReplicas: {}
      subresource: status
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-ingress
      app.kubernetes.io/component: controller
      release: nginx-ingress
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx-ingress
        app.kubernetes.io/component: controller
        component: controller
        release: nginx-ingress
    spec:
      containers:
        - name: nginx-ingress-controller
          image: us.gcr.io/k8s-artifacts-prod/ingress-nginx/controller:v0.34.1
          args:
            - /nginx-ingress-controller
            - '--default-backend-service=ingress/nginx-ingress-default-backend'
            - '--election-id=ingress-controller-leader'
            - '--ingress-class=nginx'
            - '--configmap=ingress/nginx-ingress-controller'
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
          resources:
            limits:
              cpu: 300m
              memory: 512Mi
            requests:
              cpu: 200m
              memory: 256Mi
          livenessProbe:
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            timeoutSeconds: 1
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            timeoutSeconds: 1
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
          securityContext:
            capabilities:
              add:
                - NET_BIND_SERVICE
              drop:
                - ALL
            runAsUser: 101
            allowPrivilegeEscalation: true
      restartPolicy: Always
      terminationGracePeriodSeconds: 60
      dnsPolicy: ClusterFirst
      serviceAccountName: nginx-ingress
      serviceAccount: nginx-ingress
      securityContext: {}
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
status:
  observedGeneration: 16
  replicas: 3
  updatedReplicas: 2
  unavailableReplicas: 3
  conditions:
    - type: Available
      status: 'False'
      lastUpdateTime: '2022-01-28T22:58:07Z'
      lastTransitionTime: '2022-01-28T22:58:07Z'
      reason: MinimumReplicasUnavailable
      message: Deployment does not have minimum availability.
    - type: Progressing
      status: 'False'
      lastUpdateTime: '2022-01-28T23:29:49Z'
      lastTransitionTime: '2022-01-28T23:29:49Z'
      reason: ProgressDeadlineExceeded
      message: >-
        ReplicaSet "nginx-ingress-controller-59d9f94677" has timed out
        progressing.
CodePudding user response:
Kubernetes 1.22 is supported only by NGINX Ingress Controller 1.0.0 and higher; see the support versions table: https://github.com/kubernetes/ingress-nginx#support-versions-table
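For background: the error appears because the v1beta1 Ingress APIs (extensions/v1beta1 and networking.k8s.io/v1beta1) were removed in Kubernetes 1.22, and controller v0.34.1 still tries to list Ingresses via v1beta1. After the controller is upgraded, your Ingress manifests must also use the networking.k8s.io/v1 schema. A minimal sketch (the resource, host, and service names here are hypothetical placeholders, not taken from your cluster):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress          # hypothetical name
  namespace: ingress
spec:
  ingressClassName: nginx
  rules:
    - host: example.com          # hypothetical host
      http:
        paths:
          - path: /
            pathType: Prefix     # pathType is required in the v1 API
            backend:
              service:
                name: example-service   # hypothetical backend service
                port:
                  number: 80
```

Note the structural changes from v1beta1: `backend.serviceName`/`servicePort` became the nested `backend.service.name`/`port`, and `pathType` is now mandatory.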
You need to upgrade your nginx-ingress-controller. Bump the Bitnami Helm chart to version 9.0.0 (in Chart.yaml if you manage it as a chart dependency), then run helm upgrade nginx-ingress-controller bitnami/nginx-ingress-controller.
You should also update your ingress controller regularly: v0.34.1 is very old, and the ingress controller is normally the only entry point into your cluster from the outside.
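Since you asked where to run things: everything below runs from your local machine with the Azure CLI, kubectl, and Helm installed. A sketch of the steps, assuming the release was installed from the Bitnami repo; the release name "nginx-ingress" and namespace "ingress" are taken from the meta.helm.sh annotations in your YAML, while the resource group and cluster name are placeholders you must fill in:

```shell
# 1. Point kubectl at the AKS cluster (placeholders: <resource-group>, <cluster-name>).
az aks get-credentials --resource-group <resource-group> --name <cluster-name>

# 2. Confirm the release name and which chart it was installed from.
helm list --namespace ingress

# 3. Make sure the Bitnami repository is configured and refreshed.
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

# 4. Upgrade the release to a chart version whose controller supports Kubernetes 1.22.
helm upgrade nginx-ingress bitnami/nginx-ingress-controller \
  --namespace ingress \
  --version 9.0.0

# 5. Watch the new controller pods come up.
kubectl get pods --namespace ingress -w
```

One caveat: the `chart: nginx-ingress-1.41.1` label in your YAML suggests the release may originally have come from the old (now retired) Helm stable repo rather than from Bitnami. Check the output of `helm list` first; if it was a different chart, upgrade to a current successor of that chart instead of blindly pointing at bitnami/nginx-ingress-controller.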