Kubernetes: running Pod not rescheduled after node failure

Time:06-19

I use on-premise Kubernetes, created with RKE. I created a basic Pod in the cluster, and it was running on worker2. I shut down worker2, but the Pod was not rescheduled to another worker; I just see it failing.

How can I solve this problem? I tried a PriorityClass, but that did not solve it.

CodePudding user response:

I created a basic Pod... I shut down worker2...

When a Pod is terminated, it does not restart itself. Use a workload controller such as a Deployment or StatefulSet to start a new Pod when an existing one is terminated. The workload controller keeps the number of Pods in the cluster at the replica count you set.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:alpine

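To verify this, you can apply the manifest and check which node the Pod is scheduled on; the file name below is just an assumed example:

kubectl apply -f nginx-deployment.yaml
kubectl get pods -o wide    # the NODE column shows where the Pod landed
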
Assuming the Deployment created a Pod on "worker2" and you then shut down "worker2", the Deployment will spin up a replacement Pod on the next available (healthy) node.
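
Note that failover is not instant. When a node is shut down, Kubernetes first marks it NotReady and only evicts its Pods after the default toleration period (about five minutes) expires, so the replacement Pod can take several minutes to appear. If you need faster rescheduling, one option is to set explicit tolerations in the Pod template; the 30-second values below are only an assumption to illustrate the idea, tune them for your cluster:

      # Add under spec.template.spec of the Deployment above
      tolerations:
      - key: node.kubernetes.io/not-ready
        operator: Exists
        effect: NoExecute
        tolerationSeconds: 30
      - key: node.kubernetes.io/unreachable
        operator: Exists
        effect: NoExecute
        tolerationSeconds: 30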
