bash - How to delete completed kubernetes pods?

Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use and share it, but you must do so under the same CC BY-SA terms and attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/55072235/

How to delete completed kubernetes pod?

Tags: bash, kubernetes

Asked by Paymahn Moghadasian

I have a bunch of pods in kubernetes which are completed (successfully or unsuccessfully) and I'd like to clean up the output of kubectl get pods. Here's what I see when I run kubectl get pods:

NAME                                           READY   STATUS             RESTARTS   AGE
intent-insights-aws-org-73-ingest-391c9384     0/1     ImagePullBackOff   0          8d
intent-postgres-f6dfcddcc-5qwl7                1/1     Running            0          23h
redis-scheduler-dev-master-0                   1/1     Running            0          10h
redis-scheduler-dev-metrics-85b45bbcc7-ch24g   1/1     Running            0          6d
redis-scheduler-dev-slave-74c7cbb557-dmvfg     1/1     Running            0          10h
redis-scheduler-dev-slave-74c7cbb557-jhqwx     1/1     Running            0          5d
scheduler-5f48b845b6-d5p4s                     2/2     Running            0          36m
snapshot-169-5af87b54                          0/1     Completed          0          20m
snapshot-169-8705f77c                          0/1     Completed          0          1h
snapshot-169-be6f4774                          0/1     Completed          0          1h
snapshot-169-ce9a8946                          0/1     Completed          0          1h
snapshot-169-d3099b06                          0/1     ImagePullBackOff   0          24m
snapshot-204-50714c88                          0/1     Completed          0          21m
snapshot-204-7c86df5a                          0/1     Completed          0          1h
snapshot-204-87f35e36                          0/1     ImagePullBackOff   0          26m
snapshot-204-b3a4c292                          0/1     Completed          0          1h
snapshot-204-c3d90db6                          0/1     Completed          0          1h
snapshot-245-3c9a7226                          0/1     ImagePullBackOff   0          28m
snapshot-245-45a907a0                          0/1     Completed          0          21m
snapshot-245-71911b06                          0/1     Completed          0          1h
snapshot-245-a8f5dd5e                          0/1     Completed          0          1h
snapshot-245-b9132236                          0/1     Completed          0          1h
snapshot-76-1e515338                           0/1     Completed          0          22m
snapshot-76-4a7d9a30                           0/1     Completed          0          1h
snapshot-76-9e168c9e                           0/1     Completed          0          1h
snapshot-76-ae510372                           0/1     Completed          0          1h
snapshot-76-f166eb18                           0/1     ImagePullBackOff   0          30m
train-169-65f88cec                             0/1     Error              0          20m
train-169-9c92f72a                             0/1     Error              0          1h
train-169-c935fc84                             0/1     Error              0          1h
train-169-d9593f80                             0/1     Error              0          1h
train-204-70729e42                             0/1     Error              0          20m
train-204-9203be3e                             0/1     Error              0          1h
train-204-d3f2337c                             0/1     Error              0          1h
train-204-e41a3e88                             0/1     Error              0          1h
train-245-7b65d1f2                             0/1     Error              0          19m
train-245-a7510d5a                             0/1     Error              0          1h
train-245-debf763e                             0/1     Error              0          1h
train-245-eec1908e                             0/1     Error              0          1h
train-76-86381784                              0/1     Completed          0          19m
train-76-b1fdc202                              0/1     Error              0          1h
train-76-e972af06                              0/1     Error              0          1h
train-76-f993c8d8                              0/1     Completed          0          1h
webserver-7fc9c69f4d-mnrjj                     2/2     Running            0          36m
worker-6997bf76bd-kvjx4                        2/2     Running            0          25m
worker-6997bf76bd-prxbg                        2/2     Running            0          36m

and I'd like to get rid of the pods like train-204-d3f2337c. How can I do that?

Answered by pjincz

You can do this a bit more easily now.

You can list all completed pods by:

kubectl get pod --field-selector=status.phase==Succeeded

And delete all completed pods by:

kubectl delete pod --field-selector=status.phase==Succeeded
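
Note that status.phase==Succeeded only matches pods that exited cleanly; the Error pods above end up in the Failed phase instead. If you want to remove those as well, a similar selector should work (a sketch, not verified against every kubectl version):

kubectl delete pod --field-selector=status.phase==Failed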

Answered by Arslanbekov

If these pods were created by a CronJob, you can use spec.failedJobsHistoryLimit and spec.successfulJobsHistoryLimit.

Example:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: my-cron-job
spec:
  schedule: "*/10 * * * *"
  failedJobsHistoryLimit: 1
  successfulJobsHistoryLimit: 3
  jobTemplate:
    spec:
      template:
         ...
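
With these limits set, the CronJob controller prunes old Jobs (and their pods) automatically. To check what is currently being kept, you can list the Jobs by name prefix; this is only a sketch, assuming the my-cron-job name from the example above:

# Jobs created by a CronJob are named <cronjob-name>-<timestamp>
kubectl get jobs | grep my-cron-job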

Answered by Paymahn Moghadasian

Here's a one-liner which will delete all pods which aren't in the Running or Pending state (note that if a pod name has Running or Pending in it, it won't ever get deleted by this one-liner):

kubectl get pods --no-headers=true | grep -v "Running" | grep -v "Pending" | sed -E 's/([a-z0-9-]+).*/\1/g' | xargs kubectl delete pod

Here's an explanation:

  1. get all pods without any of the headers
  2. filter out pods which are Running
  3. filter out pods which are Pending
  4. pull out the name of the pod using a sed regex
  5. use xargs to delete each of the pods by name

Note, this doesn't account for all pod states. For example, if a pod is in the ContainerCreating state, this one-liner will delete that pod too.

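A variant that sidesteps the name-matching caveat is to filter on the STATUS column itself instead of grepping the whole line. This is only a sketch, assuming the default kubectl get pods column layout (STATUS is the third column) and GNU xargs for the -r flag; like the original one-liner, it still removes pods in other transient states such as ContainerCreating:

# Delete every pod whose STATUS is neither Running nor Pending
kubectl get pods --no-headers | awk '$3 != "Running" && $3 != "Pending" {print $1}' | xargs -r kubectl delete pod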

Answered by Lukasz Dynowski

You can do it in two ways.

$ kubectl delete pod $(kubectl get pods | grep Completed | awk '{print $1}')

or

$ kubectl get pods | grep Completed | awk '{print $1}' | xargs kubectl delete pod

Both solutions will do the job.

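If you want to preview what would be removed before deleting anything, recent kubectl releases accept a client-side dry run (a sketch; older releases spell the flag --dry-run or --dry-run=true):

$ kubectl get pods | grep Completed | awk '{print $1}' | xargs kubectl delete pod --dry-run=client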