CronJob Scheduled Tasks
Overview: a CronJob runs tasks periodically in k8s, just like crontab on Linux. One thing to note: a CronJob is scheduled against the controller-manager's clock, so make sure the controller-manager's time is accurate.
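A quick way to sanity-check that clock (a minimal sketch, assuming a systemd-based control-plane node, which is where kube-controller-manager runs):
# on the control-plane node, verify the clock and timezone
timedatectl
date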
Cron expression
┌───────────── minute (0 - 59)
│ ┌───────────── hour (0 - 23)
│ │ ┌───────────── day of month (1 - 31)
│ │ │ ┌───────────── month (1 - 12)
│ │ │ │ ┌───────────── day of week (0 - 6) (0 means Sunday)
│ │ │ │ │
* * * * *
(Some cron dialects also take a leading seconds field (0 - 59) and an optional trailing year field (1970 - 2099), but a Kubernetes CronJob schedule uses only the five standard fields shown above.)
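A few common schedule values for spec.schedule (illustrative examples only, not taken from the manifest below):
# "*/5 * * * *"   every 5 minutes
# "0 0 * * *"     every day at 00:00
# "0 2 * * 0"     every Sunday at 02:00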
Configuration file
cron-job-pd.yaml
apiVersion: batch/v1
kind: CronJob # scheduled task
metadata:
  name: cron-job-test # name of the CronJob
spec:
  concurrencyPolicy: Allow # concurrency policy: Allow permits concurrent runs, Forbid forbids them, Replace drops the still-running job and starts the new one
  failedJobsHistoryLimit: 1 # how many failed Jobs to keep
  successfulJobsHistoryLimit: 3 # how many successful Jobs to keep
  suspend: false # whether to suspend the CronJob; if true it will not run
  schedule: "* * * * *" # schedule (cron expression)
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox:1.28
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
Operations:
# create
kubectl create -f cron-job-pd.yaml
# view
kubectl get cronjob
kubectl get cj
# describe
kubectl describe cj cron-job-test
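Each time the CronJob fires it creates a Job, which in turn creates a Pod; both can be inspected as well (the pod name below is a placeholder, take it from your own output):
# list the Jobs and Pods created by the CronJob
kubectl get jobs
kubectl get po
# print the output of one run
kubectl logs <pod-name-from-the-list-above>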
Init Containers
Overview: compared with postStart, an initContainer is guaranteed to run before the container's ENTRYPOINT, which postStart cannot guarantee; postStart is better suited to running a few simple commands, whereas an initContainer is itself a full container and can perform more complex initialization on top of a different base image.
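For contrast, a postStart hook is declared inside the container spec itself; a minimal sketch (hypothetical, not part of the manifest below):
containers:
- name: nginx
  image: nginx:latest
  lifecycle:
    postStart:
      exec:
        command: ["sh", "-c", "echo 'started;' >> /tmp/.post-start"]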
Configuration reference: nginx-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
  labels:
    type: nginx-deploy
    test: 1.0.0
  namespace: default
spec:
  replicas: 1 # number of replicas
  revisionHistoryLimit: 10 # number of history revisions to keep
  selector: # selector
    matchLabels:
      app: nginx-deploy
      test: 1.0.0
  strategy: # update strategy
    type: RollingUpdate # strategy type: RollingUpdate or Recreate
    rollingUpdate:
      maxUnavailable: 25% # max replicas unavailable during an update
      maxSurge: 25% # max extra replicas during an update
  template: # pod template
    metadata:
      labels:
        app: nginx-deploy
        test: 1.0.0
    spec:
      initContainers:
      - image: nginx:latest
        imagePullPolicy: IfNotPresent
        command: ["sh", "-c", "echo 'inited;' >> ~/.init"]
        name: init-test
      containers:
      - name: nginx
        image: nginx:latest
        imagePullPolicy: IfNotPresent
        resources:
          requests:
            cpu: 100m
            memory: 256Mi
          limits:
            cpu: 200m
            memory: 512Mi
        terminationMessagePath: /dev/termination-log # path of the container termination message
        terminationMessagePolicy: File # termination message policy
      dnsPolicy: ClusterFirst # DNS policy
      restartPolicy: Always # restart policy
      schedulerName: default-scheduler # scheduler name
      securityContext: {} # security context
      terminationGracePeriodSeconds: 30 # grace period when the pod is deleted
Test
# create
kubectl create -f nginx-deploy.yaml
# list pods
kubectl get po -o wide
# result
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deploy-5bcc8cd95b-bb7fn 1/1 Running 0 85s 10.244.107.210 k8s-node3 <none> <none>
nginx-deploy-5bcc8cd95b-fq5gs 1/1 Running 0 75s 10.244.169.177 k8s-node2 <none> <none>
nginx-pod 0/1 Terminating 0 46h <none> k8s-node3 <none> <none>
pvc-test-pd 1/1 Running 0 3h30m 10.244.122.110 k8s-node4 <none> <none>
# exec into the pod
kubectl exec -it nginx-deploy-5bcc8cd95b-bb7fn -- sh
# result
Defaulted container "nginx" out of: nginx, init-test (init)
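To confirm the init container actually ran, check the pod status (pod name taken from the output above). Note that ~/.init was written to the init container's own filesystem, so it is not visible from the nginx container unless a shared volume is mounted:
# the Init Containers section should show State: Terminated, Reason: Completed
kubectl describe po nginx-deploy-5bcc8cd95b-bb7fn
kubectl get po nginx-deploy-5bcc8cd95b-bb7fn -o jsonpath='{.status.initContainerStatuses[0].state}'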
Taints and Tolerations
Overview:
Toleration: tolerations are set on pods. When a pod is scheduled, if it has no matching tolerations it will not be scheduled onto a node that carries taints; only when the pod tolerates all of a node's taints can it be scheduled onto that node.
tolerations:
- key: "the taint's key"
  value: "the taint's value"
  effect: "NoSchedule" # the taint effect being tolerated
  operator: "Equal" # the value must equal the taint's value; can also be Exists, meaning the key only has to exist, in which case value can be omitted
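When only the presence of the key matters, the Exists form mentioned above looks like this (a sketch; the key is illustrative):
tolerations:
- key: "memory"
  operator: "Exists" # matches any taint whose key is "memory", regardless of its value
  effect: "NoSchedule"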
nginx-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
  labels:
    type: nginx-deploy
    test: 1.0.0
  namespace: default
spec:
  replicas: 1 # number of replicas
  revisionHistoryLimit: 10 # number of history revisions to keep
  selector: # selector
    matchLabels:
      app: nginx-deploy
      test: 1.0.0
  strategy: # update strategy
    type: RollingUpdate # strategy type: RollingUpdate or Recreate
    rollingUpdate:
      maxUnavailable: 25% # max replicas unavailable during an update
      maxSurge: 25% # max extra replicas during an update
  template: # pod template
    metadata:
      labels:
        app: nginx-deploy
        test: 1.0.0
    spec:
      tolerations:
      - key: "memory"
        operator: "Equal"
        value: "low"
        effect: "NoSchedule"
      # initContainers:
      # - image: nginx:latest
      #   imagePullPolicy: IfNotPresent
      #   command: ["sh", "-c", "echo 'inited;' >> ~/.init"]
      #   name: init-test
      containers:
      - name: nginx
        image: nginx:latest
        imagePullPolicy: IfNotPresent
        resources:
          requests:
            cpu: 100m
            memory: 256Mi
          limits:
            cpu: 200m
            memory: 512Mi
        terminationMessagePath: /dev/termination-log # path of the container termination message
        terminationMessagePolicy: File # termination message policy
      dnsPolicy: ClusterFirst # DNS policy
      restartPolicy: Always # restart policy
      schedulerName: default-scheduler # scheduler name
      securityContext: {} # security context
      terminationGracePeriodSeconds: 30 # grace period when the pod is deleted
Test
# create
kubectl create -f nginx-deploy.yaml
# list pods
kubectl get po -o wide
# result
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deploy-5997fbff9d-h7m8f 1/1 Running 0 9s 10.244.107.212 k8s-node3 <none> <none>
nginx-deploy-5997fbff9d-lbgmn 1/1 Running 0 12s 10.244.122.111 k8s-node4 <none> <none>
nginx-pod 0/1 Terminating 0 47h <none> k8s-node3 <none> <none>
# edit / inspect the nginx-deploy Deployment
kubectl edit deploy nginx-deploy
Key snippet:
tolerations:
- effect: NoSchedule
  key: memory
  operator: Equal
  value: low

# remove the NoSchedule taint from k8s-node4
kubectl taint no k8s-node4 memory=flow:NoSchedule-
# add a NoExecute taint instead
kubectl taint no k8s-node4 memory=flow:NoExecute
# list the pods again
kubectl get po -o wide
# result
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deploy-5997fbff9d-h7m8f 1/1 Running 0 13m 10.244.107.212 k8s-node3 <none> <none>
nginx-deploy-5997fbff9d-q8z92 1/1 Running 0 11s 10.244.169.178 k8s-node2 <none> <none>
nginx-pod 0/1 Terminating 0 47h <none> k8s-node3 <none> <none>
# the pod that was previously on k8s-node4 has moved to k8s-node2
# inspect node4
kubectl describe no k8s-node4
# key output
...
Taints: memory=flow:NoExecute
...
Taints
NoSchedule: pods that cannot tolerate the taint will not be scheduled onto the node, but pods already running on it are not evicted.
NoExecute: pods that cannot tolerate the taint are evicted immediately; pods that tolerate it and do not set tolerationSeconds keep running indefinitely, while pods that do set tolerationSeconds keep running on the node only for that many seconds.
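For reference, taints themselves are managed on the node with kubectl taint (the key/value here are illustrative; a trailing "-" removes the taint):
# add a taint
kubectl taint no k8s-node4 memory=low:NoSchedule
# view a node's taints
kubectl describe no k8s-node4 | grep Taints
# remove the taint again
kubectl taint no k8s-node4 memory=low:NoSchedule-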
Still a bit confusing....
Affinity
Overview: