Deploying the Calico Network Plugin
Previous k8s environments here mainly used flannel as the network plugin; this time we switch to Calico. Calico supports several installation methods; the concrete steps are below.
1. Preparation
- Environment information
# OS information
root@master1:~# cat /etc/issue
Ubuntu 24.04 LTS \n \l
root@master1:~# uname -r
6.8.0-31-generic
# Kubernetes version
root@master1:~# kubectl get node
NAME      STATUS     ROLES           AGE    VERSION
master1   NotReady   control-plane   2m2s   v1.28.2
node1     NotReady   <none>          84s    v1.28.2
node2     NotReady   <none>          79s    v1.28.2
- Version compatibility
Taking the latest Calico v3.28 release as the example, it supports the following Kubernetes versions; I use this release for the installation.
- v1.27
- v1.28
- v1.29
- v1.30
Source: System requirements | Calico Documentation (tigera.io)
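As a quick sanity check, the cluster's minor version can be tested against that range in the shell. This is a hypothetical sketch; the version string is copied from the `kubectl get node` output above:

```shell
# Calico v3.28 supports Kubernetes v1.27 through v1.30 (per the matrix above).
k8s_version="v1.28.2"                       # taken from "kubectl get node"
minor=$(echo "$k8s_version" | cut -d. -f2)  # extract the minor version, e.g. 28
if [ "$minor" -ge 27 ] && [ "$minor" -le 30 ]; then
  echo "Kubernetes $k8s_version is supported by Calico v3.28"
else
  echo "Kubernetes $k8s_version is NOT supported by Calico v3.28"
fi
```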
2. Installing via the Operator
- Install the operator
# Download the operator manifest
root@master1:~/calico# wget https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/tigera-operator.yaml
# or
root@master1:~/calico# curl -O https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/tigera-operator.yaml
# Apply the manifest to create the operator
root@master1:~/calico# kubectl create -f tigera-operator.yaml
namespace/tigera-operator created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created
serviceaccount/tigera-operator created
clusterrole.rbac.authorization.k8s.io/tigera-operator created
clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created
deployment.apps/tigera-operator created
- Create the custom resources
# Download the custom resources manifest
root@master1:~/calico# wget https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/custom-resources.yaml
# Edit the IP pool settings in custom-resources.yaml
root@master1:~/calico# vim custom-resources.yaml
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Configures Calico networking.
  calicoNetwork:
    ipPools:
    - name: default-ipv4-ippool
      blockSize: 24        # changed to 24: one /24 block per node
      cidr: 10.244.0.0/16  # must match "--pod-network-cidr=10.244.0.0/16" from kubeadm init
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()
# Apply the manifest to create the resources
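The effect of `blockSize: 24` inside the /16 pool can be worked out with shell arithmetic; a small sketch, with the numbers mirroring the cidr and blockSize above:

```shell
pool_prefix=16   # from cidr: 10.244.0.0/16
block_size=24    # from blockSize: 24
blocks=$(( 1 << (block_size - pool_prefix) ))  # 2^8 = 256 per-node /24 blocks
addrs_per_block=$(( 1 << (32 - block_size) ))  # 2^8 = 256 addresses per block
echo "${blocks} blocks of ${addrs_per_block} addresses each"
```

So the pool can hand a /24 block to up to 256 nodes, far more than the three nodes in this cluster.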
root@master1:~/calico# kubectl create -f custom-resources.yaml
installation.operator.tigera.io/default created
apiserver.operator.tigera.io/default created
- Confirm the pods come up
watch kubectl get pods -n calico-system
Troubleshooting
- Node stays NotReady
root@master1:~/calico# kubectl get pod -n calico-system
NAME                                      READY   STATUS    RESTARTS      AGE
calico-kube-controllers-b7fb9d96c-pbf9s   1/1     Running   0             47m
calico-node-gdsjw                         1/1     Running   2 (16m ago)   47m
calico-node-hvqg4                         1/1     Running   0             15m
calico-node-nntpd                         0/1     Running   0             42s
calico-typha-55ccdf44bf-v2zmm             1/1     Running   0             15m
calico-typha-55ccdf44bf-w5l8w             1/1     Running   0             47m
csi-node-driver-bqvb7                     2/2     Running   0             47m
csi-node-driver-cw59h                     2/2     Running   0             47m
csi-node-driver-hbw2n                     2/2     Running   0             47m
# Events from "kubectl describe" on the not-ready calico-node pod:
Warning  Unhealthy  55s (x2 over 56s)  kubelet  Readiness probe failed: calico/node is not ready: BIRD is not ready: Error querying BIRD: unable to connect to BIRDv4 socket: dial unix /var/run/calico/bird.ctl: connect: connection refused
Warning  Unhealthy  28s                kubelet  Readiness probe failed: 2024-07-01 08:57:58.226 [INFO][267] confd/health.go 202: Number of node(s) with BGP peering established = 2
calico/node is not ready: felix is not ready: Get "http://localhost:9099/readiness": dial tcp: lookup localhost on 8.8.8.8:53: no such host
Fix:
root@master1:~/calico# vi custom-resources.yaml
...
spec:
  # Configures Calico networking.
  calicoNetwork:
    ipPools:
    - name: default-ipv4-ippool
      blockSize: 24
      cidr: 10.244.0.0/16
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()
    nodeAddressAutodetectionV4:   # added setting
      interface: ens*
# Apply the update
root@master1:~/calico# kubectl apply -f custom-resources.yaml
# Also fix the DNS configuration on the nodes
root@node1:~# cat /etc/netplan/01-concfg.yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    ens33:
      dhcp4: no
      addresses:
      - 192.168.0.62/24
      routes:
      - to: 0.0.0.0/0
        via: 192.168.0.1
      nameservers:
        addresses:
        - 223.5.5.5
        - 223.6.6.6
# Apply the change
root@node1:~# netplan apply
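The felix error above shows the kubelet sending the name localhost to 8.8.8.8 instead of resolving it locally; a node should always resolve localhost from /etc/hosts. A self-contained sketch of that check, run against a sample file rather than the node's real /etc/hosts:

```shell
# Sample hosts file (stand-in for a node's /etc/hosts):
cat > /tmp/hosts.sample <<'EOF'
127.0.0.1 localhost
192.168.0.62 node1
EOF
# "localhost" must map to the loopback address so probes never hit DNS:
if grep -qE '^127\.0\.0\.1[[:space:]]+localhost' /tmp/hosts.sample; then
  echo "localhost resolves locally"
else
  echo "localhost missing from hosts file"
fi
```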
3. Installing via Manifest
Choose one of the following depending on node count and datastore:
- Install Calico with Kubernetes API datastore, 50 nodes or less
- Install Calico with Kubernetes API datastore, more than 50 nodes
- Install Calico with etcd datastore
Taking fewer than 50 nodes with the Kubernetes API datastore as the example:
- Download the manifest for the Kubernetes API datastore:
curl https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/calico.yaml -O
- Modify the Pod CIDR: uncomment CALICO_IPV4POOL_CIDR and set it to the cluster's pod CIDR.
- Apply the YAML:
kubectl apply -f calico.yaml
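The Pod CIDR step can be scripted with sed. The excerpt file below is illustrative: it stands in for the relevant commented lines of the downloaded calico.yaml (whose commented default is 192.168.0.0/16), so the snippet is self-contained:

```shell
# Stand-in excerpt for the commented-out block in calico.yaml:
cat > /tmp/calico-excerpt.yaml <<'EOF'
            # - name: CALICO_IPV4POOL_CIDR
            #   value: "192.168.0.0/16"
EOF
# Uncomment the variable and point it at this cluster's pod CIDR:
sed -i \
  -e 's|# - name: CALICO_IPV4POOL_CIDR|- name: CALICO_IPV4POOL_CIDR|' \
  -e 's|#   value: "192.168.0.0/16"|  value: "10.244.0.0/16"|' \
  /tmp/calico-excerpt.yaml
cat /tmp/calico-excerpt.yaml
```

Running the same substitutions against the real calico.yaml preserves the manifest's indentation.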
4. Installing via Helm
This deployment installs Calico as the cluster's network plugin using Helm:
# Download and install Helm
wget https://get.helm.sh/helm-v3.15.2-linux-amd64.tar.gz
tar xf helm-v3.15.2-linux-amd64.tar.gz
cp linux-amd64/helm /usr/local/bin/helm
helm version
Install Calico:
# Create the tigera-operator namespace
kubectl create namespace tigera-operator
# Install the Tigera Calico operator and CRDs
helm install calico projectcalico/tigera-operator --version v3.28.0 --namespace tigera-operator
# Confirm the pods are running
watch kubectl get pods -n calico-system
Prerequisites
- Helm 3 is installed
- A Kubernetes cluster is set up
- kubeconfig is configured
- Calico manages the cali and tunl interfaces on the host. If you use NetworkManager, see Configure NetworkManager.
Install
- Add the Calico helm repo:
helm repo add projectcalico https://docs.tigera.io/calico/charts
To customize chart parameters, provide a values.yaml:
cat > values.yaml <<EOF
installation:
  kubernetesProvider: AKS
  cni:
    type: Calico
  calicoNetwork:
    bgp: Disabled
    ipPools:
    - cidr: 10.244.0.0/16
      encapsulation: VXLAN
EOF
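Before installing, it is worth confirming that the cidr in values.yaml matches the pod CIDR the cluster was initialized with. A minimal sketch, re-creating a pared-down values.yaml under /tmp so the check is self-contained:

```shell
pod_network_cidr="10.244.0.0/16"   # the --pod-network-cidr passed to kubeadm init
# Pared-down stand-in for the values.yaml created above:
cat > /tmp/values.yaml <<'EOF'
installation:
  calicoNetwork:
    ipPools:
    - cidr: 10.244.0.0/16
      encapsulation: VXLAN
EOF
if grep -q "cidr: ${pod_network_cidr}" /tmp/values.yaml; then
  echo "values.yaml cidr matches the cluster pod CIDR"
else
  echo "cidr mismatch: fix values.yaml before installing"
fi
```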
- Create the tigera-operator namespace:
kubectl create namespace tigera-operator
- Install the Tigera Calico operator and CRDs with the helm chart:
helm install calico projectcalico/tigera-operator --version v3.28.0 --namespace tigera-operator
Or pass parameter values with values.yaml:
helm install calico projectcalico/tigera-operator --version v3.28.0 -f values.yaml --namespace tigera-operator
- Confirm the pods are running:
watch kubectl get pods -n calico-system
Note: the Tigera operator installs Calico into the calico-system namespace; the other installation methods use the kube-system namespace.
Reference:
Install using Helm | Calico Documentation (tigera.io)
5. References
- https://helm.sh/zh/docs/intro/install/
- https://github.com/helm/helm/releases/
- https://docs.tigera.io/calico/latest/getting-started/kubernetes/
- Releases · projectcalico/calico (github.com)