[KubeEdge Beginner's Guide] CentOS 7.9 + K8s v1.22.17 (kubeadm) + KubeEdge v1.13.1 Deployment Walkthrough (Personally Tested)

Part 1: Deploy Kubernetes 1.22

We deploy Kubernetes 1.22 or 1.23 here. Based on what KubeEdge currently supports, we choose 1.22; higher Kubernetes versions will be supported later.

Deployment environment and components:

Hostname      IP address       Node type               OS version
k8s-master    192.168.0.61     master, etcd, worker    CentOS 7.9
edge-01       192.168.0.232    edge                    CentOS 7.9

Environment preparation

The preparation steps must be performed on all nodes and include the following:

  • Set the hostnames
  • Add /etc/hosts entries
  • Disable the firewall and SELinux
  • Configure the yum repositories
  • Configure time synchronization
  • Disable swap
  • Load the ip_vs kernel modules
  • Configure kernel parameters
  • Install Docker
  • Install kubelet, kubectl, kubeadm

Set the hostnames

hostnamectl set-hostname k8s-master && bash   # on the master node
hostnamectl set-hostname edge-01 && bash      # on the edge node

Add /etc/hosts entries

cat >> /etc/hosts << EOF   # append (>>) so the default localhost entries are preserved
192.168.0.61 k8s-master
192.168.0.232 edge-01
EOF

Disable the firewall and SELinux

systemctl stop firewalld
systemctl disable firewalld
iptables -F
setenforce 0 
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config   # disable SELinux permanently (the anchored pattern avoids producing "SELINUX=disabledenforcing")

Configure the yum repositories

wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo https://mirrors.aliyun.com/repo/epel-7.repo

Configure time synchronization

yum install -y chrony
cat > /etc/chrony.conf << EOF
server ntp.aliyun.com iburst
stratumweight 0
driftfile /var/lib/chrony/drift
rtcsync
makestep 10 3
bindcmdaddress 127.0.0.1
bindcmdaddress ::1
keyfile /etc/chrony.keys
commandkey 1
generatecommandkey
logchange 0.5
logdir /var/log/chrony
EOF
systemctl enable --now chronyd
chronyc sources 

Disable swap

By default, Kubernetes refuses to run on nodes with swap enabled, so if a node has swap turned on, it should be disabled.

swapoff -a  # disable swap temporarily
sed -i 's/.*swap.*/#&/' /etc/fstab  # permanent: comment out the swap entry in /etc/fstab so swap stays off after a reboot
# Verify that swap is disabled:
free -m  # the Swap row should show 0
              total        used        free      shared  buff/cache   available
Mem:           7822         514         184         431        7123        6461
Swap:             0           0           0

Load the ip_vs kernel modules

cat > /etc/modprobe.d/k8s.conf <<EOF
#!/bin/bash
#modprobe -- br_netfilter
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/modprobe.d/k8s.conf && \
bash /etc/modprobe.d/k8s.conf && \
lsmod | grep -E "ip_vs|nf_conntrack_ipv4"

These kernel modules are mainly needed for later switching kube-proxy's proxy mode from iptables to ipvs.

Since Linux kernel 4.19, nf_conntrack_ipv4 has been merged into nf_conntrack. If loading the modules fails with modprobe: FATAL: Module nf_conntrack_ipv4 not found., simply replace nf_conntrack_ipv4 with nf_conntrack.
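
For reference, a minimal sketch of how that ipvs switch is typically done once the cluster is up (not needed at this point in the tutorial, since kube-proxy does not exist yet):

kubectl edit configmap kube-proxy -n kube-system                  # set mode: "ipvs" in the proxy configuration
kubectl rollout restart daemonset kube-proxy -n kube-system       # recreate the kube-proxy pods with the new config
kubectl logs -n kube-system -l k8s-app=kube-proxy | grep -i ipvs  # should log that the ipvs proxier is in use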

Configure kernel parameters

cat > /etc/sysctl.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1
net.ipv4.ip_forward=1
net.ipv4.ip_forward_use_pmtu = 0
EOF
sysctl --system
sysctl -a|grep "ip_forward"

If you see an error like sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory, it can be ignored.
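
That error usually just means the br_netfilter module (commented out in the module script above) is not loaded yet; loading it makes the net.bridge.* keys available:

modprobe br_netfilter  # provides the net.bridge.bridge-nf-call-* sysctl keys
sysctl --system        # re-apply; the bridge settings should now take effect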

bridge-nf lets netfilter filter IPv4/ARP/IPv6 packets traversing a Linux bridge. For example, with net.bridge.bridge-nf-call-iptables=1 set, packets forwarded by a layer-2 bridge are also filtered by the iptables FORWARD rules. Commonly used options include (a quick verification snippet follows the list):

  • net.bridge.bridge-nf-call-arptables: whether bridged ARP packets are filtered by the arptables FORWARD chain
  • net.bridge.bridge-nf-call-ip6tables: whether bridged IPv6 packets are filtered by the ip6tables chains
  • net.bridge.bridge-nf-call-iptables: whether bridged IPv4 packets are filtered by the iptables chains
  • net.bridge.bridge-nf-filter-vlan-tagged: whether VLAN-tagged packets are filtered by iptables/arptables
  • fs.may_detach_mounts: a kernel parameter introduced in CentOS 7.4 that prevents mount-point leaks in container scenarios
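
To verify that the settings above took effect (assuming br_netfilter is loaded):

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
# each key should print "= 1"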

Install Docker

# Install docker
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo
yum makecache fast
yum -y install docker-ce-20.10.0-3.el7   # pin a specific version

# Generate the docker configuration file
mkdir -p /etc/docker/
touch /etc/docker/daemon.json
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": [
    "https://rsbud4vc.mirror.aliyuncs.com",
    "https://registry.docker-cn.com",
    "https://docker.mirrors.ustc.edu.cn",
    "https://dockerhub.azk8s.cn",
    "http://hub-mirror.c.163.com",
    "http://qtid6917.mirror.aliyuncs.com",
    "https://rncxm540.mirror.aliyuncs.com"
  ],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

# Start docker
systemctl enable docker --now
systemctl start docker
systemctl status docker
docker --version

Install kubelet, kubectl, kubeadm

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum list kubeadm --showduplicates   # optional: list the available versions
yum install -y kubelet-1.22.17-0 kubeadm-1.22.17-0 kubectl-1.22.17-0
systemctl enable kubelet  # note: kubelet cannot start successfully at this point; this only enables it at boot

Deploy the master

Deploying the master only needs to be done on the master node and includes the following steps:

  • Generate a kubeadm.yaml file
  • Edit the kubeadm.yaml file
  • Deploy the master from the configured kubeadm.yaml file

Install the master node (this tutorial passes the equivalent settings as kubeadm flags instead of a kubeadm.yaml file):

kubeadm init --apiserver-advertise-address=192.168.0.61  --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.22.17 --service-cidr=10.96.0.0/12  --pod-network-cidr=10.244.0.0/16

If the cluster runs into problems, reset it and deploy again:

# Reset the cluster
kubeadm reset
# Stop kubelet
systemctl stop kubelet
# Remove the containers that were already created
crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -aq | xargs crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock rm
# Clean up all directories
rm -rf /etc/kubernetes /var/lib/kubelet /var/lib/etcd /var/lib/cni/

Output similar to the following means the master installation is complete:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.200.129:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:d3e83e7c3907dc42039bbf845022d1fa95bba9a4f5af018c17809e269ec6175d

Configure access to the cluster:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Configure the network

After the master is deployed, two problems show up:

  1. the master node stays NotReady
  2. the coredns pods stay Pending

Both problems are caused by the network plugin not having been installed yet. Kubernetes supports many network plugins; see https://kubernetes.io/docs/concepts/cluster-administration/addons/ for details.

The Calico documentation is at: https://docs.tigera.io/calico/latest/getting-started/kubernetes/

We use the Calico network plugin here; install it as follows:

curl https://raw.githubusercontent.com/projectcalico/calico/v3.26.0/manifests/calico.yaml -O
kubectl apply -f calico.yaml
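
Pulling the Calico images can take a few minutes; the rollout can be watched with:

kubectl get pods -n kube-system -w   # wait until the calico pods are Running, then Ctrl-C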

Check the node status:

[root@k8s-master ~]# kubectl get node
NAME         STATUS   ROLES                  AGE   VERSION
k8s-master   Ready    control-plane,master   15m   v1.22.17

Check that the master components are healthy:

[root@k8s-master ~]# kubectl get pods -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-7b99746d5-cb9g9   1/1     Running   0          6m31s
calico-node-dprtz                         1/1     Running   0          6m31s
coredns-7f6cbbb7b8-h7dj2                  1/1     Running   0          16m
coredns-7f6cbbb7b8-mwr8n                  1/1     Running   0          16m
etcd-k8s-master                           1/1     Running   1          16m
kube-apiserver-k8s-master                 1/1     Running   1          16m
kube-controller-manager-k8s-master        1/1     Running   1          16m
kube-proxy-hdwsw                          1/1     Running   0          16m
kube-scheduler-k8s-master                 1/1     Running   1          16m

Let the master node also act as a worker node

Because resources are limited here, we run a single-node cluster. Remove the master taint so the master can also schedule regular workloads.

# Check the taints
[root@k8s-master ~]# kubectl describe node k8s-master |grep Taint
Taints:             node-role.kubernetes.io/master:NoSchedule
# Remove the taint
[root@k8s-master ~]# kubectl taint node k8s-master node-role.kubernetes.io/master-
# Verify
[root@k8s-master ~]# kubectl describe node k8s-master |grep Taint
Taints:             <none>

Part 2: Deploy KubeEdge

Deploy cloudcore

Get the keadm tool

wget https://github.com/kubeedge/kubeedge/releases/download/v1.13.1/keadm-v1.13.1-linux-amd64.tar.gz  # download
tar -zxvf keadm-v1.13.1-linux-amd64.tar.gz  # extract
cp keadm-v1.13.1-linux-amd64/keadm/keadm /usr/local/bin/  # install keadm
keadm version  # verify

Install cloudcore

[root@k8s-master ~]# keadm init --advertise-address=192.168.0.61 --set iptablesManager.mode="external" --profile version=v1.13.1
Kubernetes version verification passed, KubeEdge installation will start...
CLOUDCORE started
=========CHART DETAILS=======
NAME: cloudcore
LAST DEPLOYED: Thu Aug  3 16:34:37 2023
NAMESPACE: kubeedge
STATUS: deployed
REVISION: 1
[root@k8s-master ~]# ps -ef|grep cloudcore  # the cloudcore process is running
root      2447  8609  0 09:54 pts/0    00:00:00 grep --color=auto cloudcore
root      9990  9943  0 09:09 ?        00:00:01 cloudcore
[root@k8s-master ~]# netstat -anltp  # check the listening ports
[root@k8s-master ~]# keadm gettoken  # print the generated token
13de01e5acbb313f515e2ae2e68ac067c3f88413db39992a5c21ac65f04aeebb.eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2OTc1MDQ5NTJ9.aTx39tgGIw7vKTbXmcbjYhkbt1JpSvEZf_VSvOhr0sQ
[root@k8s-master ~]# kubectl get pod -n kubeedge
NAME                           READY   STATUS    RESTARTS   AGE
cloud-iptables-manager-tk9nk   1/1     Running   0          4m37s
cloudcore-5475cc4b46-hclrt     1/1     Running   0          4m36s
[root@k8s-master ~]# kubectl get deploy -n kubeedge
NAME        READY   UP-TO-DATE   AVAILABLE   AGE
cloudcore   1/1     1            1           6m47s
[root@k8s-master ~]# kubectl get svc -n kubeedge
NAME        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                                             AGE
cloudcore   ClusterIP   10.102.82.46   <none>        10000/TCP,10001/TCP,10002/TCP,10003/TCP,10004/TCP   7m16s

Change the cloudcore service type to NodePort


# Edit the service definition
kubectl edit svc cloudcore -n kubeedge
    type: NodePort   # change the type field to this
[root@k8s-master ~]# kubectl get svc -n kubeedge  # note the exposed ports; the NodePort mapped to 10000 is the important one
NAME        TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)                                                                           AGE
cloudcore   NodePort   10.108.150.105   <none>        10000:32615/TCP,10001:31967/TCP,10002:32761/TCP,10003:32230/TCP,10004:30681/TCP   7m53s
[root@k8s-master ~]# 
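
To read the NodePort mapped to port 10000 programmatically instead of picking it out of the table by eye, this one-liner should work:

kubectl get svc cloudcore -n kubeedge -o jsonpath='{.spec.ports[?(@.port==10000)].nodePort}'
# prints e.g. 32615, the port edge nodes connect to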

Because edge hardware is usually resource-constrained, we add a node-affinity rule so that these workloads never spread onto edge nodes. The patch commands below apply the rule to every DaemonSet in kube-system and metallb-system (the latter is only relevant if MetalLB is installed):

[root@k8s-master ~]# kubectl get daemonset -n kube-system |grep -v NAME |awk '{print $1}' | xargs -n 1 kubectl patch daemonset -n kube-system --type='json' -p='[{"op": "replace","path": "/spec/template/spec/affinity","value":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"node-role.kubernetes.io/edge","operator":"DoesNotExist"}]}]}}}}]'
kubectl get daemonset -n metallb-system |grep -v NAME |awk '{print $1}' | xargs -n 1 kubectl patch daemonset -n metallb-system --type='json' -p='[{"op": "replace","path": "/spec/template/spec/affinity","value":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"node-role.kubernetes.io/edge","operator":"DoesNotExist"}]}]}}}}]'

With this in place, no DaemonSet pods will occupy hardware resources on the edge nodes.
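
Once the edge node has joined (Part 3), this can be spot-checked; nothing from kube-system should land on it:

kubectl get pods -n kube-system -o wide | grep edge-01   # expect no output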

Deploy metrics-server

wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
kubectl apply -f components.yaml
kubectl get pods -n kube-system

Because there is no valid certificate, metrics-server keeps failing to become Ready.

Patch the deployment so that metrics-server skips certificate verification:

[root@k8s-master ~]# kubectl patch deploy metrics-server -n kube-system --type='json' -p='[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--kubelet-insecure-tls"}]' 

metrics-server now runs normally.
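
Once the metrics-server pod is Running, resource metrics should be queryable (for edge nodes this additionally depends on the stream tunnel enabled in Part 4):

kubectl top nodes   # CPU/memory usage per node; may take a minute to populate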

Part 3: Deploy edgecore

Get the keadm tool

wget https://github.com/kubeedge/kubeedge/releases/download/v1.13.1/keadm-v1.13.1-linux-amd64.tar.gz  # download (on the edge node)
tar -zxvf keadm-v1.13.1-linux-amd64.tar.gz  # extract
cp keadm-v1.13.1-linux-amd64/keadm/keadm /usr/local/bin/  # install keadm
keadm version  # verify

Install docker-ce
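
The original steps jump straight to the daemon.json, so presumably docker-ce is installed on the edge node the same way as on the master; a minimal sketch reusing those commands:

yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo
yum makecache fast
yum -y install docker-ce-20.10.0-3.el7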

cat > /etc/docker/daemon.json << EOF
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"],
  "registry-mirrors": [
    "https://rsbud4vc.mirror.aliyuncs.com",
    "https://registry.docker-cn.com",
    "https://docker.mirrors.ustc.edu.cn",
    "https://dockerhub.azk8s.cn",
    "http://hub-mirror.c.163.com",
    "http://qtid6917.mirror.aliyuncs.com",
    "https://rncxm540.mirror.aliyuncs.com"
  ]
}
EOF

# Start docker
systemctl enable docker --now
systemctl start docker
systemctl status docker
docker --version

Note the cgroupdriver=cgroupfs here: edgecore expects cgroupfs by default, so changing Docker to systemd would create a cgroup-driver mismatch between the two.
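
A quick way to confirm which driver Docker is actually using:

docker info 2>/dev/null | grep -i "cgroup driver"   # should print: Cgroup Driver: cgroupfs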

Deploy edgecore

Get the token on the master node:

[root@k8s-master ~]# keadm gettoken
13de01e5acbb313f515e2ae2e68ac067c3f88413db39992a5c21ac65f04aeebb.eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2OTc1MDQ5NTJ9.aTx39tgGIw7vKTbXmcbjYhkbt1JpSvEZf_VSvOhr0sQ

Run on the edge node:

[root@edge ~]# TOKEN=13de01e5acbb313f515e2ae2e68ac067c3f88413db39992a5c21ac65f04aeebb.eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2OTc1MDQ5NTJ9.aTx39tgGIw7vKTbXmcbjYhkbt1JpSvEZf_VSvOhr0sQ

Point at the node address and port exposed by the cloudcore service:

[root@edge ~]# SERVER=192.168.0.61:32615  # the NodePort mapped to 10000; find it on the master with: kubectl get svc -n kubeedge

Join KubeEdge:

[root@edge ~]# keadm join --token=$TOKEN --cloudcore-ipport=$SERVER --kubeedge-version=v1.13.1

Error 1:

I0803 20:40:53.928138   10458 join.go:184] 4. Pull Images
Pulling kubeedge/installation-package:v1.13.1 ...
E0803 20:40:53.929276   10458 remote_image.go:160] "Get ImageStatus from image service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.ImageService" image="kubeedge/installation-package:v1.13.1"
Error: edge node join failed: pull Images failed: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.ImageService
execute keadm command failed:  edge node join failed: pull Images failed: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.ImageService

Solution:

rm -rf /etc/containerd/config.toml
containerd config default > /etc/containerd/config.toml
systemctl restart containerd

Error 2:

I0803 20:47:32.605796   11139 join.go:184] 5. Copy resources from the image to the management directory
E0803 20:47:52.606330   11139 remote_runtime.go:198] "RunPodSandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Error: edge node join failed: copy resources failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
execute keadm command failed:  edge node join failed: copy resources failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded

Solution: join again, this time specifying the Docker runtime:

keadm join --token=$TOKEN --cloudcore-ipport=$SERVER --kubeedge-version=v1.13.1 --runtimetype=docker
# or, with the values written out in full:
keadm join --token=13de01e5acbb313f515e2ae2e68ac067c3f88413db39992a5c21ac65f04aeebb.eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2OTc1MDQ5NTJ9.aTx39tgGIw7vKTbXmcbjYhkbt1JpSvEZf_VSvOhr0sQ --cloudcore-ipport=192.168.0.61:32615  --kubeedge-version=v1.13.1  --runtimetype=docker
[root@edge-01 kubeedge]# ll
total 12
drwxr-xr-x 2 root root 4096 Oct 16 09:49 ca
drwxr-xr-x 2 root root 4096 Oct 16 09:49 certs
drwxr-xr-x 2 root root 4096 Oct 16 09:49 config
srwxr-xr-x 1 root root    0 Oct 16 09:49 dmi.sock
[root@edge-01 kubeedge]# pwd
/etc/kubeedge
[root@edge-01 kubeedge]# journalctl -u edgecore.service -xe  # the logs look normal
[root@edge-01 kubeedge]# systemctl status edgecore  # started successfully
● edgecore.service
   Loaded: loaded (/etc/systemd/system/edgecore.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2023-10-16 09:49:41 CST; 2min 37s ago
 Main PID: 27574 (edgecore)
    Tasks: 14
   Memory: 34.9M
   CGroup: /system.slice/edgecore.service
           └─27574 /usr/local/bin/edgecore

Oct 16 09:49:56 edge-01 edgecore[27574]: I1016 09:49:56.390116   27574 client.go:89] edge-hub-cli subscribe topic to $hw/events/upload/#
Oct 16 09:49:56 edge-01 edgecore[27574]: I1016 09:49:56.390211   27574 client.go:153] finish hub-client pub
Oct 16 09:49:56 edge-01 edgecore[27574]: I1016 09:49:56.390221   27574 eventbus.go:71] Init Sub And Pub Client for external mqtt broker tcp...essfully
Oct 16 09:49:56 edge-01 edgecore[27574]: W1016 09:49:56.390251   27574 eventbus.go:168] Action not found
Oct 16 09:49:56 edge-01 edgecore[27574]: I1016 09:49:56.390458   27574 client.go:89] edge-hub-cli subscribe topic to $hw/events/device/+/state/update
Oct 16 09:49:56 edge-01 edgecore[27574]: I1016 09:49:56.390905   27574 client.go:89] edge-hub-cli subscribe topic to $hw/events/device/+/twin/+
Oct 16 09:49:56 edge-01 edgecore[27574]: I1016 09:49:56.391326   27574 client.go:89] edge-hub-cli subscribe topic to $hw/events/node/+/membership/get
Oct 16 09:49:56 edge-01 edgecore[27574]: I1016 09:49:56.391804   27574 client.go:89] edge-hub-cli subscribe topic to SYS/dis/upload_records
Oct 16 09:49:56 edge-01 edgecore[27574]: I1016 09:49:56.392005   27574 client.go:89] edge-hub-cli subscribe topic to +/user/#
Oct 16 09:49:56 edge-01 edgecore[27574]: I1016 09:49:56.392182   27574 client.go:97] list edge-hub-cli-topics status, no record, skip sync
Hint: Some lines were ellipsized, use -l to show in full.
[root@edge-01 kubeedge]# 

CSDN post with the fix: https://blog.csdn.net/MacWx/article/details/129527231

Version 1.13 uses containerd by default. To use Docker instead, both runtimetype and the remote runtime endpoint must be specified when running keadm join.
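
A sketch of what that fully specified join looks like; the dockershim socket path below is the usual default but is an assumption here, so verify it on your system:

keadm join --token=$TOKEN --cloudcore-ipport=$SERVER \
  --kubeedge-version=v1.13.1 \
  --runtimetype=docker \
  --remote-runtime-endpoint=unix:///var/run/dockershim.sock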

If none of this works and edgecore still cannot join the Kubernetes cluster, see this page:

https://blog.csdn.net/MacWx/article/details/130200209

Verify the installation

Run a container on the edge node:

[root@k8s-master ~]# cat > nginx.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3  # adjust as needed
  template:
    metadata:
      labels:
        app: nginx
    spec:
      nodeName: edge-01  # pin the pods to the edge node
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: NodePort
EOF
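
The post omits the apply step; it would be:

[root@k8s-master ~]# kubectl apply -f nginx.yaml
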
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES                  AGE   VERSION
edge-01      Ready    agent,edge             22m   v1.23.15-kubeedge-v1.13.1
k8s-master   Ready    control-plane,master   89m   v1.22.17
[root@k8s-master ~]# kubectl get pods,svc
NAME                                   READY   STATUS    RESTARTS   AGE
pod/nginx-deployment-c4c8d6d8d-cr8xf   1/1     Running   0          66s
pod/nginx-deployment-c4c8d6d8d-jwlr4   1/1     Running   0          66s
pod/nginx-deployment-c4c8d6d8d-mnxf6   1/1     Running   0          66s

NAME                    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/kubernetes      ClusterIP   10.96.0.1       <none>        443/TCP        89m
service/nginx-service   NodePort    10.103.238.37   <none>        80:31207/TCP   66s
[root@k8s-master ~]# 

Part 4: Enable the kubectl logs feature

Most of the following runs on the master node; the edgecore.yaml change is made on the edge node.

Reference: Enable Kubectl logs/exec/attach capabilities | KubeEdge

[root@k8s-master ~]# iptables -t nat -A OUTPUT -p tcp --dport 10350 -j DNAT --to 192.168.0.61:10003   # mind the IP: use your cloudcore address

# On the edge node, set enable: true under edgeStream in edgecore.yaml:
[root@edge-01 ~]# vim /etc/kubeedge/config/edgecore.yaml
  edgeStream:
    enable: true   # change only this line
    handshakeTimeout: 30
    readDeadline: 15
[root@edge-01 config]# systemctl restart edgecore
[root@edge-01 config]# systemctl status edgecore

# Back on the master, prepare the stream certificates:
ls /etc/kubernetes/pki/
export CLOUDCOREIPS="192.168.0.61"
echo $CLOUDCOREIPS
sudo su  # the following must be run as root
mkdir -p /etc/kubeedge
cd /etc/kubeedge/
wget https://raw.githubusercontent.com/kubeedge/kubeedge/v1.13.1/build/tools/certgen.sh  # download the matching version (use the raw URL; the GitHub blob page returns HTML)
chmod +x certgen.sh
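
The post stops after downloading certgen.sh; per the official guide the script is then run to generate the stream certs, and cloudcore is restarted so they take effect (the exact steps may differ between versions, so verify against the official guide):

/etc/kubeedge/certgen.sh stream                            # uses $CLOUDCOREIPS to generate certs under /etc/kubeedge/certs
kubectl rollout restart deployment cloudcore -n kubeedge   # restart cloudcore to pick up the certificates
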
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES                  AGE    VERSION
edge-01      Ready    agent,edge             91m    v1.23.15-kubeedge-v1.13.1
k8s-master   Ready    control-plane,master   158m   v1.22.17
[root@k8s-master ~]# kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
nginx-deployment-c4c8d6d8d-cr8xf   1/1     Running   0          70m
nginx-deployment-c4c8d6d8d-jwlr4   1/1     Running   0          70m
nginx-deployment-c4c8d6d8d-mnxf6   1/1     Running   0          70m
[root@k8s-master ~]# kubectl logs nginx-deployment-c4c8d6d8d-mnxf6  # the logs as seen from the master
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2023/10/16 02:11:46 [notice] 1#1: using the "epoll" event method
2023/10/16 02:11:46 [notice] 1#1: nginx/1.21.5
2023/10/16 02:11:46 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6) 
2023/10/16 02:11:46 [notice] 1#1: OS: Linux 3.10.0-1160.92.1.el7.x86_64
2023/10/16 02:11:46 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2023/10/16 02:11:46 [notice] 1#1: start worker processes
2023/10/16 02:11:46 [notice] 1#1: start worker process 31
2023/10/16 02:11:46 [notice] 1#1: start worker process 32
[root@k8s-master ~]# 
[root@edge-01 config]# docker ps -a
CONTAINER ID   IMAGE                COMMAND                  CREATED             STATUS             PORTS                                            NAMES
2f4bf337a7ce   nginx                "/docker-entrypoint.…"   About an hour ago   Up About an hour                                                    k8s_nginx_nginx-deployment-c4c8d6d8d-cr8xf_default_6059be8e-d186-40ed-bc8f-22ec4c091067_0
2653c891ba2e   nginx                "/docker-entrypoint.…"   About an hour ago   Up About an hour                                                    k8s_nginx_nginx-deployment-c4c8d6d8d-jwlr4_default_790b1623-6d0c-4849-a3d3-f37bb547804a_0
6b5df42b50e0   nginx                "/docker-entrypoint.…"   About an hour ago   Up About an hour                                                    k8s_nginx_nginx-deployment-c4c8d6d8d-mnxf6_default_bcce0320-a106-4321-949e-febf50323bb9_0
74525d0ebaa8   kubeedge/pause:3.6   "/pause"                 About an hour ago   Up About an hour                                                    k8s_POD_nginx-deployment-c4c8d6d8d-jwlr4_default_790b1623-6d0c-4849-a3d3-f37bb547804a_0
9a487feef1f8   kubeedge/pause:3.6   "/pause"                 About an hour ago   Up About an hour                                                    k8s_POD_nginx-deployment-c4c8d6d8d-cr8xf_default_6059be8e-d186-40ed-bc8f-22ec4c091067_0
a22916f46182   kubeedge/pause:3.6   "/pause"                 About an hour ago   Up About an hour                                                    k8s_POD_nginx-deployment-c4c8d6d8d-mnxf6_default_bcce0320-a106-4321-949e-febf50323bb9_0
fbe5a7a44d57   3a05ba674344         "/docker-entrypoint.…"   2 hours ago         Up 2 hours                                                          k8s_mqtt_mqtt-kubeedge_default_ea17e8b2-889d-4f8e-b45c-5793f96218c1_0
2c16459e7d7b   kubeedge/pause:3.6   "/pause"                 2 hours ago         Up 2 hours         0.0.0.0:1883->1883/tcp, 0.0.0.0:9001->9001/tcp   k8s_POD_mqtt-kubeedge_default_ea17e8b2-889d-4f8e-b45c-5793f96218c1_0
[root@edge-01 config]# docker logs 2f4bf337a7ce  # the same logs as seen on the edge node
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2023/10/16 02:12:01 [notice] 1#1: using the "epoll" event method
2023/10/16 02:12:01 [notice] 1#1: nginx/1.21.5
2023/10/16 02:12:01 [notice] 1#1: built by gcc 10.2.1 20210110 (Debian 10.2.1-6) 
2023/10/16 02:12:01 [notice] 1#1: OS: Linux 3.10.0-1160.92.1.el7.x86_64
2023/10/16 02:12:01 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2023/10/16 02:12:01 [notice] 1#1: start worker processes
2023/10/16 02:12:01 [notice] 1#1: start worker process 31
2023/10/16 02:12:01 [notice] 1#1: start worker process 32
[root@edge-01 config]# 
