Deploying Kubernetes 1.31.2 on CentOS 8.4

Table of contents

  • Configure the Aliyun repositories
  • Configure time synchronization
    • Verify
  • Install Docker
    • Install prerequisite tools
    • Configure the Aliyun Docker repository
    • Refresh the repo cache
    • Remove the old podman
    • Install Docker
    • Enable Docker at boot
  • Configure the hosts file
    • Optional: push it to the other hosts
  • Disable the swap partition
  • Configure iptables
  • Configure the Kubernetes repository
    • Initialize the master node
    • Initialize the worker nodes
  • Check cluster status

[!warning] System: CentOS 8.4

Configure the Aliyun repositories

cd /etc/yum.repos.d/
mkdir bak
mv *.repo bak
wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-vault-8.5.2111.repo
yum install -y https://mirrors.aliyun.com/epel/epel-release-latest-8.noarch.rpm
sed -i 's|^#baseurl=https://download.example/pub|baseurl=https://mirrors.aliyun.com|' /etc/yum.repos.d/epel*
sed -i 's|^metalink|#metalink|' /etc/yum.repos.d/epel*
cd
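The two `sed` edits above can be previewed on a scratch copy before touching the real repo files. A minimal sketch; the sample repo body below is illustrative, not a real EPEL file:

```shell
# Dry-run of the two EPEL sed edits on a scratch file instead of
# /etc/yum.repos.d (sample content made up for the demo).
tmp=$(mktemp -d)
cat > "$tmp/epel.repo" << 'EOF'
[epel]
name=Extra Packages for Enterprise Linux 8
#baseurl=https://download.example/pub/epel/8/Everything/$basearch
metalink=https://mirrors.fedoraproject.org/metalink?repo=epel-8&arch=$basearch
EOF

# Same substitutions as above: point baseurl at the Aliyun mirror and
# comment out the metalink so the baseurl takes effect.
sed -i 's|^#baseurl=https://download.example/pub|baseurl=https://mirrors.aliyun.com|' "$tmp"/epel*
sed -i 's|^metalink|#metalink|' "$tmp"/epel*
cat "$tmp/epel.repo"
```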

Configure time synchronization

yum install -y chrony
cp /etc/chrony.conf /etc/chrony.conf.bak
sed -i '3ipool ntp.tencent.com iburst' /etc/chrony.conf
sed -i '4d' /etc/chrony.conf
systemctl restart chronyd.service

Verify

chronyc sources -v

A line like `^* 106.55.184.199 2 6 277 40 +638us[+711us] +/- 43ms` (note the `^*` marker) means time synchronization is working.
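`chronyc` marks the currently selected source with `^*` in the first column, so the check can be scripted. A small sketch; `is_synced` is a hypothetical helper and the sample line mirrors the output shown above:

```shell
# Return success if chronyc output contains a selected (^*) source.
# Real usage: is_synced "$(chronyc sources)" && echo synced
is_synced() {
  printf '%s\n' "$1" | grep -q '^\^\*'
}

sample='^* 106.55.184.199   2   6   277   40   +638us[+711us] +/-   43ms'
if is_synced "$sample"; then
  echo "time sync OK"
fi
```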

Install Docker

yum remove docker \
           docker-client \
           docker-client-latest \
           docker-common \
           docker-latest \
           docker-latest-logrotate \
           docker-logrotate \
           docker-engine

Install prerequisite tools

yum install -y yum-utils device-mapper-persistent-data lvm2

Configure the Aliyun Docker repository

yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Refresh the repo cache

yum makecache

Remove the old podman

yum erase podman buildah -y

Install Docker

yum install -y docker-ce docker-ce-cli containerd.io

Enable Docker at boot

systemctl enable docker --now

Configure the hosts file

cat >> /etc/hosts << EOF
192.168.142.139 k8s-master
192.168.142.140 k8s-slave1
192.168.142.141 k8s-slave2
EOF

Optional: push the hosts file to the other nodes (skip this if you are not coordinating multiple hosts)

for i in 140 141 ; do scp /etc/hosts 192.168.142.$i:/etc/hosts ; done

Disable the swap partition

swapoff -a ;sed -i '/swap/d' /etc/fstab
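The `sed -i '/swap/d' /etc/fstab` half of this one-liner is destructive, so it is worth previewing on a copy first. A sketch with a made-up fstab:

```shell
# Preview the swap-line removal on a scratch copy of fstab before
# touching the real /etc/fstab (sample content made up for the demo).
tmp=$(mktemp)
cat > "$tmp" << 'EOF'
/dev/mapper/cl-root   /      xfs   defaults   0 0
/dev/mapper/cl-swap   none   swap  defaults   0 0
EOF

sed -i '/swap/d' "$tmp"   # same edit as applied to /etc/fstab above
cat "$tmp"                # the swap line is gone; the root mount survives
```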

Configure iptables

Tell iptables not to filter bridged traffic and enable IPv4 forwarding. (On a fresh system the `net.bridge.*` keys only exist after the `br_netfilter` module is loaded; run `modprobe br_netfilter` first if `sysctl -p` complains.)

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1 
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf
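Before initializing the cluster it is handy to confirm the fragment actually declares the three keys kubeadm's preflight checks care about. A sketch; `check_k8s_sysctl` is a hypothetical helper, demonstrated here against a scratch copy rather than `/etc/sysctl.d/k8s.conf`:

```shell
# Verify that a sysctl fragment declares the three keys kubeadm expects.
check_k8s_sysctl() {
  for key in net.bridge.bridge-nf-call-ip6tables \
             net.bridge.bridge-nf-call-iptables \
             net.ipv4.ip_forward; do
    grep -q "^${key}[[:space:]]*=[[:space:]]*1" "$1" || { echo "missing: $key"; return 1; }
  done
  echo "all k8s sysctl keys present"
}

# Demo against a scratch copy of the fragment written above.
tmp=$(mktemp)
printf '%s\n' \
  'net.bridge.bridge-nf-call-ip6tables = 1' \
  'net.bridge.bridge-nf-call-iptables = 1' \
  'net.ipv4.ip_forward = 1' > "$tmp"
check_k8s_sysctl "$tmp"
```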

Configure the Kubernetes repository

cat << EOF | tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.31/rpm/
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.31/rpm/repodata/repomd.xml.key
EOF
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet
getenforce

Disable the firewall

systemctl disable firewalld --now

[!Note] To list the available Kubernetes versions:

yum list --showduplicates kubeadm --disableexcludes=kubernetes

Configure the containerd runtime so that it interacts correctly with the Kubernetes cluster.

crictl config image-endpoint unix:///run/containerd/containerd.sock
containerd config default > /etc/containerd/config.toml
systemctl restart containerd.service
crictl completion >>/root/.bash_profile
sed -i 's/config_path = \"\"/config_path = \"\/etc\/containerd\/certs.d\"/g' /etc/containerd/config.toml
cat /etc/containerd/config.toml |grep -B1 'config_path'

Result

    [plugins."io.containerd.grpc.v1.cri".registry]
      config_path = "/etc/containerd/certs.d"
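The `sed` edit above can be previewed on a minimal fragment before modifying the real file. A sketch; the snippet below is a stripped-down stand-in for the actual `containerd config default` output:

```shell
# Preview the config_path rewrite on a scratch copy of the relevant
# config.toml fragment.
tmp=$(mktemp)
cat > "$tmp" << 'EOF'
    [plugins."io.containerd.grpc.v1.cri".registry]
      config_path = ""
EOF

sed -i 's|config_path = ""|config_path = "/etc/containerd/certs.d"|g' "$tmp"
grep -B1 'config_path' "$tmp"
```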

Create one directory per registry you want to mirror: to accelerate docker.io, create a docker.io directory under certs.d. The file name hosts.toml is fixed.

mkdir -p /etc/containerd/certs.d/docker.io
cat << EOF | tee /etc/containerd/certs.d/docker.io/hosts.toml
server = "https://docker.io"

[host."https://docker.m.daocloud.io"]
  capabilities = ["pull", "resolve"]
EOF

Verify

cat /etc/containerd/certs.d/docker.io/hosts.toml

Result: pay attention to the format.

Initialize the master node

If the images cannot be pulled, use the offline archives:

  • k8s_image_master.tar.gz
  • k8s_image_node.tar.gz
  • flannel.tar

Shared via Baidu Netdisk: k8s
Link: https://pan.baidu.com/s/1STR-h2sO5WdKzrTv5VpPEw?pwd=5rha
Extraction code: 5rha

Import the images

tar -zxf k8s_image_master.tar.gz

Extracting the archive produces a root directory; the node archive works the same way.

Send the node archive to hosts 140 and 141:

for i in 140 141 ; do scp k8s_image_node.tar.gz 192.168.142.$i:/root ; done

After extracting there is a root directory:

mv flannel.tar root
cd root
coredns:v1.11.3.tar.gz                     kube-apiserver:v1.31.0.tar.gz           pause:3.10.tar.gz
etcd:3.5.15-0.tar.gz                       kube-controller-manager:v1.31.0.tar.gz  pause:3.6.tar.gz
flannel-cni-plugin:v1.5.1-flannel2.tar.gz  kube-proxy:v1.31.0.tar.gz
flannel:v0.25.6.tar.gz                     kube-scheduler:v1.31.0.tar.gz
flannel.tar
ctr -n k8s.io images import coredns:v1.11.3.tar.gz
ctr -n k8s.io images import kube-apiserver:v1.31.0.tar.gz
ctr -n k8s.io images import pause:3.10.tar.gz
ctr -n k8s.io images import pause:3.6.tar.gz
ctr -n k8s.io images import etcd:3.5.15-0.tar.gz
ctr -n k8s.io images import kube-controller-manager:v1.31.0.tar.gz
ctr -n k8s.io images import flannel-cni-plugin:v1.5.1-flannel2.tar.gz
ctr -n k8s.io images import flannel:v0.25.6.tar.gz
ctr -n k8s.io images import flannel.tar
ctr -n k8s.io images import kube-proxy:v1.31.0.tar.gz
ctr -n k8s.io images import kube-scheduler:v1.31.0.tar.gz
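Rather than importing each archive by hand, the whole directory can be looped over. A sketch; `import_all` is a hypothetical helper, and `CTR` is an override hook so the loop can be dry-run without a containerd socket:

```shell
# Import every image archive in a directory into the k8s.io namespace.
# Override CTR (e.g. with "echo ...") to dry-run without containerd.
CTR="${CTR:-ctr -n k8s.io images import}"
import_all() {
  for f in "$1"/*.tar.gz "$1"/*.tar; do
    [ -e "$f" ] || continue   # skip globs that matched nothing
    $CTR "$f"
  done
}

# Dry-run demo against a scratch directory with fake archives.
tmp=$(mktemp -d)
touch "$tmp/etcd:3.5.15-0.tar.gz" "$tmp/flannel.tar"
CTR="echo would import:"
import_all "$tmp"
```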

Initialize again

yum install iproute-tc -y

If this is not the first initialization, run:

kubeadm reset

Result

W1106 17:20:29.891394   41035 preflight.go:56] [reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W1106 17:20:31.171391   41035 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] Deleted contents of the etcd data directory: /var/lib/etcd
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.

rm -rf $HOME/.kube/*

Initialize

kubeadm init --apiserver-advertise-address=192.168.142.139 --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16
for i in 140 141 ; do scp flannel.tar 192.168.142.$i:/root ; done

Configure the master node

mkdir /root/.kube
cp /etc/kubernetes/admin.conf /root/.kube/config
kubectl completion bash >> /root/.bash_profile

Note: kubeadm init prints a join command containing a token and a CA cert hash:

kubeadm join 192.168.142.139:6443 --token esar7a.1zafybsw63nlugi7 \
    --discovery-token-ca-cert-hash sha256:97c33c979b8b2de34f26d66c65cec740d46408e7f8d04a9a81cd3f78f5c6f858
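The sha256 value in the join command is not secret; it is derived from the cluster CA and can be recomputed on the master at any time with the standard recipe from the kubeadm docs (and if the token itself has expired, `kubeadm token create --print-join-command` prints a fresh join line). A sketch, demonstrated against a throwaway self-signed cert rather than the real `/etc/kubernetes/pki/ca.crt`:

```shell
# Recompute the --discovery-token-ca-cert-hash from a CA certificate:
# public key -> DER -> sha256, which is what kubeadm embeds in the join command.
ca_cert_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 \
    | sed 's/^.* //'
}

# Demo with a throwaway self-signed RSA cert standing in for ca.crt.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo-ca" \
  -keyout "$tmp/ca.key" -out "$tmp/ca.crt" 2>/dev/null
echo "sha256:$(ca_cert_hash "$tmp/ca.crt")"
```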

Install flannel on the master

cat > kube-flannel.yml << EOF
apiVersion: v1
kind: Namespace
metadata:
  labels:
    k8s-app: flannel
    pod-security.kubernetes.io/enforce: privileged
  name: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: flannel
  name: flannel
  namespace: kube-flannel
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: flannel
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: flannel
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "EnableNFTables": false,
      "Backend": {
        "Type": "vxlan"
      }
    }
kind: ConfigMap
metadata:
  labels:
    app: flannel
    k8s-app: flannel
    tier: node
  name: kube-flannel-cfg
  namespace: kube-flannel
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: flannel
    k8s-app: flannel
    tier: node
  name: kube-flannel-ds
  namespace: kube-flannel
spec:
  selector:
    matchLabels:
      app: flannel
      k8s-app: flannel
  template:
    metadata:
      labels:
        app: flannel
        k8s-app: flannel
        tier: node
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      containers:
      - args:
        - --ip-masq
        - --kube-subnet-mgr
        command:
        - /opt/bin/flanneld
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        image: docker.io/flannel/flannel:v0.26.0
        name: kube-flannel
        resources:
          requests:
            cpu: 100m
            memory: 50Mi
        securityContext:
          capabilities:
            add:
            - NET_ADMIN
            - NET_RAW
          privileged: false
        volumeMounts:
        - mountPath: /run/flannel
          name: run
        - mountPath: /etc/kube-flannel/
          name: flannel-cfg
        - mountPath: /run/xtables.lock
          name: xtables-lock
      hostNetwork: true
      initContainers:
      - args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        command:
        - cp
        image: docker.io/flannel/flannel-cni-plugin:v1.5.1-flannel2
        name: install-cni-plugin
        volumeMounts:
        - mountPath: /opt/cni/bin
          name: cni-plugin
      - args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        command:
        - cp
        image: docker.io/flannel/flannel:v0.26.0
        name: install-cni
        volumeMounts:
        - mountPath: /etc/cni/net.d
          name: cni
        - mountPath: /etc/kube-flannel/
          name: flannel-cfg
      priorityClassName: system-node-critical
      serviceAccountName: flannel
      tolerations:
      - effect: NoSchedule
        operator: Exists
      volumes:
      - hostPath:
          path: /run/flannel
        name: run
      - hostPath:
          path: /opt/cni/bin
        name: cni-plugin
      - hostPath:
          path: /etc/cni/net.d
        name: cni
      - configMap:
          name: kube-flannel-cfg
        name: flannel-cfg
      - hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
        name: xtables-lock
EOF
kubectl apply -f kube-flannel.yml

Check the pods on the master; if everything is Running, all is well:

kubectl get pod -A
kube-flannel   kube-flannel-ds-kk4m2                1/1     Running   0          77s
kube-system    coredns-855c4dd65d-gvdxq             1/1     Running   0          12m
kube-system    coredns-855c4dd65d-wd97x             1/1     Running   0          12m
kube-system    etcd-k8s-master                      1/1     Running   0          12m
kube-system    kube-apiserver-k8s-master            1/1     Running   0          12m
kube-system    kube-controller-manager-k8s-master   1/1     Running   0          12m
kube-system    kube-proxy-d9z9b                     1/1     Running   0          12m
kube-system    kube-scheduler-k8s-master            1/1     Running   0          12m

If checking pods on the master produces this error:

kubectl get pod -A
E1106 18:45:35.690658   55755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp [::1]:8080: connect: connection refused"
E1106 18:45:35.692103   55755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp [::1]:8080: connect: connection refused"
E1106 18:45:35.693780   55755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp [::1]:8080: connect: connection refused"
E1106 18:45:35.695284   55755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp [::1]:8080: connect: connection refused"
E1106 18:45:35.696826   55755 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp [::1]:8080: connect: connection refused"
The connection to the server localhost:8080 was refused - did you specify the right host or port?

Fix:

mkdir ~/.kube
cp /etc/kubernetes/kubelet.conf  ~/.kube/config

Then check again.

Initialize the worker nodes

tar -zxf k8s_image_node.tar.gz
mv flannel.tar root/
cd root
ctr -n k8s.io images import flannel-cni-plugin:v1.5.1-flannel2.tar.gz
ctr -n k8s.io images import flannel.tar
ctr -n k8s.io images import flannel:v0.25.6.tar.gz
ctr -n k8s.io images import kube-proxy:v1.31.0.tar.gz
ctr -n k8s.io images import pause:3.6.tar.gz

Run the join command (the token and hash printed above) on each of the two worker nodes:

kubeadm join 192.168.142.139:6443 --token ya76as.8n04pysuhk5c3kxx \
    --discovery-token-ca-cert-hash sha256:26e39428118d1d7a8c638e6a5503cc919302747330557b163e84ac0543e7812f

Output

[preflight] Running pre-flight checks
	[WARNING FileExisting-tc]: tc not found in system path
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 501.384149ms
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

If a node cannot join

kubeadm join 192.168.142.139:6443 --token esar7a.1zafybsw63nlugi7 \
    --discovery-token-ca-cert-hash sha256:97c33c979b8b2de34f26d66c65cec740d46408e7f8d04a9a81cd3f78f5c6f858

# Error output:
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	[ERROR Port-10250]: Port 10250 is in use
	[ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

Fix:

mv /etc/kubernetes/kubelet.conf /etc/kubernetes/kubelet.conf.backup
sudo lsof -i :10250
COMMAND   PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
kubelet 42410 root   11u  IPv6 180886      0t0  TCP *:10250 (LISTEN)
ss -tulnp | awk -F'[:,]' '/10250/ {match($0,/pid=[0-9]+/); if (RSTART)print substr($0, RSTART+4, RLENGTH-4)}' | xargs kill -9
mv /etc/kubernetes/pki/ca.crt /etc/kubernetes/pki/ca.crt.backup
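The `awk` pipeline above is easier to trust once you see exactly what it extracts. A sketch; `pid_for_port` is a hypothetical helper, demonstrated on a canned sample line so nothing actually gets killed here:

```shell
# Pull the pid= value for a given port out of `ss -tulnp` output.
# Real usage would be: pid_for_port 10250 "$(ss -tulnp)" | xargs -r kill
pid_for_port() {
  printf '%s\n' "$2" | awk -v port=":$1" \
    '$0 ~ port { if (match($0, /pid=[0-9]+/)) print substr($0, RSTART + 4, RLENGTH - 4) }'
}

# Canned sample of what ss prints for a kubelet holding port 10250.
sample='tcp   LISTEN 0  4096   *:10250   *:*   users:(("kubelet",pid=42410,fd=11))'
pid_for_port 10250 "$sample"
```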

Join again:

kubeadm join 192.168.142.139:6443 --token ya76as.8n04pysuhk5c3kxx \
    --discovery-token-ca-cert-hash sha256:26e39428118d1d7a8c638e6a5503cc919302747330557b163e84ac0543e7812f

Check cluster status

Once all the nodes have joined, check the cluster status on the master:

kubectl get nodes
NAME         STATUS   ROLES           AGE     VERSION
k8s-master   Ready    control-plane   3m36s   v1.31.2
k8s-slave1   Ready    <none>          54s     v1.31.2
k8s-slave2   Ready    <none>          5s      v1.31.2
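For scripting (for example, waiting in a loop until the cluster settles), the `kubectl get nodes` output can be checked for any non-Ready node. A sketch; `all_nodes_ready` is a hypothetical helper, demonstrated against canned output matching the table above:

```shell
# Succeed only when every node reported by `kubectl get nodes` is Ready.
# Real usage: until all_nodes_ready "$(kubectl get nodes)"; do sleep 5; done
all_nodes_ready() {
  printf '%s\n' "$1" | awk 'NR > 1 && $2 != "Ready" { bad = 1 } END { exit bad }'
}

sample='NAME         STATUS   ROLES           AGE     VERSION
k8s-master   Ready    control-plane   3m36s   v1.31.2
k8s-slave1   Ready    <none>          54s     v1.31.2
k8s-slave2   Ready    <none>          5s      v1.31.2'
all_nodes_ready "$sample" && echo "cluster ready"
```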

