Building a Highly Available k8s Cluster with kubeadm (keepalived + nginx + 3 masters)

Contents

    • Preface
    • Server preparation
    • Architecture overview
    • Environment initialization
    • Installing keepalived
    • Installing nginx
    • Initializing the k8s nodes
    • Installing docker
    • Initializing the control plane on master01
    • Joining master02 and master03 to the cluster
    • Joining node01 and node02 to the cluster
    • Checking the cluster
    • Configuring docker and kubectl command completion
    • Creating an application to verify the cluster
    • Verifying master-node high availability
    • Option 2: hosting keepalived+nginx externally

Preface

Environment: CentOS 7.6, docker-ce 20.10.9, Kubernetes v1.22.17
This post walks through installing and deploying a highly available k8s cluster on CentOS.

Server preparation

#Prepare 5 servers with the following roles
192.168.100.23 master01, etcd01, keepalived+nginx (vip: 192.168.100.200)
192.168.100.24 master02, etcd02, keepalived+nginx (vip: 192.168.100.200)
192.168.100.25 master03, etcd03, keepalived+nginx (vip: 192.168.100.200)
192.168.100.26 node01
192.168.100.27 node02

Architecture overview

keepalived+nginx provides high availability plus reverse proxying. To save servers, keepalived and nginx are deployed on the master nodes themselves.
keepalived manages a virtual IP (VIP) that is bound to whichever master currently holds it, while nginx reverse-proxies to the three master nodes. When the k8s cluster is initialized, the VIP is given as the control-plane endpoint, so after installation kubectl clients talk to vip:16443. That port is the one nginx listens on; nginx forwards the traffic to port 6443 on the three masters.
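
So the request path is: kubectl → vip:16443 (nginx stream proxy) → master0x:6443 (kube-apiserver). Once the cluster is up, you can confirm which endpoint kubectl actually talks to; this quick check is my addition, not one of the install steps:

#-v=6 makes kubectl log every HTTP round trip, including the server URL
#(you should see the VIP and port 16443, never an individual master's IP)
kubectl get nodes -v=6 2>&1 | grep -o 'https://[^ ]*' | sort -u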

Environment initialization

yum install ntp -y && systemctl start ntpd && systemctl enable ntpd;
yum install epel-release -y && yum install jq -y
yum install vim lsof net-tools zip unzip tree wget curl bash-completion pciutils gcc make lrzsz tcpdump bind-utils -y
sed -ri 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config 
setenforce 0
echo "检查是否关闭selinux:";getenforce && grep 'SELINUX=disabled' /etc/selinux/config
systemctl stop firewalld.service && systemctl disable firewalld.service
echo "检查是否关闭防火墙:";systemctl status firewalld.service | grep -E 'Active|disabled'
sed -ri 's/.*swap.*/#&/' /etc/fstab
swapoff -a
echo "检查swap是否关闭:";grep -i 'swap' /etc/fstab;free -h | grep -i 'swap'
systemctl stop NetworkManager.service && systemctl disable NetworkManager.service
echo "检查是否关闭NetworkManager:";systemctl status NetworkManager.service | grep -E 'Active|disabled'
#每台主机设置自己的主机名
hostnamectl set-hostname master01
hostnamectl set-hostname master02
hostnamectl set-hostname master03
hostnamectl set-hostname node01
hostnamectl set-hostname node02
#写入/etc/hosts文件
cat >> /etc/hosts <<EOF
192.168.100.23 master01
192.168.100.24 master02
192.168.100.25 master03
192.168.100.26 node01
192.168.100.27 node02
EOF
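
With /etc/hosts populated, a quick sanity check confirms every node resolves and answers (a minimal sketch, my addition; run it from any one host):

for h in master01 master02 master03 node01 node02; do
    ping -c 1 -W 1 "$h" >/dev/null && echo "$h OK" || echo "$h UNREACHABLE"
done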

Installing keepalived

Install keepalived on all three master nodes:

#Run on all 3 master nodes
yum install keepalived -y
cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf_bak
#The meaning of each keepalived parameter is explained at: https://blog.csdn.net/MssGuo/article/details/127330115
#No health check for port 16443 (or for a crashed nginx) is configured here; add a check script yourself if you need one
#Contents of the keepalived config file on master01
[root@master01 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_instance VI_1 {
    state MASTER
    interface ens192
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.100.200
    }
}

#Contents of the keepalived config file on master02
[root@master02 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens192
    virtual_router_id 51
    priority 60
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.100.200
    }
}

#Contents of the keepalived config file on master03
[root@master03 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens192
    virtual_router_id 51
    priority 40
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.100.200
    }
}
#Start keepalived on all 3 nodes
systemctl start keepalived.service && systemctl enable keepalived.service
systemctl status  keepalived.service
#Check the VIP: it currently sits on master01; neither master02 nor master03 holds it
ip a | grep '192.168.100.200'
#Test whether the VIP fails over: stop keepalived on master01
systemctl stop keepalived.service
#The VIP now drifts to master02, and neither master01 nor master03 holds it
#After restarting keepalived the VIP returns to master01, because the default configuration is preemptive mode; this matches the design
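
To watch the failover as it happens, a helper loop can be left running on each master (a small sketch, my addition; it assumes the ens192 interface and VIP from the configs above):

#Print a line every second while this node holds the VIP
while true; do
    if ip -4 addr show ens192 | grep -q '192.168.100.200'; then
        echo "$(date '+%T') $(hostname): VIP is here"
    fi
    sleep 1
done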

Installing nginx

Install nginx on the three master nodes:

#nginx needs the pcre library (Perl Compatible Regular Expressions);
#without it, nginx cannot use its URL rewrite module.
#Install nginx's dependencies
yum -y install gcc gcc-c++ make pcre pcre-devel zlib-devel zlib openssl-devel openssl
#Install nginx following the official docs: http://nginx.org/en/linux_packages.html#RHEL
yum install yum-utils
cat >/etc/yum.repos.d/nginx.repo<<'EOF'
[nginx-stable]
name=nginx stable repo
baseurl=http://nginx.org/packages/centos/$releasever/$basearch/
gpgcheck=1
enabled=1
gpgkey=https://nginx.org/keys/nginx_signing.key
module_hotfixes=true

[nginx-mainline]
name=nginx mainline repo
baseurl=http://nginx.org/packages/mainline/centos/$releasever/$basearch/
gpgcheck=1
enabled=0
gpgkey=https://nginx.org/keys/nginx_signing.key
module_hotfixes=true
EOF
yum-config-manager --enable nginx-mainline
yum install nginx -y
#Note: nginx is configured as a layer-4 (TCP) reverse proxy here. A layer-7 (HTTP) proxy proved problematic,
#most likely because the apiserver speaks TLS with client-certificate authentication, which an HTTP proxy
#would have to terminate; a plain TCP passthrough avoids the issue.
#Edit the main config file directly, adding the stream block below
[root@master01 nginx]# cat /etc/nginx/nginx.conf 
user  nginx;
worker_processes  auto;
error_log  /var/log/nginx/error.log notice;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

#Add the stream block below; keep everything else at its defaults
stream {
    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log  /var/log/nginx/k8s-access.log  main;
    upstream k8s-apiserver {
        server 192.168.100.23:6443;      #master01's IP and port 6443
        server 192.168.100.24:6443;      #master02's IP and port 6443
        server 192.168.100.25:6443;      #master03's IP and port 6443
    }
    server {
        listen 16443;                    #listen on 16443: nginx shares the machine with a master, so 6443 is already taken
        proxy_pass k8s-apiserver;        #reverse-proxy via the stream proxy_pass directive
    }
}

#The http block stays at its defaults
http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
    access_log  /var/log/nginx/access.log  main;
    sendfile        on;
    #tcp_nopush     on;
    keepalive_timeout  65;
    #gzip  on;
    include /etc/nginx/conf.d/*.conf;
}
[root@master01 nginx]# systemctl  enable --now  nginx
systemctl  status  nginx
netstat  -lntup| grep 16443
#Copy the nginx config file to master02 and master03
scp   /etc/nginx/nginx.conf  root@master02:/etc/nginx/
scp   /etc/nginx/nginx.conf   root@master03:/etc/nginx/
#Likewise start nginx on master02 and master03
systemctl  enable --now  nginx
systemctl  status  nginx
netstat  -lntup| grep 16443
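
nginx listens on 16443 even before the cluster exists, but the backends only answer once the apiservers are running. After the cluster is initialized (later in this guide), a quick check through the VIP looks like this (my addition; /version is served to unauthenticated clients in a default kubeadm cluster):

curl -k https://192.168.100.200:16443/version
#each proxied connection also shows up in the stream access log
tail -n 5 /var/log/nginx/k8s-access.log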

Initializing the k8s nodes

#Configure both master and node machines
touch /etc/sysctl.d/k8s.conf
cat >> /etc/sysctl.d/k8s.conf <<EOF 
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
net.ipv4.ip_forward=1
vm.swappiness=0
EOF
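
Note: the two bridge sysctls above only take effect if the br_netfilter kernel module is loaded, and on a fresh CentOS 7 it usually is not, which makes sysctl --system complain. Loading it now and on every boot is a small addition of mine, not part of the original steps:

modprobe br_netfilter
echo 'br_netfilter' > /etc/modules-load.d/k8s.conf
lsmod | grep br_netfilter    #verify the module is loaded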
sysctl --system
#Configure the k8s yum repo; do this on both master and node machines
cat >/etc/yum.repos.d/kubernetes.repo <<'EOF'
[kubernetes]
name = Kubernetes
baseurl = https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled = 1
gpgcheck = 0
repo_gpgcheck = 0
gpgkey = https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Installing docker

Install docker on every k8s node:

#Install docker on all k8s nodes, masters and workers alike
yum remove docker \
           docker-client \
           docker-client-latest \
           docker-common \
           docker-latest \
           docker-latest-logrotate \
           docker-logrotate \
           docker-engine \
           docker-ce
yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum list docker-ce --showduplicates | sort -r
#yum -y install docker-ce docker-ce-cli containerd.io
#Install docker-ce 20.10 rather than the newest release: k8s 1.22.17 may not support the latest docker version
yum -y install docker-ce-20.10.9 docker-ce-cli-20.10.9 containerd.io
mkdir /etc/docker/
cat>> /etc/docker/daemon.json <<'EOF'
{"registry-mirrors": ["https://ghj8urvv.mirror.aliyuncs.com"],"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
#The two settings above add a registry mirror and switch docker's cgroup driver to systemd.
#Get your own mirror address from Aliyun: every account has a different accelerator URL,
#so don't use mine; you can also skip the mirror entirely.
systemctl enable --now docker
systemctl status docker
#Check that the mirror and cgroup driver took effect
docker info |grep 'Cgroup Driver' ;docker info | grep -A 1 'Registry Mirrors'
#Install kubeadm, kubelet and kubectl on both master and node machines
yum list --showduplicates | grep  kubeadm
#Strictly speaking kubectl is only needed on the masters, but if you skip it yum pulls it in
#as a dependency anyway, possibly at a version other than 1.22.17 -- so install it everywhere
yum -y install kubelet-1.22.17 kubeadm-1.22.17 kubectl-1.22.17
systemctl enable kubelet
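
Since the cluster is pinned to specific versions, it can be worth locking the packages so a routine yum update cannot pull in something incompatible (optional, my addition; assumes the yum versionlock plugin):

yum install -y yum-plugin-versionlock
yum versionlock kubelet kubeadm kubectl docker-ce docker-ce-cli
yum versionlock list    #confirm the locks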

Initializing the control plane on master01

#Run the initialization on master01 only
#Notes:
#apiserver-advertise-address is master01's own IP address
#apiserver-bind-port is the api-server port, 6443 (which is also the default)
#control-plane-endpoint is the VIP plus the nginx port
#Run kubeadm init --help to see the full usage
#Dry run first: --dry-run only simulates the run to surface errors; nothing is actually installed
kubeadm init \
--apiserver-advertise-address=192.168.100.23 \
--apiserver-bind-port=6443 \
--control-plane-endpoint=192.168.100.200:16443 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.22.17 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16 --dry-run
#If the output shows no errors, drop the --dry-run flag and run it for real:
kubeadm init \
--apiserver-advertise-address=192.168.100.23 \
--apiserver-bind-port=6443 \
--control-plane-endpoint=192.168.100.200:16443 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.22.17 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16
#Open a second terminal and run docker images: you will see many k8s images being pulled
#If initialization fails, troubleshoot the error, then wipe the environment:
kubeadm reset
rm -rf /etc/cni
iptables -F
yum install ipvsadm -y
ipvsadm --clear
rm -rf $HOME/.kube/config
#Then rerun the kubeadm init command
#On success, initialization ends with output like the following
.......................................
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 192.168.100.200:16443 --token x1v36a.lqe5ul9zpzx55b10 \
        --discovery-token-ca-cert-hash sha256:869a5df85403ce519a47b6444dd120d88feccbf54356e510dc3c09f55a76f678 \
        --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.100.200:16443 --token x1v36a.lqe5ul9zpzx55b10 \
        --discovery-token-ca-cert-hash sha256:869a5df85403ce519a47b6444dd120d88feccbf54356e510dc3c09f55a76f678
[root@master01 nginx]#
#Follow the steps in the output above
#The init output tells you which command to run on the other master or node machines to join them to the cluster
#Note: the token in the kubeadm join command is only valid for 24h; once it expires,
#run kubeadm token create --print-join-command to generate a new one
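
For reference, regenerating join credentials after expiry works like this (standard kubeadm commands, shown as my addition; the printed token and hash will differ from the ones above):

#Prints a fresh worker join command (new token plus CA cert hash)
kubeadm token create --print-join-command
#For a control-plane join, also re-upload the certificates and note the printed key,
#then append to the join command: --control-plane --certificate-key <key>
kubeadm init phase upload-certs --upload-certs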

Joining master02 and master03 to the cluster

#First, pull the images on master02 and master03
#Check on master01 which images are needed
[root@master01 ~]# docker images
REPOSITORY                                                        TAG        IMAGE ID       CREATED        SIZE
registry.aliyuncs.com/google_containers/kube-apiserver            v1.22.17   2b5e9c96248f   9 months ago   128MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.22.17   c7ab721dfdae   9 months ago   122MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.22.17   d4893b67e97f   9 months ago   52.7MB
registry.aliyuncs.com/google_containers/kube-proxy                v1.22.17   77c8bfac1781   9 months ago   104MB
registry.aliyuncs.com/google_containers/etcd                      3.5.6-0    fce326961ae2   9 months ago   299MB
registry.aliyuncs.com/google_containers/coredns                   v1.8.4     8d147537fb7d   2 years ago    47.6MB
registry.aliyuncs.com/google_containers/pause                     3.5        ed210e3e4a5b   2 years ago    683kB
[root@master01 ~]# 
#Then pull those images on master02 and master03
docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.22.17
docker pull registry.aliyuncs.com/google_containers/kube-controller-manager:v1.22.17   
docker pull registry.aliyuncs.com/google_containers/kube-scheduler:v1.22.17 
docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.22.17 
docker pull registry.aliyuncs.com/google_containers/etcd:3.5.6-0  
docker pull registry.aliyuncs.com/google_containers/coredns:v1.8.4   
docker pull registry.aliyuncs.com/google_containers/pause:3.5
#Create the target directory on master02 and master03
mkdir /etc/kubernetes/pki/etcd -p
#On master01, copy master01's certificates over to master02 and master03
scp -rp /etc/kubernetes/pki/ca.*  master02:/etc/kubernetes/pki/
scp -rp /etc/kubernetes/pki/sa.*  master02:/etc/kubernetes/pki/
scp -rp /etc/kubernetes/pki/front-proxy-ca.*  master02:/etc/kubernetes/pki/
scp -rp /etc/kubernetes/pki/etcd/ca.*  master02:/etc/kubernetes/pki/etcd/
scp -rp /etc/kubernetes/admin.conf  master02:/etc/kubernetes/
scp -rp /etc/kubernetes/pki/ca.*  master03:/etc/kubernetes/pki/
scp -rp /etc/kubernetes/pki/sa.*  master03:/etc/kubernetes/pki/
scp -rp /etc/kubernetes/pki/front-proxy-ca.*  master03:/etc/kubernetes/pki/
scp -rp /etc/kubernetes/pki/etcd/ca.*  master03:/etc/kubernetes/pki/etcd/
scp -rp /etc/kubernetes/admin.conf  master03:/etc/kubernetes/
#Copy and paste the control-plane join command from the init output onto master02 and master03
kubeadm join 192.168.100.200:16443 --token x1v36a.lqe5ul9zpzx55b10 \
        --discovery-token-ca-cert-hash sha256:869a5df85403ce519a47b6444dd120d88feccbf54356e510dc3c09f55a76f678 \
        --control-plane
#On success the output looks like this; follow its instructions
[mark-control-plane] Marking the node master02 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

[root@master02 pki]# mkdir -p $HOME/.kube
[root@master02 pki]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master02 pki]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
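
With all three masters joined, the stacked etcd cluster should list three members. One way to verify (my addition; the certificate paths are kubeadm's defaults for a stacked etcd):

kubectl -n kube-system exec etcd-master01 -- etcdctl \
    --endpoints=https://127.0.0.1:2379 \
    --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --cert=/etc/kubernetes/pki/etcd/server.crt \
    --key=/etc/kubernetes/pki/etcd/server.key \
    member list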

Joining node01 and node02 to the cluster

#Worker nodes just run the join command; no extra configuration is needed
#Run the following on node01 and node02
kubeadm join 192.168.100.200:16443 --token x1v36a.lqe5ul9zpzx55b10 \
        --discovery-token-ca-cert-hash sha256:869a5df85403ce519a47b6444dd120d88feccbf54356e510dc3c09f55a76f678

Checking the cluster

With that, the 3-master + 2-node k8s cluster is created. Check it from any master node:

[root@master01 ~]# kubectl get node
NAME       STATUS     ROLES                  AGE     VERSION
master01   NotReady   control-plane,master   50m     v1.22.17
master02   NotReady   control-plane,master   6m58s   v1.22.17
master03   NotReady   control-plane,master   6m10s   v1.22.17
node01     NotReady   <none>                 39s     v1.22.17
node02     NotReady   <none>                 12s     v1.22.17
[root@master01 ~]# 
[root@master01 ~]# kubectl config view 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.100.200:16443    #note: clients point at the VIP and port 16443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: DATA+OMITTED
    client-key-data: DATA+OMITTED

#Install the flannel network plugin
wget https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml
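
The flannel DaemonSet takes a moment to roll out on all five nodes; you can block until it is ready (my addition; the namespace and DaemonSet name are the ones used by the upstream manifest, visible in the output below):

kubectl -n kube-flannel rollout status ds/kube-flannel-ds --timeout=300s
kubectl get nodes    #every node should now flip from NotReady to Ready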
[root@master01 nginx]# kubectl  get pod -A
NAMESPACE      NAME                               READY   STATUS    RESTARTS      AGE
kube-flannel   kube-flannel-ds-6tzzk              1/1     Running   0             5m43s
kube-flannel   kube-flannel-ds-8n6nc              1/1     Running   0             5m43s
kube-flannel   kube-flannel-ds-8rtgx              1/1     Running   0             5m43s
kube-flannel   kube-flannel-ds-bwwrv              1/1     Running   0             5m43s
kube-flannel   kube-flannel-ds-nmbzq              1/1     Running   0             5m43s
kube-system    coredns-7f6cbbb7b8-mf22c           1/1     Running   0             60m
kube-system    coredns-7f6cbbb7b8-n2w94           1/1     Running   0             60m
kube-system    etcd-master01                      1/1     Running   4             60m
kube-system    etcd-master02                      1/1     Running   0             17m
kube-system    etcd-master03                      1/1     Running   0             16m
kube-system    kube-apiserver-master01            1/1     Running   4             60m
kube-system    kube-apiserver-master02            1/1     Running   0             17m
kube-system    kube-apiserver-master03            1/1     Running   1 (16m ago)   16m
kube-system    kube-controller-manager-master01   1/1     Running   5 (17m ago)   60m
kube-system    kube-controller-manager-master02   1/1     Running   0             17m
kube-system    kube-controller-manager-master03   1/1     Running   0             15m
kube-system    kube-proxy-6lzs9                   1/1     Running   0             11m
kube-system    kube-proxy-9tljk                   1/1     Running   0             17m
kube-system    kube-proxy-jzq49                   1/1     Running   0             60m
kube-system    kube-proxy-mk5w8                   1/1     Running   0             10m
kube-system    kube-proxy-rhmnv                   1/1     Running   0             16m
kube-system    kube-scheduler-master01            1/1     Running   5 (17m ago)   60m
kube-system    kube-scheduler-master02            1/1     Running   0             17m
kube-system    kube-scheduler-master03            1/1     Running   0             16m
[root@master01 nginx]# kubectl  get nodes
NAME       STATUS   ROLES                  AGE   VERSION
master01   Ready    control-plane,master   61m   v1.22.17
master02   Ready    control-plane,master   17m   v1.22.17
master03   Ready    control-plane,master   16m   v1.22.17
node01     Ready    <none>                 11m   v1.22.17
node02     Ready    <none>                 11m   v1.22.17
[root@master01 nginx]# 

Configuring docker and kubectl command completion

#Configure docker command completion on every node (bash-completion plus the docker-compose completion script)
yum install bash-completion -y
curl -L https://raw.githubusercontent.com/docker/compose/1.24.1/contrib/completion/bash/docker-compose -o /etc/bash_completion.d/docker-compose
source /etc/bash_completion.d/docker-compose
#Configure kubectl completion on the master nodes
yum install -y bash-completion
echo 'source /usr/share/bash-completion/bash_completion' >>/root/.bashrc
echo 'source  <(kubectl completion bash)' >>/root/.bashrc
source /root/.bashrc
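
Optionally (my addition, not in the original steps), a one-letter alias that keeps completion working saves a lot of typing:

echo 'alias k=kubectl' >>/root/.bashrc
echo 'complete -o default -F __start_kubectl k' >>/root/.bashrc
source /root/.bashrc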

Creating an application to verify the cluster

kubectl create deployment httpd --image=httpd
kubectl expose deployment httpd --port=80 --type=NodePort
#Verify that it responds
[root@master01 nginx]# kubectl  get svc httpd
NAME    TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
httpd   NodePort   10.106.207.85   <none>        80:30251/TCP   4s
[root@master01 nginx]# curl  master02:30251
<html><body><h1>It works!</h1></body></html>
[root@master01 nginx]# 
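Because kube-proxy opens the NodePort on every node, the same service should answer on all five hosts. A quick loop to confirm (my addition; port 30251 comes from the output above and will differ on your install):

for h in master01 master02 master03 node01 node02; do
    echo -n "$h: "; curl -s --max-time 2 "$h:30251" | head -n 1
done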

Verifying master-node high availability

#Test by powering master01 off
#Observation: with any one master powered off, kubectl get nodes against the cluster only works intermittently.
#The root cause was not pinned down. Since etcd keeps quorum with 2 of 3 members alive, a likely culprit is
#nginx still forwarding some connections to the powered-off master until those connection attempts time out.
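
One hedged mitigation (my suggestion, not something verified in the original setup): have nginx eject a failing backend quickly and give up on slow connects, so that few requests ever reach the dead master. The stream upstream would be tuned like this (log settings omitted for brevity):

stream {
    upstream k8s-apiserver {
        server 192.168.100.23:6443 max_fails=2 fail_timeout=10s;
        server 192.168.100.24:6443 max_fails=2 fail_timeout=10s;
        server 192.168.100.25:6443 max_fails=2 fail_timeout=10s;
    }
    server {
        listen 16443;
        proxy_connect_timeout 2s;    #give up fast on a dead backend
        proxy_pass k8s-apiserver;
    }
}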

Option 2: hosting keepalived+nginx externally

If you have enough servers, it is better to give keepalived+nginx two dedicated machines, like so:

#Prepare 8 servers with the following roles
192.168.100.21 keepalived+nginx (vip: 192.168.100.200)
192.168.100.22 keepalived+nginx (vip: 192.168.100.200)
192.168.100.23 master01, etcd01
192.168.100.24 master02, etcd02
192.168.100.25 master03, etcd03
192.168.100.26 node01
192.168.100.27 node02

The remaining installation steps are similar to those above.
