Kubeadm - K8S 1.20 - High-Availability Cluster Deployment (Blog)

Table of Contents

  • Kubeadm - K8S 1.20 - High-Availability Cluster Deployment
    • I. Environment Preparation
      • 1. System Setup
    • II. Install Docker on All Nodes
    • III. Install kubeadm, kubelet, and kubectl on All Nodes
      • 1. Define the Kubernetes Repository
      • 2. Install and Configure the High-Availability Components
    • IV. Deploy the K8S Cluster
    • V. Troubleshooting
      • 1. The Cluster Join Token Has Expired
      • 2. Master Nodes Cannot Run Non-System Pods
      • 3. Changing the Default NodePort Range
      • 4. External etcd Deployment Configuration

Kubeadm - K8S 1.20 - High-Availability Cluster Deployment

I. Environment Preparation

1. System Setup

Notes:
● Master nodes require more than 2 CPU cores.
● The newest version is not necessarily better: compared with older releases its core features are stable, but newly added features and interfaces tend to be relatively unstable.
● Once you learn the HA deployment for one version, the procedure for other versions is much the same.
● Upgrade the hosts to CentOS 7.9 where possible.
● Upgrade the kernel to a stable release such as 4.19+.
● When choosing a K8S version, prefer a patch release of 1.xx.5 or later (these are usually the more stable ones).

// On all nodes: disable the firewall, SELinux, and swap
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i 's/enforcing/disabled/' /etc/selinux/config
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab

// Set the hostnames
hostnamectl set-hostname master01
hostnamectl set-hostname master02
hostnamectl set-hostname master03
hostnamectl set-hostname node01
hostnamectl set-hostname node02

// On all nodes: edit the hosts file
vim /etc/hosts
192.168.82.100 master01
192.168.82.101 master02
192.168.82.102 master03
192.168.82.103 node01
192.168.82.104 node02

// On all nodes: synchronize time
yum -y install ntpdate
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' >/etc/timezone
ntpdate time2.aliyun.com

systemctl enable --now crond
crontab -e
*/30 * * * * /usr/sbin/ntpdate time2.aliyun.com

// On all nodes: raise the Linux resource limits
vim /etc/security/limits.conf
* soft nofile 65536
* hard nofile 131072
* soft nproc 65535
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited

// On all nodes: upgrade the kernel
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm -O /opt/kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm -O /opt/kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm

cd /opt/
yum localinstall -y kernel-ml*

# Change the default boot kernel
grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg
grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"
grubby --default-kernel
reboot
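
After the reboot, confirm that the new kernel is active (the version string matches the kernel-ml RPM installed above):
uname -r
4.19.12-1.el7.elrepo.x86_64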

// Tune the kernel parameters
cat > /etc/sysctl.d/k8s.conf <<EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF

# Apply the parameters
sysctl --system

// Load the ip_vs modules
for i in $(ls /usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs|grep -o "^[^.]*");do echo $i; /sbin/modinfo -F filename $i >/dev/null 2>&1 && /sbin/modprobe $i;done
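
The loop above only loads the ipvs modules into the running kernel. A minimal sketch to make them load on every boot (the module list is an assumption covering kube-proxy's common schedulers; on the 4.19 kernel the conntrack module is named nf_conntrack rather than nf_conntrack_ipv4):

cat > /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
systemctl enable --now systemd-modules-load.service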

II. Install Docker on All Nodes

yum install -y yum-utils device-mapper-persistent-data lvm2 
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo 
yum install -y docker-ce docker-ce-cli containerd.io

mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://xxxxxxx.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {"max-size": "500m", "max-file": "3"}
}
EOF

systemctl daemon-reload
systemctl restart docker.service
systemctl enable docker.service

docker info | grep "Cgroup Driver"
Cgroup Driver: systemd

III. Install kubeadm, kubelet, and kubectl on All Nodes

1. Define the Kubernetes Repository

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum install -y kubelet-1.20.15 kubeadm-1.20.15 kubectl-1.20.15
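
A quick sanity check that the pinned 1.20.15 packages were installed:
kubeadm version -o short
kubelet --version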

# Configure kubelet to use the Aliyun pause image
cat > /etc/sysconfig/kubelet <<EOF
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.2"
EOF

// Enable kubelet to start on boot
systemctl enable --now kubelet
# Note: kubelet will keep restarting until kubeadm init/join generates its configuration; this is expected

2. Install and Configure the High-Availability Components

// Deploy haproxy on all master nodes
yum -y install haproxy keepalived

cat > /etc/haproxy/haproxy.cfg << EOF
global
    log         127.0.0.1 local0 info
    log         127.0.0.1 local1 warning
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    stats socket /var/lib/haproxy/stats

defaults
    mode                    tcp
    log                     global
    option                  tcplog
    option                  dontlognull
    option                  redispatch
    retries                 3
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout check           10s
    maxconn                 3000

frontend monitor-in
    bind *:33305
    mode http
    option httplog
    monitor-uri /monitor

frontend k8s-master
    bind *:6444
    mode tcp
    option tcplog
    default_backend k8s-master

backend k8s-master
    mode tcp
    option tcplog
    option tcp-check
    balance roundrobin
    server k8s-master1 192.168.82.100:6443 check inter 10000 fall 2 rise 2 weight 1
    server k8s-master2 192.168.82.101:6443 check inter 10000 fall 2 rise 2 weight 1
    server k8s-master3 192.168.82.102:6443 check inter 10000 fall 2 rise 2 weight 1
EOF
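
Before starting the service, you can have haproxy validate the file (-c only parses the configuration; it does not start the proxy):
haproxy -c -f /etc/haproxy/haproxy.cfg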

// Deploy keepalived on all master nodes
yum -y install keepalived

cd /etc/keepalived/
vim keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_HA1                   # Router identifier; configure a different value on each node
}

vrrp_script chk_haproxy {
    script "/etc/keepalived/check_haproxy.sh"
    interval 2
    weight 2
}

vrrp_instance VI_1 {
    state MASTER                        # Instance state, MASTER/BACKUP; set BACKUP in the backup nodes' config file
    interface ens33
    virtual_router_id 51
    priority 100                        # Initial priority of this node; set a lower value on the backup nodes
    advert_int 1
    virtual_ipaddress {
        192.168.82.200                  # Set the VIP address
    }
    track_script {
        chk_haproxy
    }
}

vim check_haproxy.sh
#!/bin/bash
# Stop keepalived (releasing the VIP) if haproxy is no longer running
if ! killall -0 haproxy; then
    systemctl stop keepalived
fi

chmod +x check_haproxy.sh

systemctl enable --now haproxy
systemctl enable --now keepalived
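
You can confirm which node currently holds the VIP (ens33 and the address come from keepalived.conf above). Stopping haproxy on the MASTER node should move the VIP to a backup node:
ip addr show ens33 | grep 192.168.82.200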

IV. Deploy the K8S Cluster

// Set up the cluster initialization configuration file on master01
kubeadm config print init-defaults > /opt/kubeadm-config.yaml

cd /opt/
vim kubeadm-config.yaml
......
11 localAPIEndpoint:
12   advertiseAddress: 192.168.82.100       # IP address of the current master node
13   bindPort: 6443                         # Note: must match the haproxy backend port
21 apiServer:
22   certSANs:                              # Under apiServer, add a certSANs list containing every master node IP and the cluster VIP
23   - 192.168.82.200
24   - 192.168.82.100
25   - 192.168.82.101
26   - 192.168.82.102
30 clusterName: kubernetes
31 controlPlaneEndpoint: "192.168.82.200:6444"    # The cluster VIP; the port must match the haproxy frontend port
32 controllerManager: {}
38 imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers    # Image download registry
39 kind: ClusterConfiguration
40 kubernetesVersion: v1.20.15              # The Kubernetes version
41 networking:
42   dnsDomain: cluster.local
43   podSubnet: "10.244.0.0/16"             # Pod subnet; 10.244.0.0/16 matches flannel's default network
44   serviceSubnet: 10.96.0.0/16            # Service subnet
45 scheduler: {}
# Append the following at the end of the file
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs                                  # Change kube-proxy's default scheduling mode to ipvs

# Regenerate the cluster initialization configuration file
kubeadm config migrate --old-config kubeadm-config.yaml --new-config new.yaml

// All nodes pull the images
# Copy the yaml config file to the other hosts and pull the images using it
for i in master02 master03 node01 node02; do scp /opt/new.yaml $i:/opt/; done

kubeadm config images pull --config /opt/new.yaml

// Initialize the cluster on the master01 node
kubeadm init --config new.yaml --upload-certs | tee kubeadm-init.log
# Output:
.........
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

# The command for master nodes to join the cluster (save it!)
  kubeadm join 192.168.82.200:6444 --token 7t2weq.bjbawausm0jaxury \
    --discovery-token-ca-cert-hash sha256:e76e4525ca29a9ccd5c24142a724bdb6ab86512420215242c4313fb830a4eb98 \
    --control-plane --certificate-key 0f2a7ff2c46ec172f834e237fcca8a02e7c29500746594c25d995b78c92dde96

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

# The command for node (worker) nodes to join the cluster (save it!)
kubeadm join 192.168.82.200:6444 --token 7t2weq.bjbawausm0jaxury \
    --discovery-token-ca-cert-hash sha256:e76e4525ca29a9ccd5c24142a724bdb6ab86512420215242c4313fb830a4eb98

# If initialization fails, clean up with:
kubeadm reset -f
ipvsadm --clear
rm -rf ~/.kube
Then run the initialization again.
// Configure the environment on master01
# Configure kubectl
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

# Modify the controller-manager and scheduler config files
vim /etc/kubernetes/manifests/kube-scheduler.yaml 
vim /etc/kubernetes/manifests/kube-controller-manager.yaml
......
#- --port=0          # Search for port=0 and comment out that line

systemctl restart kubelet
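
On 1.20, kubectl get cs probes the component health ports that --port=0 had disabled, so once kubelet has recreated the static pods, both controller-manager and scheduler should report Healthy:
kubectl get cs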

# Deploy the flannel network plugin
Upload the flannel image flannel.tar and the CNI plugins package cni-plugins-linux-amd64-v0.8.6.tgz to /opt on all nodes, and upload the kube-flannel.yml file to the master nodes.
cd /opt
docker load < flannel.tar

mv /opt/cni /opt/cni_bak
mkdir -p /opt/cni/bin
tar zxvf cni-plugins-linux-amd64-v0.8.6.tgz -C /opt/cni/bin

kubectl apply -f kube-flannel.yml
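
You can watch the flannel DaemonSet come up on every node (the app=flannel label matches the stock kube-flannel.yml of this era; adjust it if your copy differs):
kubectl get pods -n kube-system -l app=flannel -o wide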

// All nodes join the cluster
# The master nodes join the cluster
kubeadm join 192.168.82.200:6444 --token 7t2weq.bjbawausm0jaxury \
    --discovery-token-ca-cert-hash sha256:e76e4525ca29a9ccd5c24142a724bdb6ab86512420215242c4313fb830a4eb98 \
    --control-plane --certificate-key 0f2a7ff2c46ec172f834e237fcca8a02e7c29500746594c25d995b78c92dde96

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# The node (worker) nodes join the cluster
kubeadm join 192.168.82.200:6444 --token 7t2weq.bjbawausm0jaxury \
    --discovery-token-ca-cert-hash sha256:e76e4525ca29a9ccd5c24142a724bdb6ab86512420215242c4313fb830a4eb98

# View the cluster information on master01
kubectl get nodes
NAME       STATUS   ROLES                  AGE    VERSION
master01   Ready    control-plane,master   2h5m   v1.20.15
master02   Ready    control-plane,master   2h5m   v1.20.15
master03   Ready    control-plane,master   2h5m   v1.20.15
node01     Ready    <none>                 2h5m   v1.20.15
node02     Ready    <none>                 2h5m   v1.20.15

kubectl get pod -n kube-system
NAME                               READY   STATUS    RESTARTS   AGE
coredns-74ff55c5b-4fg44            1/1     Running   2          2h5m
coredns-74ff55c5b-jsdxz            1/1     Running   0          2h5m
etcd-master01                      1/1     Running   1          2h5m
etcd-master02                      1/1     Running   1          2h5m
etcd-master03                      1/1     Running   1          2h5m
kube-apiserver-master01            1/1     Running   1          2h5m
kube-apiserver-master02            1/1     Running   1          2h5m
kube-apiserver-master03            1/1     Running   1          2h5m
kube-controller-manager-master01   1/1     Running   3          2h5m
kube-controller-manager-master02   1/1     Running   1          2h5m
kube-controller-manager-master03   1/1     Running   2          2h5m
kube-flannel-ds-8qtx6              1/1     Running   2          2h4m
kube-flannel-ds-lmzdz              1/1     Running   0          2h4m
kube-flannel-ds-nb9qx              1/1     Running   1          2h4m
kube-flannel-ds-t4l4x              1/1     Running   1          2h4m
kube-flannel-ds-v592x              1/1     Running   1          2h4m
kube-proxy-6gd5j                   1/1     Running   1          2h5m
kube-proxy-f8k96                   1/1     Running   3          2h5m
kube-proxy-h7nrf                   1/1     Running   1          2h5m
kube-proxy-j96b6                   1/1     Running   1          2h5m
kube-proxy-mgmx6                   1/1     Running   0          2h5m
kube-scheduler-master01            1/1     Running   1          2h5m
kube-scheduler-master02            1/1     Running   2          2h5m
kube-scheduler-master03            1/1     Running   2          2h5m

V. Troubleshooting

1. The Cluster Join Token Has Expired

Note: the token generated during cluster initialization is valid for 24 hours, after which it expires and a new token must be generated to join the cluster. (A regenerated token also defaults to a 24-hour TTL; it is the uploaded certificates used by --certificate-key that are kept for only 2 hours.)

1.1 Generate a token for node (worker) nodes to join the cluster
kubeadm token create --print-join-command
kubeadm join 192.168.82.200:16443 --token menw99.1hbsurvl5fiz119n --discovery-token-ca-cert-hash sha256:e76e4525ca29a9ccd5c24142a724bdb6ab86512420215242c4313fb830a4eb98

1.2 Generate a --certificate-key for master nodes to join the cluster
kubeadm init phase upload-certs  --upload-certs
I1105 12:33:08.201601   93226 version.go:254] remote version is much newer: v1.22.3; falling back to: stable-1.20
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
38dba94af7a38700c3698b8acdf8e23f273be07877f5c86f4977dc023e333deb

# Command for a master node to join the cluster
kubeadm join 192.168.82.200:16443 --token menw99.1hbsurvl5fiz119n --discovery-token-ca-cert-hash sha256:e76e4525ca29a9ccd5c24142a724bdb6ab86512420215242c4313fb830a4eb98 \
    --control-plane --certificate-key 38dba94af7a38700c3698b8acdf8e23f273be07877f5c86f4977dc023e333deb
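
You can list the existing tokens and their remaining TTL at any time:
kubeadm token list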

2. Master Nodes Cannot Run Non-System Pods

Explanation: master nodes carry a taint that keeps non-system Pods from being scheduled on them. In a test environment, you can remove the taint to free up schedulable resources.

2.1 View the taints
kubectl  describe node -l node-role.kubernetes.io/master=  | grep Taints
Taints:             node-role.kubernetes.io/master:NoSchedule
Taints:             node-role.kubernetes.io/master:NoSchedule
Taints:             node-role.kubernetes.io/master:NoSchedule

2.2 Remove the taints
kubectl  taint node  -l node-role.kubernetes.io/master node-role.kubernetes.io/master:NoSchedule-
node/master01 untainted
node/master02 untainted
node/master03 untainted

kubectl describe node -l node-role.kubernetes.io/master= | grep Taints
Taints:             <none>
Taints:             <none>
Taints:             <none>
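
A quick way to confirm that the masters are now schedulable (the nginx image is just an example; with the taints removed, the pod may land on any of the five nodes):
kubectl run test-nginx --image=nginx
kubectl get pod test-nginx -o wide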

3. Changing the Default NodePort Range

Background: by default, K8S allocates NodePort service ports from a range around 30000 for external access. You can change this range by editing the apiserver manifest.

# The error
The Service "nginx-svc" is invalid: spec.ports[0].nodePort: Invalid value: 80: provided port is not in the valid range. The range of valid ports is 30000-32767

[root@k8s-master1 ~]# vim /etc/kubernetes/manifests/kube-apiserver.yaml
- --service-cluster-ip-range=10.96.0.0/16
- --service-node-port-range=1-65535    # Find the line above and add this flag below it

# No manual restart is needed; kubelet picks up the manifest change and recreates the apiserver automatically
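
A minimal Service manifest that the apiserver would now accept (the name and selector are illustrative and assume pods labeled app=nginx exist):
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80          # ClusterIP port
    targetPort: 80    # container port
    nodePort: 80      # valid now that the range has been widened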

4. External etcd Deployment Configuration

kubeadm config print init-defaults > /opt/kubeadm-config.yaml

cd /opt/
vim kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.82.104
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:
  - 10.96.0.1
  - 127.0.0.1
  - localhost
  - kubernetes
  - kubernetes.default
  - kubernetes.default.svc
  - kubernetes.default.svc.cluster.local
  - 192.168.82.200
  - 192.168.82.100
  - 192.168.82.101
  - 192.168.82.102
  - master01
  - master02
  - master03
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 192.168.82.200:16443
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  external:                                # Use an external etcd
    endpoints:
    - https://192.168.82.100:2379
    - https://192.168.82.101:2379
    - https://192.168.82.102:2379
    caFile: /opt/etcd/ssl/ca.pem           # The etcd certificates must be copied to every master node
    certFile: /opt/etcd/ssl/server.pem
    keyFile: /opt/etcd/ssl/server-key.pem
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.20.15
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/16
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
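
Before running kubeadm init against the external etcd, it is worth confirming the cluster is healthy first (a sketch, assuming etcdctl v3 is installed and the certificate paths above are in place):
ETCDCTL_API=3 etcdctl \
    --endpoints=https://192.168.82.100:2379,https://192.168.82.101:2379,https://192.168.82.102:2379 \
    --cacert=/opt/etcd/ssl/ca.pem \
    --cert=/opt/etcd/ssl/server.pem \
    --key=/opt/etcd/ssl/server-key.pem \
    endpoint health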
