A Detailed Guide to Deploying a Kubernetes 1.13.1 Cluster with Kubeadm




Overview

There are several ways to build a Kubernetes cluster. In my earlier article "Building a Personal Private Cloud with the K8S Stack (Part: K8S Cluster Setup)" I used the binary installation method. While that approach helps you understand how a k8s cluster fits together, it is quite tedious. kubeadm, by contrast, is the official Kubernetes tool for quickly deploying a cluster. It has matured considerably over time, is very approachable, and greatly simplifies the whole process, hence this detailed walkthrough.

Note: this article was first published on My Personal Blog: CodeSheep·程序羊. You are welcome to visit.


Node Planning

This article deploys a three-node Kubernetes cluster with one master and two slaves, planned as follows:

Hostname      IP              Role
k8s-master    192.168.39.79   k8s master node
k8s-node-1    192.168.39.77   k8s slave node
k8s-node-2    192.168.39.78   k8s slave node

The software versions on each node:

  • OS: CentOS-7.4-64Bit
  • Docker version: 1.13.1
  • Kubernetes version: 1.13.1

All nodes need the following components installed:

  • Docker: needs no introduction
  • kubelet: runs on every node and is responsible for starting containers and Pods
  • kubeadm: initializes the cluster
  • kubectl: the k8s command-line tool, used to deploy and manage applications and to CRUD all kinds of resources

Preparation

  • Disable the firewall on all nodes
systemctl disable firewalld.service
systemctl stop firewalld.service
  • Disable SELinux
setenforce 0
# then edit /etc/selinux/config and set:
SELINUX=disabled
  • Disable swap on all nodes
swapoff -a
  • Set the hostname on each node (run the matching command on each machine)
hostnamectl --static set-hostname k8s-master
hostnamectl --static set-hostname k8s-node-1
hostnamectl --static set-hostname k8s-node-2
  • Add the hostname/IP mappings to hosts resolution on all nodes

Edit the /etc/hosts file and add the following entries:

192.168.39.79 k8s-master
192.168.39.77 k8s-node-1
192.168.39.78 k8s-node-2
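One caveat on the swap step above: `swapoff -a` only disables swap until the next reboot. To keep it off permanently, the swap entry in /etc/fstab must be commented out as well. A minimal sketch of that (it operates on a temporary sample copy so it is safe to try anywhere; on a real node you would run the same `sed` against /etc/fstab itself):

```shell
# Comment out the swap line so swap stays off after a reboot.
# We use a temporary copy of a sample fstab; on a real node the
# target would be /etc/fstab.
tmpfstab="$(mktemp)"
cat >"$tmpfstab" <<'EOF'
/dev/mapper/centos-root /        xfs     defaults 0 0
/dev/mapper/centos-swap swap     swap    defaults 0 0
EOF
sed -i '/ swap / s/^[^#]/#&/' "$tmpfstab"
grep swap "$tmpfstab"
```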

Component Installation

0x01. Install Docker (all nodes)

Not repeated here; Docker installation is well documented elsewhere.

0x02. Install kubelet, kubeadm and kubectl (all nodes)

  • First set up the yum repo
cat >/etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
EOF
  • Then run the following to install
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet

(Screenshot: installing kubelet, kubeadm and kubectl)


Master Node Configuration

0x01. Initialize the k8s cluster

To cope with unreliable access to k8s.gcr.io from networks in mainland China, manually pull the required images from mirror repositories in advance and retag them:

docker pull mirrorgooglecontainers/kube-apiserver:v1.13.1
docker pull mirrorgooglecontainers/kube-controller-manager:v1.13.1
docker pull mirrorgooglecontainers/kube-scheduler:v1.13.1
docker pull mirrorgooglecontainers/kube-proxy:v1.13.1
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd:3.2.24
docker pull coredns/coredns:1.2.6
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64

docker tag mirrorgooglecontainers/kube-apiserver:v1.13.1 k8s.gcr.io/kube-apiserver:v1.13.1
docker tag mirrorgooglecontainers/kube-controller-manager:v1.13.1 k8s.gcr.io/kube-controller-manager:v1.13.1
docker tag mirrorgooglecontainers/kube-scheduler:v1.13.1 k8s.gcr.io/kube-scheduler:v1.13.1
docker tag mirrorgooglecontainers/kube-proxy:v1.13.1 k8s.gcr.io/kube-proxy:v1.13.1
docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag mirrorgooglecontainers/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24
docker tag coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64

docker rmi mirrorgooglecontainers/kube-apiserver:v1.13.1
docker rmi mirrorgooglecontainers/kube-controller-manager:v1.13.1  
docker rmi mirrorgooglecontainers/kube-scheduler:v1.13.1           
docker rmi mirrorgooglecontainers/kube-proxy:v1.13.1               
docker rmi mirrorgooglecontainers/pause:3.1                        
docker rmi mirrorgooglecontainers/etcd:3.2.24                      
docker rmi coredns/coredns:1.2.6
docker rmi registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64

(Screenshot: the prepared images)

Then, on the Master node, run the following command to initialize the k8s cluster:

kubeadm init --kubernetes-version=v1.13.1 --apiserver-advertise-address 192.168.39.79 --pod-network-cidr=10.244.0.0/16
  • --kubernetes-version: specifies the k8s version
  • --apiserver-advertise-address: specifies which of the Master's network interfaces to use for communication; if omitted, kubeadm automatically picks the interface with the default gateway
  • --pod-network-cidr: specifies the Pod network range. The value depends on the network add-on in use; this article uses the classic flannel network add-on.
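As an aside, the same flags can be expressed as a kubeadm configuration file (the init log below was in fact produced with `kubeadm init --config kubeadm-config.yaml`). A sketch of an equivalent config, assuming the v1beta1 schema that ships with kubeadm 1.13:

```yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.39.79
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.1
networking:
  podSubnet: 10.244.0.0/16
```

Note that the "unknown field podSubnet" warning in the log below was caused by a stray non-breaking space ("\u00a0") in the original YAML; plain ASCII indentation avoids it.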

The command prints a detailed log of the cluster initialization process:

[root@localhost ~]# kubeadm init --config kubeadm-config.yaml
W1224 11:01:25.408209   10137 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:"kubeadm.k8s.io", Version:"v1beta1", Kind:"ClusterConfiguration"}: error unmarshaling JSON: while decoding JSON: json: unknown field "\u00a0 podSubnet"
[init] Using Kubernetes version: v1.13.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [192.168.39.79 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [192.168.39.79 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [localhost.localdomain kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.39.79]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 24.005638 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "localhost.localdomain" as an annotation
[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 26uprk.t7vpbwxojest0tvq
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxyYour Kubernetes master has initialized successfully!To start using your cluster, you need to run the following as a regular user:mkdir -p $HOME/.kubesudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/configsudo chown $(id -u):$(id -g) $HOME/.kube/configYou should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:https://kubernetes.io/docs/concepts/cluster-administration/addons/You can now join any number of machines by running the following on each node
as root:kubeadm join 192.168.39.79:6443 --token 26uprk.t7vpbwxojest0tvq --discovery-token-ca-cert-hash sha256:028727c0c21f22dd29d119b080dcbebb37f5545e7da1968800140ffe225b0123[root@localhost ~]#

0x02. Configure kubectl

On the Master, run the following as the root user to configure kubectl:

echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
source /etc/profile 
echo $KUBECONFIG

0x03. Install the Pod network

A Pod network is a prerequisite for Pods to communicate with each other. k8s supports many network add-ons; here we again go with the classic flannel.

  • First set the kernel parameter:
sysctl net.bridge.bridge-nf-call-iptables=1
  • Then run the following on the Master node:
kubectl apply -f kube-flannel.yaml

The kube-flannel.yaml file is available here.
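One thing to watch: a value set with `sysctl` as above does not survive a reboot. A minimal sketch of persisting it (written to a temporary file here so it is safe to run anywhere; on a real node the target would be /etc/sysctl.d/k8s.conf, followed by `sysctl --system`):

```shell
# Persist the bridge netfilter keys. The real target path on a node would
# be /etc/sysctl.d/k8s.conf; we write to a temp file so this sketch is
# harmless to run on any machine.
conf="$(mktemp)"
cat >"$conf" <<'EOF'
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
grep -c '^net.bridge' "$conf"
```

On the node itself you would then run `sysctl --system` to load the file.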

Once the Pod network is installed, run the following to check whether the CoreDNS Pods are up and running; once they are, you can continue with the next steps:

kubectl get pods --all-namespaces -o wide

(Screenshot: checking that all Pods started normally)

We can also see that the master node is now ready:

kubectl get nodes

(Screenshot: master node status)


Add the Slave Nodes

Run the following on each of the two Slave nodes to join the k8s cluster that is already up on the Master:

kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>

If you have forgotten the token, retrieve it on the Master with:

kubeadm token list
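If the --discovery-token-ca-cert-hash value is lost as well, it can be recomputed from the cluster CA certificate with a standard openssl pipeline. The sketch below generates a throwaway self-signed certificate so the pipeline can be tried anywhere; on the real Master the input would be /etc/kubernetes/pki/ca.crt:

```shell
# Generate a throwaway CA cert purely for demonstration. On the Master,
# skip this step and use /etc/kubernetes/pki/ca.crt as the input instead.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
  -out /tmp/demo-ca.crt -days 1 -subj "/CN=demo-ca" 2>/dev/null

# Hash of the DER-encoded public key, as expected by kubeadm join.
openssl x509 -pubkey -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //'
```

Prefix the resulting 64-character hex string with `sha256:` when passing it to kubeadm join.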

The kubeadm join command above produces output like the following:

[root@localhost ~]# kubeadm join 192.168.39.79:6443 --token yndddp.oamgloerxuune80q --discovery-token-ca-cert-hash sha256:7a45c40b5302aba7d8b9cbd3afc6d25c6bb8536dd6317aebcd2909b0427677c8
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "192.168.39.79:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.39.79:6443"
[discovery] Requesting info from "https://192.168.39.79:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.39.79:6443"
[discovery] Successfully established connection with API Server "192.168.39.79:6443"
[join] Reading configuration from the cluster…
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap…
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "localhost.localdomain" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Verification

  • Check node status
kubectl get nodes

(Screenshot: node status)

  • Check the status of all Pods
kubectl get pods --all-namespaces -o wide

(Screenshot: status of all Pods)

The cluster is now up and running. Next, let's look at how to tear it down cleanly.


Tear Down the Cluster

First drain and delete each node:

kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>

Once the nodes have been removed, reset the cluster with:

kubeadm reset

Install the Dashboard

Just as we would pair elasticsearch with a visual management tool, it is a good idea to give the k8s cluster a visual management console too, to make administration easier.

So next we install kubernetes-dashboard v1.10.0 for visual cluster management.

  • First manually pull the image and retag it (on all nodes):
docker pull registry.cn-qingdao.aliyuncs.com/wangxiaoke/kubernetes-dashboard-amd64:v1.10.0
docker tag registry.cn-qingdao.aliyuncs.com/wangxiaoke/kubernetes-dashboard-amd64:v1.10.0 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
docker image rm registry.cn-qingdao.aliyuncs.com/wangxiaoke/kubernetes-dashboard-amd64:v1.10.0
  • Install the dashboard:
kubectl create -f dashboard.yaml

The dashboard.yaml file is available here.

  • Check whether the dashboard Pod started correctly; if so, the installation succeeded:
 kubectl get pods --namespace=kube-system
[root@k8s-master ~]# kubectl get pods --namespace=kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
coredns-86c58d9df4-4rds2                1/1     Running   0          81m
coredns-86c58d9df4-rhtgq                1/1     Running   0          81m
etcd-k8s-master                         1/1     Running   0          80m
kube-apiserver-k8s-master               1/1     Running   0          80m
kube-controller-manager-k8s-master      1/1     Running   0          80m
kube-flannel-ds-amd64-8qzpx             1/1     Running   0          78m
kube-flannel-ds-amd64-jvp59             1/1     Running   0          77m
kube-flannel-ds-amd64-wztbk             1/1     Running   0          78m
kube-proxy-crr7k                        1/1     Running   0          81m
kube-proxy-gk5vf                        1/1     Running   0          78m
kube-proxy-ktr27                        1/1     Running   0          77m
kube-scheduler-k8s-master               1/1     Running   0          80m
kubernetes-dashboard-79ff88449c-v2jnc   1/1     Running   0          21s
  • Look up the NodePort on which the dashboard is exposed:
kubectl get service --namespace=kube-system
NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
kube-dns               ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP   5h38m
kubernetes-dashboard   NodePort    10.99.242.186   <none>        443:31234/TCP   14
  • Generate the private key and certificate signing request:
openssl genrsa -des3 -passout pass:x -out dashboard.pass.key 2048
openssl rsa -passin pass:x -in dashboard.pass.key -out dashboard.key
rm dashboard.pass.key
openssl req -new -key dashboard.key -out dashboard.csr  # just press Enter at every prompt
  • Generate the self-signed SSL certificate:
openssl x509 -req -sha256 -days 365 -in dashboard.csr -signkey dashboard.key -out dashboard.crt
  • Then place the generated dashboard.key and dashboard.crt under /home/share/certs. This path is referenced in the dashboard-user-role.yaml file used in the next step.

  • Create the dashboard user:
 kubectl create -f dashboard-user-role.yaml

The dashboard-user-role.yaml file is available here.

  • Obtain the login token:
kubectl describe secret/$(kubectl get secret -nkube-system |grep admin|awk '{print $1}') -nkube-system
[root@k8s-master ~]# kubectl describe secret/$(kubectl get secret -nkube-system |grep admin|awk '{print $1}') -nkube-system
Name:         admin-token-9d4vl
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin
              kubernetes.io/service-account.uid: a320b00f-07ed-11e9-93f2-000c2978f207

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi10b2tlbi05ZDR2bCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJhZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImEzMjBiMDBmLTA3ZWQtMTFlOS05M2YyLTAwMGMyOTc4ZjIwNyIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTphZG1pbiJ9.WbaHx-BfZEd0SvJwA9V_vGUe8jPMUHjKlkT7MWJ4JcQldRFY8Tdpv5GKCY25JsvT_GM3ob303r0yE6vjQdKna7EfQNO_Wb2j1Yu5UvZnWw52HhNudHNOVL_fFRKxkSVjAILA_C_HvW6aw6TG5h7zHARgl71I0LpW1VESeHeThipQ-pkt-Dr1jWcpPgE39cwxSgi-5qY4ssbyYBc2aPYLsqJibmE-KUhwmyOheF4Lxpg7E3SQEczsig2HjXpNtJizCu0kPyiR4qbbsusulH-kdgjhmD9_XWP9k0BzgutXWteV8Iqe4-uuRGHZAxgutCvaL5qENv4OAlaArlZqSgkNWw

With the token generated, open a browser and use it to log in to the cluster management page:

(Screenshot: logging in to the cluster management page)

(Screenshot: cluster overview)


Postscript

My abilities are limited, so if there are any mistakes or inaccuracies, your corrections are very welcome; let's learn and improve together!

  • My Personal Blog: CodeSheep 程序羊
  • My half-year journey of technical blogging


