Alibaba Cloud ECS
Create an instance
Just fill in a password
On the cloud, firewall rules are configured through security groups
VPC (Virtual Private Cloud): carves private IP ranges into subnets
A VPC is an isolated network domain; even identical subnets in different VPCs cannot reach each other
Use vSwitches to further subdivide the VPC into subnets
Stop and release the instance so it no longer incurs charges
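The VPC/vSwitch subnetting above can be sketched with Python's `ipaddress` module; the 192.168.0.0/16 VPC CIDR and the /24 vSwitch size are hypothetical example values, not anything mandated by Alibaba Cloud:

```python
import ipaddress

# Hypothetical VPC CIDR; Alibaba Cloud VPCs typically use ranges
# from 192.168.0.0/16, 172.16.0.0/12, or 10.0.0.0/8.
vpc = ipaddress.ip_network("192.168.0.0/16")

# Each vSwitch carves a smaller subnet out of the VPC range.
vswitch_subnets = list(vpc.subnets(new_prefix=24))

print(len(vswitch_subnets))   # 256 /24 subnets fit in a /16
print(vswitch_subnets[0])     # 192.168.0.0/24
print(vswitch_subnets[1])     # 192.168.1.0/24

# Within one VPC, vSwitch subnets must not overlap; across VPCs,
# even identical subnets are isolated from each other.
print(vswitch_subnets[0].overlaps(vswitch_subnets[1]))  # False
```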
k8s
Servers: 1 × 4-core 8 GB + 2 × 8-core 16 GB
Create a Git credential
Pipeline: send email (p124)
To send mail from KubeSphere: log in as admin → Platform Management → Platform Settings → Notification Management → Email
As admin, under Workloads, also edit the Jenkins configuration file and set the email account there
Both places must be configured
Pipeline: push an image
Create a credential
Container Registry (the personal edition is enough): create a namespace and a repository; the repository's basic-information page shows the registry address
e.g. registry.cn-hangzhou.aliyuncs.com
and the namespace, e.g. jhj
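Putting the registry address, namespace, and repository together gives the full image reference a pipeline pushes. A small sketch; `demo-app` and the tag `v1` are hypothetical names, while the registry address and namespace are the example values above:

```python
def image_ref(registry: str, namespace: str, repo: str, tag: str) -> str:
    """Build a full container-registry image reference: registry/namespace/repo:tag."""
    return f"{registry}/{namespace}/{repo}:{tag}"

# Registry address and namespace come from the repository's
# basic-information page; "demo-app" and "v1" are placeholders.
ref = image_ref("registry.cn-hangzhou.aliyuncs.com", "jhj", "demo-app", "v1")
print(ref)  # registry.cn-hangzhou.aliyuncs.com/jhj/demo-app:v1
```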
Deploy
Create a kubeconfig credential
Pulling the image: since we push to Alibaba Cloud, the cluster must pull from Alibaba Cloud as well
Also configure the registry secret in the project's Configuration Center → Secrets
choosing the image-registry secret type
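Under the hood, an image-registry secret is a `kubernetes.io/dockerconfigjson` secret. A minimal sketch of the `.dockerconfigjson` payload it stores (the username and password are placeholders, not real credentials):

```python
import base64
import json

def dockerconfigjson(server: str, username: str, password: str) -> str:
    """Build the .dockerconfigjson payload stored in an image-registry secret."""
    # The "auth" field is base64("username:password").
    auth = base64.b64encode(f"{username}:{password}".encode()).decode()
    config = {
        "auths": {
            server: {"username": username, "password": password, "auth": auth}
        }
    }
    return json.dumps(config)

payload = dockerconfigjson(
    "registry.cn-hangzhou.aliyuncs.com",  # registry address from above
    "your-aliyun-account",                # placeholder username
    "your-password",                      # placeholder password
)
# Kubernetes stores this string base64-encoded under the key .dockerconfigjson
print(payload)
```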
Deploying Kubernetes with KubeKey
Delete a previously deployed cluster:
./kk delete cluster -f config-sample.yaml
Download KubeKey:
export KKZONE=cn
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh -
Generate a config file:
./kk create config --name kk-k8s
Deploy with the internal load balancer
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: kk-k8s
spec:
  hosts:
  - {name: node1, address: 172.16.0.2, internalAddress: 172.16.0.2, user: ubuntu, password: "Qcloud@123"}
  - {name: node2, address: 172.16.0.3, internalAddress: 172.16.0.3, user: ubuntu, password: "Qcloud@123"}
  roleGroups:
    etcd:
    - node1
    control-plane:
    - node1
    worker:
    - node1
    - node2
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers
    internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.23.8
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: docker
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    privateRegistry: ""
    namespaceOverride: ""
    registryMirrors: []
    insecureRegistries: []
  addons: []
Deploy:
./kk create cluster -f kk-k8s.yaml --with-kubesphere v3.2.0
To deploy KubeSphere together with the cluster, just add --with-kubesphere v3.2.0
Check the cluster status:
kubectl get node -o wide
The default KubeSphere account is admin (initial password P@88w0rd)
The config file explained
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: kk-k8s
spec:
  hosts:
  # address: public IP; internalAddress: private IP; plus username and password
  - {name: node1, address: 172.16.0.2, internalAddress: 172.16.0.2, user: ubuntu, password: "Qcloud@123"}
  - {name: node2, address: 172.16.0.3, internalAddress: 172.16.0.3, user: ubuntu, password: "Qcloud@123"}
  roleGroups:
    # For high availability, several nodes can serve as etcd at the same time;
    # in production etcd usually runs on dedicated nodes, separate from the
    # control-plane and worker nodes
    etcd:
    - node1
    control-plane:
    - node1
    worker:
    - node1
    - node2
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers
    internalLoadbalancer: haproxy  # internal load balancer fronting the apiservers for the worker nodes
    domain: lb.kubesphere.local
    address: ""  # if you use an external load balancer, put its address here
    port: 6443
  kubernetes:
    version: v1.23.8            # Kubernetes version
    clusterName: cluster.local  # cluster name
    autoRenewCerts: true
    containerManager: docker
  etcd:
    type: kubekey
  network:
    plugin: calico                  # network plugin
    kubePodsCIDR: 10.233.64.0/18    # subnet division
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    privateRegistry: ""  # image registry; empty means the default
    namespaceOverride: ""
    registryMirrors: []
    insecureRegistries: []
  addons: []  # add-ons
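A sanity check worth doing before deploying: `kubePodsCIDR` and `kubeServiceCIDR` must not overlap with each other or with the node network. A sketch using the CIDRs from the config above (the node subnet is an assumption based on the example host addresses):

```python
import ipaddress

# CIDRs from the KubeKey config above; the node subnet is assumed
# from the example host addresses (172.16.0.x).
pods = ipaddress.ip_network("10.233.64.0/18")
services = ipaddress.ip_network("10.233.0.0/18")
nodes = ipaddress.ip_network("172.16.0.0/24")

cidrs = [pods, services, nodes]
disjoint = all(
    not a.overlaps(b)
    for i, a in enumerate(cidrs)
    for b in cidrs[i + 1:]
)
print(disjoint)  # True: the three ranges do not overlap
```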
To generate a config file that already includes KubeSphere:
./kk create config --with-kubesphere v3.2.0 -f test.yaml
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: node1, address: 172.16.0.2, internalAddress: 172.16.0.2, user: ubuntu, password: "Qcloud@123"}
  - {name: node2, address: 172.16.0.3, internalAddress: 172.16.0.3, user: ubuntu, password: "Qcloud@123"}
  roleGroups:
    etcd:
    - node1
    control-plane:
    - node1
    worker:
    - node1
    - node2
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers
    # internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.23.8
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: docker
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    privateRegistry: ""
    namespaceOverride: ""
    registryMirrors: []
    insecureRegistries: []
  addons: []

---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.2.0
spec:
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  zone: ""
  local_registry: ""
  # dev_tag: ""
  etcd:
    monitoring: false
    endpointIps: localhost
    port: 2379
    tlsEnable: true
  common:
    core:
      console:
        enableMultiLogin: true
        port: 30880
        type: NodePort
    # apiserver:
    #   resources: {}
    # controllerManager:
    #   resources: {}
    redis:
      enabled: false
      volumeSize: 2Gi
    openldap:
      enabled: false
      volumeSize: 2Gi
    minio:
      volumeSize: 20Gi
    monitoring:
      # type: external
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
      GPUMonitoring:
        enabled: false
    gpu:
      kinds:
      - resourceName: "nvidia.com/gpu"
        resourceType: "GPU"
        default: true
    es:
      # master:
      #   volumeSize: 4Gi
      #   replicas: 1
      #   resources: {}
      # data:
      #   volumeSize: 20Gi
      #   replicas: 1
      #   resources: {}
      logMaxAge: 7
      elkPrefix: logstash
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchUrl: ""
      externalElasticsearchPort: ""
  alerting:
    enabled: false
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:
    enabled: false
    # operator:
    #   resources: {}
    # webhook:
    #   resources: {}
  devops:
    enabled: false  # set to true to enable DevOps
    jenkinsMemoryLim: 2Gi
    jenkinsMemoryReq: 1500Mi
    jenkinsVolumeSize: 8Gi
    jenkinsJavaOpts_Xms: 512m
    jenkinsJavaOpts_Xmx: 512m
    jenkinsJavaOpts_MaxRAM: 2g
  events:
    enabled: false
    # operator:
    #   resources: {}
    # exporter:
    #   resources: {}
    # ruler:
    #   enabled: true
    #   replicas: 2
    #   resources: {}
  logging:
    enabled: false
    containerruntime: docker
    logsidecar:
      enabled: true
      replicas: 2
      # resources: {}
  metrics_server:
    enabled: false
  monitoring:
    storageClass: ""
    # kube_rbac_proxy:
    #   resources: {}
    # kube_state_metrics:
    #   resources: {}
    # prometheus:
    #   replicas: 1
    #   volumeSize: 20Gi
    #   resources: {}
    #   operator:
    #     resources: {}
    # adapter:
    #   resources: {}
    # node_exporter:
    #   resources: {}
    # alertmanager:
    #   replicas: 1
    #   resources: {}
    # notification_manager:
    #   resources: {}
    #   operator:
    #     resources: {}
    #   proxy:
    #     resources: {}
    gpu:
      nvidia_dcgm_exporter:
        enabled: false
        # resources: {}
  multicluster:
    clusterRole: none
  network:
    networkpolicy:
      enabled: false
    ippool:
      type: none
    topology:
      type: none
  openpitrix:
    store:
      enabled: false
  servicemesh:
    enabled: false
  kubeedge:
    enabled: false
    cloudCore:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      cloudhubPort: "10000"
      cloudhubQuicPort: "10001"
      cloudhubHttpsPort: "10002"
      cloudstreamPort: "10003"
      tunnelPort: "10004"
      cloudHub:
        advertiseAddress:
        - ""
        nodeLimit: "100"
      service:
        cloudhubNodePort: "30000"
        cloudhubQuicNodePort: "30001"
        cloudhubHttpsNodePort: "30002"
        cloudstreamNodePort: "30003"
        tunnelNodePort: "30004"
    edgeWatcher:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      edgeWatcherAgent:
        nodeSelector: {"node-role.kubernetes.io/worker": ""}
        tolerations: []
Detailed reference in the KubeKey repository:
https://github.com/kubesphere/kubekey/blob/master/docs/config-example.md
KubeSphere itself is installed by ks-installer:
https://github.com/kubesphere/ks-installer/blob/master/deploy/cluster-configuration.yaml
To enable other options, consult the API definitions:
https://github.com/kubesphere/kubekey/tree/master/cmd/kk/apis/kubekey/v1alpha2
Adding nodes with kk
Add the new node to config.yaml (under hosts and roleGroups), then run:
./kk add nodes -f config.yaml
Removing nodes with kk
Run:
./kk delete node node3 -f config.yaml
Cluster certificate management
Check expiration dates:
./kk certs check-expiration -f config.yaml
Renewing cluster certificates
Certificates are stored in /etc/kubernetes/pki by default
Renew them with:
./kk certs renew -f config.yaml
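`kk certs check-expiration` reports an expiry date per certificate. The days-remaining arithmetic behind such a report can be sketched as follows; the kubeadm-style date format used here is an assumption, not kk's exact output:

```python
from datetime import datetime, timezone

def days_until_expiry(not_after: str, now: datetime) -> int:
    """Days left before a certificate's notAfter timestamp (assumed format)."""
    expires = datetime.strptime(not_after, "%b %d, %Y %H:%M %Z").replace(
        tzinfo=timezone.utc
    )
    return (expires - now).days

# Hypothetical expiry date in a kubeadm-style "Jul 05, 2026 08:00 UTC" format.
now = datetime(2025, 7, 5, 8, 0, tzinfo=timezone.utc)
print(days_until_expiry("Jul 05, 2026 08:00 UTC", now))  # 365
```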
1.2 / 2.0: plugins are installed and updated automatically
App Store
After installation, enable components by editing the ks-installer ClusterConfiguration CRD and setting the relevant item to true
https://kubesphere.io/zh/docs/v3.4/pluggable-components/app-store/
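Concretely, per the page above you edit the ks-installer ClusterConfiguration (`kubectl -n kubesphere-system edit clusterconfiguration ks-installer`) and flip the App Store flag:

```yaml
# Fragment of the ks-installer ClusterConfiguration
spec:
  openpitrix:
    store:
      enabled: true   # change false to true to enable the App Store
```

ks-installer watches this object and reconciles the newly enabled component.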