Installing Harbor, Kubernetes, and KubeSphere offline with KubeKey

Table of Contents

References

About KubeKey

Prerequisites

Deployment preparation

Download KubeKey

Offline package configuration and creation

Configure the offline package

Build the offline package

Install the cluster offline

Copy KubeKey and the artifact to the offline machine

Create the initialization and installation configuration file

Install the Harbor image registry

Initialize Harbor projects

Modify the configuration file

Install the Kubernetes cluster and KubeSphere

Manually install dependency packages

Check whether the environment meets the requirements

Check whether the firewall ports meet the requirements

Finish the installation and log in

Problems encountered during installation

Building the artifact: the OS ISO on GitHub cannot be downloaded

Harbor initialization fails with "must specify a CommonName"

After initializing Harbor, some of its component containers do not start

Kylin OS installation: the package does not contain Fkylin-v10-amd64.iso

Offline installation still downloads calicoctl online

Two Kubernetes containers (DNS-related) fail to start

The namespaceOverride: "kubesphereio" setting does not take effect at the last step of the offline installation


References

Official offline installation documentation

I ran into many problems while following the official documentation, so I wrote this article to record them.

A summary of these problems is also included at the end of the document; I hope it helps anyone who hits the same issues.

About KubeKey

./kk --help
Deploy a Kubernetes or KubeSphere cluster efficiently, flexibly and easily. There are three scenarios to use KubeKey.

1. Install Kubernetes only
2. Install Kubernetes and KubeSphere together in one command
3. Install Kubernetes first, then deploy KubeSphere on it using https://github.com/kubesphere/ks-installer

Usage:
  kk [command]

Available Commands:
  add         Add nodes to kubernetes cluster
  alpha       Commands for features in alpha
  artifact    Manage a KubeKey offline installation package
  certs       cluster certs
  completion  Generate shell completion scripts
  create      Create a cluster or a cluster configuration file
  delete      Delete node or cluster
  help        Help about any command
  init        Initializes the installation environment
  plugin      Provides utilities for interacting with plugins
  upgrade     Upgrade your cluster smoothly to a newer version with this command
  version     print the client version information

Flags:
  -h, --help   help for kk

Prerequisites

To start a multi-node installation, prepare at least three hosts as in the following example.

Host IP       Hostname   Role
192.168.0.2   node1      Internet-connected host used to build the offline package
192.168.0.3   node2      Offline-environment control-plane node
192.168.0.4   node3      Offline-environment image registry node

Disable the firewall, SELinux, swap, and dnsmasq (on all nodes).

Disable the firewall:

systemctl stop firewalld
systemctl disable firewalld

Disable SELinux:

sed -i 's/enforcing/disabled/' /etc/selinux/config  # permanent
setenforce 0  # temporary

Disable swap (Kubernetes forbids swap to avoid performance degradation):

sed -ri 's/.*swap.*/#&/' /etc/fstab  # permanent
swapoff -a  # temporary

Disable dnsmasq (otherwise Docker containers may fail to resolve domain names):

service dnsmasq stop
systemctl disable dnsmasq

If some machines do not allow disabling the firewall, see the list of ports to open later in this document.
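To confirm the swap step above actually took effect, a small check can help. This is a minimal sketch: check_swap_disabled is a helper name I made up, and it only inspects an fstab-style file (pair it with swapoff -a and /proc/swaps on a real node); the file path is a parameter so you can try it on a copy first.

```shell
#!/usr/bin/env sh
# Sketch: verify that every swap entry in an fstab-style file is commented out.
check_swap_disabled() {
    fstab="${1:-/etc/fstab}"
    # Any uncommented line mentioning swap means the sed step above did not apply.
    if grep -v '^[[:space:]]*#' "$fstab" | grep -q swap; then
        echo "active swap entries found in $fstab"
        return 1
    fi
    echo "no active swap entries in $fstab"
}
```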

Deployment preparation

Download KubeKey

Run the following commands to download KubeKey and extract it.

Method 1 (GitHub reachable):

Download KubeKey from its GitHub Release Page, or run the following command directly:

curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -

Method 2:

First run the following command to make sure you download KubeKey from the correct region:

export KKZONE=cn

Then run the following command to download KubeKey:

curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -

Offline package configuration and creation

Configure the offline package

On the internet-connected host, run the following command and copy the manifest content from the example.

vim manifest.yaml
---
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Manifest
metadata:
  name: sample
spec:
  arches:
  - amd64
  operatingSystems:
  - arch: amd64
    type: linux
    id: centos
    version: "7"
    repository:
      iso:
        localPath:
        url: https://github.com/kubesphere/kubekey/releases/download/v3.0.10/centos7-rpms-amd64.iso
  - arch: amd64
    type: linux
    id: ubuntu
    version: "20.04"
    repository:
      iso:
        localPath:
        url: https://github.com/kubesphere/kubekey/releases/download/v3.0.10/ubuntu-20.04-debs-amd64.iso
  kubernetesDistributions:
  - type: kubernetes
    version: v1.23.15
  components:
    helm:
      version: v3.9.0
    cni:
      version: v1.2.0
    etcd:
      version: v3.4.13
    calicoctl:
      version: v3.23.2
    ## For now, if your cluster container runtime is containerd, KubeKey will add a docker 20.10.8 container runtime in the below list.
    ## The reason is KubeKey creates a cluster with containerd by installing a docker first and making kubelet connect the socket file of containerd which docker contained.
    containerRuntimes:
    - type: docker
      version: 20.10.8
    - type: containerd
      version: 1.6.4
    crictl:
      version: v1.24.0
    docker-registry:
      version: "2"
    harbor:
      version: v2.5.3
    docker-compose:
      version: v2.2.2
  images:
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.23.15
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.23.15
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.23.15
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.23.15
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.6
  - registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.6
  - registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.23.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.23.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.23.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.23.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/typha:v3.23.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/flannel:v0.12.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:3.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/linux-utils:3.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nfs-subdir-external-provisioner:v4.0.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.15.12
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-installer:v3.4.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-apiserver:v3.4.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-console:v3.4.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-controller-manager:v3.4.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.22.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.21.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.20.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kubefed:v0.8.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/tower:v0.2.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/minio:RELEASE.2019-08-07T01-59-21Z
  - registry.cn-beijing.aliyuncs.com/kubesphereio/mc:RELEASE.2019-08-07T23-14-43Z
  - registry.cn-beijing.aliyuncs.com/kubesphereio/snapshot-controller:v4.0.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nginx-ingress-controller:v1.1.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/defaultbackend-amd64:1.4
  - registry.cn-beijing.aliyuncs.com/kubesphereio/metrics-server:v0.4.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/redis:5.0.14-alpine
  - registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.0.25-alpine
  - registry.cn-beijing.aliyuncs.com/kubesphereio/alpine:3.14
  - registry.cn-beijing.aliyuncs.com/kubesphereio/openldap:1.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/netshoot:v1.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/cloudcore:v1.13.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/iptables-manager:v1.13.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/edgeservice:v0.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/gatekeeper:v3.5.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/openpitrix-jobs:v3.3.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/devops-apiserver:ks-v3.4.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/devops-controller:ks-v3.4.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/devops-tools:ks-v3.4.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-jenkins:v3.4.0-2.319.3-1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/inbound-agent:4.10-2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-base:v3.2.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-nodejs:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-maven:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-maven:v3.2.1-jdk11
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-python:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.16
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.17
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.18
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-base:v3.2.2-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-nodejs:v3.2.0-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-maven:v3.2.0-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-maven:v3.2.1-jdk11-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-python:v3.2.0-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.0-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.16-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.17-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.18-podman
  - registry.cn-beijing.aliyuncs.com/kubesphereio/s2ioperator:v3.2.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/s2irun:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/s2i-binary:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java11-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java11-runtime:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java8-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java8-runtime:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/java-11-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/java-8-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/java-8-runtime:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/java-11-runtime:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nodejs-8-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nodejs-6-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nodejs-4-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/python-36-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/python-35-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/python-34-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/python-27-centos7:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/argocd:v2.3.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/argocd-applicationset:v0.4.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/dex:v2.30.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/redis:6.2.6-alpine
  - registry.cn-beijing.aliyuncs.com/kubesphereio/configmap-reload:v0.7.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus:v2.39.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-config-reloader:v0.55.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-operator:v0.55.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy:v0.11.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-state-metrics:v2.6.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/node-exporter:v1.3.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/alertmanager:v0.23.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/thanos:v0.31.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/grafana:8.3.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy:v0.11.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager-operator:v2.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager:v2.3.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/notification-tenant-sidecar:v3.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/elasticsearch-curator:v5.7.6
  - registry.cn-beijing.aliyuncs.com/kubesphereio/elasticsearch-oss:6.8.22
  - registry.cn-beijing.aliyuncs.com/kubesphereio/opensearch:2.6.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/opensearch-dashboards:2.6.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/opensearch-curator:v0.0.5
  - registry.cn-beijing.aliyuncs.com/kubesphereio/fluentbit-operator:v0.14.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/docker:19.03
  - registry.cn-beijing.aliyuncs.com/kubesphereio/fluent-bit:v1.9.4
  - registry.cn-beijing.aliyuncs.com/kubesphereio/log-sidecar-injector:v1.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/filebeat:6.7.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-operator:v0.6.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-exporter:v0.6.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-ruler:v0.6.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-auditing-operator:v0.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-auditing-webhook:v0.2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pilot:1.14.6
  - registry.cn-beijing.aliyuncs.com/kubesphereio/proxyv2:1.14.6
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-operator:1.29
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-agent:1.29
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-collector:1.29
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-query:1.29
  - registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-es-index-cleaner:1.29
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kiali-operator:v1.50.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kiali:v1.50
  - registry.cn-beijing.aliyuncs.com/kubesphereio/busybox:1.31.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/nginx:1.14-alpine
  - registry.cn-beijing.aliyuncs.com/kubesphereio/wget:1.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/hello:plain-text
  - registry.cn-beijing.aliyuncs.com/kubesphereio/wordpress:4.8-apache
  - registry.cn-beijing.aliyuncs.com/kubesphereio/hpa-example:latest
  - registry.cn-beijing.aliyuncs.com/kubesphereio/fluentd:v1.4.2-2.0
  - registry.cn-beijing.aliyuncs.com/kubesphereio/perl:latest
  - registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-productpage-v1:1.16.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-reviews-v1:1.16.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-reviews-v2:1.16.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-details-v1:1.16.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-ratings-v1:1.16.3
  - registry.cn-beijing.aliyuncs.com/kubesphereio/scope:1.13.0

Notes

  • If the exported artifact needs to contain OS dependency packages (e.g. conntrack, chrony), configure the download URL of the corresponding ISO in .repository.iso.url of the operatingSystems element, or download the ISO in advance, set its local path in localPath, and remove the url entry.

  • Enable the harbor and docker-compose components; they are used later so that KubeKey can set up a self-hosted Harbor registry and push images to it.

  • The image list in the default manifest is pulled from docker.io.

  • Adjust the contents of manifest-sample.yaml as needed so that the exported artifact contains exactly what you expect.

  • You can download the ISO files from the kubesphere/kubekey GitHub release pages (e.g. Release v3.0.7).

Personal note:

At this step, the OS ISOs were configured to come from GitHub and could not be downloaded.

I downloaded them manually from GitHub, placed them locally, and changed the configuration section below:

operatingSystems:
- arch: amd64
  type: linux
  id: centos
  version: "7"
  repository:
    iso:
      localPath: <your local path>
      url:
- arch: amd64
  type: linux
  id: ubuntu
  version: "20.04"
  repository:
    iso:
      localPath: <your local path>
      url:

Download address:

https://github.com/kubesphere/kubekey/releases/tag/v3.0.10

Build the offline package

Export the artifact.

Method 1 (GitHub reachable):

Run:

./kk artifact export -m manifest-sample.yaml -o kubesphere.tar.gz

Method 2:

Run the following commands in order:

export KKZONE=cn
./kk artifact export -m manifest-sample.yaml -o kubesphere.tar.gz

Notes

An artifact is a tgz package, exported according to the manifest file, that contains the image tarballs and the related binaries. An artifact can be passed to the KubeKey commands that initialize the image registry, create a cluster, add nodes, or upgrade a cluster; KubeKey automatically unpacks it and uses the unpacked files while executing the command.

  • Make sure your network connection is stable during the export.

  • KubeKey parses the image names in the image list. If an image registry in the list requires authentication, configure it in the .registry.auths field of the manifest file.

Install the cluster offline

Copy KubeKey and the artifact to the offline machine

Copy the downloaded KubeKey binary and the artifact to the installation node in the offline environment via a USB drive or similar medium.

Create the initialization and installation configuration file

Run the following command to create the offline cluster configuration file:

./kk create config --with-kubesphere v3.4.1 --with-kubernetes v1.23.15 -f config-sample.yaml

Run the following command to edit the configuration file:

vim config-sample.yaml

Notes

  • Adjust the node information according to your actual offline environment.
  • You must specify the node on which the registry is deployed (used by KubeKey to set up the self-hosted Harbor registry).
  • In the registry section, type must be set to harbor; otherwise a docker registry is installed by default.

Install the Harbor image registry

Run the following command to install the image registry:

  1. ./kk init registry -f config-sample.yaml -a kubesphere.tar.gz

    Notes

    The parameters are as follows:

    • config-sample.yaml: the configuration file of the offline cluster.

    • kubesphere.tar.gz: the image tar package exported from the source cluster.

Personal note

Running the Harbor initialization failed with this error:

11:16:46 UTC success: [rs-node-178-02]
11:16:46 UTC success: [rs-node-177-01]
11:16:46 UTC success: [rs-master-174-01]
11:16:46 UTC success: [rs-node-179-03]
11:16:46 UTC success: [rs-master-175-02]
11:16:46 UTC success: [rs-master-176-03]
11:16:46 UTC success: [devops-180]
11:16:46 UTC [ConfigureOSModule] configure the ntp server for each node
11:16:46 UTC skipped: [rs-node-179-03]
11:16:46 UTC skipped: [rs-master-174-01]
11:16:46 UTC skipped: [rs-master-175-02]
11:16:46 UTC skipped: [rs-master-176-03]
11:16:46 UTC skipped: [devops-180]
11:16:46 UTC skipped: [rs-node-177-01]
11:16:46 UTC skipped: [rs-node-178-02]
11:16:46 UTC [InitRegistryModule] Fetch registry certs
11:16:46 UTC success: [devops-180]
11:16:46 UTC [InitRegistryModule] Generate registry Certs
[certs] Using existing ca certificate authority
11:16:46 UTC message: [LocalHost]
unable to sign certificate: must specify a CommonName
11:16:46 UTC failed: [LocalHost]
error: Pipeline[InitRegistryPipeline] execute failed: Module[InitRegistryModule] exec failed: 
failed: [LocalHost] [GenerateRegistryCerts] exec failed after 1 retries: unable to sign certificate: must specify a CommonName

Solution:

https://ask.kubesphere.io/forum/d/22879-kubesphere34-unable-to-sign-certificate-must-specify-a-commonname

Edit the configuration file:

...
registry:
  type: harbor
  auths:
    "dockerhub.kubekey.local":
      username: admin
      password: Harbor12345
  privateRegistry: "dockerhub.kubekey.local"
  namespaceOverride: ""
  registryMirrors: []
  insecureRegistries: []
addons: []

Then rerun the command.

After the installation, check on the Harbor server whether Harbor started correctly.

If some components failed to start, enter the /opt/harbor directory:

chmod 777 -R ./common

Then restart Harbor:

docker-compose down -v
docker-compose up -d

Once Harbor is up, you can access it in a browser.

Initialize Harbor projects

Notes

Harbor enforces role-based access control (RBAC): only users with the appropriate role can perform certain operations. If you do not create projects, images cannot be pushed to Harbor. Harbor has two types of projects:

  • Public: any user can pull images from the project.
  • Private: only users who are members of the project can pull images.

The Harbor administrator account is admin with password Harbor12345. The Harbor installation files are under /opt/harbor; go to that directory for Harbor maintenance tasks.

Method 1:

Create the Harbor projects with a script.

a. Run the following command to download the script that initializes the Harbor registry:

curl -O https://raw.githubusercontent.com/kubesphere/ks-installer/master/scripts/create_project_harbor.sh

b. Run the following command to edit the script:

vim create_project_harbor.sh

Change it to:

#!/usr/bin/env bash

# Copyright 2018 The KubeSphere Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

url="https://dockerhub.kubekey.local"  # change the value of url to https://dockerhub.kubekey.local
user="admin"
passwd="Harbor12345"

harbor_projects=(library
    kubesphereio
    kubesphere
    argoproj
    calico
    coredns
    openebs
    csiplugin
    minio
    mirrorgooglecontainers
    osixia
    prom
    thanosio
    jimmidyson
    grafana
    elastic
    istio
    jaegertracing
    jenkins
    weaveworks
    openpitrix
    joosthofman
    nginxdemos
    fluent
    kubeedge
    openpolicyagent
)

for project in "${harbor_projects[@]}"; do
    echo "creating $project"
    curl -u "${user}:${passwd}" -X POST -H "Content-Type: application/json" "${url}/api/v2.0/projects" -d "{ \"project_name\": \"${project}\", \"public\": true}" -k  # append -k to the curl command
done

Notes

  • Change the value of url to https://dockerhub.kubekey.local.

  • The project names in the registry must match the project names used in the image list.

  • Append -k to the curl command at the end of the script.

c. Run the following commands to create the Harbor projects:

chmod +x create_project_harbor.sh
./create_project_harbor.sh

Method 2:

Log in to the Harbor registry and create the projects there. Set the projects to public so that all users can pull images. For details on creating projects, see the Harbor documentation.
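Either way, it is worth confirming the projects exist before moving on. A sketch using Harbor's v2.0 API; the URL and credentials follow the values used above, list_harbor_projects is a helper name of mine, and DRY_RUN=1 prints the curl command instead of calling the (offline-only) registry.

```shell
#!/usr/bin/env sh
# Sketch: list Harbor projects through the v2.0 API to verify creation.
list_harbor_projects() {
    url="${1:-https://dockerhub.kubekey.local}"
    if [ "${DRY_RUN:-0}" = "1" ]; then
        echo "curl -k -u admin:Harbor12345 $url/api/v2.0/projects?page_size=50"
    else
        curl -k -u admin:Harbor12345 "$url/api/v2.0/projects?page_size=50"
    fi
}
```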

Modify the configuration file

Run the following command again to edit the cluster configuration file:

vim config-sample.yaml

...
registry:
  type: harbor
  auths:
    "dockerhub.kubekey.local":
      username: admin
      password: Harbor12345
  privateRegistry: "dockerhub.kubekey.local"
  namespaceOverride: "kubesphereio"
  registryMirrors: []
  insecureRegistries: []
addons: []

Notes

  • Add an auths entry for dockerhub.kubekey.local with the account and password.
  • Set privateRegistry to dockerhub.kubekey.local.
  • Set namespaceOverride to kubesphereio.

Install the Kubernetes cluster and KubeSphere

Run the following command to install the KubeSphere cluster:

./kk create cluster -f config-sample.yaml -a kubesphere.tar.gz --with-packages

The parameters are as follows:

  • config-sample.yaml: the configuration file of the offline cluster.
  • kubesphere.tar.gz: the image tar package exported from the source cluster.
  • --with-packages: required if the operating system dependencies should be installed.

Run the following command to watch the cluster status:

kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f

The installation command prints the following:
[root@k8s-master kubekey]# ./kk create cluster -f config-sample.yaml -a kubesphere.tar.gz --with-packages

 _   __      _          _   __           
| | / /     | |        | | / /           
| |/ / _   _| |__   ___| |/ /  ___ _   _ 
|    \| | | | '_ \ / _ \    \ / _ \ | | |
| |\  \ |_| | |_) |  __/ |\  \  __/ |_| |
\_| \_/\__,_|_.__/ \___\_| \_/\___|\__, |
                                    __/ |
                                   |___/

11:07:36 CST [GreetingsModule] Greetings
11:07:37 CST message: [k8s-master]
Greetings, KubeKey!
11:07:37 CST message: [k8s-node]
Greetings, KubeKey!
11:07:37 CST success: [k8s-master]
11:07:37 CST success: [k8s-node]
11:07:37 CST [NodePreCheckModule] A pre-check on nodes
11:07:44 CST success: [k8s-master]
11:07:44 CST success: [k8s-node]
11:07:44 CST [ConfirmModule] Display confirmation form
+------------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
| name       | sudo | curl | openssl | ebtables | socat | ipset | ipvsadm | conntrack | chrony | docker | containerd | nfs client | ceph client | glusterfs client | time         |
+------------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
| k8s-node   | y    | y    | y       | y        |       | y     |         |           | y      | 24.0.6 | v1.7.3     | y          |             |                  | CST 11:07:44 |
| k8s-master | y    | y    | y       | y        |       | y     |         |           | y      |        | y          | y          |             |                  | CST 11:07:43 |
+------------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+

This is a simple check of your environment.
Before installation, ensure that your machines meet all requirements specified at
https://github.com/kubesphere/kubekey#requirements-and-recommendations

Continue this installation? [yes/no]: no
Manually install dependency packages

The following then have to be installed manually offline:

socat, ipvsadm, conntrack, and the ceph and glusterfs clients

For how to install dependency packages offline, see:

yum offline installation with yumdownloader

Also make sure the installation requirements at https://github.com/kubesphere/kubekey#requirements-and-recommendations are met.
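On CentOS-family systems, the missing dependencies can be fetched on the internet-connected host and carried over. A sketch, assuming yum-utils (which provides yumdownloader) is installed and that the package names below match your distribution; fetch_k8s_deps is an illustrative helper, and DRY_RUN=1 only prints the command.

```shell
#!/usr/bin/env sh
# Sketch: download socat/conntrack/ipset/ipvsadm RPMs with their dependencies
# on the internet-connected host, for transfer to the offline nodes.
fetch_k8s_deps() {
    destdir="${1:-./k8s-deps}"
    pkgs="socat conntrack-tools ipset ipvsadm"   # assumed CentOS 7 package names
    if [ "${DRY_RUN:-0}" = "1" ]; then
        echo "yumdownloader --resolve --destdir=$destdir $pkgs"
    else
        yumdownloader --resolve --destdir="$destdir" $pkgs
    fi
}
# After copying the directory to an offline node:
#   yum localinstall -y ./k8s-deps/*.rpm
```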

Check whether the environment meets the requirements

  • Minimum resource requirements (For Minimal Installation of KubeSphere only):
    • 2 vCPUs
    • 4 GB RAM
    • 20 GB Storage

/var/lib/docker is mainly used to store container data and will gradually grow during use. For production environments, it is recommended to mount a separate drive for /var/lib/docker.

  • OS requirements:
    • SSH access to all nodes.
    • Time synchronization on all nodes.
    • sudo/curl/openssl must be available on all nodes.
    • docker can be installed by yourself or by KubeKey.
    • Red Hat includes SELinux in its Linux releases. It is recommended to disable SELinux or switch it to permissive mode.
  • It's recommended that your OS is clean (without any other software installed); otherwise there may be conflicts.
  • A container image mirror (accelerator) is recommended if you have trouble downloading images from dockerhub.io. Configure registry-mirrors for the Docker daemon.
  • KubeKey will install OpenEBS to provision LocalPV for development and testing environments by default, which is convenient for new users. For production, please use NFS / Ceph / GlusterFS or commercial products as persistent storage, and install the relevant client on all nodes.
  • If you encounter Permission denied when copying, it is recommended to check SELinux and turn it off first.
  • Dependency requirements:

KubeKey can install Kubernetes and KubeSphere together. For Kubernetes versions after 1.18, some dependencies need to be installed in advance. Refer to the list below to check and install them on your nodes beforehand.

Kubernetes Version ≥ 1.18:
  socat      Required
  conntrack  Required
  ebtables   Optional but recommended
  ipset      Optional but recommended
  ipvsadm    Optional but recommended
  • Networking and DNS requirements:
    • Make sure the DNS address in /etc/resolv.conf is available. Otherwise, it may cause DNS issues in the cluster.
    • If your network configuration uses a firewall or security group, you must ensure infrastructure components can communicate with each other through specific ports. It's recommended that you turn off the firewall or follow the linked configuration: NetworkAccess.
Check whether the firewall ports meet the requirements

The open ports must satisfy https://github.com/kubesphere/kubekey/blob/master/docs/network-access.md

If your network configuration uses a firewall, you must ensure infrastructure components can communicate with each other through specific ports that act as communication endpoints for certain processes or services.

service          protocol         action   start port   end port   comment
ssh              TCP              allow    22
etcd             TCP              allow    2379         2380
apiserver        TCP              allow    6443
calico           TCP              allow    9099         9100
bgp              TCP              allow    179
nodeport         TCP              allow    30000        32767
master           TCP              allow    10250        10258
dns              TCP              allow    53
dns              UDP              allow    53
local-registry   TCP              allow    5000                    offline environment
local-apt        TCP              allow    5080                    offline environment
rpcbind          TCP              allow    111                     use NFS
ipip             IPENCAP / IPIP   allow                            calico needs to allow the ipip protocol
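For nodes where the firewall must stay on, the table above can be applied with firewalld instead of disabling it. A sketch under the assumption that firewalld is in use; open_k8s_ports is a helper name of mine, and with DRY_RUN=1 it only prints the commands for review.

```shell
#!/usr/bin/env sh
# Sketch: open the ports from the table above via firewalld instead of
# disabling the firewall entirely.
open_k8s_ports() {
    for p in 22/tcp 2379-2380/tcp 6443/tcp 9099-9100/tcp 179/tcp \
             30000-32767/tcp 10250-10258/tcp 53/tcp 53/udp \
             5000/tcp 5080/tcp 111/tcp; do
        if [ "${DRY_RUN:-0}" = "1" ]; then
            echo "firewall-cmd --permanent --add-port=$p"
        else
            firewall-cmd --permanent --add-port="$p"
        fi
    done
    # Calico's IPIP traffic additionally needs:
    #   firewall-cmd --permanent --add-rich-rule='rule protocol value="ipip" accept'
    #   firewall-cmd --reload
}
```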

When the installation finishes, you will see the following:

Warning: resource clusterconfigurations/ks-installer is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
clusterconfiguration.installer.kubesphere.io/ks-installer configured
09:39:40 CST success: [k8s-master]
#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://172.171.16.236:30880
Account: admin
Password: P@88w0rd

NOTES:
  1. After you log into the console, please check the
     monitoring status of service components in
     "Cluster Management". If any service is not
     ready, please wait patiently until all components are up and running.
  2. Please change the default password after login.

#####################################################
https://kubesphere.io             2024-04-12 09:51:07
#####################################################
09:51:11 CST success: [k8s-master]
09:51:11 CST Pipeline[CreateClusterPipeline] execute successfully
Installation is complete.

Please check the result using the command:

kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
Finish the installation and log in

Access the KubeSphere web console at http://{IP}:30880 with the default account and password admin/P@88w0rd.

Problems encountered during installation

Building the artifact: the OS ISO on GitHub cannot be downloaded

At this step, the OS ISOs were configured to come from GitHub and could not be downloaded.

I downloaded them manually from GitHub, placed them locally, and changed the configuration section below:

operatingSystems:
- arch: amd64
  type: linux
  id: centos
  version: "7"
  repository:
    iso:
      localPath: <your local path>
      url:
- arch: amd64
  type: linux
  id: ubuntu
  version: "20.04"
  repository:
    iso:
      localPath: <your local path>
      url:

Download address:

https://github.com/kubesphere/kubekey/releases/tag/v3.0.10

Harbor initialization fails with "must specify a CommonName"

11:16:46 UTC success: [rs-node-178-02]
11:16:46 UTC success: [rs-node-177-01]
11:16:46 UTC success: [rs-master-174-01]
11:16:46 UTC success: [rs-node-179-03]
11:16:46 UTC success: [rs-master-175-02]
11:16:46 UTC success: [rs-master-176-03]
11:16:46 UTC success: [devops-180]
11:16:46 UTC [ConfigureOSModule] configure the ntp server for each node
11:16:46 UTC skipped: [rs-node-179-03]
11:16:46 UTC skipped: [rs-master-174-01]
11:16:46 UTC skipped: [rs-master-175-02]
11:16:46 UTC skipped: [rs-master-176-03]
11:16:46 UTC skipped: [devops-180]
11:16:46 UTC skipped: [rs-node-177-01]
11:16:46 UTC skipped: [rs-node-178-02]
11:16:46 UTC [InitRegistryModule] Fetch registry certs
11:16:46 UTC success: [devops-180]
11:16:46 UTC [InitRegistryModule] Generate registry Certs
[certs] Using existing ca certificate authority
11:16:46 UTC message: [LocalHost]
unable to sign certificate: must specify a CommonName
11:16:46 UTC failed: [LocalHost]
error: Pipeline[InitRegistryPipeline] execute failed: Module[InitRegistryModule] exec failed: 
failed: [LocalHost] [GenerateRegistryCerts] exec failed after 1 retries: unable to sign certificate: must specify a CommonName

Solution:

https://ask.kubesphere.io/forum/d/22879-kubesphere34-unable-to-sign-certificate-must-specify-a-commonname

Edit the configuration file:

...
registry:
  type: harbor
  auths:
    "dockerhub.kubekey.local":
      username: admin
      password: Harbor12345
  privateRegistry: "dockerhub.kubekey.local"
  namespaceOverride: ""
  registryMirrors: []
  insecureRegistries: []
addons: []

Then rerun the command.

After initializing Harbor, some of its component containers do not start

After the installation, check on the Harbor server whether Harbor started correctly.

If some component containers failed to start, enter the /opt/harbor directory:

chmod 777 -R ./common

Then restart Harbor:

docker-compose down -v
docker-compose up -d

Once Harbor is up, you can access it in a browser.

Kylin OS installation: the package does not contain Fkylin-v10-amd64.iso

See: "How to solve the problem of installing KubeSphere on Kylin?" - KubeSphere developer community

Offline installation still downloads calicoctl online

See: Offline installation of KubeSphere v3.4.1 fails with "Failed to download calicoctl binary" - KubeSphere developer community

Several images are also pulled at version 3.26.1. Change:

  - registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.23.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.23.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.23.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.23.2
  - registry.cn-beijing.aliyuncs.com/kubesphereio/typha:v3.23.2

to:

  - registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.26.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.26.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.26.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.26.1
  - registry.cn-beijing.aliyuncs.com/kubesphereio/typha:v3.26.1

Then rebuild the artifact, or, on a server with internet access:

  • docker pull the images
  • docker tag to rename them
  • docker save to write them to local files, and transfer the files to the offline server
  • docker load to load them on the offline server
  • docker push to upload them to the Harbor registry
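The pull/tag/save/load/push round trip above can be scripted. A minimal sketch: mirror_image is a made-up helper, the registry and image names are examples, and DRY_RUN=1 prints the docker commands instead of executing them.

```shell
#!/usr/bin/env sh
# Sketch: on the internet-connected host, pull an image, retag it for the
# private registry, and save it to a tar file for transfer.
mirror_image() {
    src="$1"    # e.g. calico/cni:v3.26.1
    reg="$2"    # e.g. dockerhub.kubekey.local/kubesphereio
    name="${src#*/}"                           # drop the source namespace, keep name:tag
    tarfile="$(echo "$name" | tr '/:' '__').tar"
    for cmd in "docker pull $src" \
               "docker tag $src $reg/$name" \
               "docker save -o $tarfile $reg/$name"; do
        if [ "${DRY_RUN:-0}" = "1" ]; then echo "$cmd"; else $cmd; fi
    done
    # On the offline host: docker load -i <tarfile> && docker push $reg/$name
}
```

Example (print only): DRY_RUN=1 mirror_image calico/cni:v3.26.1 dockerhub.kubekey.local/kubesphereio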

Two Kubernetes containers (DNS-related) fail to start

See: "CoreDNS container errors when installing Kubernetes on KylinOS v10" - CSDN blog

The CoreDNS container errors out when installing Kubernetes on KylinOS v10:

Message: failed to create shim task: OCI runtime create failed: container_linux.go:318: starting container process caused "process_linux.go:281: applying cgroup configuration for process caused \"No such device or address\"": unknown

Solution:

vim /etc/docker/daemon.json

Change "exec-opts": ["native.cgroupdriver=systemd"] to "exec-opts": ["native.cgroupdriver=cgroupfs"]

After the change, restart Docker:

systemctl daemon-reload
systemctl restart docker
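The daemon.json edit can also be done with sed. A sketch; set_cgroupfs_driver is an illustrative helper that takes the file path as a parameter, so you can try it on a copy before touching the real /etc/docker/daemon.json.

```shell
#!/usr/bin/env sh
# Sketch: switch Docker's cgroup driver from systemd to cgroupfs in a
# daemon.json file, as described above.
set_cgroupfs_driver() {
    f="$1"
    sed -i 's/native\.cgroupdriver=systemd/native.cgroupdriver=cgroupfs/' "$f"
}
# After editing the real /etc/docker/daemon.json:
#   systemctl daemon-reload && systemctl restart docker
```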

The namespaceOverride: "kubesphereio" setting does not take effect at the last step of the offline installation

https://ask.kubesphere.io/forum/d/23252-an-341wen-dang-chi-xian-an-zhuang-bu-zou-zhi-xing-dao-zui-hou-yi-bu-fa-xian-namespaceoverride-kubesphereiopei-zhi-xiang-bing-mei-you-sheng-xiao

registry:
  type: harbor
  auths:
    "dockerhub.kubekey.local":
      username: admin
      password: Harbor12345
  privateRegistry: "dockerhub.kubekey.local"
  namespaceOverride: "kubesphereio"
  registryMirrors: []
  insecureRegistries: []
addons: []

Run kubectl get pod -A

Run kubectl get pod -n kubesphere-monitoring-system node-exporter-4mdjs -o yaml

The same applies to the other images.

All of these images actually exist locally and were pushed to the kubesphereio project, yet the pods do not pull from the kubesphereio project.

I don't know what went wrong with the official documentation here.

My own workaround was to manually pull, tag, and push every required image:

docker pull dockerhub.kubekey.local/kubesphereio/snapshot-controller:v4.0.0
docker pull dockerhub.kubekey.local/kubesphereio/defaultbackend-amd64:1.4
docker pull dockerhub.kubekey.local/kubesphereio/kube-state-metrics:v2.6.0
docker pull dockerhub.kubekey.local/kubesphereio/node-exporter:v1.3.1
docker pull dockerhub.kubekey.local/kubesphereio/kube-rbac-proxy:v0.11.0
docker pull dockerhub.kubekey.local/kubesphereio/prometheus-operator:v0.55.1
docker pull dockerhub.kubekey.local/kubesphereio/ks-apiserver:v3.4.1
docker pull dockerhub.kubekey.local/kubesphereio/ks-console:v3.4.1
docker pull dockerhub.kubekey.local/kubesphereio/ks-controller-manager:v3.4.1
docker pull dockerhub.kubekey.local/kubesphereio/kubectl:v1.22.0
docker pull dockerhub.kubekey.local/kubesphereio/notification-manager-operator:v2.3.0
docker pull dockerhub.kubekey.local/kubesphereio/prometheus-config-reloader:v0.55.1
docker pull dockerhub.kubekey.local/kubesphereio/prometheus:v2.39.1
docker pull dockerhub.kubekey.local/kubesphereio/notification-tenant-sidecar:v3.2.0
docker pull dockerhub.kubekey.local/kubesphereio/alertmanager:v0.23.0
docker pull dockerhub.kubekey.local/kubesphereio/notification-manager:v2.3.0

docker tag dockerhub.kubekey.local/kubesphereio/snapshot-controller:v4.0.0 dockerhub.kubekey.local/csiplugin/snapshot-controller:v4.0.0
docker tag dockerhub.kubekey.local/kubesphereio/defaultbackend-amd64:1.4 dockerhub.kubekey.local/mirrorgooglecontainers/defaultbackend-amd64:1.4
docker tag dockerhub.kubekey.local/kubesphereio/kube-state-metrics:v2.6.0 dockerhub.kubekey.local/kubesphere/kube-state-metrics:v2.6.0
docker tag dockerhub.kubekey.local/kubesphereio/node-exporter:v1.3.1 dockerhub.kubekey.local/prom/node-exporter:v1.3.1
docker tag dockerhub.kubekey.local/kubesphereio/kube-rbac-proxy:v0.11.0 dockerhub.kubekey.local/kubesphere/kube-rbac-proxy:v0.11.0
docker tag dockerhub.kubekey.local/kubesphereio/prometheus-operator:v0.55.1 dockerhub.kubekey.local/kubesphere/prometheus-operator:v0.55.1
docker tag dockerhub.kubekey.local/kubesphereio/ks-apiserver:v3.4.1 dockerhub.kubekey.local/kubesphere/ks-apiserver:v3.4.1
docker tag dockerhub.kubekey.local/kubesphereio/ks-console:v3.4.1 dockerhub.kubekey.local/kubesphere/ks-console:v3.4.1
docker tag dockerhub.kubekey.local/kubesphereio/ks-controller-manager:v3.4.1 dockerhub.kubekey.local/kubesphere/ks-controller-manager:v3.4.1
docker tag dockerhub.kubekey.local/kubesphereio/kubectl:v1.22.0 dockerhub.kubekey.local/kubesphere/kubectl:v1.22.0
docker tag dockerhub.kubekey.local/kubesphereio/notification-manager-operator:v2.3.0 dockerhub.kubekey.local/kubesphere/notification-manager-operator:v2.3.0
docker tag dockerhub.kubekey.local/kubesphereio/prometheus-config-reloader:v0.55.1 dockerhub.kubekey.local/kubesphere/prometheus-config-reloader:v0.55.1
docker tag  dockerhub.kubekey.local/kubesphereio/prometheus:v2.39.1 dockerhub.kubekey.local/prom/prometheus:v2.39.1
docker tag dockerhub.kubekey.local/kubesphereio/notification-tenant-sidecar:v3.2.0 dockerhub.kubekey.local/kubesphere/notification-tenant-sidecar:v3.2.0
docker tag dockerhub.kubekey.local/kubesphereio/alertmanager:v0.23.0  dockerhub.kubekey.local/prom/alertmanager:v0.23.0
docker tag dockerhub.kubekey.local/kubesphereio/notification-manager:v2.3.0  dockerhub.kubekey.local/kubesphere/notification-manager:v2.3.0

docker push dockerhub.kubekey.local/csiplugin/snapshot-controller:v4.0.0
docker push dockerhub.kubekey.local/mirrorgooglecontainers/defaultbackend-amd64:1.4
docker push dockerhub.kubekey.local/kubesphere/kube-state-metrics:v2.6.0
docker push dockerhub.kubekey.local/prom/node-exporter:v1.3.1
docker push dockerhub.kubekey.local/kubesphere/kube-rbac-proxy:v0.11.0
docker push dockerhub.kubekey.local/kubesphere/prometheus-operator:v0.55.1
docker push dockerhub.kubekey.local/kubesphere/ks-apiserver:v3.4.1
docker push dockerhub.kubekey.local/kubesphere/ks-console:v3.4.1
docker push dockerhub.kubekey.local/kubesphere/ks-controller-manager:v3.4.1
docker push dockerhub.kubekey.local/kubesphere/kubectl:v1.22.0
docker push dockerhub.kubekey.local/kubesphere/notification-manager-operator:v2.3.0
docker push dockerhub.kubekey.local/kubesphere/prometheus-config-reloader:v0.55.1
docker push dockerhub.kubekey.local/prom/prometheus:v2.39.1
docker push dockerhub.kubekey.local/kubesphere/notification-tenant-sidecar:v3.2.0
docker push dockerhub.kubekey.local/prom/alertmanager:v0.23.0
docker push dockerhub.kubekey.local/kubesphere/notification-manager:v2.3.0

docker pull dockerhub.kubekey.local/csiplugin/snapshot-controller:v4.0.0
docker pull dockerhub.kubekey.local/mirrorgooglecontainers/defaultbackend-amd64:1.4
docker pull dockerhub.kubekey.local/kubesphere/kube-state-metrics:v2.6.0
docker pull dockerhub.kubekey.local/prom/node-exporter:v1.3.1
docker pull dockerhub.kubekey.local/kubesphere/kube-rbac-proxy:v0.11.0
docker pull dockerhub.kubekey.local/kubesphere/prometheus-operator:v0.55.1
docker pull dockerhub.kubekey.local/kubesphere/ks-apiserver:v3.4.1
docker pull dockerhub.kubekey.local/kubesphere/ks-console:v3.4.1
docker pull dockerhub.kubekey.local/kubesphere/ks-controller-manager:v3.4.1
docker pull dockerhub.kubekey.local/kubesphere/kubectl:v1.22.0
docker pull dockerhub.kubekey.local/kubesphere/notification-manager-operator:v2.3.0
docker pull dockerhub.kubekey.local/kubesphere/prometheus-config-reloader:v0.55.1
docker pull dockerhub.kubekey.local/prom/prometheus:v2.39.1
docker pull dockerhub.kubekey.local/kubesphere/notification-tenant-sidecar:v3.2.0
docker pull dockerhub.kubekey.local/prom/alertmanager:v0.23.0
docker pull dockerhub.kubekey.local/kubesphere/notification-manager:v2.3.0
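The long pull/tag/push list above can be compressed into a loop. The mapping from the kubesphereio project back to each image's original project is the one implied by the tag commands above; the script only echoes the docker commands (a dry run) so the output can be reviewed before running it:

```shell
#!/usr/bin/env bash
REG="dockerhub.kubekey.local"

# Original project for each image, as implied by the tag commands above.
target_project() {
  case "$1" in
    snapshot-controller)                    echo "csiplugin" ;;
    defaultbackend-amd64)                   echo "mirrorgooglecontainers" ;;
    node-exporter|prometheus|alertmanager)  echo "prom" ;;
    *)                                      echo "kubesphere" ;;
  esac
}

images="snapshot-controller:v4.0.0 defaultbackend-amd64:1.4 kube-state-metrics:v2.6.0
node-exporter:v1.3.1 kube-rbac-proxy:v0.11.0 prometheus-operator:v0.55.1
ks-apiserver:v3.4.1 ks-console:v3.4.1 ks-controller-manager:v3.4.1
kubectl:v1.22.0 notification-manager-operator:v2.3.0 prometheus-config-reloader:v0.55.1
prometheus:v2.39.1 notification-tenant-sidecar:v3.2.0 alertmanager:v0.23.0
notification-manager:v2.3.0"

for img in $images; do
  name="${img%%:*}"                     # image name without the tag
  proj="$(target_project "$name")"      # project the pods actually pull from
  echo "docker pull $REG/kubesphereio/$img"
  echo "docker tag  $REG/kubesphereio/$img $REG/$proj/$img"
  echo "docker push $REG/$proj/$img"
done
```

Pipe the output to sh to actually run the commands once they look right.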

