ClickHouse Clustering (6): Learning clickhouse-operator

1. Custom Resource Elements

apiVersion: "clickhouse.altinity.com/v1"
kind: "ClickHouseInstallation"
metadata:
  name: "clickhouse-installation-test"

This is ClickHouseInstallation, the custom resource defined by clickhouse-operator.
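As a quick orientation, a ClickHouseInstallation is managed like any other Kubernetes resource. A minimal sketch of applying and checking one (the file name clickhouse-installation-test.yaml and the namespace test-ns are placeholders, and the operator is assumed to be installed already):

# Apply the ClickHouseInstallation manifest (file name is a placeholder)
kubectl apply -f clickhouse-installation-test.yaml -n test-ns

# The operator registers a CRD for ClickHouseInstallation; list the installations to follow reconciliation
kubectl get clickhouseinstallations -n test-ns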

1.1. .spec.defaults

spec:
  defaults:                # section represents default values for sections below
    replicasUseFQDN: "no"  # should replicas be specified by FQDN in <host></host>
    distributedDDL:        # reference to <yandex><distributed_ddl></distributed_ddl></yandex>
      profile: default
    storageManagement:
      # Specify PVC provisioner.
      # 1. StatefulSet. PVC would be provisioned by the StatefulSet
      # 2. Operator. PVC would be provisioned by the operator
      provisioner: StatefulSet
      # Specify PVC reclaim policy.
      # 1. Retain. Keep PVC from being deleted
      #    Retaining PVC will also keep backing PV from deletion. This is useful in case we need to keep data intact.
      # 2. Delete
      reclaimPolicy: Retain

1.2. .spec.configuration

Represents the sources of the ClickHouse configuration: users, remote servers, and other configuration sections.

1.2.1. .spec.configuration.zookeeper

Represents the <yandex><zookeeper></zookeeper></yandex> configuration section.

    zookeeper:
      nodes:
        - host: zookeeper-0.zookeepers.zoo3ns.svc.cluster.local
          port: 2181
        - host: zookeeper-1.zookeepers.zoo3ns.svc.cluster.local
          port: 2181
        - host: zookeeper-2.zookeepers.zoo3ns.svc.cluster.local
          port: 2181
      session_timeout_ms: 30000
      operation_timeout_ms: 10000
      root: /path/to/zookeeper/node
      identity: user:password
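For reference, in plain ClickHouse server configuration the same information corresponds to a <yandex><zookeeper> XML section. The snippet below is an illustrative sketch of that standard ClickHouse format, not the operator's literal output:

<yandex>
    <zookeeper>
        <node>
            <host>zookeeper-0.zookeepers.zoo3ns.svc.cluster.local</host>
            <port>2181</port>
        </node>
        <node>
            <host>zookeeper-1.zookeepers.zoo3ns.svc.cluster.local</host>
            <port>2181</port>
        </node>
        <node>
            <host>zookeeper-2.zookeepers.zoo3ns.svc.cluster.local</host>
            <port>2181</port>
        </node>
        <session_timeout_ms>30000</session_timeout_ms>
        <operation_timeout_ms>10000</operation_timeout_ms>
        <root>/path/to/zookeeper/node</root>
        <identity>user:password</identity>
    </zookeeper>
</yandex>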

1.2.2. .spec.configuration.profiles

Represents <yandex><profiles></profiles></yandex> settings:

    profiles:
      readonly/readonly: 1

1.2.3. .spec.configuration.users

    users:
      readonly/profile: readonly
      #     <users>
      #        <readonly>
      #          <profile>readonly</profile>
      #        </readonly>
      #     </users>
      test/networks/ip:
        - "127.0.0.1"
        - "::/0"
      #     <users>
      #        <test>
      #          <networks>
      #            <ip>127.0.0.1</ip>
      #            <ip>::/0</ip>
      #          </networks>
      #        </test>
      #     </users>
      test/profile: default
      test/quotas: default

1.2.4. .spec.configuration.settings

    profiles:readonly/readonly: "1"#      <profiles>#        <readonly>#          <readonly>1</readonly>#        </readonly>#      </profiles>default/max_memory_usage: "1000000000"

1.2.5. .spec.configuration.files

The files section allows custom ClickHouse configuration files to be defined in YAML. This can be used to create complex custom configurations.

spec:
  configuration:
    settings:
      dictionaries_config: config.d/*.dict
    files:
      dict_one.dict: |
        <yandex>
          <dictionary>
            <name>one</name>
            <source>
              <clickhouse>
                <host>localhost</host>
                <port>9000</port>
                <user>default</user>
                <password/>
                <db>system</db>
                <table>one</table>
              </clickhouse>
            </source>
            <lifetime>60</lifetime>
            <layout><flat/></layout>
            <structure>
              <id>
                <name>dummy</name>
              </id>
              <attribute>
                <name>one</name>
                <expression>dummy</expression>
                <type>UInt8</type>
                <null_value>0</null_value>
              </attribute>
            </structure>
          </dictionary>
        </yandex>

1.2.6. .spec.configuration.clusters

Cluster configuration.

    clusters:
      - name: all-counts # basic configuration, using layout
        templates:
          podTemplate: clickhouse-v23.8
          dataVolumeClaimTemplate: default-volume-claim
          logVolumeClaimTemplate: default-volume-claim
        schemaPolicy:
          replica: All
          shard: All
        layout:
          shardsCount: 3   # number of shards
          replicasCount: 2 # number of replicas
      - name: shards-only
        templates:
          podTemplate: clickhouse-v23.8
          dataVolumeClaimTemplate: default-volume-claim
          logVolumeClaimTemplate: default-volume-claim
        layout:
          shardsCount: 3
          # replicasCount not specified, assumed = 1, by default
      - name: replicas-only
        templates:
          podTemplate: clickhouse-v23.8
          dataVolumeClaimTemplate: default-volume-claim
          logVolumeClaimTemplate: default-volume-claim
        layout:
          # shardsCount not specified, assumed = 1, by default
          replicasCount: 3
      - name: customized # custom shards and replicas, using the shards and replicas sections
        templates:
          podTemplate: clickhouse-v23.8
          dataVolumeClaimTemplate: default-volume-claim
          logVolumeClaimTemplate: default-volume-claim
        schemaPolicy:
          replica: None
          shard: None
        layout:
          shards:
            - name: shard0
              replicasCount: 3
              weight: 1
              internalReplication: Disabled
              templates:
                podTemplate: clickhouse-v23.8
                dataVolumeClaimTemplate: default-volume-claim
                logVolumeClaimTemplate: default-volume-claim
            - name: shard1
              templates:
                podTemplate: clickhouse-v23.8
                dataVolumeClaimTemplate: default-volume-claim
                logVolumeClaimTemplate: default-volume-claim
              replicas:
                - name: replica0
                - name: replica1
                - name: replica2
            - name: shard2
              replicasCount: 3
              templates:
                podTemplate: clickhouse-v23.8
                dataVolumeClaimTemplate: default-volume-claim
                logVolumeClaimTemplate: default-volume-claim
                replicaServiceTemplate: replica-service-template
              replicas:
                - name: replica0
                  tcpPort: 9000
                  httpPort: 8123
                  interserverHTTPPort: 9009
                  templates:
                    podTemplate: clickhouse-v23.8
                    dataVolumeClaimTemplate: default-volume-claim
                    logVolumeClaimTemplate: default-volume-claim
                    replicaServiceTemplate: replica-service-template
      - name: with-secret
        # Insecure communication.
        # Opens/Closes insecure ports
        insecure: "yes"
        # Secure communication.
        # Opens/Closes secure ports
        # Translates into <secure>1</secure> ClickHouse setting for remote replicas
        secure: "yes"
        # Shared secret value to secure cluster communications
        secret:
          # Auto-generate shared secret value to secure cluster communications
          auto: "True"
          # Cluster shared secret value in plain text
          value: "plaintext secret"
          # Cluster shared secret source
          valueFrom:
            secretKeyRef:
              name: "SecretName"
              key: "Key"
        layout:
          shardsCount: 2
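Once the installation has been reconciled, the resulting layout can be checked from any ClickHouse node. system.clusters is a standard ClickHouse system table; the cluster name all-counts is taken from the example above:

SELECT cluster, shard_num, replica_num, host_name, port
FROM system.clusters
WHERE cluster = 'all-counts'
ORDER BY shard_num, replica_num;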

1.3. .spec.templates

1.3.1. .serviceTemplates

  templates:
    serviceTemplates:
      - name: chi-service-template
        # generateName understands different sets of macroses,
        # depending on the level of the object, for which Service is being created:
        #
        # For CHI-level Service:
        # 1. {chi} - ClickHouseInstallation name
        # 2. {chiID} - short hashed ClickHouseInstallation name (BEWARE, this is an experimental feature)
        #
        # For Cluster-level Service:
        # 1. {chi} - ClickHouseInstallation name
        # 2. {chiID} - short hashed ClickHouseInstallation name (BEWARE, this is an experimental feature)
        # 3. {cluster} - cluster name
        # 4. {clusterID} - short hashed cluster name (BEWARE, this is an experimental feature)
        # 5. {clusterIndex} - 0-based index of the cluster in the CHI (BEWARE, this is an experimental feature)
        #
        # For Shard-level Service:
        # 1. {chi} - ClickHouseInstallation name
        # 2. {chiID} - short hashed ClickHouseInstallation name (BEWARE, this is an experimental feature)
        # 3. {cluster} - cluster name
        # 4. {clusterID} - short hashed cluster name (BEWARE, this is an experimental feature)
        # 5. {clusterIndex} - 0-based index of the cluster in the CHI (BEWARE, this is an experimental feature)
        # 6. {shard} - shard name
        # 7. {shardID} - short hashed shard name (BEWARE, this is an experimental feature)
        # 8. {shardIndex} - 0-based index of the shard in the cluster (BEWARE, this is an experimental feature)
        #
        # For Replica-level Service:
        # 1. {chi} - ClickHouseInstallation name
        # 2. {chiID} - short hashed ClickHouseInstallation name (BEWARE, this is an experimental feature)
        # 3. {cluster} - cluster name
        # 4. {clusterID} - short hashed cluster name (BEWARE, this is an experimental feature)
        # 5. {clusterIndex} - 0-based index of the cluster in the CHI (BEWARE, this is an experimental feature)
        # 6. {shard} - shard name
        # 7. {shardID} - short hashed shard name (BEWARE, this is an experimental feature)
        # 8. {shardIndex} - 0-based index of the shard in the cluster (BEWARE, this is an experimental feature)
        # 9. {replica} - replica name
        # 10. {replicaID} - short hashed replica name (BEWARE, this is an experimental feature)
        # 11. {replicaIndex} - 0-based index of the replica in the shard (BEWARE, this is an experimental feature)
        generateName: "service-{chi}"
        # type ObjectMeta struct from k8s.io/meta/v1
        metadata:
          labels:
            custom.label: "custom.value"
          annotations:
            cloud.google.com/load-balancer-type: "Internal"
            service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
            service.beta.kubernetes.io/azure-load-balancer-internal: "true"
            service.beta.kubernetes.io/openstack-internal-load-balancer: "true"
            service.beta.kubernetes.io/cce-load-balancer-internal-vpc: "true"
        # type ServiceSpec struct from k8s.io/core/v1
        spec:
          ports:
            - name: http
              port: 8123
            - name: client
              port: 9000
          type: LoadBalancer

1.3.2. podTemplates

zone and podDistribution together define how ClickHouse instances are laid out across nodes.

    podTemplates:
      - name: clickhouse-v23.8
        # We may need to label nodes with clickhouse=allow label for this example to run
        # See ./label_nodes.sh for this purpose
        zone:
          key: "clickhouse"
          values:
            - "allow"
        # Shortcut version for AWS installations
        #zone:
        #  values:
        #    - "us-east-1a"
        # Possible values for podDistribution are:
        # Unspecified - empty value
        # ClickHouseAntiAffinity - AntiAffinity by ClickHouse instances.
        #   Pod pushes away other ClickHouse pods, which allows one ClickHouse instance per topologyKey-specified unit
        #   CH - (push away) - CH - (push away) - CH
        # ShardAntiAffinity - AntiAffinity by shard name.
        #   Pod pushes away other pods of the same shard (replicas of this shard),
        #   which allows one replica of a shard instance per topologyKey-specified unit.
        #   Other shards are allowed - it does not push all shards away, but CH-instances of the same shard only.
        #   Used for data loss avoidance - keeps all copies of the shard on different topologyKey-specified units.
        #   shard1,replica1 - (push away) - shard1,replica2 - (push away) - shard1,replica3
        # ReplicaAntiAffinity - AntiAffinity by replica name.
        #   Pod pushes away other pods of the same replica (shards of this replica),
        #   which allows one shard of a replica per topologyKey-specified unit.
        #   Other replicas are allowed - it does not push all replicas away, but CH-instances of the same replica only.
        #   Used to evenly distribute load from "full cluster scan" queries.
        #   shard1,replica1 - (push away) - shard2,replica1 - (push away) - shard3,replica1
        # AnotherNamespaceAntiAffinity - AntiAffinity by "another" namespace.
        #   Pod pushes away pods from another namespace, which allows same-namespace pods per topologyKey-specified unit.
        #   ns1 - (push away) - ns2 - (push away) - ns3
        # AnotherClickHouseInstallationAntiAffinity - AntiAffinity by "another" ClickHouseInstallation name.
        #   Pod pushes away pods from another ClickHouseInstallation,
        #   which allows same-ClickHouseInstallation pods per topologyKey-specified unit.
        #   CHI1 - (push away) - CHI2 - (push away) - CHI3
        # AnotherClusterAntiAffinity - AntiAffinity by "another" cluster name.
        #   Pod pushes away pods from another Cluster,
        #   which allows same-cluster pods per topologyKey-specified unit.
        #   cluster1 - (push away) - cluster2 - (push away) - cluster3
        # MaxNumberPerNode - AntiAffinity by cycle index.
        #   Pod pushes away pods from the same cycle,
        #   which allows to specify maximum number of ClickHouse instances per topologyKey-specified unit.
        #   Used to setup circular replication.
        # NamespaceAffinity - Affinity by namespace.
        #   Pod attracts pods from the same namespace,
        #   which allows pods from same namespace per topologyKey-specified unit.
        #   ns1 + (attracts) + ns1
        # ClickHouseInstallationAffinity - Affinity by ClickHouseInstallation name.
        #   Pod attracts pods from the same ClickHouseInstallation,
        #   which allows pods from the same CHI per topologyKey-specified unit.
        #   CHI1 + (attracts) + CHI1
        # ClusterAffinity - Affinity by cluster name.
        #   Pod attracts pods from the same cluster,
        #   which allows pods from the same Cluster per topologyKey-specified unit.
        #   cluster1 + (attracts) + cluster1
        # ShardAffinity - Affinity by shard name.
        #   Pod attracts pods from the same shard,
        #   which allows pods from the same Shard per topologyKey-specified unit.
        #   shard1 + (attracts) + shard1
        # ReplicaAffinity - Affinity by replica name.
        #   Pod attracts pods from the same replica,
        #   which allows pods from the same Replica per topologyKey-specified unit.
        #   replica1 + (attracts) + replica1
        # PreviousTailAffinity - Affinity to overlap cycles. Used to make cycle pod distribution
        #   cycle head + (attracts to) + previous cycle tail
        podDistribution:
          - type: ShardAntiAffinity
          - type: MaxNumberPerNode
            number: 2
            # Apply podDistribution on per-host basis
            topologyKey: "kubernetes.io/hostname"
            # Apply podDistribution on per-zone basis
            #topologyKey: "kubernetes.io/zone"
        # type ObjectMeta struct {} from k8s.io/meta/v1
        metadata:
          labels:
            a: "b"
        # type PodSpec struct {} from k8s.io/core/v1
        spec:
          containers:
            - name: clickhouse
              image: clickhouse/clickhouse-server:23.8
              volumeMounts:
                - name: default-volume-claim
                  mountPath: /var/lib/clickhouse
              resources:
                requests:
                  memory: "64Mi"
                  cpu: "100m"
                limits:
                  memory: "64Mi"
                  cpu: "100m"
            - name: clickhouse-log
              image: clickhouse/clickhouse-server:23.8
              command:
                - "/bin/sh"
                - "-c"
                - "--"
              args:
                - "while true; do sleep 30; done;"

      # pod template for ClickHouse v23.8
      - name: clickhouse-v23.8
        # type ObjectMeta struct {} from k8s.io/meta/v1
        metadata:
          labels:
            a: "b"
        # type PodSpec struct {} from k8s.io/core/v1
        spec:
          containers:
            - name: clickhouse
              image: clickhouse/clickhouse-server:23.8
              volumeMounts:
                - name: default-volume-claim
                  mountPath: /var/lib/clickhouse
              resources:
                requests:
                  memory: "64Mi"
                  cpu: "100m"
                limits:
                  memory: "64Mi"
                  cpu: "100m"
            - name: clickhouse-log
              image: clickhouse/clickhouse-server:23.8
              command:
                - "/bin/sh"
                - "-c"
                - "--"
              args:
                - "while true; do sleep 30; done;"

2. Creating Tables

2.1. Manifest

Create a ClickHouse cluster with two shards and two replicas.

apiVersion: "clickhouse.altinity.com/v1"
kind: "ClickHouseInstallation"metadata:name: "repl-05"spec:defaults:templates: dataVolumeClaimTemplate: defaultpodTemplate: clickhouse:19.6configuration:zookeeper:nodes:- host: zookeeper.zoo1nsclusters:- name: replicatedlayout:shardsCount: 2replicasCount: 2templates:volumeClaimTemplates:- name: defaultspec:accessModes:- ReadWriteOnceresources:requests:storage: 500MipodTemplates:- name: clickhouse:19.6spec:containers:- name: clickhouse-podimage: clickhouse/clickhouse-server:23.8

2.2. Replicated table setup

2.2.1. 宏

The operator provides a set of macros:

  1. {installation} -- ClickHouse Installation name
  2. {cluster} -- primary cluster name
  3. {replica} -- replica name in the cluster, maps to pod service name
  4. {shard} -- shard id

ClickHouse also supports the internal macros {database} and {table}, which map to the current database and table respectively.
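The values substituted into these macros can be inspected on any node; system.macros is a standard ClickHouse system table:

SELECT macro, substitution FROM system.macros;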

2.2.2. Create replicated table

Now we can create a replicated table using the specified macros:

CREATE TABLE events_local on cluster '{cluster}' (
    event_date  Date,
    event_type  Int32,
    article_id  Int32,
    title       String
) engine=ReplicatedMergeTree('/clickhouse/{installation}/{cluster}/tables/{shard}/{database}/{table}', '{replica}')
PARTITION BY toYYYYMM(event_date)
ORDER BY (event_type, article_id);
The first parameter is the shard's ZooKeeper path, which generally follows the pattern /clickhouse/table/{shard}/{table_name}.
The second parameter is the replica name; replicas within the same shard must have different names.
CREATE TABLE events on cluster '{cluster}' AS events_local
ENGINE = Distributed('{cluster}', default, events_local, rand());
  • '{cluster}': the cluster name, which determines how data is distributed across the cluster.
  • default: the database name; in this example the table is created over the default database.
  • events_local: the local table that actually stores the data. When the events table is queried, data is read from the events_local tables across the cluster.
  • rand(): the sharding expression; rand() spreads inserted rows randomly across the shards, which helps keep the data evenly distributed.

We can generate some data:

INSERT INTO events SELECT today(), rand()%3, number, 'my title' FROM numbers(100);

And check how the data is distributed over the cluster:

SELECT count() FROM events;
SELECT count() FROM events_local;
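To see how the rows landed on each shard, the Distributed table exposes the virtual column _shard_num, so a per-shard breakdown can be obtained with a query like:

SELECT _shard_num, count()
FROM events
GROUP BY _shard_num
ORDER BY _shard_num;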

3. Users

3.1. The 'default' user

The 'default' user is used to connect to the ClickHouse instance from within running pods and is also used for distributed queries. It is deployed with an empty password, which has long been the out-of-the-box default for ClickHouse installations.

To keep it secure, the operator applies network security rules that restrict connections to the pods running the ClickHouse cluster and nothing else.

The operator generates such a users.xml for a cluster with two nodes. In that configuration, the 'default' user cannot connect from outside the cluster.
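As a rough illustration only (not the operator's literal output), the generated restrictions have the general shape below; the installation name repl-05, the namespace test-ns, and the exact host pattern are placeholders:

<yandex>
    <users>
        <default>
            <networks>
                <!-- placeholder pattern: only this installation's own pod host names and localhost are allowed -->
                <host_regexp>(chi-repl-05-[^.]*)\.test-ns\.svc\.cluster\.local$</host_regexp>
                <ip>127.0.0.1</ip>
                <ip>::1</ip>
            </networks>
        </default>
    </users>
</yandex>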

3.2. Modifying the operator configuration

clickhouse:
  access:
    secret:
      # Empty `namespace` means that k8s secret would be looked
      # in the same namespace where the operator's pod is running.
      namespace: ""
      name: "clickhouse-operator"

The following example shows a secret:

apiVersion: v1
kind: Secret
metadata:
  name: clickhouse-operator
type: Opaque
stringData:
  username: clickhouse_operator
  password: chpassword
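Equivalently, the same secret could be created imperatively; the namespace (kube-system here) is an assumption and should match wherever the operator runs:

kubectl create secret generic clickhouse-operator \
  --from-literal=username=clickhouse_operator \
  --from-literal=password=chpassword \
  -n kube-system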

3.3. Using hashed passwords

Passwords in a ClickHouseInstallation can be specified in plain text, as SHA256 hashes, or as double SHA1 hashes.

When a password is specified in plain text, the operator hashes it before deploying it to ClickHouse, but it remains in insecure plain text in the ClickHouseInstallation itself.

Altinity recommends providing hashes explicitly, as follows:

spec:
  useTemplates:
    - name: clickhouse-version
  configuration:
    users:
      user1/password: pwduser1  # This will be hashed in ClickHouse config files, but this is NOT RECOMMENDED
      user2/password_sha256_hex: 716b36073a90c6fe1d445ac1af85f4777c5b7a155cea359961826a030513e448
      user3/password_double_sha1_hex: cbe205a7351dd15397bf423957559512bd4be395
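A sketch of how such hash values can be produced with standard shell tools; the password my-password is just a placeholder:

# sha256 hex of a password (for password_sha256_hex)
echo -n "my-password" | sha256sum | awk '{print $1}'

# double sha1 hex of a password (for password_double_sha1_hex): sha1 over the binary sha1 digest
echo -n "my-password" | sha1sum | cut -d' ' -f1 | xxd -r -p | sha1sum | awk '{print $1}'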

3.4. Using secrets

spec:
  configuration:
    users:
      user1/password:
        valueFrom:
          secretKeyRef:
            name: clickhouse-secret
            key: pwduser1
      user2/password_sha256_hex:
        valueFrom:
          secretKeyRef:
            name: clickhouse-secret
            key: pwduser2
      user3/password_double_sha1_hex:
        valueFrom:
          secretKeyRef:
            name: clickhouse-secret
            key: pwduser3
apiVersion: v1
kind: Secret
metadata:
  name: clickhouse-secret
type: Opaque
stringData:
  pwduser1: pwduser1
  pwduser2: e106728c3541ec3694822a29d4b8b5f1f473508adc148fcb58a60c42bcf3234c
  pwduser3: cbe205a7351dd15397bf423957559512bd4be395

3.5. Securing the 'default' user

Although the 'default' user is protected by network rules, information security teams often do not allow passwordless operation. The 'default' user's password can be changed just like any other user's. However, ClickHouse also uses the 'default' user to run distributed queries; if its password changes, distributed queries may stop working.

To keep distributed queries running without exposing the password, configure ClickHouse to use a secret token for inter-cluster communication instead of the 'default' user's credentials.

3.5.1. 'auto' token

The following example shows how to have ClickHouse auto-generate the secret token. This is the simplest and recommended approach.

spec:
  configuration:
    users:
      default/password_sha256_hex: 716b36073a90c6fe1d445ac1af85f4777c5b7a155cea359961826a030513e448
    clusters:
      - name: default
        secret:
          auto: "true"

3.5.2. Custom token

The following example shows how to define a token.

spec:
  configuration:
    users:
      default/password_sha256_hex: 716b36073a90c6fe1d445ac1af85f4777c5b7a155cea359961826a030513e448
    clusters:
      - name: "default"
        secret:
          value: "my_secret"

3.5.3. Custom token from Kubernetes secret

The following example shows how to define a token within a secret.

spec:
  configuration:
    users:
      default/password_sha256_hex: 716b36073a90c6fe1d445ac1af85f4777c5b7a155cea359961826a030513e448
    clusters:
      - name: "default"
        secret:
          valueFrom:
            secretKeyRef:
              name: "secure-inter-cluster-communications"
              key: "secret"

4. ZooKeeper

This section describes how to set up ZooKeeper in a k8s environment.

Zookeeper installation is available in two options:

  1. Quick start - just run it quickly and ask no questions
  2. Advanced setup - set up internal details, such as storage class, number of replicas, etc.

During ZooKeeper installation the following items are created/configured:

  1. [OPTIONAL] Create separate namespace to run Zookeeper in
  2. Create k8s resources (optionally, within namespace):
  • Service - used to provide central access point to Zookeeper
  • Headless Service - used to provide DNS names
  • Disruption Budget - used to specify max number of offline pods
  • [OPTIONAL] Storage Class - used to specify storage class to be used by Zookeeper for data storage
  • Stateful Set - used to manage and scale sets of pods

4.1. Quick start

Quick start is represented in two flavors:

  1. With persistent volume - good for AWS. Files are located in deploy/zookeeper/quick-start-persistent-volume
  2. With local emptyDir storage - good for standalone local run, however has no true persistence.
    Files are located in deploy/zookeeper/quick-start-volume-emptyDir

Each quick start flavor provides the following installation options:

  1. 1-node Zookeeper cluster (zookeeper-1- files). No failover provided.
  2. 3-node Zookeeper cluster (zookeeper-3- files). Failover provided.

If you'd like to test with AWS or any other cloud provider, we recommend going with deploy/zookeeper/quick-start-persistent-volume persistent storage. For a local test, you may prefer deploy/zookeeper/quick-start-volume-emptyDir with emptyDir.

4.1.1. Script-based Installation

In this example we'll go with simple 1-node Zookeeper cluster on AWS and pick deploy/zookeeper/quick-start-persistent-volume. Both create and delete shell scripts are available for simplification.

4.1.2. Manual Installation

In case you'd like to deploy Zookeeper manually, the following steps should be performed:

4.1.3. Namespace

Create namespace

kubectl create namespace zoo1ns

4.1.4. Zookeeper

Deploy Zookeeper into this namespace

kubectl apply -f zookeeper-1-node.yaml -n zoo1ns

Now ZooKeeper should be up and running. Let's explore the ZooKeeper cluster.
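A quick way to check that the deployment is healthy and that the node actually serves requests; the zkServer.sh location inside the container is an assumption and may differ between ZooKeeper images:

# List the pods and services created by the manifest
kubectl get pods,svc -n zoo1ns

# Ask the ZooKeeper server for its status (binary path inside the container may vary per image)
kubectl exec -n zoo1ns zookeeper-0 -- bash -c "zkServer.sh status"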

IMPORTANT: the quick-start ZooKeeper installations are mainly for test purposes.
For fine-tuned Zookeeper setup please refer to advanced setup options.

4.2. Advanced setup

Advanced files are located in the deploy/zookeeper/advanced folder. All resources are separated into different files, so it is easy to modify them and set up the required options.

Advanced setup is available in two options:

  1. With persistent volume
  2. With emptyDir volume

Each of these options has both create and delete scripts provided

  1. Persistent volume create and delete scripts
  2. EmptyDir volume create and delete scripts

Step-by-step explanations:

4.2.1. Namespace

Create the namespace in which all the remaining resources will be created

kubectl create namespace zoons

4.2.2. Zookeeper Service

Create service. This service provides DNS name for client access to all Zookeeper nodes.

kubectl apply -f 01-service-client-access.yaml -n zoons

Should have as a result

service/zookeeper created

4.2.3. Zookeeper Headless Service

Create headless service. This service provides DNS names for all Zookeeper nodes

kubectl apply -f 02-headless-service.yaml -n zoons

Should have as a result

service/zookeeper-nodes created

4.2.4. Disruption Budget

Create the budget. The Disruption Budget tells k8s how many ZooKeeper nodes can be offline at any time

kubectl apply -f 03-pod-disruption-budget.yaml -n zoons

Should have as a result

poddisruptionbudget.policy/zookeeper-pod-distribution-budget created

4.2.5. Storage Class

This part is not that straightforward and may require communication with k8s instance administrator.

First of all, we need to decide whether ZooKeeper will use a Persistent Volume as storage or stick to a simpler Volume (in this doc the emptyDir type is used)

In case we'd prefer to stick with simpler solution and go with Volume of type emptyDir, we need to go with emptyDir StatefulSet config 05-stateful-set-volume-emptyDir.yaml as described in next Stateful Set unit. Just move to it.

In case we'd prefer to go with Persistent Volume storage, we need to go with Persistent Volume StatefulSet config 05-stateful-set-persistent-volume.yaml

Shortly, a Storage Class is used to bind together Persistent Volumes, which are created either manually by the k8s admin or automatically by a Provisioner. In any case, Persistent Volumes are provided externally to the application being deployed into k8s, so the application has to know the Storage Class Name to ask for in its claim for a new persistent volume - the Persistent Volume Claim. This Storage Class Name should be obtained from the k8s admin and written as the application's Persistent Volume Claim .spec.volumeClaimTemplates.storageClassName parameter in the StatefulSet configuration: either the StatefulSet manifest with emptyDir 05-stateful-set-volume-emptyDir.yaml and/or the StatefulSet manifest with Persistent Volume 05-stateful-set-persistent-volume.yaml.
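For reference, a minimal sketch of what such a StorageClass resource could look like; the provisioner is cloud-specific and shown here for AWS EBS purely as an assumption, so the real value must come from your k8s admin:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: storageclass-zookeeper
provisioner: kubernetes.io/aws-ebs   # cloud-specific; confirm with your k8s admin
parameters:
  type: gp2
reclaimPolicy: Retain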

4.2.6. Stateful Set

Edit StatefulSet manifest with emptyDir 05-stateful-set-volume-emptyDir.yaml and/or StatefulSet manifest with Persistent Volume 05-stateful-set-persistent-volume.yaml according to your Storage Preferences.

In case we'd go with a Volume of type emptyDir, ensure .spec.template.spec.containers.volumes is in place and looks like the following:

volumes:
  - name: datadir-volume
    emptyDir:
      medium: "" # accepted values: empty str (means node's default medium) or Memory
      sizeLimit: 1Gi

and ensure .spec.volumeClaimTemplates is commented.

In case we'd go with Persistent Volume storage, ensure .spec.template.spec.containers.volumes is commented and ensure .spec.volumeClaimTemplates is uncommented.

volumeClaimTemplates:
  - metadata:
      name: datadir-volume
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
      ## storageClassName has to be coordinated with k8s admin and has to be created as a `kind: StorageClass` resource
      storageClassName: storageclass-zookeeper

and ensure storageClassName (storageclass-zookeeper in this example) is specified correctly, as described in Storage Class section

Once the .yaml file is ready, just apply it with kubectl

kubectl apply -f 05-stateful-set.yaml -n zoons

Should have as a result

statefulset.apps/zookeeper-node created

Now we can take a look into Zookeeper cluster deployed in k8s:

4.3. Explore Zookeeper cluster

4.3.1. DNS names

We expect to have a ZooKeeper cluster of 3 pods inside the zoons namespace, named:

zookeeper-0
zookeeper-1
zookeeper-2

Those pods are expected to have short DNS names as:

zookeeper-0.zookeepers.zoons
zookeeper-1.zookeepers.zoons
zookeeper-2.zookeepers.zoons

where zookeepers is the name of the ZooKeeper headless service and zoons is the name of the ZooKeeper namespace.

and full DNS names (FQDN) as:

zookeeper-0.zookeepers.zoons.svc.cluster.local
zookeeper-1.zookeepers.zoons.svc.cluster.local
zookeeper-2.zookeepers.zoons.svc.cluster.local
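These names can be verified from inside the cluster with a throwaway pod; using busybox's nslookup here is an assumption about available tooling:

kubectl run -n zoons dns-check --rm -it --restart=Never --image=busybox -- \
  nslookup zookeeper-0.zookeepers.zoons.svc.cluster.local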

4.3.2. Resources

List pods in Zookeeper's namespace

kubectl get pod -n zoons

Expected output is like the following

NAME             READY   STATUS    RESTARTS   AGE
zookeeper-0      1/1     Running   0          9m2s
zookeeper-1      1/1     Running   0          9m2s
zookeeper-2      1/1     Running   0          9m2s

List services

kubectl get service -n zoons

Expected output is like the following

NAME                   TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
zookeeper              ClusterIP   10.108.36.44   <none>        2181/TCP                     168m
zookeepers             ClusterIP   None           <none>        2888/TCP,3888/TCP            31m

List statefulsets

kubectl get statefulset -n zoons

Expected output is like the following

NAME            READY   AGE
zookeepers      3/3     10m

If everything looks fine, the ZooKeeper cluster is up and running.
