Milvus vector database on Kubernetes: Helm deployment wired to external components

Background: I hit a lot of pitfalls along the way, from trivial to difficult. This article tries to lay out as many of the pitfalls in the installation and deployment process as possible.
If you are reading this around mid-July 2024, you will find that pulling images from Docker Hub fails (at least from mainland China). There is no way around it: use a proxy.

Official deployment guide: https://milvus.io/docs/install_cluster-helm.md
1. If you just want a plain deployment, without wiring up external components, use the online deployment directly. But mind the problem above: pull the required images through a proxy first!
The required images are:

milvusdb/milvus: 
milvusdb/milvus-config-tool:
docker.io/milvusdb/etcd:
zilliz/attu:
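To work around the Docker Hub problem, the images can be pulled in advance on a machine with proxy access and exported for the cluster nodes. A minimal sketch; the tags are assumptions taken from the chart values later in this article, and the script skips gracefully when `docker` is not installed:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Tags below are assumptions taken from the chart values later in this article.
IMAGES=(
  "milvusdb/milvus:v2.4.5"
  "milvusdb/milvus-config-tool:v0.1.2"
  "docker.io/milvusdb/etcd:3.5.5-r4"
  "zilliz/attu:v2.3.10"
)

command -v docker >/dev/null || { echo "docker not found, skipping pull"; exit 0; }

for img in "${IMAGES[@]}"; do
  docker pull "$img"
  # Export each image so it can be copied to cluster nodes without registry access.
  docker save "$img" -o "$(echo "$img" | tr '/:' '_').tar"
done
```

On each node, load the tarballs with `docker load -i <file>.tar` (or `ctr -n k8s.io images import <file>.tar` for containerd-based clusters).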

values.yaml

## Enable or disable Milvus Cluster mode
cluster:
  enabled: true

image:
  all:
    repository: milvusdb/milvus
    tag: v2.4.5
    pullPolicy: IfNotPresent
    ## Optionally specify an array of imagePullSecrets.
    ## Secrets must be manually created in the namespace.
    ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
    ##
    # pullSecrets:
    #   - myRegistryKeySecretName
  tools:
    repository: milvusdb/milvus-config-tool
    tag: v0.1.2
    pullPolicy: IfNotPresent

# Global node selector
# If set, this will apply to all milvus components
# Individual components can be set to a different node selector
nodeSelector: {}

# Global tolerations
# If set, this will apply to all milvus components
# Individual components can be set to a different tolerations
tolerations: []

# Global affinity
# If set, this will apply to all milvus components
# Individual components can be set to a different affinity
affinity: {}

# Global labels and annotations
# If set, this will apply to all milvus components
labels: {}
annotations: {}

# Extra configs for milvus.yaml
# If set, this config will merge into milvus.yaml
# Please follow the config structure in the milvus.yaml
# at https://github.com/milvus-io/milvus/blob/master/configs/milvus.yaml
# Note: this config will be the top priority which will override the config
# in the image and helm chart.
extraConfigFiles:
  user.yaml: |+
    #    For example enable rest http for milvus proxy
    #    proxy:
    #      http:
    #        enabled: true
    #      maxUserNum: 100
    #      maxRoleNum: 10
    ##  Enable tlsMode and set the tls cert and key
    #  tls:
    #    serverPemPath: /etc/milvus/certs/tls.crt
    #    serverKeyPath: /etc/milvus/certs/tls.key
    #   common:
    #     security:
    #       tlsMode: 1

## Expose the Milvus service to be accessed from outside the cluster (LoadBalancer service).
## or access it from within the cluster (ClusterIP service). Set the service type and the port to serve it.
## ref: http://kubernetes.io/docs/user-guide/services/
##
service:
  type: NodePort
  port: 19530
  portName: milvus
  nodePort: ""
  annotations: {}
  labels: {}

  ## List of IP addresses at which the Milvus service is available
  ## Ref: https://kubernetes.io/docs/user-guide/services/#external-ips
  ##
  externalIPs: []
  #   - externalIp1

  # LoadBalancerSourcesRange is a list of allowed CIDR values, which are combined with ServicePort to
  # set allowed inbound rules on the security group assigned to the master load balancer
  loadBalancerSourceRanges:
    - 0.0.0.0/0
  # Optionally assign a known public LB IP
  # loadBalancerIP: 1.2.3.4

ingress:
  enabled: false
  annotations:
    # Annotation example: set nginx ingress type
    # kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: GRPC
    nginx.ingress.kubernetes.io/listen-ports-ssl: '[19530]'
    nginx.ingress.kubernetes.io/proxy-body-size: 4m
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
  labels: {}
  rules:
    - host: "milvus-example.local"
      path: "/"
      pathType: "Prefix"
    # - host: "milvus-example2.local"
    #   path: "/otherpath"
    #   pathType: "Prefix"
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - milvus-example.local

serviceAccount:
  create: false
  name:
  annotations:
  labels:

metrics:
  enabled: true

  serviceMonitor:
    # Set this to `true` to create ServiceMonitor for Prometheus operator
    enabled: false
    interval: "30s"
    scrapeTimeout: "10s"
    # Additional labels that can be used so ServiceMonitor will be discovered by Prometheus
    additionalLabels: {}

livenessProbe:
  enabled: true
  initialDelaySeconds: 90
  periodSeconds: 30
  timeoutSeconds: 5
  successThreshold: 1
  failureThreshold: 5

readinessProbe:
  enabled: true
  initialDelaySeconds: 90
  periodSeconds: 10
  timeoutSeconds: 5
  successThreshold: 1
  failureThreshold: 5

log:
  level: "info"
  file:
    maxSize: 300    # MB
    maxAge: 10    # day
    maxBackups: 20
  format: "text"    # text/json

  persistence:
    mountPath: "/milvus/logs"
    ## If true, create/use a Persistent Volume Claim
    ## If false, use emptyDir
    ##
    enabled: false
    annotations:
      helm.sh/resource-policy: keep
    persistentVolumeClaim:
      existingClaim: ""
      ## Milvus Logs Persistent Volume Storage Class
      ## If defined, storageClassName: <storageClass>
      ## If set to "-", storageClassName: "", which disables dynamic provisioning
      ## If undefined (the default) or set to null, no storageClassName spec is
      ##   set, choosing the default provisioner.
      ## ReadWriteMany access mode required for milvus cluster.
      ##
      storageClass:
      accessModes: ReadWriteMany
      size: 10Gi
      subPath: ""

## Heaptrack traces all memory allocations and annotates these events with stack traces.
## See more: https://github.com/KDE/heaptrack
## Enabling heaptrack in production is not recommended.
heaptrack:
  image:
    repository: milvusdb/heaptrack
    tag: v0.1.0
    pullPolicy: IfNotPresent

standalone:
  replicas: 1  # Run standalone mode with replication disabled
  resources: {}
  # Set local storage size in resources
  # resources:
  #   limits:
  #     ephemeral-storage: 100Gi
  nodeSelector: {}
  affinity: {}
  tolerations: []
  extraEnv: []
  heaptrack:
    enabled: false
  disk:
    enabled: true
    size:
      enabled: false  # Enable local storage size limit
  profiling:
    enabled: false  # Enable live profiling

  ## Default message queue for milvus standalone
  ## Supported value: rocksmq, natsmq, pulsar and kafka
  messageQueue: rocksmq

  persistence:
    mountPath: "/var/lib/milvus"
    ## If true, alertmanager will create/use a Persistent Volume Claim
    ## If false, use emptyDir
    ##
    enabled: true
    annotations:
      helm.sh/resource-policy: keep
    persistentVolumeClaim:
      existingClaim: ""
      ## Milvus Persistent Volume Storage Class
      ## If defined, storageClassName: <storageClass>
      ## If set to "-", storageClassName: "", which disables dynamic provisioning
      ## If undefined (the default) or set to null, no storageClassName spec is
      ##   set, choosing the default provisioner.
      ##
      storageClass: "csi-driver-s3"
      accessModes: ReadWriteOnce
      size: 50Gi
      subPath: ""

proxy:
  enabled: true
  # You can set the number of replicas to -1 to remove the replicas field in case you want to use HPA
  replicas: 1
  resources: {}
  nodeSelector: {}
  affinity: {}
  tolerations: []
  extraEnv: []
  heaptrack:
    enabled: false
  profiling:
    enabled: false  # Enable live profiling
  http:
    enabled: true  # whether to enable http rest server
    debugMode:
      enabled: false
  # Mount a TLS secret into proxy pod
  tls:
    enabled: false
## when enabling proxy.tls, all items below should be uncommented and the key and crt values should be populated.
#    enabled: true
#    secretName: milvus-tls
## expecting base64 encoded values here: i.e. $(cat tls.crt | base64 -w 0) and $(cat tls.key | base64 -w 0)
#    key: LS0tLS1CRUdJTiBQU--REDUCT
#    crt: LS0tLS1CRUdJTiBDR--REDUCT
#  volumes:
#  - secret:
#      secretName: milvus-tls
#    name: milvus-tls
#  volumeMounts:
#  - mountPath: /etc/milvus/certs/
#    name: milvus-tls

  # Deployment strategy, default is RollingUpdate
  # Ref: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-update-deployment
  strategy: {}

rootCoordinator:
  enabled: true
  # You can set the number of replicas greater than 1, only if enable active standby
  replicas: 1  # Run Root Coordinator mode with replication disabled
  resources: {}
  nodeSelector: {}
  affinity: {}
  tolerations: []
  extraEnv: []
  heaptrack:
    enabled: false
  profiling:
    enabled: false  # Enable live profiling
  activeStandby:
    enabled: true  # Enable active-standby when you set multiple replicas for root coordinator
  # Deployment strategy, default is RollingUpdate
  # Ref: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-update-deployment
  strategy: {}
  service:
    port: 53100
    annotations: {}
    labels: {}
    clusterIP: ""

queryCoordinator:
  enabled: true
  # You can set the number of replicas greater than 1, only if enable active standby
  replicas: 1  # Run Query Coordinator mode with replication disabled
  resources: {}
  nodeSelector: {}
  affinity: {}
  tolerations: []
  extraEnv: []
  heaptrack:
    enabled: false
  profiling:
    enabled: false  # Enable live profiling
  activeStandby:
    enabled: true  # Enable active-standby when you set multiple replicas for query coordinator
  # Deployment strategy, default is RollingUpdate
  # Ref: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-update-deployment
  strategy: {}
  service:
    port: 19531
    annotations: {}
    labels: {}
    clusterIP: ""

queryNode:
  enabled: true
  # You can set the number of replicas to -1 to remove the replicas field in case you want to use HPA
  replicas: 1
  resources: {}
  # Set local storage size in resources
  # resources:
  #   limits:
  #     ephemeral-storage: 100Gi
  nodeSelector: {}
  affinity: {}
  tolerations: []
  extraEnv: []
  heaptrack:
    enabled: false
  disk:
    enabled: true  # Enable querynode load disk index, and search on disk index
    size:
      enabled: false  # Enable local storage size limit
  profiling:
    enabled: false  # Enable live profiling
  # Deployment strategy, default is RollingUpdate
  # Ref: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-update-deployment
  strategy: {}

indexCoordinator:
  enabled: true
  # You can set the number of replicas greater than 1, only if enable active standby
  replicas: 1   # Run Index Coordinator mode with replication disabled
  resources: {}
  nodeSelector: {}
  affinity: {}
  tolerations: []
  extraEnv: []
  heaptrack:
    enabled: false
  profiling:
    enabled: false  # Enable live profiling
  activeStandby:
    enabled: true  # Enable active-standby when you set multiple replicas for index coordinator
  # Deployment strategy, default is RollingUpdate
  # Ref: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-update-deployment
  strategy: {}
  service:
    port: 31000
    annotations: {}
    labels: {}
    clusterIP: ""

indexNode:
  enabled: true
  # You can set the number of replicas to -1 to remove the replicas field in case you want to use HPA
  replicas: 1
  resources: {}
  # Set local storage size in resources
  # limits:
  #    ephemeral-storage: 100Gi
  nodeSelector: {}
  affinity: {}
  tolerations: []
  extraEnv: []
  heaptrack:
    enabled: false
  profiling:
    enabled: false  # Enable live profiling
  disk:
    enabled: true  # Enable index node build disk vector index
    size:
      enabled: false  # Enable local storage size limit
  # Deployment strategy, default is RollingUpdate
  # Ref: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-update-deployment
  strategy: {}

dataCoordinator:
  enabled: true
  # You can set the number of replicas greater than 1, only if enable active standby
  replicas: 1           # Run Data Coordinator mode with replication disabled
  resources: {}
  nodeSelector: {}
  affinity: {}
  tolerations: []
  extraEnv: []
  heaptrack:
    enabled: false
  profiling:
    enabled: false  # Enable live profiling
  activeStandby:
    enabled: true  # Enable active-standby when you set multiple replicas for data coordinator
  # Deployment strategy, default is RollingUpdate
  # Ref: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-update-deployment
  strategy: {}
  service:
    port: 13333
    annotations: {}
    labels: {}
    clusterIP: ""

dataNode:
  enabled: true
  # You can set the number of replicas to -1 to remove the replicas field in case you want to use HPA
  replicas: 1
  resources: {}
  nodeSelector: {}
  affinity: {}
  tolerations: []
  extraEnv: []
  heaptrack:
    enabled: false
  profiling:
    enabled: false  # Enable live profiling
  # Deployment strategy, default is RollingUpdate
  # Ref: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-update-deployment
  strategy: {}

## mixCoordinator contains all coord
## If you want to use mixcoord, enable this and disable all of other coords
mixCoordinator:
  enabled: false
  # You can set the number of replicas greater than 1, only if enable active standby
  replicas: 1           # Run Mixture Coordinator mode with replication disabled
  resources: {}
  nodeSelector: {}
  affinity: {}
  tolerations: []
  extraEnv: []
  heaptrack:
    enabled: false
  profiling:
    enabled: false  # Enable live profiling
  activeStandby:
    enabled: true  # Enable active-standby when you set multiple replicas for Mixture coordinator
  # Deployment strategy, default is RollingUpdate
  # Ref: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#rolling-update-deployment
  strategy: {}
  service:
    annotations: {}
    labels: {}
    clusterIP: ""

attu:
  enabled: true
  name: attu
  image:
    repository: zilliz/attu
    tag: v2.3.10
    pullPolicy: IfNotPresent
  service:
    annotations: {}
    labels: {}
    type: NodePort
    port: 3000
    # loadBalancerIP: ""
  resources: {}
  podLabels: {}
  ingress:
    enabled: false
    annotations: {}
    # Annotation example: set nginx ingress type
    # kubernetes.io/ingress.class: nginx
    labels: {}
    hosts:
      - milvus-attu.local
    tls: []
    #  - secretName: chart-attu-tls
    #    hosts:
    #      - milvus-attu.local

## Configuration values for the minio dependency
## ref: https://github.com/zilliztech/milvus-helm/blob/master/charts/minio/README.md
##
minio:
  enabled: false
  name: minio
  mode: distributed
  image:
    tag: "RELEASE.2023-03-20T20-16-18Z"
    pullPolicy: IfNotPresent
  accessKey: minioadmin
  secretKey: minioadmin
  existingSecret: ""
  bucketName: "milvus-bucket"
  rootPath: file
  useIAM: false
  iamEndpoint: ""
  region: ""
  useVirtualHost: false
  podDisruptionBudget:
    enabled: false
  resources:
    requests:
      memory: 2Gi
  service:
    type: ClusterIP
    port: 9000
  persistence:
    enabled: true
    existingClaim: ""
    storageClass: "csi-driver-s3"
    accessMode: ReadWriteOnce
    size: 500Gi
  livenessProbe:
    enabled: true
    initialDelaySeconds: 5
    periodSeconds: 5
    timeoutSeconds: 5
    successThreshold: 1
    failureThreshold: 5
  readinessProbe:
    enabled: true
    initialDelaySeconds: 5
    periodSeconds: 5
    timeoutSeconds: 1
    successThreshold: 1
    failureThreshold: 5
  startupProbe:
    enabled: true
    initialDelaySeconds: 0
    periodSeconds: 10
    timeoutSeconds: 5
    successThreshold: 1
    failureThreshold: 60

## Configuration values for the etcd dependency
## ref: https://artifacthub.io/packages/helm/bitnami/etcd
##
etcd:
  enabled: false
  name: etcd
  replicaCount: 3
  pdb:
    create: false
  image:
    repository: "milvusdb/etcd"
    tag: "3.5.5-r4"
    pullPolicy: IfNotPresent
  service:
    type: ClusterIP
    port: 2379
    peerPort: 2380
  auth:
    rbac:
      enabled: false
  persistence:
    enabled: true
    storageClass: "csi-driver-s3"
    accessMode: ReadWriteOnce
    size: 10Gi
  ## Change default timeout periods to mitigate zombie probe process
  livenessProbe:
    enabled: true
    timeoutSeconds: 10
  readinessProbe:
    enabled: true
    periodSeconds: 20
    timeoutSeconds: 10
  ## Enable auto compaction
  ## compaction by every 1000 revision
  ##
  autoCompactionMode: revision
  autoCompactionRetention: "1000"
  ## Increase default quota to 4G
  ##
  extraEnvVars:
    - name: ETCD_QUOTA_BACKEND_BYTES
      value: "4294967296"
    - name: ETCD_HEARTBEAT_INTERVAL
      value: "500"
    - name: ETCD_ELECTION_TIMEOUT
      value: "2500"

## Configuration values for the pulsar dependency
## ref: https://github.com/apache/pulsar-helm-chart
##
pulsar:
  enabled: false
  name: pulsar
  fullnameOverride: ""
  persistence: true
  maxMessageSize: "5242880"  # 5 * 1024 * 1024 Bytes, Maximum size of each message in pulsar.
  rbac:
    enabled: false
    psp: false
    limit_to_namespace: true
  affinity:
    anti_affinity: false
  ## enableAntiAffinity: no
  components:
    zookeeper: true
    bookkeeper: true
    # bookkeeper - autorecovery
    autorecovery: true
    broker: true
    functions: false
    proxy: true
    toolset: false
    pulsar_manager: false
  monitoring:
    prometheus: false
    grafana: false
    node_exporter: false
    alert_manager: false
  images:
    broker:
      repository: apachepulsar/pulsar
      pullPolicy: IfNotPresent
      tag: 2.8.2
    autorecovery:
      repository: apachepulsar/pulsar
      tag: 2.8.2
      pullPolicy: IfNotPresent
    zookeeper:
      repository: apachepulsar/pulsar
      pullPolicy: IfNotPresent
      tag: 2.8.2
    bookie:
      repository: apachepulsar/pulsar
      pullPolicy: IfNotPresent
      tag: 2.8.2
    proxy:
      repository: apachepulsar/pulsar
      pullPolicy: IfNotPresent
      tag: 2.8.2
    pulsar_manager:
      repository: apachepulsar/pulsar-manager
      pullPolicy: IfNotPresent
      tag: v0.1.0
  zookeeper:
    resources:
      requests:
        memory: 1024Mi
        cpu: 0.3
    configData:
      PULSAR_MEM: >
        -Xms1024m
        -Xmx1024m
      PULSAR_GC: >
        -Dcom.sun.management.jmxremote
        -Djute.maxbuffer=10485760
        -XX:+ParallelRefProcEnabled
        -XX:+UnlockExperimentalVMOptions
        -XX:+DoEscapeAnalysis
        -XX:+DisableExplicitGC
        -XX:+PerfDisableSharedMem
        -Dzookeeper.forceSync=no
    pdb:
      usePolicy: false
  bookkeeper:
    replicaCount: 3
    volumes:
      journal:
        name: journal
        size: 100Gi
      ledgers:
        name: ledgers
        size: 200Gi
    resources:
      requests:
        memory: 2048Mi
        cpu: 1
    configData:
      PULSAR_MEM: >
        -Xms4096m
        -Xmx4096m
        -XX:MaxDirectMemorySize=8192m
      PULSAR_GC: >
        -Dio.netty.leakDetectionLevel=disabled
        -Dio.netty.recycler.linkCapacity=1024
        -XX:+UseG1GC -XX:MaxGCPauseMillis=10
        -XX:+ParallelRefProcEnabled
        -XX:+UnlockExperimentalVMOptions
        -XX:+DoEscapeAnalysis
        -XX:ParallelGCThreads=32
        -XX:ConcGCThreads=32
        -XX:G1NewSizePercent=50
        -XX:+DisableExplicitGC
        -XX:-ResizePLAB
        -XX:+ExitOnOutOfMemoryError
        -XX:+PerfDisableSharedMem
        -XX:+PrintGCDetails
      nettyMaxFrameSizeBytes: "104867840"
    pdb:
      usePolicy: false
  broker:
    component: broker
    podMonitor:
      enabled: false
    replicaCount: 1
    resources:
      requests:
        memory: 4096Mi
        cpu: 1.5
    configData:
      PULSAR_MEM: >
        -Xms4096m
        -Xmx4096m
        -XX:MaxDirectMemorySize=8192m
      PULSAR_GC: >
        -Dio.netty.leakDetectionLevel=disabled
        -Dio.netty.recycler.linkCapacity=1024
        -XX:+ParallelRefProcEnabled
        -XX:+UnlockExperimentalVMOptions
        -XX:+DoEscapeAnalysis
        -XX:ParallelGCThreads=32
        -XX:ConcGCThreads=32
        -XX:G1NewSizePercent=50
        -XX:+DisableExplicitGC
        -XX:-ResizePLAB
        -XX:+ExitOnOutOfMemoryError
      maxMessageSize: "104857600"
      defaultRetentionTimeInMinutes: "10080"
      defaultRetentionSizeInMB: "-1"
      backlogQuotaDefaultLimitGB: "8"
      ttlDurationDefaultInSeconds: "259200"
      subscriptionExpirationTimeMinutes: "3"
      backlogQuotaDefaultRetentionPolicy: producer_exception
    pdb:
      usePolicy: false
  autorecovery:
    resources:
      requests:
        memory: 512Mi
        cpu: 1
  proxy:
    replicaCount: 1
    podMonitor:
      enabled: false
    resources:
      requests:
        memory: 2048Mi
        cpu: 1
    service:
      type: ClusterIP
    ports:
      pulsar: 6650
    configData:
      PULSAR_MEM: >
        -Xms2048m -Xmx2048m
      PULSAR_GC: >
        -XX:MaxDirectMemorySize=2048m
      httpNumThreads: "100"
    pdb:
      usePolicy: false
  pulsar_manager:
    service:
      type: ClusterIP
  pulsar_metadata:
    component: pulsar-init
    image:
      # the image used for running `pulsar-cluster-initialize` job
      repository: apachepulsar/pulsar
      tag: 2.8.2

## Configuration values for the kafka dependency
## ref: https://artifacthub.io/packages/helm/bitnami/kafka
##
kafka:
  enabled: false
  name: kafka
  replicaCount: 3
  image:
    repository: bitnami/kafka
    tag: 3.1.0-debian-10-r52
  ## Increase graceful termination for kafka graceful shutdown
  terminationGracePeriodSeconds: "90"
  pdb:
    create: false
  ## Enable startup probe to prevent pod restart during recovering
  startupProbe:
    enabled: true
  ## Kafka Java Heap size
  heapOpts: "-Xmx4096m -Xms4096m"
  maxMessageBytes: _10485760
  defaultReplicationFactor: 3
  offsetsTopicReplicationFactor: 3
  ## Only enable time based log retention
  logRetentionHours: 168
  logRetentionBytes: _-1
  extraEnvVars:
    - name: KAFKA_CFG_MAX_PARTITION_FETCH_BYTES
      value: "5242880"
    - name: KAFKA_CFG_MAX_REQUEST_SIZE
      value: "5242880"
    - name: KAFKA_CFG_REPLICA_FETCH_MAX_BYTES
      value: "10485760"
    - name: KAFKA_CFG_FETCH_MESSAGE_MAX_BYTES
      value: "5242880"
    - name: KAFKA_CFG_LOG_ROLL_HOURS
      value: "24"
  persistence:
    enabled: true
    storageClass:
    accessMode: ReadWriteOnce
    size: 300Gi
  metrics:
    ## Prometheus Kafka exporter: exposes complimentary metrics to JMX exporter
    kafka:
      enabled: false
      image:
        repository: bitnami/kafka-exporter
        tag: 1.4.2-debian-10-r182
    ## Prometheus JMX exporter: exposes the majority of Kafkas metrics
    jmx:
      enabled: false
      image:
        repository: bitnami/jmx-exporter
        tag: 0.16.1-debian-10-r245
    ## To enable serviceMonitor, you must enable either kafka exporter or jmx exporter.
    ## And you can enable them both
    serviceMonitor:
      enabled: false
  service:
    type: ClusterIP
    ports:
      client: 9092
  zookeeper:
    enabled: true
    replicaCount: 3

###################################
# External S3
# - these configs are only used when `externalS3.enabled` is true
###################################
externalS3:
  enabled: true
  host: "172.20.1.124"
  port: "9000"
  accessKey: "minioadmin"
  secretKey: "minioadmin"
  useSSL: false
  bucketName: "milvus-dev"
  rootPath: ""
  useIAM: false
  cloudProvider: "aws"
  iamEndpoint: ""
  region: ""
  useVirtualHost: false

###################################
# GCS Gateway
# - these configs are only used when `minio.gcsgateway.enabled` is true
###################################
externalGcs:
  bucketName: ""

###################################
# External etcd
# - these configs are only used when `externalEtcd.enabled` is true
###################################
externalEtcd:
  enabled: true
  ## the endpoints of the external etcd
  ##
  endpoints:
    - xxxx:23790

###################################
# External pulsar
# - these configs are only used when `externalPulsar.enabled` is true
###################################
externalPulsar:
  enabled: true
  host: "xxx"
  port: 30012
  maxMessageSize: "5242880"  # 5 * 1024 * 1024 Bytes, Maximum size of each message in pulsar.
  tenant: "xx"
  namespace: "xxx"
  authPlugin: "org.apache.pulsar.client.impl.auth.AuthenticationToken"
  authParams: token:"xxx"

###################################
# External kafka
# - these configs are only used when `externalKafka.enabled` is true
# - note that the following are just examples, you should confirm the
#   value of brokerList and mechanisms according to the actual external
#   Kafka configuration. E.g. If you select the AWS MSK, the configuration
#   should look something like this:
#   externalKafka:
#     enabled: true
#     brokerList: "xxxx:9096"
#     securityProtocol: SASL_SSL
#     sasl:
#       mechanisms: SCRAM-SHA-512
#       password: "xxx"
#       username: "xxx"
###################################
externalKafka:
  enabled: false
  brokerList: localhost:9092
  securityProtocol: SASL_SSL
  sasl:
    mechanisms: PLAIN
    username: ""
    password: ""
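With the file above saved as values.yaml, the manifest that follows can be rendered offline instead of installing directly, so it can be reviewed and applied with kubectl. A sketch, assuming the official zilliztech chart repository and the release name my-release used throughout this article:

```shell
#!/usr/bin/env bash
set -euo pipefail

command -v helm >/dev/null || { echo "helm not found, skipping"; exit 0; }

# Register the official Milvus chart repository and refresh the index.
helm repo add milvus https://zilliztech.github.io/milvus-helm/
helm repo update

# Render the chart locally; the release name "my-release" matches the
# resource names (my-release-milvus-*) in the manifest below.
helm template my-release milvus/milvus -f values.yaml > milvus_manifest.yaml
```

Alternatively, `helm install my-release milvus/milvus -f values.yaml` deploys in one step; rendering first is what produces a reviewable milvus_manifest.yaml like the one below.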

The rendered Kubernetes manifest, milvus_manifest.yaml, which can be applied directly with kubectl:

---
# Source: milvus/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-release-milvus
data:
  default.yaml: |+
    # Copyright (C) 2019-2021 Zilliz. All rights reserved.
    #
    # Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance
    # with the License. You may obtain a copy of the License at
    #
    # http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software distributed under the License
    # is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
    # or implied. See the License for the specific language governing permissions and limitations under the License.
    etcd:
      endpoints:
        - xxxx:23790
    metastore:
      type: etcd
    minio:
      address: xxxx
      port: 9000
      accessKeyID: minioadmin
      secretAccessKey: minioadmin
      useSSL: false
      bucketName: milvus-dev
      rootPath:
      useIAM: false
      cloudProvider: aws
      iamEndpoint:
      region:
      useVirtualHost: false
    mq:
      type: pulsar
    messageQueue: pulsar
    pulsar:
      address: xxx
      port: 6650
      maxMessageSize: 5242880
      tenant: "my-tenant"
      namespace: my-namespace
    rootCoord:
      address: my-release-milvus-rootcoord
      port: 53100
      enableActiveStandby: true  # Enable rootcoord active-standby
    proxy:
      port: 19530
      internalPort: 19529
    queryCoord:
      address: my-release-milvus-querycoord
      port: 19531
      enableActiveStandby: true  # Enable querycoord active-standby
    queryNode:
      port: 21123
      enableDisk: true # Enable querynode load disk index, and search on disk index
    indexCoord:
      address: my-release-milvus-indexcoord
      port: 31000
      enableActiveStandby: true  # Enable indexcoord active-standby
    indexNode:
      port: 21121
      enableDisk: true # Enable index node build disk vector index
    dataCoord:
      address: my-release-milvus-datacoord
      port: 13333
      enableActiveStandby: true  # Enable datacoord active-standby
    dataNode:
      port: 21124
    log:
      level: info
      file:
        rootPath: ""
        maxSize: 300
        maxAge: 10
        maxBackups: 20
      format: text
  user.yaml: |-
    #    For example enable rest http for milvus proxy
    #    proxy:
    #      http:
    #        enabled: true
    #      maxUserNum: 100
    #      maxRoleNum: 10
    ##  Enable tlsMode and set the tls cert and key
    #  tls:
    #    serverPemPath: /etc/milvus/certs/tls.crt
    #    serverKeyPath: /etc/milvus/certs/tls.key
    #   common:
    #     security:
    #       tlsMode: 1
---
# Source: milvus/templates/attu-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-release-milvus-attu
  labels:
    helm.sh/chart: milvus-4.1.34
    app.kubernetes.io/name: milvus
    app.kubernetes.io/instance: my-release
    app.kubernetes.io/version: "2.4.5"
    app.kubernetes.io/managed-by: Helm
    component: "attu"
spec:
  type: NodePort
  ports:
    - name: attu
      protocol: TCP
      port: 3000
      targetPort: 3000
  selector:
    app.kubernetes.io/name: milvus
    app.kubernetes.io/instance: my-release
    component: "attu"
---
# Source: milvus/templates/datacoord-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-release-milvus-datacoord
  labels:
    helm.sh/chart: milvus-4.1.34
    app.kubernetes.io/name: milvus
    app.kubernetes.io/instance: my-release
    app.kubernetes.io/version: "2.4.5"
    app.kubernetes.io/managed-by: Helm
    component: "datacoord"
spec:
  type: ClusterIP
  ports:
    - name: datacoord
      port: 13333
      protocol: TCP
      targetPort: datacoord
    - name: metrics
      protocol: TCP
      port: 9091
      targetPort: metrics
  selector:
    app.kubernetes.io/name: milvus
    app.kubernetes.io/instance: my-release
    component: "datacoord"
---
# Source: milvus/templates/datanode-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-release-milvus-datanode
  labels:
    helm.sh/chart: milvus-4.1.34
    app.kubernetes.io/name: milvus
    app.kubernetes.io/instance: my-release
    app.kubernetes.io/version: "2.4.5"
    app.kubernetes.io/managed-by: Helm
    component: "datanode"
spec:
  type: ClusterIP
  clusterIP: None
  ports:
    - name: metrics
      protocol: TCP
      port: 9091
      targetPort: metrics
  selector:
    app.kubernetes.io/name: milvus
    app.kubernetes.io/instance: my-release
    component: "datanode"
---
# Source: milvus/templates/indexcoord-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-release-milvus-indexcoord
  labels:
    helm.sh/chart: milvus-4.1.34
    app.kubernetes.io/name: milvus
    app.kubernetes.io/instance: my-release
    app.kubernetes.io/version: "2.4.5"
    app.kubernetes.io/managed-by: Helm
    component: "indexcoord"
spec:
  type: ClusterIP
  ports:
    - name: indexcoord
      port: 31000
      protocol: TCP
      targetPort: indexcoord
    - name: metrics
      protocol: TCP
      port: 9091
      targetPort: metrics
  selector:
    app.kubernetes.io/name: milvus
    app.kubernetes.io/instance: my-release
    component: "indexcoord"
---
# Source: milvus/templates/indexnode-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-release-milvus-indexnode
  labels:
    helm.sh/chart: milvus-4.1.34
    app.kubernetes.io/name: milvus
    app.kubernetes.io/instance: my-release
    app.kubernetes.io/version: "2.4.5"
    app.kubernetes.io/managed-by: Helm
    component: "indexnode"
spec:
  type: ClusterIP
  clusterIP: None
  ports:
    - name: metrics
      protocol: TCP
      port: 9091
      targetPort: metrics
  selector:
    app.kubernetes.io/name: milvus
    app.kubernetes.io/instance: my-release
    component: "indexnode"
---
# Source: milvus/templates/querycoord-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-release-milvus-querycoord
  labels:
    helm.sh/chart: milvus-4.1.34
    app.kubernetes.io/name: milvus
    app.kubernetes.io/instance: my-release
    app.kubernetes.io/version: "2.4.5"
    app.kubernetes.io/managed-by: Helm
    component: "querycoord"
spec:
  type: ClusterIP
  ports:
    - name: querycoord
      port: 19531
      protocol: TCP
      targetPort: querycoord
    - name: metrics
      protocol: TCP
      port: 9091
      targetPort: metrics
  selector:
    app.kubernetes.io/name: milvus
    app.kubernetes.io/instance: my-release
    component: "querycoord"
---
# Source: milvus/templates/querynode-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-release-milvus-querynode
  labels:
    helm.sh/chart: milvus-4.1.34
    app.kubernetes.io/name: milvus
    app.kubernetes.io/instance: my-release
    app.kubernetes.io/version: "2.4.5"
    app.kubernetes.io/managed-by: Helm
    component: "querynode"
spec:
  type: ClusterIP
  clusterIP: None
  ports:
    - name: metrics
      protocol: TCP
      port: 9091
      targetPort: metrics
  selector:
    app.kubernetes.io/name: milvus
    app.kubernetes.io/instance: my-release
    component: "querynode"
---
# Source: milvus/templates/rootcoord-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-release-milvus-rootcoord
  labels:
    helm.sh/chart: milvus-4.1.34
    app.kubernetes.io/name: milvus
    app.kubernetes.io/instance: my-release
    app.kubernetes.io/version: "2.4.5"
    app.kubernetes.io/managed-by: Helm
    component: "rootcoord"
spec:
  type: ClusterIP
  ports:
    - name: rootcoord
      port: 53100
      protocol: TCP
      targetPort: rootcoord
    - name: metrics
      protocol: TCP
      port: 9091
      targetPort: metrics
  selector:
    app.kubernetes.io/name: milvus
    app.kubernetes.io/instance: my-release
    component: "rootcoord"
---
# Source: milvus/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-release-milvus
  labels:
    helm.sh/chart: milvus-4.1.34
    app.kubernetes.io/name: milvus
    app.kubernetes.io/instance: my-release
    app.kubernetes.io/version: "2.4.5"
    app.kubernetes.io/managed-by: Helm
    component: "proxy"
spec:
  type: NodePort
  ports:
    - name: milvus
      port: 19530
      protocol: TCP
      targetPort: milvus
    - name: metrics
      protocol: TCP
      port: 9091
      targetPort: metrics
  selector:
    app.kubernetes.io/name: milvus
    app.kubernetes.io/instance: my-release
    component: "proxy"
---
# Source: milvus/templates/attu-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-release-milvus-attu
  labels:
    helm.sh/chart: milvus-4.1.34
    app.kubernetes.io/name: milvus
    app.kubernetes.io/instance: my-release
    app.kubernetes.io/version: "2.4.5"
    app.kubernetes.io/managed-by: Helm
    component: "attu"
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: milvus
      app.kubernetes.io/instance: my-release
      component: "attu"
  template:
    metadata:
      labels:
        app.kubernetes.io/name: milvus
        app.kubernetes.io/instance: my-release
        component: "attu"
    spec:
      containers:
        - name: attu
          image: zilliz/attu:v2.3.10
          imagePullPolicy: IfNotPresent
          ports:
            - name: attu
              containerPort: 3000
              protocol: TCP
          env:
            - name: MILVUS_URL
              value: http://my-release-milvus:19530
          resources:
            {}
---
# Source: milvus/templates/datacoord-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-release-milvus-datacoord
  labels:
    helm.sh/chart: milvus-4.1.34
    app.kubernetes.io/name: milvus
    app.kubernetes.io/instance: my-release
    app.kubernetes.io/version: "2.4.5"
    app.kubernetes.io/managed-by: Helm
    component: "datacoord"
  annotations:
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: milvus
      app.kubernetes.io/instance: my-release
      component: "datacoord"
  template:
    metadata:
      labels:
        app.kubernetes.io/name: milvus
        app.kubernetes.io/instance: my-release
        component: "datacoord"
      annotations:
        checksum/config: 4d919a6f7279f31d3f04198e9626ab7a0dec59a9e2d63b9b0758840233e77b8f
    spec:
      serviceAccountName: default
      initContainers:
        - name: config
          command:
            - /cp
            - /run-helm.sh,/merge
            - /milvus/tools/run-helm.sh,/milvus/tools/merge
          image: "milvusdb/milvus-config-tool:v0.1.2"
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - mountPath: /milvus/tools
              name: tools
      containers:
        - name: datacoord
          image: "milvusdb/milvus:v2.4.5"
          imagePullPolicy: IfNotPresent
          args: [ "/milvus/tools/run-helm.sh", "milvus", "run", "datacoord" ]
          env:
          ports:
            - name: datacoord
              containerPort: 13333
              protocol: TCP
            - name: metrics
              containerPort: 9091
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /healthz
              port: metrics
            initialDelaySeconds: 90
            periodSeconds: 30
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 5
          readinessProbe:
            httpGet:
              path: /healthz
              port: metrics
            initialDelaySeconds: 90
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 5
          resources:
            {}
          volumeMounts:
            - name: milvus-config
              mountPath: /milvus/configs/default.yaml
              subPath: default.yaml
              readOnly: true
            - name: milvus-config
              mountPath: /milvus/configs/user.yaml
              subPath: user.yaml
              readOnly: true
            - mountPath: /milvus/tools
              name: tools
      volumes:
        - name: milvus-config
          configMap:
            name: my-release-milvus
        - name: tools
          emptyDir: {}
---
# Source: milvus/templates/datanode-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-release-milvus-datanode
  labels:
    helm.sh/chart: milvus-4.1.34
    app.kubernetes.io/name: milvus
    app.kubernetes.io/instance: my-release
    app.kubernetes.io/version: "2.4.5"
    app.kubernetes.io/managed-by: Helm
    component: "datanode"
  annotations:
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: milvus
      app.kubernetes.io/instance: my-release
      component: "datanode"
  template:
    metadata:
      labels:
        app.kubernetes.io/name: milvus
        app.kubernetes.io/instance: my-release
        component: "datanode"
      annotations:
        checksum/config: 4d919a6f7279f31d3f04198e9626ab7a0dec59a9e2d63b9b0758840233e77b8f
    spec:
      serviceAccountName: default
      initContainers:
        - name: config
          command:
            - /cp
            - /run-helm.sh,/merge
            - /milvus/tools/run-helm.sh,/milvus/tools/merge
          image: "milvusdb/milvus-config-tool:v0.1.2"
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - mountPath: /milvus/tools
              name: tools
      containers:
        - name: datanode
          image: "milvusdb/milvus:v2.4.5"
          imagePullPolicy: IfNotPresent
          args: [ "/milvus/tools/run-helm.sh", "milvus", "run", "datanode" ]
          env:
          ports:
            - name: datanode
              containerPort: 21124
              protocol: TCP
            - name: metrics
              containerPort: 9091
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /healthz
              port: metrics
            initialDelaySeconds: 90
            periodSeconds: 30
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 5
          readinessProbe:
            httpGet:
              path: /healthz
              port: metrics
            initialDelaySeconds: 90
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 5
          resources:
            {}
          volumeMounts:
            - name: milvus-config
              mountPath: /milvus/configs/default.yaml
              subPath: default.yaml
              readOnly: true
            - name: milvus-config
              mountPath: /milvus/configs/user.yaml
              subPath: user.yaml
              readOnly: true
            - mountPath: /milvus/tools
              name: tools
      volumes:
        - name: milvus-config
          configMap:
            name: my-release-milvus
        - name: tools
          emptyDir: {}
---
# Source: milvus/templates/indexcoord-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-release-milvus-indexcoord
  labels:
    helm.sh/chart: milvus-4.1.34
    app.kubernetes.io/name: milvus
    app.kubernetes.io/instance: my-release
    app.kubernetes.io/version: "2.4.5"
    app.kubernetes.io/managed-by: Helm
    component: "indexcoord"
  annotations:
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: milvus
      app.kubernetes.io/instance: my-release
      component: "indexcoord"
  template:
    metadata:
      labels:
        app.kubernetes.io/name: milvus
        app.kubernetes.io/instance: my-release
        component: "indexcoord"
      annotations:
        checksum/config: 4d919a6f7279f31d3f04198e9626ab7a0dec59a9e2d63b9b0758840233e77b8f
    spec:
      serviceAccountName: default
      initContainers:
      - name: config
        command:
        - /cp
        - /run-helm.sh,/merge
        - /milvus/tools/run-helm.sh,/milvus/tools/merge
        image: "milvusdb/milvus-config-tool:v0.1.2"
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: /milvus/tools
          name: tools
      containers:
      - name: indexcoord
        image: "milvusdb/milvus:v2.4.5"
        imagePullPolicy: IfNotPresent
        args: [ "/milvus/tools/run-helm.sh", "milvus", "run", "indexcoord" ]
        env:
        ports:
        - name: indexcoord
          containerPort: 31000
          protocol: TCP
        - name: metrics
          containerPort: 9091
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /healthz
            port: metrics
          initialDelaySeconds: 90
          periodSeconds: 30
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /healthz
            port: metrics
          initialDelaySeconds: 90
          periodSeconds: 10
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        resources: {}
        volumeMounts:
        - name: milvus-config
          mountPath: /milvus/configs/default.yaml
          subPath: default.yaml
          readOnly: true
        - name: milvus-config
          mountPath: /milvus/configs/user.yaml
          subPath: user.yaml
          readOnly: true
        - mountPath: /milvus/tools
          name: tools
      volumes:
      - name: milvus-config
        configMap:
          name: my-release-milvus
      - name: tools
        emptyDir: {}
---
# Source: milvus/templates/indexnode-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-release-milvus-indexnode
  labels:
    helm.sh/chart: milvus-4.1.34
    app.kubernetes.io/name: milvus
    app.kubernetes.io/instance: my-release
    app.kubernetes.io/version: "2.4.5"
    app.kubernetes.io/managed-by: Helm
    component: "indexnode"
  annotations:
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: milvus
      app.kubernetes.io/instance: my-release
      component: "indexnode"
  template:
    metadata:
      labels:
        app.kubernetes.io/name: milvus
        app.kubernetes.io/instance: my-release
        component: "indexnode"
      annotations:
        checksum/config: 4d919a6f7279f31d3f04198e9626ab7a0dec59a9e2d63b9b0758840233e77b8f
    spec:
      serviceAccountName: default
      initContainers:
      - name: config
        command:
        - /cp
        - /run-helm.sh,/merge
        - /milvus/tools/run-helm.sh,/milvus/tools/merge
        image: "milvusdb/milvus-config-tool:v0.1.2"
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: /milvus/tools
          name: tools
      containers:
      - name: indexnode
        image: "milvusdb/milvus:v2.4.5"
        imagePullPolicy: IfNotPresent
        args: [ "/milvus/tools/run-helm.sh", "milvus", "run", "indexnode" ]
        env:
        ports:
        - name: indexnode
          containerPort: 21121
          protocol: TCP
        - name: metrics
          containerPort: 9091
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /healthz
            port: metrics
          initialDelaySeconds: 90
          periodSeconds: 30
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /healthz
            port: metrics
          initialDelaySeconds: 90
          periodSeconds: 10
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        resources: {}
        volumeMounts:
        - name: milvus-config
          mountPath: /milvus/configs/default.yaml
          subPath: default.yaml
          readOnly: true
        - name: milvus-config
          mountPath: /milvus/configs/user.yaml
          subPath: user.yaml
          readOnly: true
        - mountPath: /milvus/tools
          name: tools
        - mountPath: /var/lib/milvus/data
          name: disk
      volumes:
      - name: milvus-config
        configMap:
          name: my-release-milvus
      - name: tools
        emptyDir: {}
      - name: disk
        emptyDir: {}
---
# Source: milvus/templates/proxy-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-release-milvus-proxy
  labels:
    helm.sh/chart: milvus-4.1.34
    app.kubernetes.io/name: milvus
    app.kubernetes.io/instance: my-release
    app.kubernetes.io/version: "2.4.5"
    app.kubernetes.io/managed-by: Helm
    component: "proxy"
  annotations:
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: milvus
      app.kubernetes.io/instance: my-release
      component: "proxy"
  template:
    metadata:
      labels:
        app.kubernetes.io/name: milvus
        app.kubernetes.io/instance: my-release
        component: "proxy"
      annotations:
        checksum/config: 4d919a6f7279f31d3f04198e9626ab7a0dec59a9e2d63b9b0758840233e77b8f
    spec:
      serviceAccountName: default
      initContainers:
      - name: config
        command:
        - /cp
        - /run-helm.sh,/merge
        - /milvus/tools/run-helm.sh,/milvus/tools/merge
        image: "milvusdb/milvus-config-tool:v0.1.2"
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: /milvus/tools
          name: tools
      containers:
      - name: proxy
        image: "milvusdb/milvus:v2.4.5"
        imagePullPolicy: IfNotPresent
        args: [ "/milvus/tools/run-helm.sh", "milvus", "run", "proxy" ]
        env:
        ports:
        - name: milvus
          containerPort: 19530
          protocol: TCP
        - name: metrics
          containerPort: 9091
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /healthz
            port: metrics
          initialDelaySeconds: 90
          periodSeconds: 30
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /healthz
            port: metrics
          initialDelaySeconds: 90
          periodSeconds: 10
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        resources: {}
        volumeMounts:
        - name: milvus-config
          mountPath: /milvus/configs/default.yaml
          subPath: default.yaml
          readOnly: true
        - name: milvus-config
          mountPath: /milvus/configs/user.yaml
          subPath: user.yaml
          readOnly: true
        - mountPath: /milvus/tools
          name: tools
      volumes:
      - name: milvus-config
        configMap:
          name: my-release-milvus
      - name: tools
        emptyDir: {}
---
# Source: milvus/templates/querycoord-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-release-milvus-querycoord
  labels:
    helm.sh/chart: milvus-4.1.34
    app.kubernetes.io/name: milvus
    app.kubernetes.io/instance: my-release
    app.kubernetes.io/version: "2.4.5"
    app.kubernetes.io/managed-by: Helm
    component: "querycoord"
  annotations:
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: milvus
      app.kubernetes.io/instance: my-release
      component: "querycoord"
  template:
    metadata:
      labels:
        app.kubernetes.io/name: milvus
        app.kubernetes.io/instance: my-release
        component: "querycoord"
      annotations:
        checksum/config: 4d919a6f7279f31d3f04198e9626ab7a0dec59a9e2d63b9b0758840233e77b8f
    spec:
      serviceAccountName: default
      initContainers:
      - name: config
        command:
        - /cp
        - /run-helm.sh,/merge
        - /milvus/tools/run-helm.sh,/milvus/tools/merge
        image: "milvusdb/milvus-config-tool:v0.1.2"
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: /milvus/tools
          name: tools
      containers:
      - name: querycoord
        image: "milvusdb/milvus:v2.4.5"
        imagePullPolicy: IfNotPresent
        args: [ "/milvus/tools/run-helm.sh", "milvus", "run", "querycoord" ]
        env:
        ports:
        - name: querycoord
          containerPort: 19531
          protocol: TCP
        - name: metrics
          containerPort: 9091
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /healthz
            port: metrics
          initialDelaySeconds: 90
          periodSeconds: 30
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /healthz
            port: metrics
          initialDelaySeconds: 90
          periodSeconds: 10
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        resources: {}
        volumeMounts:
        - name: milvus-config
          mountPath: /milvus/configs/default.yaml
          subPath: default.yaml
          readOnly: true
        - name: milvus-config
          mountPath: /milvus/configs/user.yaml
          subPath: user.yaml
          readOnly: true
        - mountPath: /milvus/tools
          name: tools
      volumes:
      - name: milvus-config
        configMap:
          name: my-release-milvus
      - name: tools
        emptyDir: {}
---
# Source: milvus/templates/querynode-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-release-milvus-querynode
  labels:
    helm.sh/chart: milvus-4.1.34
    app.kubernetes.io/name: milvus
    app.kubernetes.io/instance: my-release
    app.kubernetes.io/version: "2.4.5"
    app.kubernetes.io/managed-by: Helm
    component: "querynode"
  annotations:
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: milvus
      app.kubernetes.io/instance: my-release
      component: "querynode"
  template:
    metadata:
      labels:
        app.kubernetes.io/name: milvus
        app.kubernetes.io/instance: my-release
        component: "querynode"
      annotations:
        checksum/config: 4d919a6f7279f31d3f04198e9626ab7a0dec59a9e2d63b9b0758840233e77b8f
    spec:
      serviceAccountName: default
      initContainers:
      - name: config
        command:
        - /cp
        - /run-helm.sh,/merge
        - /milvus/tools/run-helm.sh,/milvus/tools/merge
        image: "milvusdb/milvus-config-tool:v0.1.2"
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: /milvus/tools
          name: tools
      containers:
      - name: querynode
        image: "milvusdb/milvus:v2.4.5"
        imagePullPolicy: IfNotPresent
        args: [ "/milvus/tools/run-helm.sh", "milvus", "run", "querynode" ]
        env:
        ports:
        - name: querynode
          containerPort: 21123
          protocol: TCP
        - name: metrics
          containerPort: 9091
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /healthz
            port: metrics
          initialDelaySeconds: 90
          periodSeconds: 30
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /healthz
            port: metrics
          initialDelaySeconds: 90
          periodSeconds: 10
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        resources: {}
        volumeMounts:
        - name: milvus-config
          mountPath: /milvus/configs/default.yaml
          subPath: default.yaml
          readOnly: true
        - name: milvus-config
          mountPath: /milvus/configs/user.yaml
          subPath: user.yaml
          readOnly: true
        - mountPath: /milvus/tools
          name: tools
        - mountPath: /var/lib/milvus/data
          name: disk
      volumes:
      - name: milvus-config
        configMap:
          name: my-release-milvus
      - name: tools
        emptyDir: {}
      - name: disk
        emptyDir: {}
---
# Source: milvus/templates/rootcoord-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-release-milvus-rootcoord
  labels:
    helm.sh/chart: milvus-4.1.34
    app.kubernetes.io/name: milvus
    app.kubernetes.io/instance: my-release
    app.kubernetes.io/version: "2.4.5"
    app.kubernetes.io/managed-by: Helm
    component: "rootcoord"
  annotations:
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: milvus
      app.kubernetes.io/instance: my-release
      component: "rootcoord"
  template:
    metadata:
      labels:
        app.kubernetes.io/name: milvus
        app.kubernetes.io/instance: my-release
        component: "rootcoord"
      annotations:
        checksum/config: 4d919a6f7279f31d3f04198e9626ab7a0dec59a9e2d63b9b0758840233e77b8f
    spec:
      serviceAccountName: default
      initContainers:
      - name: config
        command:
        - /cp
        - /run-helm.sh,/merge
        - /milvus/tools/run-helm.sh,/milvus/tools/merge
        image: "milvusdb/milvus-config-tool:v0.1.2"
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: /milvus/tools
          name: tools
      containers:
      - name: rootcoord
        image: "milvusdb/milvus:v2.4.5"
        imagePullPolicy: IfNotPresent
        args: [ "/milvus/tools/run-helm.sh", "milvus", "run", "rootcoord" ]
        env:
        ports:
        - name: rootcoord
          containerPort: 53100
          protocol: TCP
        - name: metrics
          containerPort: 9091
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /healthz
            port: metrics
          initialDelaySeconds: 90
          periodSeconds: 30
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /healthz
            port: metrics
          initialDelaySeconds: 90
          periodSeconds: 10
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        resources: {}
        volumeMounts:
        - name: milvus-config
          mountPath: /milvus/configs/default.yaml
          subPath: default.yaml
          readOnly: true
        - name: milvus-config
          mountPath: /milvus/configs/user.yaml
          subPath: user.yaml
          readOnly: true
        - mountPath: /milvus/tools
          name: tools
      volumes:
      - name: milvus-config
        configMap:
          name: my-release-milvus
      - name: tools
        emptyDir: {}
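
Every component above exposes the same health check: HTTP GET `/healthz` on the metrics port (9091), which the liveness and readiness probes call. When a pod stays unready, you can probe the endpoint by hand through a port-forward. Below is a minimal sketch using only the Python standard library; the host `127.0.0.1` assumes you have already run something like `kubectl port-forward deploy/my-release-milvus-proxy 9091:9091`:

```python
# Probe a Milvus component's health endpoint -- the same HTTP GET /healthz
# on the metrics port (9091) that the liveness/readiness probes perform.
import urllib.request
import urllib.error


def healthz(host: str, port: int = 9091, timeout: float = 3.0) -> bool:
    """Return True if the component answers HTTP 200 on /healthz."""
    url = f"http://{host}:{port}/healthz"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Connection refused / timeout: the component is not healthy (yet).
        return False


if __name__ == "__main__":
    # Assumes an active port-forward to a component pod (hypothetical setup):
    #   kubectl port-forward deploy/my-release-milvus-proxy 9091:9091
    print("proxy healthy:", healthz("127.0.0.1", 9091))
```

Note that the probes allow an `initialDelaySeconds` of 90, so a freshly started pod legitimately reports unhealthy for the first minute and a half; only repeated failures after that window indicate a real problem (for example, the component cannot reach etcd, Pulsar, or MinIO).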
