kubernetes/cluster/addons/fluentd-elasticsearch

I. Preface

The YAML files below deploy the EFK (Elasticsearch, Fluentd, Kibana) logging stack on Kubernetes 1.23. They are all taken, unmodified, from the official Kubernetes repository:

https://github.com/kubernetes/kubernetes/tree/release-1.23/cluster/addons/fluentd-elasticsearch

II. The original EFK YAML files

1. create-logging-namespace.yaml

```yaml
kind: Namespace
apiVersion: v1
metadata:
  name: logging
  labels:
    k8s-app: logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
```

2. es-service.yaml

```yaml
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-logging
  namespace: logging
  labels:
    k8s-app: elasticsearch-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "Elasticsearch"
spec:
  clusterIP: None
  ports:
    - name: db
      port: 9200
      protocol: TCP
      targetPort: 9200
    - name: transport
      port: 9300
      protocol: TCP
      targetPort: 9300
  publishNotReadyAddresses: true
  selector:
    k8s-app: elasticsearch-logging
  sessionAffinity: None
  type: ClusterIP
```

3. es-statefulset.yaml

```yaml
# RBAC authn and authz
apiVersion: v1
kind: ServiceAccount
metadata:
  name: elasticsearch-logging
  namespace: logging
  labels:
    k8s-app: elasticsearch-logging
    addonmanager.kubernetes.io/mode: Reconcile
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: elasticsearch-logging
  labels:
    k8s-app: elasticsearch-logging
    addonmanager.kubernetes.io/mode: Reconcile
rules:
  - apiGroups:
      - ""
    resources:
      - "services"
      - "namespaces"
      - "endpoints"
    verbs:
      - "get"
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: elasticsearch-logging
  labels:
    k8s-app: elasticsearch-logging
    addonmanager.kubernetes.io/mode: Reconcile
subjects:
  - kind: ServiceAccount
    name: elasticsearch-logging
    namespace: logging
    apiGroup: ""
roleRef:
  kind: ClusterRole
  name: elasticsearch-logging
  apiGroup: ""
---
# Elasticsearch deployment itself
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch-logging
  namespace: logging
  labels:
    k8s-app: elasticsearch-logging
    version: v7.10.2
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  serviceName: elasticsearch-logging
  replicas: 2
  selector:
    matchLabels:
      k8s-app: elasticsearch-logging
      version: v7.10.2
  template:
    metadata:
      labels:
        k8s-app: elasticsearch-logging
        version: v7.10.2
    spec:
      serviceAccountName: elasticsearch-logging
      containers:
        - image: quay.io/fluentd_elasticsearch/elasticsearch:v7.10.2
          name: elasticsearch-logging
          imagePullPolicy: Always
          resources:
            # need more cpu upon initialization, therefore burstable class
            limits:
              cpu: 1000m
              memory: 3Gi
            requests:
              cpu: 100m
              memory: 3Gi
          ports:
            - containerPort: 9200
              name: db
              protocol: TCP
            - containerPort: 9300
              name: transport
              protocol: TCP
          livenessProbe:
            tcpSocket:
              port: transport
            initialDelaySeconds: 5
            timeoutSeconds: 10
          readinessProbe:
            tcpSocket:
              port: transport
            initialDelaySeconds: 5
            timeoutSeconds: 10
          volumeMounts:
            - name: elasticsearch-logging
              mountPath: /data
          env:
            - name: "NAMESPACE"
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: "MINIMUM_MASTER_NODES"
              value: "1"
      volumes:
        - name: elasticsearch-logging
          emptyDir: {}
      # Elasticsearch requires vm.max_map_count to be at least 262144.
      # If your OS already sets up this number to a higher value, feel free
      # to remove this init container.
      initContainers:
        - image: alpine:3.6
          command: ["/sbin/sysctl", "-w", "vm.max_map_count=262144"]
          name: elasticsearch-logging-init
          securityContext:
            privileged: true
```
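The StatefulSet above requests `3Gi` of memory and caps CPU at `1000m` (one core). As a quick illustration of what these Kubernetes resource quantities mean numerically, here is a minimal, hypothetical parser for the binary-suffix (`Ki`/`Mi`/`Gi`) and millicpu forms used in this manifest; it is a sketch, not the full Kubernetes quantity grammar:

```python
# Minimal parser for the resource quantities used in this manifest.
# NOTE: illustrative only -- the real Kubernetes quantity grammar
# (decimal suffixes, exponents, etc.) is much richer.

BINARY_SUFFIXES = {"Ki": 2**10, "Mi": 2**20, "Gi": 2**30}

def memory_to_bytes(quantity: str) -> int:
    """Convert a binary-suffixed memory quantity like '3Gi' to bytes."""
    for suffix, factor in BINARY_SUFFIXES.items():
        if quantity.endswith(suffix):
            return int(quantity[: -len(suffix)]) * factor
    return int(quantity)  # plain bytes

def cpu_to_millicores(quantity: str) -> int:
    """Convert a CPU quantity like '1000m' or '1' to millicores."""
    if quantity.endswith("m"):
        return int(quantity[:-1])
    return int(quantity) * 1000

print(memory_to_bytes("3Gi"))      # 3221225472
print(cpu_to_millicores("1000m"))  # 1000
```

So each Elasticsearch replica is guaranteed (and limited to) about 3.2 GB of RAM, while CPU is burstable from 0.1 to 1 core.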

4. fluentd-es-configmap.yaml

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: fluentd-es-config-v0.2.1
  namespace: logging
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
data:
  system.conf: |-
    <system>
      root_dir /tmp/fluentd-buffers/
    </system>

  containers.input.conf: |-
    # This configuration file for Fluentd / td-agent is used
    # to watch changes to Docker log files. The kubelet creates symlinks that
    # capture the pod name, namespace, container name & Docker container ID
    # to the docker logs for pods in the /var/log/containers directory on the host.
    # If running this fluentd configuration in a Docker container, the /var/log
    # directory should be mounted in the container.
    #
    # These logs are then submitted to Elasticsearch which assumes the
    # installation of the fluent-plugin-elasticsearch & the
    # fluent-plugin-kubernetes_metadata_filter plugins.
    # See https://github.com/uken/fluent-plugin-elasticsearch &
    # https://github.com/fabric8io/fluent-plugin-kubernetes_metadata_filter for
    # more information about the plugins.
    #
    # Example
    # =======
    # A line in the Docker log file might look like this JSON:
    #
    # {"log":"2014/09/25 21:15:03 Got request with path wombat\n",
    #  "stream":"stderr",
    #   "time":"2014-09-25T21:15:03.499185026Z"}
    #
    # The time_format specification below makes sure we properly
    # parse the time format produced by Docker. This will be
    # submitted to Elasticsearch and should appear like:
    # $ curl 'http://elasticsearch-logging:9200/_search?pretty'
    # ...
    # {
    #      "_index" : "logstash-2014.09.25",
    #      "_type" : "fluentd",
    #      "_id" : "VBrbor2QTuGpsQyTCdfzqA",
    #      "_score" : 1.0,
    #      "_source":{"log":"2014/09/25 22:45:50 Got request with path wombat\n",
    #                 "stream":"stderr","tag":"docker.container.all",
    #                 "@timestamp":"2014-09-25T22:45:50+00:00"}
    #    },
    # ...
    #
    # The Kubernetes fluentd plugin is used to write the Kubernetes metadata to the log
    # record & add labels to the log record if properly configured. This enables users
    # to filter & search logs on any metadata.
    # For example a Docker container's logs might be in the directory:
    #
    #  /var/lib/docker/containers/997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b
    #
    # and in the file:
    #
    #  997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b-json.log
    #
    # where 997599971ee6... is the Docker ID of the running container.
    # The Kubernetes kubelet makes a symbolic link to this file on the host machine
    # in the /var/log/containers directory which includes the pod name and the Kubernetes
    # container name:
    #
    #    synthetic-logger-0.25lps-pod_default_synth-lgr-997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b.log
    #    ->
    #    /var/lib/docker/containers/997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b/997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b-json.log
    #
    # The /var/log directory on the host is mapped to the /var/log directory in the container
    # running this instance of Fluentd and we end up collecting the file:
    #
    #   /var/log/containers/synthetic-logger-0.25lps-pod_default_synth-lgr-997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b.log
    #
    # This results in the tag:
    #
    #  var.log.containers.synthetic-logger-0.25lps-pod_default_synth-lgr-997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b.log
    #
    # The Kubernetes fluentd plugin is used to extract the namespace, pod name & container name
    # which are added to the log message as a kubernetes field object & the Docker container ID
    # is also added under the docker field object.
    # The final tag is:
    #
    #   kubernetes.var.log.containers.synthetic-logger-0.25lps-pod_default_synth-lgr-997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b.log
    #
    # And the final log record look like:
    #
    # {
    #   "log":"2014/09/25 21:15:03 Got request with path wombat\n",
    #   "stream":"stderr",
    #   "time":"2014-09-25T21:15:03.499185026Z",
    #   "kubernetes": {
    #     "namespace": "default",
    #     "pod_name": "synthetic-logger-0.25lps-pod",
    #     "container_name": "synth-lgr"
    #   },
    #   "docker": {
    #     "container_id": "997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b"
    #   }
    # }
    #
    # This makes it easier for users to search for logs by pod name or by
    # the name of the Kubernetes container regardless of how many times the
    # Kubernetes pod has been restarted (resulting in a several Docker container IDs).
    #
    # Json Log Example:
    # {"log":"[info:2016-02-16T16:04:05.930-08:00] Some log text here\n","stream":"stdout","time":"2016-02-17T00:04:05.931087621Z"}
    # CRI Log Example:
    # 2016-02-17T00:04:05.931087621Z stdout F [info:2016-02-16T16:04:05.930-08:00] Some log text here
    <source>
      @id fluentd-containers.log
      @type tail
      path /var/log/containers/*.log
      pos_file /var/log/es-containers.log.pos
      tag raw.kubernetes.*
      read_from_head true
      <parse>
        @type multi_format
        <pattern>
          format json
          time_key time
          time_format %Y-%m-%dT%H:%M:%S.%NZ
        </pattern>
        <pattern>
          format /^(?<time>.+) (?<stream>stdout|stderr) [^ ]* (?<log>.*)$/
          time_format %Y-%m-%dT%H:%M:%S.%N%:z
        </pattern>
      </parse>
    </source>

    # Detect exceptions in the log output and forward them as one log entry.
    <match raw.kubernetes.**>
      @id raw.kubernetes
      @type detect_exceptions
      remove_tag_prefix raw
      message log
      stream stream
      multiline_flush_interval 5
      max_bytes 500000
      max_lines 1000
    </match>

    # Concatenate multi-line logs
    <filter **>
      @id filter_concat
      @type concat
      key message
      multiline_end_regexp /\n$/
      separator ""
    </filter>

    # Enriches records with Kubernetes metadata
    <filter kubernetes.**>
      @id filter_kubernetes_metadata
      @type kubernetes_metadata
    </filter>

    # Fixes json fields in Elasticsearch
    <filter kubernetes.**>
      @id filter_parser
      @type parser
      key_name log
      reserve_data true
      remove_key_name_field true
      <parse>
        @type multi_format
        <pattern>
          format json
        </pattern>
        <pattern>
          format none
        </pattern>
      </parse>
    </filter>

  system.input.conf: |-
    # Example:
    # 2015-12-21 23:17:22,066 [salt.state       ][INFO    ] Completed state [net.ipv4.ip_forward] at time 23:17:22.066081
    <source>
      @id minion
      @type tail
      format /^(?<time>[^ ]* [^ ,]*)[^\[]*\[[^\]]*\]\[(?<severity>[^ \]]*) *\] (?<message>.*)$/
      time_format %Y-%m-%d %H:%M:%S
      path /var/log/salt/minion
      pos_file /var/log/salt.pos
      tag salt
    </source>

    # Example:
    # Dec 21 23:17:22 gke-foo-1-1-4b5cbd14-node-4eoj startupscript: Finished running startup script /var/run/google.startup.script
    <source>
      @id startupscript.log
      @type tail
      format syslog
      path /var/log/startupscript.log
      pos_file /var/log/es-startupscript.log.pos
      tag startupscript
    </source>

    # Examples:
    # time="2016-02-04T06:51:03.053580605Z" level=info msg="GET /containers/json"
    # time="2016-02-04T07:53:57.505612354Z" level=error msg="HTTP Error" err="No such image: -f" statusCode=404
    # TODO(random-liu): Remove this after cri container runtime rolls out.
    <source>
      @id docker.log
      @type tail
      format /^time="(?<time>[^"]*)" level=(?<severity>[^ ]*) msg="(?<message>[^"]*)"( err="(?<error>[^"]*)")?( statusCode=($<status_code>\d+))?/
      path /var/log/docker.log
      pos_file /var/log/es-docker.log.pos
      tag docker
    </source>

    # Example:
    # 2016/02/04 06:52:38 filePurge: successfully removed file /var/etcd/data/member/wal/00000000000006d0-00000000010a23d1.wal
    <source>
      @id etcd.log
      @type tail
      # Not parsing this, because it doesn't have anything particularly useful to
      # parse out of it (like severities).
      format none
      path /var/log/etcd.log
      pos_file /var/log/es-etcd.log.pos
      tag etcd
    </source>

    # Multi-line parsing is required for all the kube logs because very large log
    # statements, such as those that include entire object bodies, get split into
    # multiple lines by glog.

    # Example:
    # I0204 07:32:30.020537    3368 server.go:1048] POST /stats/container/: (13.972191ms) 200 [[Go-http-client/1.1] 10.244.1.3:40537]
    <source>
      @id kubelet.log
      @type tail
      format multiline
      multiline_flush_interval 5s
      format_firstline /^\w\d{4}/
      format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
      time_format %m%d %H:%M:%S.%N
      path /var/log/kubelet.log
      pos_file /var/log/es-kubelet.log.pos
      tag kubelet
    </source>

    # Example:
    # I1118 21:26:53.975789       6 proxier.go:1096] Port "nodePort for kube-system/default-http-backend:http" (:31429/tcp) was open before and is still needed
    <source>
      @id kube-proxy.log
      @type tail
      format multiline
      multiline_flush_interval 5s
      format_firstline /^\w\d{4}/
      format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
      time_format %m%d %H:%M:%S.%N
      path /var/log/kube-proxy.log
      pos_file /var/log/es-kube-proxy.log.pos
      tag kube-proxy
    </source>

    # Example:
    # I0204 07:00:19.604280       5 handlers.go:131] GET /api/v1/nodes: (1.624207ms) 200 [[kube-controller-manager/v1.1.3 (linux/amd64) kubernetes/6a81b50] 127.0.0.1:38266]
    <source>
      @id kube-apiserver.log
      @type tail
      format multiline
      multiline_flush_interval 5s
      format_firstline /^\w\d{4}/
      format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
      time_format %m%d %H:%M:%S.%N
      path /var/log/kube-apiserver.log
      pos_file /var/log/es-kube-apiserver.log.pos
      tag kube-apiserver
    </source>

    # Example:
    # I0204 06:55:31.872680       5 servicecontroller.go:277] LB already exists and doesn't need update for service kube-system/kube-ui
    <source>
      @id kube-controller-manager.log
      @type tail
      format multiline
      multiline_flush_interval 5s
      format_firstline /^\w\d{4}/
      format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
      time_format %m%d %H:%M:%S.%N
      path /var/log/kube-controller-manager.log
      pos_file /var/log/es-kube-controller-manager.log.pos
      tag kube-controller-manager
    </source>

    # Example:
    # W0204 06:49:18.239674       7 reflector.go:245] pkg/scheduler/factory/factory.go:193: watch of *api.Service ended with: 401: The event in requested index is outdated and cleared (the requested history has been cleared [2578313/2577886]) [2579312]
    <source>
      @id kube-scheduler.log
      @type tail
      format multiline
      multiline_flush_interval 5s
      format_firstline /^\w\d{4}/
      format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
      time_format %m%d %H:%M:%S.%N
      path /var/log/kube-scheduler.log
      pos_file /var/log/es-kube-scheduler.log.pos
      tag kube-scheduler
    </source>

    # Example:
    # I0603 15:31:05.793605       6 cluster_manager.go:230] Reading config from path /etc/gce.conf
    <source>
      @id glbc.log
      @type tail
      format multiline
      multiline_flush_interval 5s
      format_firstline /^\w\d{4}/
      format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
      time_format %m%d %H:%M:%S.%N
      path /var/log/glbc.log
      pos_file /var/log/es-glbc.log.pos
      tag glbc
    </source>

    # Example:
    # I0603 15:31:05.793605       6 cluster_manager.go:230] Reading config from path /etc/gce.conf
    <source>
      @id cluster-autoscaler.log
      @type tail
      format multiline
      multiline_flush_interval 5s
      format_firstline /^\w\d{4}/
      format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/
      time_format %m%d %H:%M:%S.%N
      path /var/log/cluster-autoscaler.log
      pos_file /var/log/es-cluster-autoscaler.log.pos
      tag cluster-autoscaler
    </source>

    # Logs from systemd-journal for interesting services.
    # TODO(random-liu): Remove this after cri container runtime rolls out.
    <source>
      @id journald-docker
      @type systemd
      matches [{ "_SYSTEMD_UNIT": "docker.service" }]
      <storage>
        @type local
        persistent true
        path /var/log/journald-docker.pos
      </storage>
      read_from_head true
      tag docker
    </source>

    <source>
      @id journald-container-runtime
      @type systemd
      matches [{ "_SYSTEMD_UNIT": "{{ fluentd_container_runtime_service }}.service" }]
      <storage>
        @type local
        persistent true
        path /var/log/journald-container-runtime.pos
      </storage>
      read_from_head true
      tag container-runtime
    </source>

    <source>
      @id journald-kubelet
      @type systemd
      matches [{ "_SYSTEMD_UNIT": "kubelet.service" }]
      <storage>
        @type local
        persistent true
        path /var/log/journald-kubelet.pos
      </storage>
      read_from_head true
      tag kubelet
    </source>

    <source>
      @id journald-node-problem-detector
      @type systemd
      matches [{ "_SYSTEMD_UNIT": "node-problem-detector.service" }]
      <storage>
        @type local
        persistent true
        path /var/log/journald-node-problem-detector.pos
      </storage>
      read_from_head true
      tag node-problem-detector
    </source>

    <source>
      @id kernel
      @type systemd
      matches [{ "_TRANSPORT": "kernel" }]
      <storage>
        @type local
        persistent true
        path /var/log/kernel.pos
      </storage>
      <entry>
        fields_strip_underscores true
        fields_lowercase true
      </entry>
      read_from_head true
      tag kernel
    </source>

  forward.input.conf: |-
    # Takes the messages sent over TCP
    <source>
      @id forward
      @type forward
    </source>

  monitoring.conf: |-
    # Prometheus Exporter Plugin
    # input plugin that exports metrics
    <source>
      @id prometheus
      @type prometheus
    </source>

    <source>
      @id monitor_agent
      @type monitor_agent
    </source>

    # input plugin that collects metrics from MonitorAgent
    <source>
      @id prometheus_monitor
      @type prometheus_monitor
      <labels>
        host ${hostname}
      </labels>
    </source>

    # input plugin that collects metrics for output plugin
    <source>
      @id prometheus_output_monitor
      @type prometheus_output_monitor
      <labels>
        host ${hostname}
      </labels>
    </source>

    # input plugin that collects metrics for in_tail plugin
    <source>
      @id prometheus_tail_monitor
      @type prometheus_tail_monitor
      <labels>
        host ${hostname}
      </labels>
    </source>

  output.conf: |-
    <match **>
      @id elasticsearch
      @type elasticsearch
      @log_level info
      type_name _doc
      include_tag_key true
      host elasticsearch-logging
      port 9200
      logstash_format true
      <buffer>
        @type file
        path /var/log/fluentd-buffers/kubernetes.system.buffer
        flush_mode interval
        retry_type exponential_backoff
        flush_thread_count 2
        flush_interval 5s
        retry_forever
        retry_max_interval 30
        chunk_limit_size 2M
        total_limit_size 500M
        overflow_action block
      </buffer>
    </match>
```
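The `containers.input.conf` section of the ConfigMap parses each tailed line with a `multi_format` cascade: first as Docker JSON, then with a CRI regex. A small Python sketch (the regex transliterated to Python's `(?P<name>…)` syntax, and the `parse_line` helper is mine, not Fluentd's) shows what each pattern extracts from the two example lines quoted in the config comments:

```python
import json
import re

# CRI pattern from containers.input.conf, transliterated to Python syntax:
#   /^(?<time>.+) (?<stream>stdout|stderr) [^ ]* (?<log>.*)$/
CRI_RE = re.compile(r"^(?P<time>.+) (?P<stream>stdout|stderr) [^ ]* (?P<log>.*)$")

def parse_line(line: str) -> dict:
    """Mimic the multi_format cascade: try JSON first, then the CRI regex."""
    try:
        return json.loads(line)
    except ValueError:
        pass
    m = CRI_RE.match(line)
    return m.groupdict() if m else {}

# The two example lines quoted in the ConfigMap comments:
json_line = ('{"log":"[info:2016-02-16T16:04:05.930-08:00] Some log text here\\n",'
             '"stream":"stdout","time":"2016-02-17T00:04:05.931087621Z"}')
cri_line = ("2016-02-17T00:04:05.931087621Z stdout F "
            "[info:2016-02-16T16:04:05.930-08:00] Some log text here")

print(parse_line(json_line)["stream"])  # stdout
print(parse_line(cri_line)["log"])      # [info:2016-02-16T16:04:05.930-08:00] Some log text here
```

Either way, the record ends up with `time`, `stream`, and `log` fields before the Kubernetes metadata filter enriches it.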

5. fluentd-es-ds.yaml

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd-es
  namespace: logging
  labels:
    k8s-app: fluentd-es
    addonmanager.kubernetes.io/mode: Reconcile
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd-es
  labels:
    k8s-app: fluentd-es
    addonmanager.kubernetes.io/mode: Reconcile
rules:
  - apiGroups:
      - ""
    resources:
      - "namespaces"
      - "pods"
    verbs:
      - "get"
      - "watch"
      - "list"
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd-es
  labels:
    k8s-app: fluentd-es
    addonmanager.kubernetes.io/mode: Reconcile
subjects:
  - kind: ServiceAccount
    name: fluentd-es
    namespace: logging
    apiGroup: ""
roleRef:
  kind: ClusterRole
  name: fluentd-es
  apiGroup: ""
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-es-v3.1.1
  namespace: logging
  labels:
    k8s-app: fluentd-es
    version: v3.1.1
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      k8s-app: fluentd-es
      version: v3.1.1
  template:
    metadata:
      labels:
        k8s-app: fluentd-es
        version: v3.1.1
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      priorityClassName: system-node-critical
      serviceAccountName: fluentd-es
      containers:
        - name: fluentd-es
          image: quay.io/fluentd_elasticsearch/fluentd:v3.1.0
          env:
            - name: FLUENTD_ARGS
              value: --no-supervisor -q
          resources:
            limits:
              memory: 500Mi
            requests:
              cpu: 100m
              memory: 200Mi
          volumeMounts:
            - name: varlog
              mountPath: /var/log
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
            - name: config-volume
              mountPath: /etc/fluent/config.d
          ports:
            - containerPort: 24231
              name: prometheus
              protocol: TCP
          livenessProbe:
            tcpSocket:
              port: prometheus
            initialDelaySeconds: 5
            timeoutSeconds: 10
          readinessProbe:
            tcpSocket:
              port: prometheus
            initialDelaySeconds: 5
            timeoutSeconds: 10
      terminationGracePeriodSeconds: 30
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
        - name: config-volume
          configMap:
            name: fluentd-es-config-v0.2.1
```
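The DaemonSet mounts the host's `/var/log` so the `in_tail` source in the ConfigMap can read `/var/log/containers/*.log`. As the ConfigMap comments describe, each file path becomes a Fluentd tag: `/` is mapped to `.` and the result is substituted for the `*` in `raw.kubernetes.*`, and the `raw` prefix is later stripped by the `detect_exceptions` match block. A sketch of that derivation (the `tail_tag` helper is mine, not part of Fluentd):

```python
def tail_tag(path: str, tag_pattern: str = "raw.kubernetes.*") -> str:
    """Derive the in_tail tag for a tailed file: '*' in the tag pattern is
    replaced by the file path with '/' mapped to '.' (leading '/' dropped)."""
    path_part = path.lstrip("/").replace("/", ".")
    return tag_pattern.replace("*", path_part)

tag = tail_tag("/var/log/containers/synth-lgr_default_synth-997599.log")
print(tag)
# raw.kubernetes.var.log.containers.synth-lgr_default_synth-997599.log

# After detect_exceptions strips the "raw" prefix, downstream
# <filter kubernetes.**> blocks see the kubernetes.* tag:
print(tag[len("raw."):])
# kubernetes.var.log.containers.synth-lgr_default_synth-997599.log
```

Because the pod name and namespace are embedded in the symlink filename, they survive into the tag and the metadata filter can recover them.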

6. kibana-deployment.yaml

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana-logging
  namespace: logging
  labels:
    k8s-app: kibana-logging
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: kibana-logging
  template:
    metadata:
      labels:
        k8s-app: kibana-logging
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: kibana-logging
          image: docker.elastic.co/kibana/kibana-oss:7.10.2
          resources:
            # need more cpu upon initialization, therefore burstable class
            limits:
              cpu: 1000m
            requests:
              cpu: 100m
          env:
            - name: ELASTICSEARCH_HOSTS
              value: http://elasticsearch-logging:9200
            - name: SERVER_NAME
              value: kibana-logging
            - name: SERVER_BASEPATH
              value: /api/v1/namespaces/logging/services/kibana-logging/proxy
            - name: SERVER_REWRITEBASEPATH
              value: "false"
          ports:
            - containerPort: 5601
              name: ui
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /api/status
              port: ui
            initialDelaySeconds: 5
            timeoutSeconds: 10
          readinessProbe:
            httpGet:
              path: /api/status
              port: ui
            initialDelaySeconds: 5
            timeoutSeconds: 10
```
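Note the `SERVER_BASEPATH` value in the Deployment: it is the apiserver's service-proxy path for the `kibana-logging` service, so the UI is meant to be reached through the API server proxy (e.g. via `kubectl proxy`). A small sketch of how that URL is composed; the host and port are the `kubectl proxy` defaults and are assumptions here:

```python
def service_proxy_url(namespace: str, service: str,
                      apiserver: str = "http://127.0.0.1:8001") -> str:
    """Build the apiserver service-proxy URL for a Service.
    apiserver defaults to the address kubectl proxy listens on."""
    return f"{apiserver}/api/v1/namespaces/{namespace}/services/{service}/proxy"

print(service_proxy_url("logging", "kibana-logging"))
# http://127.0.0.1:8001/api/v1/namespaces/logging/services/kibana-logging/proxy
```

The path portion of that URL is exactly the `SERVER_BASEPATH` value set in the Deployment above, which is why Kibana generates links that work behind the proxy.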

7. kibana-service.yaml

The original post pasted the Deployment here a second time; the actual kibana-service.yaml from the same upstream directory is:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kibana-logging
  namespace: logging
  labels:
    k8s-app: kibana-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "Kibana"
spec:
  ports:
    - port: 5601
      protocol: TCP
      targetPort: ui
  selector:
    k8s-app: kibana-logging
```
