Installing and Using Kibana Locally

1 Introduction to Kibana

1.1 Kibana is a data visualization tool and is typically used together with Elasticsearch:

  • Elasticsearch is a real-time distributed search and analytics engine.

  • Logstash provides data collection, transformation, enrichment, and output capabilities.

  • Kibana is a data visualization tool that provides a powerful visual interface for Elasticsearch.

  • Filebeat is a lightweight log shipper that collects and forwards log data from a variety of sources.

  • IK analyzer: an Elasticsearch analysis plugin; the IK tokenizer is a word-segmentation tool for Chinese text (a quick test is shown below).
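
If the IK plugin is installed in Elasticsearch, a quick way to verify it is the _analyze API; ik_smart and ik_max_word are the two analyzers the plugin registers (a minimal sketch, runnable from the Kibana Dev Tools console described later):

POST /_analyze
{
  "analyzer": "ik_smart",
  "text": "中华人民共和国国歌"
}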

2 Downloading and Configuring Elasticsearch

2.1 Download and extract Elasticsearch

https://www.elastic.co/cn/downloads/elasticsearch

2.2 Configure Elasticsearch: open the extracted directory and locate config/elasticsearch.yml

Set the following options:

network.host: 127.0.0.1 

http.port: 9200

discovery.seed_hosts: ["127.0.0.1"]

http.host: 0.0.0.0

Full configuration file:

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
# cluster.name: rbdc-elk-stack
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
# node.name: LAPTOP-G67LH5N6
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
# path.data: /data/elasticsearch
#
# Path to log files:
#
# path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
network.host: 127.0.0.1
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.seed_hosts: ["127.0.0.1"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Allow wildcard deletion of indices:
#
#action.destructive_requires_name: false

#----------------------- BEGIN SECURITY AUTO CONFIGURATION -----------------------
#
# The following settings, TLS certificates, and keys have been automatically
# generated to configure Elasticsearch security features on 21-08-2024 06:51:43
#
# --------------------------------------------------------------------------------

# Enable security features
xpack.security.enabled: true

xpack.security.enrollment.enabled: true

# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
xpack.security.http.ssl:
  enabled: false
  keystore.path: certs/http.p12

# Enable encryption and mutual authentication between cluster nodes
xpack.security.transport.ssl:
  enabled: false
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12

# Create a new cluster with the current node only
# Additional nodes can still join the cluster later
cluster.initial_master_nodes: ["LAPTOP-G67LH5N6"]

# Allow HTTP API connections from anywhere
# Connections are encrypted and require user authentication
http.host: 0.0.0.0

# Allow other nodes to join the cluster from anywhere
# Connections are encrypted and mutually authenticated
#transport.host: 0.0.0.0
#----------------------- END SECURITY AUTO CONFIGURATION -------------------------

2.3 Configure Elasticsearch users

Elasticsearch refuses to start as the root superuser, so run it under a regular (non-root) account. Kibana, in turn, authenticates to Elasticsearch with the built-in kibana_system user.

If you forget a password, you can reset it with:

bin/elasticsearch-reset-password -u kibana_system
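
The same tool resets the elastic superuser password if you lose it:

bin/elasticsearch-reset-password -u elastic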

If you enroll Kibana with a token, the following command generates a new one:

bin/elasticsearch-create-enrollment-token --scope kibana
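
On Kibana's first start the token can be pasted into the enrollment prompt in the browser; alternatively, Kibana 8.x ships a kibana-setup helper that accepts it on the command line (<token> is a placeholder):

bin/kibana-setup --enrollment-token <token>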

2.4 Start Elasticsearch

bin/elasticsearch

# Start in the background

./bin/elasticsearch -d

The first startup prints the superuser's password and an enrollment token; it is best to save them.

Successful startup log (screenshot omitted).

2.5 Visit http://localhost:9200/. After entering the username and password, a page like the following confirms that startup succeeded (screenshot omitted).
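
The same check can be scripted with curl; substitute the elastic password printed on first start:

curl -u elastic:<password> http://localhost:9200/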

3 Downloading and Configuring Kibana

3.1 Download and extract Kibana

https://www.elastic.co/cn/downloads/kibana

3.2 Configure Kibana: open the extracted directory and locate config/kibana.yml

Set the following options:

server.port: 5601

server.host: "0.0.0.0"

server.maxPayload: 1048576

elasticsearch.hosts: ["http://127.0.0.1:9200"]

elasticsearch.username: "kibana_system"
elasticsearch.password: "vsrW2gnx+gcSLiGT9f8e" 

i18n.locale: "zh-CN"

Full configuration file:

# For more configuration options see the configuration guide for Kibana in
# https://www.elastic.co/guide/index.html

i18n.locale: "zh-CN"

# =================== System: Kibana Server ===================
# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "0.0.0.0"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: "/kibana"

# Specifies whether Kibana should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# Defaults to `false`.
#server.rewriteBasePath: false

# Specifies the public URL at which Kibana is available for end users. If
# `server.basePath` is configured this URL should end with the same basePath.
#server.publicBaseUrl: ""

# The maximum payload size in bytes for incoming server requests.
server.maxPayload: 1048576

# The Kibana server's name. This is used for display purposes.
#server.name: "LAPTOP-G67LH5N6"

# Internationalization: Chinese
#i18n.locale: "zh-CN"

# =================== System: Kibana Server (Optional) ===================
# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key

# =================== System: Elasticsearch ===================
# The URLs of the Elasticsearch instances to use for all your queries.
elasticsearch.hosts: ["http://127.0.0.1:9200"]

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
elasticsearch.username: "kibana_system"
elasticsearch.password: "vsrW2gnx+gcSLiGT9f8e"

# Kibana can also authenticate to Elasticsearch via "service account tokens".
# Service account tokens are Bearer style tokens that replace the traditional username/password based configuration.
# Use this token instead of a username/password.
#elasticsearch.serviceAccountToken: "eyJ2ZXIiOiI4LjE0LjAiLCJhZHIiOlsiMTAuMTEwLjEwLjU6OTIwMCJdLCJmZ3IiOiI2ZTRhZDc1ZmYxNWZkOWZkYjYxYzExOTZjZjY1YzY0YTNlMjc2MTE2MjNmYTc5MjJmNjYxMzBhNjMzZjY2M2IyIiwia2V5Ijoib183MGZaRUJzRFVFSXhTaW93RTg6VFFDMzNzM1VRRXVhRXN4cXF4Vy1sZyJ9"

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
elasticsearch.pingTimeout: 100000

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
elasticsearch.requestTimeout: 100000

# The maximum number of sockets that can be used for communications with elasticsearch.
# Defaults to `Infinity`.
#elasticsearch.maxSockets: 1024

# Specifies whether Kibana should use compression for communications with elasticsearch
# Defaults to `false`.
#elasticsearch.compression: false

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
elasticsearch.shardTimeout: 100000

# =================== System: Elasticsearch (Optional) ===================
# These files are used to verify the identity of Kibana to Elasticsearch and are required when
# xpack.security.http.ssl.client_authentication in Elasticsearch is set to required.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key

# Enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: ["/opt/module/elasticsearch-8.1.0/config/certs/elasticsearch-ca.pem"]

# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: none

# =================== System: Logging ===================
# Set the value of this setting to off to suppress all logging output, or to debug to log everything. Defaults to 'info'
#logging.root.level: debug

# Enables you to specify a file where Kibana stores log output.
#logging.appenders.default:
#  type: file
#  fileName: /var/logs/kibana.log
#  layout:
#    type: json

# Example with size based log rotation
#logging.appenders.default:
#  type: rolling-file
#  fileName: /var/logs/kibana.log
#  policy:
#    type: size-limit
#    size: 256mb
#  strategy:
#    type: numeric
#    max: 10
#  layout:
#    type: json

# Logs queries sent to Elasticsearch.
#logging.loggers:
#  - name: elasticsearch.query
#    level: debug

# Logs http responses.
#logging.loggers:
#  - name: http.server.response
#    level: debug

# Logs system usage information.
#logging.loggers:
#  - name: metrics.ops
#    level: debug

# Enables debug logging on the browser (dev console)
#logging.browser.root:
#  level: debug

# =================== System: Other ===================
# The path where Kibana stores persistent data not saved in Elasticsearch. Defaults to data
#path.data: data

# Specifies the path where Kibana creates the process ID file.
#pid.file: /run/kibana/kibana.pid

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000ms.
#ops.interval: 5000

# Specifies locale to be used for all localizable strings, dates and number formats.
# Supported languages are the following: English (default) "en", Chinese "zh-CN", Japanese "ja-JP", French "fr-FR".
#i18n.locale: "en"

# =================== Frequently used (Optional)===================

# =================== Saved Objects: Migrations ===================
# Saved object migrations run at startup. If you run into migration-related issues, you might need to adjust these settings.

# The number of documents migrated at a time.
# If Kibana can't start up or upgrade due to an Elasticsearch `circuit_breaking_exception`,
# use a smaller batchSize value to reduce the memory pressure. Defaults to 1000 objects per batch.
#migrations.batchSize: 1000

# The maximum payload size for indexing batches of upgraded saved objects.
# To avoid migrations failing due to a 413 Request Entity Too Large response from Elasticsearch.
# This value should be lower than or equal to your Elasticsearch cluster's `http.max_content_length`
# configuration option. Default: 100mb
#migrations.maxBatchSizeBytes: 100mb

# The number of times to retry temporary migration failures. Increase the setting
# if migrations fail frequently with a message such as `Unable to complete the [...] step after
# 15 attempts, terminating`. Defaults to 15
#migrations.retryAttempts: 15

# =================== Search Autocomplete ===================
# Time in milliseconds to wait for autocomplete suggestions from Elasticsearch.
# This value must be a whole number greater than zero. Defaults to 1000ms
unifiedSearch.autocomplete.valueSuggestions.timeout: 100000

# Maximum number of documents loaded by each shard to generate autocomplete suggestions.
# This value must be a whole number greater than zero. Defaults to 100_000
#unifiedSearch.autocomplete.valueSuggestions.terminateAfter: 100000

3.3 Start Kibana

.\bin\kibana.bat

# Background start: unlike Elasticsearch, Kibana does not document a -d daemon flag;
# on Linux a common workaround is:

nohup ./bin/kibana &

Startup log (screenshot omitted).

3.4 Visit http://localhost:5601/

You will be prompted to log in (screenshot omitted).

If the account is wrong, Kibana reports that you have no permission to access the page.

Switching to a superuser account works; the elastic account is used here.

Click "Explore on my own" to browse the data.

3.5 You first need an index in the cluster; you can create one from the Dev Tools console.

Example:

# Welcome to the Dev Tools Console!
#
# You can use Console to explore the Elasticsearch API. See the Elasticsearch API reference to learn more:
# https://www.elastic.co/guide/en/elasticsearch/reference/current/rest-apis.html
#
# Here are a few examples to get you started.

# Create an index
PUT /tang

# Add a document to my-index
POST /tang/_doc
{
  "id": "park_rocky-mountain",
  "title": "Rocky Mountain",
  "description": "Bisected north to south by the Continental Divide, this portion of the Rockies has ecosystems varying from over 150 riparian lakes to montane and subalpine forests to treeless alpine tundra."
}

# Perform a search in my-index
GET /tang/_search?q="rocky mountain"

# Delete a document in my-index
DELETE /tang/_doc/LqRAp5EBDLwLf65W04j
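
Beyond the ?q= query-string shorthand, the console also accepts a full request body in the query DSL; a minimal match query against the index created above:

GET /tang/_search
{
  "query": {
    "match": { "title": "Rocky Mountain" }
  }
}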

Open the Discover panel to see the data you just added.

3.6 Alternatively, manage indices from the Management panel.

4 Deploying and Using Logstash

4.1 Download and extract Logstash

https://www.elastic.co/cn/downloads/logstash

4.2 Open the extracted directory

4.3 Run a hello world: start with stdin and stdout as the input and output, with no filter:

./bin/logstash -e 'input { stdin {} } output { stdout {} }'

Or with formatted (rubydebug) output:

./bin/logstash -e 'input{stdin{}} output{stdout{codec=>rubydebug}}'

After startup the command window waits for input; type any characters:

PS D:\Downloads\logstash-8.15.0> ./bin/logstash -e 'input { stdin {} } output { stdout {} }'
"Using bundled JDK: D:\Downloads\logstash-8.15.0\jdk\bin\java.exe"
Sending Logstash logs to D:/Downloads/logstash-8.15.0/logs which is now configured via log4j2.properties
[2024-08-31T14:15:54,752][INFO ][logstash.runner          ] Log4j configuration path used is: D:\Downloads\logstash-8.15.0\config\log4j2.properties
[2024-08-31T14:15:54,756][WARN ][logstash.runner          ] The use of JAVA_HOME has been deprecated. Logstash 8.0 and later ignores JAVA_HOME and uses the bundled JDK. Running Logstash with the bundled JDK is recommended. The bundled JDK has been verified to work with each specific version of Logstash, and generally provides best performance and reliability. If you have compelling reasons for using your own JDK (organizational-specific compliance requirements, for example), you can configure LS_JAVA_HOME to use that version instead.
[2024-08-31T14:15:54,756][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"8.15.0", "jruby.version"=>"jruby 9.4.8.0 (3.1.4) 2024-07-02 4d41e55a67 OpenJDK 64-Bit Server VM 21.0.4+7-LTS on 21.0.4+7-LTS +indy +jit [x86_64-mswin32]"}
[2024-08-31T14:15:54,759][INFO ][logstash.runner          ] JVM bootstrap flags: [-Xms1g, -Xmx1g, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djruby.compile.invokedynamic=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true, -Dlogstash.jackson.stream-read-constraints.max-string-length=200000000, -Dlogstash.jackson.stream-read-constraints.max-number-length=10000, -Djruby.regexp.interruptible=true, -Djdk.io.File.enableADS=true, --add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED, --add-opens=java.base/java.security=ALL-UNNAMED, --add-opens=java.base/java.io=ALL-UNNAMED, --add-opens=java.base/java.nio.channels=ALL-UNNAMED, --add-opens=java.base/sun.nio.ch=ALL-UNNAMED, --add-opens=java.management/sun.management=ALL-UNNAMED, -Dio.netty.allocator.maxOrder=11]
[2024-08-31T14:15:54,763][INFO ][logstash.runner          ] Jackson default value override `logstash.jackson.stream-read-constraints.max-string-length` configured to `200000000`
[2024-08-31T14:15:54,763][INFO ][logstash.runner          ] Jackson default value override `logstash.jackson.stream-read-constraints.max-number-length` configured to `10000`
[2024-08-31T14:15:54,794][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2024-08-31T14:15:56,716][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
[2024-08-31T14:15:56,783][INFO ][org.reflections.Reflections] Reflections took 99 ms to scan 1 urls, producing 138 keys and 481 values
[2024-08-31T14:15:57,152][INFO ][logstash.javapipeline    ] Pipeline `main` is configured with `pipeline.ecs_compatibility: v8` setting. All plugins in this pipeline will default to `ecs_compatibility => v8` unless explicitly configured otherwise.
[2024-08-31T14:15:57,173][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>12, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>1500, "pipeline.sources"=>["config string"], :thread=>"#<Thread:0x39137235 D:/Downloads/logstash-8.15.0/logstash-core/lib/logstash/java_pipeline.rb:134 run>"}
[2024-08-31T14:15:57,693][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>0.52}
[2024-08-31T14:15:57,769][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
The stdin plugin is now waiting for input:
[2024-08-31T14:15:57,778][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}

You will see a result like the following:

hello world
{
      "@version" => "1",
       "message" => "hello world\r",
    "@timestamp" => 2024-08-31T06:16:19.422258900Z,
         "event" => {
        "original" => "hello world\r"
    },
          "host" => {
        "hostname" => "LAPTOP-G67LH5N6"
    }
}

4.4 Logstash's event pipeline (Logstash calls each record in the data stream an event) consists of three main stages: inputs –> filters –> outputs:

  • inputs: required; they generate events. Common plugins: file, syslog, redis, beats (e.g. Filebeat)
  • filters: optional; they modify events. Common plugins: grok, mutate, drop, clone, geoip
  • outputs: required; they ship events elsewhere. Common plugins: elasticsearch, file, graphite, statsd
  • Both inputs and outputs support codecs (coder & decoder). Before version 1.3.0, Logstash accepted only plain-text input and left all processing to the filters; with codecs, different data types can be handled at the input stage, so the complete data flow is: input | decode | filter | encode | output. Codecs let Logstash coexist cleanly with other operations products that define their own data formats, such as graphite, fluent, netflow, and collectd, as well as products using generic formats like msgpack, json, or edn. A minimal pipeline combining all three stages is sketched below.
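
To make the flow concrete, here is a minimal sketch that exercises all three stages plus a codec: a stdin input decoding JSON, one mutate filter, and a rubydebug stdout output (the added "source" field is purely illustrative):

input {
  stdin { codec => "json" }                        # decode each incoming line as JSON
}
filter {
  mutate { add_field => { "source" => "demo" } }   # tag every event with an extra field
}
output {
  stdout { codec => rubydebug }                    # pretty-print the structured event
}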

4.5 Common input plugins

4.5.0 Standard input (stdin)

  • The simplest and most basic input plugin.
  • input {
        stdin {
          add_field => { "key" => "value" }
          codec => "plain"
          tags => ["add"]
          type => "std"
        }
    }

4.5.1 File input plugin

  • The file input plugin tails files for changes and wraps each change as an event to be processed or forwarded.
  • input {
        file {
          path => ["/var/log/*.log", "/var/log/message"]
          type => "system"
          start_position => "beginning"
        }
    }

4.5.2 Beats listener plugin

  • The beats plugin opens a listening service that receives events sent by Filebeat or other Beats.
  • input {
        beats {
          port => 5044
        }
    }

4.5.3 TCP listener plugin

  • The tcp plugin has two modes, "client" and "server", for sending and listening to network data respectively.
  • input {
        tcp {
          port => 41414
        }
    }

4.5.4 Redis input plugin

  • Reads data cached in Redis.
  • input {
        redis {
          host => "127.0.0.1"
          port => 6379
          data_type => "list"
          key => "logstash-list"
        }
    }

4.5.5 Syslog listener plugin

  • Listens for operating-system syslog messages.
  • input {
        syslog {
        }
    }

4.6 Common filter plugins (Filter plugin)

4.6.1 grok regular-expression capture

  • grok is the best tool in Logstash for parsing unstructured data into structured, queryable data. It is well suited to syslog, Apache, MySQL, and other web logs; a worked example follows this section.
  • input {
        file {
          path => "/var/log/http.log"
        }
    }
    filter {
      grok {
        match => { "message" => "%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}" }
      }
    }
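
For a concrete input line, the pattern above produces the following fields (this is the standard example from the grok documentation):

# Input line:
#   55.3.244.1 GET /index.html 15824 0.043
# Fields extracted by the grok pattern:
#   client   => 55.3.244.1
#   method   => GET
#   request  => /index.html
#   bytes    => 15824
#   duration => 0.043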

4.6.2 date processing plugin

  • This plugin converts the format of time fields, for example turning "Apr 17 09:32:01" (MMM dd HH:mm:ss) into "MM-dd HH:mm:ss". Logstash normally stamps each event automatically, but that timestamp is the processing time (mainly the time the input received the data) and can drift from the time recorded in the log (mainly because of buffering); this plugin lets you replace the default timestamp with the time the log event actually occurred.
  • filter {
      grok {
        match => ["message", "%{HTTPDATE:logdate}"]
      }
      date {
        match => ["logdate", "dd/MMM/yyyy:HH:mm:ss Z"]
      }
    }

4.6.3 mutate field-modification plugin

  • mutate is another important Logstash plugin. It provides rich handling of basic data types and can rename, remove, replace, and modify fields in an event; more of its options are sketched below.
  • filter {
      mutate {
        convert => ["request_time", "float"]
      }
    }
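
Besides convert, mutate can also rename, replace, and remove fields; a short sketch of those options (the field names here are only illustrative):

filter {
  mutate {
    rename       => { "hostip" => "client_ip" }                # rename a field
    replace      => { "message" => "%{message} (processed)" }  # overwrite a field's value
    remove_field => [ "temp_field" ]                           # drop fields you no longer need
  }
}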

4.6.4 JSON plugin

  • The json filter decodes JSON-formatted strings, typically when only some of the log lines are JSON and the rest are not. The source option names the field that holds the JSON text.
  • filter {
      json {
        source => "message"            # e.g. message contains {"uid":3081609001,"type":"signal"}
        target => "jsoncontent"
      }
    }

4.6.5 elasticsearch query filter plugin

  • Queries Elasticsearch for earlier events and applies fields from the result to the current event. A sketch using the plugin's hosts/query/fields options (the opid field is illustrative):
  • filter {
      elasticsearch {
        hosts => ["http://127.0.0.1:9200"]
        query => "type:start AND operation:%{[opid]}"
        fields => { "@timestamp" => "started" }
      }
    }

4.7 Common output plugins (Output plugin)

4.7.1 Elasticsearch output plugin

  • Writes events to Elasticsearch; the officially recommended plugin and a must-have for the ELK stack.
  • output {
      elasticsearch {
        hosts => ["127.0.0.1:9200"]
        index => "filebeat-%{type}-%{+yyyy.MM.dd}"
        template_overwrite => true
      }
    }

4.7.2 Redis output plugin

  • Writes events into Redis as a cache. Logstash filters tend to be resource-hungry and complex filters can be very slow; if events are produced faster than they can be processed, Redis can serve as a buffer.
  • output {
      redis {
        host => "127.0.0.1"
        port => 6379
        data_type => "list"
        key => "logstash-list"
      }
    }

4.7.3 File output plugin

  • Writes events to a file.
  • output {
      file {
        path => ...
        codec => line { format => "custom format: %{message}" }
      }
    }

4.7.4 TCP plugin

  • Writes events over a TCP socket, one JSON document per line. It can either accept connections from clients or connect to a server.
  • output {
      tcp {
        host => "127.0.0.1"
        port => 80
      }
    }

4.8 Common codec plugins (Codec plugin)

4.8.1 JSON codec plugin

  • Feed Logstash pre-structured JSON directly, which makes the filter/grok stage unnecessary. The snippet below is an nginx log_format definition that writes the access log as JSON; a matching Logstash input follows it.
  • log_format json '{"@timestamp":"$time_iso8601",'
                    '"@version":"1",'
                    '"host":"$server_addr",'
                    '"client":"$remote_addr",'
                    '"size":$body_bytes_sent,'
                    '"responsetime":$request_time,'
                    '"domain":"$host",'
                    '"url":"$uri",'
                    '"status":"$status"}';
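
On the Logstash side, reading that file then needs only the json codec and no grok at all (a sketch; the log path is an assumption):

input {
  file {
    path  => "/var/log/nginx/access.log"   # assumed location of the JSON access log
    codec => "json"                        # each line is decoded straight into event fields
  }
}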

4.9 Simple example: Logstash output to Elasticsearch

4.9.1 Create a file named "logstash-simple.conf" and save it in the same directory as Logstash.

4.9.2 Edit it with the following content:

input {
  #beats {
  #  port => 5044
  #}
  stdin { }
}
output {
  elasticsearch {
    hosts => ["http://127.0.0.1:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
    user => "elastic"
    password => "123456"
  }
}

4.9.3 Start Logstash with the configuration file:

./bin/logstash -f logstash-simple.conf

After startup it prompts for input; type anything.

Then refresh Kibana, and the data you just entered appears.
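
You can also confirm the write from the Dev Tools console by asking for the most recent document in the daily index the pipeline writes to:

GET /logstash-*/_search?size=1&sort=@timestamp:desc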

Expanding the parsed document shows the ID and related fields:

{"_index": "tang","_id": "L6Rfp5EBDLwLf65WKIj5","_version": 1,"_score": 1,"fields": {"@timestamp": ["2024-08-31T07:38:37.042Z"],"event.original": ["hello world\r"],"event.original.keyword": ["hello world\r"],"message.keyword": ["hello world\r"],"@version": ["1"],"host.hostname.keyword": ["LAPTOP-G67LH5N6"],"@version.keyword": ["1"],"host.hostname": ["LAPTOP-G67LH5N6"],"message": ["hello world\r"]}
}

5 Downloading and Configuring Filebeat

5.1 Download Filebeat

https://www.elastic.co/cn/downloads/beats/filebeat

5.2 Configure Filebeat: open the extracted directory and locate filebeat.yml

Configure filebeat.yml to ship logs to Elasticsearch:

filebeat.inputs:
- type: log
  enabled: true
  encoding: utf-8
  paths:
    - D:\Downloads\logs\*.log
  fields:
    level: info

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

setup.template.settings:
  index.number_of_shards: 1

setup.template.name: "tang"
setup.template.pattern: "tang-*"

setup.kibana:
  host: "localhost:5601"
  username: "dong"
  password: "Aa123456.."

output.elasticsearch:
  hosts: ["localhost:9200"]
  protocol: "http"
  ssl.verification_mode: "none"
  username: "elastic"
  password: "y43xtubPe6-O85aoCK9Y"
  index: "tang-%{+yyyy.MM.dd}"
  indices:
    - index: "tang-%{+yyyy.MM.dd}"

processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
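
Before starting, Filebeat can check both this configuration file and the connection to the configured output with its built-in test subcommands:

./filebeat test config -c filebeat.yml
./filebeat test output -c filebeat.yml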

5.3 Go to the installation directory and run the start command:

./filebeat -e -c filebeat.yml 

Filebeat running successfully (screenshot omitted).

5.4 Ship a log

In the log directory configured for Filebeat, add a log file with the content below (note that with the paths setting above the file name must match *.log, so use e.g. log.log rather than log.txt). After saving it, the log entry appears in Kibana.

{"log.level":"error","message":"{"msg":null,"code":200,"success":true,"data":[{"storeSkuId":"32544","storeSkuName":"11111","salePrice":"2","advertisingWords":null,"img":null,"unitCode":"KG","unitCodeName":"KG","intro":null,"stock":null,"levelOneId":null,"levelOneName":null,"levelTwoId":null,"levelTwoName":null,"levelThreeId":null,"levelThreeName":null,"specification":null,"standardSkuCode":"P03554807","standardSkuId":958,"activityList":[],"activityPrice":null,"authCode":null,"goodsType":2,"skuType":1,"barcode":"1105026100002","allowDecimal":true,"isCustomizableBarcode":true,"showQuantity":50.000,"showSalePrice":null}]}"
}
