ELK + Kafka + Zookeeper Log Collection System

Environment Preparation

| Node IP | Planned Services | Hostname |
| --- | --- | --- |
| 192.168.112.3 | Elasticsearch + Kibana + Logstash + Zookeeper + Kafka + Nginx | elk-node1 |
| 192.168.112.4 | Elasticsearch + Logstash + Zookeeper + Kafka | elk-node2 |
| 192.168.112.5 | Elasticsearch + Logstash + Zookeeper + Kafka + Nginx | elk-node3 |

Base Environment

systemctl disable firewalld --now && setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
mv /etc/yum.repos.d/CentOS-* /tmp/
curl -o /etc/yum.repos.d/centos.repo http://mirrors.aliyun.com/repo/Centos-7.repo
curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
yum install -y vim net-tools wget unzip

Set the Hostnames

[root@localhost ~]# hostnamectl set-hostname elk-node1
[root@localhost ~]# bash
[root@localhost ~]# hostnamectl set-hostname elk-node2
[root@localhost ~]# bash
[root@localhost ~]# hostnamectl set-hostname elk-node3
[root@localhost ~]# bash

Configure Host Mappings

[root@elk-node1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.112.3 elk-node1
192.168.112.4 elk-node2
192.168.112.5 elk-node3

Elasticsearch Deployment

Install Elasticsearch

Install Java and Elasticsearch on all three hosts.

[root@elk-node1 ~]# yum install -y java-1.8.0-*
[root@elk-node1 ~]# wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.0.0.rpm
[root@elk-node1 ~]# rpm -ivh elasticsearch-6.0.0.rpm
### rpm options: i = install, v = verbose output, h = print hash marks to show progress

Startup Error Fix

### If the JDK was installed from a binary tarball, symlink java so the service can find it
[root@elk-node1 ~]# ln -s /opt/jdk1.8.0_391/bin/java /usr/bin/java

Elasticsearch Configuration

elk-node1 configuration
[root@elk-node1 ~]# cat /etc/elasticsearch/elasticsearch.yml | grep -v ^# | grep -v ^$
cluster.name: ELK
node.name: elk-node1
node.master: true
node.data: true
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 192.168.112.3
http.port: 9200
discovery.zen.ping.unicast.hosts: ["elk-node1", "elk-node2","elk-node3"]
elk-node2 configuration
[root@elk-node2 ~]# cat /etc/elasticsearch/elasticsearch.yml | grep -v ^# | grep -v ^$
cluster.name: ELK
node.name: elk-node2
node.master: true
node.data: true
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 192.168.112.4
http.port: 9200
discovery.zen.ping.unicast.hosts: ["elk-node1", "elk-node2","elk-node3"]
elk-node3 configuration
[root@elk-node3 ~]# cat /etc/elasticsearch/elasticsearch.yml | grep -v ^# | grep -v ^$
cluster.name: ELK
node.name: elk-node3
node.master: true
node.data: true
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 192.168.112.5
http.port: 9200
discovery.zen.ping.unicast.hosts: ["elk-node1", "elk-node2","elk-node3"]

Start the Service

[root@elk-node1 ~]# systemctl daemon-reload
[root@elk-node1 ~]# systemctl enable elasticsearch --now
Created symlink from /etc/systemd/system/multi-user.target.wants/elasticsearch.service to /usr/lib/systemd/system/elasticsearch.service.

Check the Process and Ports

[root@elk-node1 ~]# ps -ef | grep elasticsearch
elastic+  12663      1 99 22:28 ?        00:00:11 /bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+AlwaysPreTouch -server -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/lib/elasticsearch -Des.path.home=/usr/share/elasticsearch -Des.path.conf=/etc/elasticsearch -cp /usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch -p /var/run/elasticsearch/elasticsearch.pid --quiet
root      12720   1822  0 22:28 pts/0    00:00:00 grep --color=auto elasticsearch
[root@elk-node1 ~]# netstat -ntpl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1021/sshd           
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      1175/master         
tcp6       0      0 192.168.112.3:9200      :::*                    LISTEN      12663/java          
tcp6       0      0 192.168.112.3:9300      :::*                    LISTEN      12663/java          
tcp6       0      0 :::22                   :::*                    LISTEN      1021/sshd           
tcp6       0      0 ::1:25                  :::*                    LISTEN      1175/master

Check the Cluster Status

[root@elk-node1 ~]# curl 'elk-node1:9200/_cluster/health?pretty'
{"cluster_name" : "ELK",   		//集群名称"status" : "green",   				//集群健康状态,green为健康,yellow或者red则是集群有问题"timed_out" : false   				//是否超时,"number_of_nodes" : 3,   			//集群中节点数"number_of_data_nodes" : 3,   //集群中data节点数量"active_primary_shards" : 0,"active_shards" : 0,"relocating_shards" : 0,"initializing_shards" : 0,"unassigned_shards" : 0,"delayed_unassigned_shards" : 0,"number_of_pending_tasks" : 0,"number_of_in_flight_fetch" : 0,"task_max_waiting_in_queue_millis" : 0,"active_shards_percent_as_number" : 100.0
}
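To see which node was elected master and confirm that all three nodes have joined, the `_cat/nodes` API can also be queried (a quick check; output omitted here):

[root@elk-node1 ~]# curl 'elk-node1:9200/_cat/nodes?v'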

Kibana Deployment

Install Kibana

[root@elk-node1 ~]# wget https://artifacts.elastic.co/downloads/kibana/kibana-6.0.0-x86_64.rpm
[root@elk-node1 ~]# rpm -ivh kibana-6.0.0-x86_64.rpm

Kibana Configuration

Add the Nginx repository

[root@elk-node1 ~]# vim /etc/yum.repos.d/nginx.repo
[nginx]
name = nginx repo
baseurl = https://nginx.org/packages/mainline/centos/7/$basearch/
gpgcheck = 0
enabled = 1

Install Nginx

[root@elk-node1 ~]# yum install -y nginx

Start the service

[root@elk-node1 ~]# systemctl enable nginx --now
Created symlink from /etc/systemd/system/multi-user.target.wants/nginx.service to /usr/lib/systemd/system/nginx.service.

Configure Nginx load balancing

[root@elk-node1 ~]# cat /etc/nginx/nginx.conf
user  nginx;
worker_processes  auto;
error_log  /var/log/nginx/error.log notice;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
    access_log  /var/log/nginx/access.log  main;
    sendfile        on;
    #tcp_nopush     on;
    keepalive_timeout  65;
    #gzip  on;

    upstream elasticsearch {
        zone elasticsearch 64K;
        server elk-node1:9200;
        server elk-node2:9200;
        server elk-node3:9200;
    }

    server {
        listen 80;
        server_name 192.168.112.3;

        location / {
            proxy_pass http://elasticsearch;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

        access_log /var/log/es_access.log;
    }

    include /etc/nginx/conf.d/*.conf;
}

Restart the service

[root@elk-node1 ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@elk-node1 ~]# nginx -s reload
[root@elk-node1 ~]# systemctl restart nginx
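At this point requests to port 80 on elk-node1 should be proxied to the three Elasticsearch nodes in turn. A quick way to verify the load balancer (assuming the upstream nodes are healthy) is to query the cluster health through it:

[root@elk-node1 ~]# curl 'http://192.168.112.3/_cluster/health?pretty'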

Configure kibana.yml

[root@elk-node1 ~]# cat /etc/kibana/kibana.yml | grep -v ^#
server.port: 5601
server.host: 192.168.112.3
elasticsearch.url: "http://192.168.112.3:80"

Start the service

[root@elk-node1 ~]# systemctl enable kibana --now
Created symlink from /etc/systemd/system/multi-user.target.wants/kibana.service to /etc/systemd/system/kibana.service.
[root@elk-node1 ~]# ps -ef | grep kibana
kibana    13384      1 32 06:02 ?        00:00:02 /usr/share/kibana/bin/../node/bin/node --no-warnings /usr/share/kibana/bin/../src/cli -c /etc/kibana/kibana.yml
root      13396   1822  0 06:03 pts/0    00:00:00 grep --color=auto kibana
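Kibana also exposes a status endpoint that can be checked from the command line before opening the browser (a simple reachability test; the JSON it returns is not shown here):

[root@elk-node1 ~]# curl -s 'http://192.168.112.3:5601/api/status'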

Access from a browser

(screenshot: Kibana web UI)

Zookeeper Cluster Deployment

Install Zookeeper

[root@elk-node1 ~]# tar -zxvf apache-zookeeper-3.8.3-bin.tar.gz -C /usr/local/
[root@elk-node1 ~]# mv /usr/local/apache-zookeeper-3.8.3-bin/ /usr/local/zookeeper
[root@elk-node1 ~]# cp /usr/local/zookeeper/conf/zoo_sample.cfg /usr/local/zookeeper/conf/zoo.cfg

Configure environment variables

### Quote 'EOF' so the variables are written literally instead of being expanded now
[root@elk-node1 ~]# cat >> /etc/profile << 'EOF'
export ZOOKEEPER_HOME=/usr/local/zookeeper
export PATH=$ZOOKEEPER_HOME/bin:$PATH
EOF
[root@elk-node1 ~]# source /etc/profile
[root@elk-node1 ~]# scp /etc/profile 192.168.112.4:/etc/profile
[root@elk-node1 ~]# scp /etc/profile 192.168.112.5:/etc/profile
[root@elk-node2 ~]# source /etc/profile
[root@elk-node3 ~]# source /etc/profile

Configure Zookeeper

[root@elk-node1 ~]# cat /usr/local/zookeeper/conf/zoo.cfg
# The number of milliseconds of each tick
tickTime=2000   ### heartbeat interval between ZooKeeper servers: 2 seconds
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10    ### time limit for followers to connect and sync with the leader initially
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5     ### time limit for leader/follower synchronization
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
dataDir=/tmp/zookeeper  ### directory where ZooKeeper stores its data
dataLogDir=/usr/local/zookeeper/logs ### directory where ZooKeeper stores its log files
# the port at which the clients will connect
clientPort=2181         ### port clients use to connect to the ZooKeeper servers
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
maxClientCnxns=60
#
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# https://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
autopurge.purgeInterval=1
server.1=elk-node1:2888:3888
server.2=elk-node2:2888:3888
server.3=elk-node3:2888:3888
## Metrics Providers
#
# https://prometheus.io Metrics Exporter
#metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
#metricsProvider.httpHost=0.0.0.0
#metricsProvider.httpPort=7000
#metricsProvider.exportJvmInfo=true

Configure node identifiers (myid)

[root@elk-node1 ~]# scp /usr/local/zookeeper/conf/zoo.cfg 192.168.112.4:/usr/local/zookeeper/conf/zoo.cfg
[root@elk-node1 ~]# scp /usr/local/zookeeper/conf/zoo.cfg 192.168.112.5:/usr/local/zookeeper/conf/zoo.cfg
[root@elk-node1 ~]# mkdir /tmp/zookeeper
[root@elk-node1 ~]# echo "1" > /tmp/zookeeper/myid
[root@elk-node2 ~]# mkdir /tmp/zookeeper
[root@elk-node2 ~]# echo "2" > /tmp/zookeeper/myid
[root@elk-node3 ~]# mkdir /tmp/zookeeper
[root@elk-node3 ~]# echo "3" > /tmp/zookeeper/myid

Start the service

All three nodes must be started, otherwise errors are reported.

[root@elk-node1 ~]# zkServer.sh start
/bin/java
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

Check the service status

[root@elk-node1 ~]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: follower
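As an extra check, the ZooKeeper CLI can be used to confirm that the ensemble accepts client connections (a minimal sanity test; on a fresh ensemble only the built-in /zookeeper node exists):

[root@elk-node1 ~]# zkCli.sh -server elk-node1:2181
[zk: elk-node1:2181(CONNECTED) 0] ls /
[zookeeper]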

Kafka Cluster Deployment

Install Kafka

[root@elk-node1 ~]# tar -zxvf kafka_2.12-3.6.1.tgz -C /usr/local/
[root@elk-node1 ~]# mv /usr/local/kafka_2.12-3.6.1/ /usr/local/kafka
[root@elk-node1 ~]# cp /usr/local/kafka/config/server.properties{,.bak}
[root@elk-node1 ~]# scp kafka_2.12-3.6.1.tgz 192.168.112.4:/root
[root@elk-node1 ~]# scp kafka_2.12-3.6.1.tgz 192.168.112.5:/root

Configure environment variables

### Quote 'EOF' so the variables are written literally instead of being expanded now
[root@elk-node1 ~]# cat >> /etc/profile << 'EOF'
export KAFKA_HOME=/usr/local/kafka
export PATH=$KAFKA_HOME/bin:$PATH
EOF
[root@elk-node1 ~]# source /etc/profile
[root@elk-node1 ~]# echo $KAFKA_HOME
/usr/local/kafka
[root@elk-node1 ~]# scp /etc/profile 192.168.112.4:/etc/profile
[root@elk-node1 ~]# scp /etc/profile 192.168.112.5:/etc/profile
[root@elk-node2 ~]# source /etc/profile
[root@elk-node3 ~]# source /etc/profile

Configure Kafka

[root@elk-node1 ~]# grep -v "^#" /usr/local/kafka/config/server.properties.bak > /usr/local/kafka/config/server.properties
[root@elk-node1 ~]# vim /usr/local/kafka/config/server.properties
# Unique ID of this broker in the cluster; must be a positive integer
broker.id=1
# Address and port the broker listens on
listeners=PLAINTEXT://192.168.112.3:9092
# Number of threads the broker uses for handling network requests; usually does not need changing
num.network.threads=3
# Number of threads the broker uses for disk I/O; should be at least the number of disks
num.io.threads=8
# Socket send buffer
socket.send.buffer.bytes=102400
# Socket receive buffer
socket.receive.buffer.bytes=102400
# Maximum size of a socket request
socket.request.max.bytes=104857600
# Directories where Kafka stores its data; separate multiple paths with commas,
# e.g. /tmp/kafka-log,/tmp/kafka-log2 — spreading them over different disks improves read/write performance
log.dirs=/usr/local/kafka/kafka-logs
# Default number of partitions
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
# How long to retain log data; 168 hours here (retention can also be set by minutes or by file size)
log.retention.hours=168
# How often (in ms) to check whether log segments should be deleted according to the retention policy
log.retention.check.interval.ms=300000
# Zookeeper cluster address
zookeeper.connect=elk-node1:2181,elk-node2:2181,elk-node3:2181
# Timeout for Kafka connecting to Zookeeper
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
[root@elk-node1 ~]# scp /usr/local/kafka/config/server.properties 192.168.112.4:/usr/local/kafka/config/server.properties
[root@elk-node1 ~]# scp /usr/local/kafka/config/server.properties 192.168.112.5:/usr/local/kafka/config/server.properties

######## Adjust broker.id on each node (every broker needs a unique positive integer ID;
######## the listeners address should also point at that node's own IP)
broker.id=1    # elk-node1
broker.id=2    # elk-node2
broker.id=3    # elk-node3

Start Kafka

All three nodes need to be started.

### Start
[root@elk-node1 ~]# kafka-server-start.sh -daemon /usr/local/kafka/config/server.properties
### Stop
[root@elk-node1 ~]# kafka-server-stop.sh

Note: a Kafka broker needs 1 GB of heap by default. In production this is often increased by editing kafka-server-start.sh: find the KAFKA_HEAP_OPTS entry and change it, for example to export KAFKA_HEAP_OPTS="-Xmx2G -Xms2G".
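Alternatively, the heap can be overridden per run by exporting the variable before starting the broker, since kafka-server-start.sh only applies its default when KAFKA_HEAP_OPTS is unset (a sketch; size it to the host's memory):

[root@elk-node1 ~]# export KAFKA_HEAP_OPTS="-Xmx2G -Xms2G"
[root@elk-node1 ~]# kafka-server-start.sh -daemon /usr/local/kafka/config/server.properties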

Test Kafka

[root@elk-node1 ~]# jps 
24099 QuorumPeerMain
48614 Jps
47384 Kafka
23258 Elasticsearch
Create a topic
Create a test topic named test-ken on elk-node1 (a broker); here we specify a replication factor of 3 and 2 partitions.

[root@elk-node1 ~]# kafka-topics.sh --create --bootstrap-server elk-node1:9092 --replication-factor 3 --partitions 2 --topic test-ken
Created topic test-ken.

Note: Kafka discourages mixing "_" and "." in topic names because of possible metric-name collisions. Option explanations:
--create: create a new topic
--bootstrap-server: the Kafka server (host:port) on which to create the topic; the address must match the listeners setting in the configuration file
--zookeeper: (used by older Kafka versions) the Zookeeper server (host:port) through which to create the topic
--replication-factor: the number of replicas for each partition of the topic; it is recommended to match the number of brokers, and the topic cannot be created if it exceeds the number of brokers
--partitions: the number of partitions for the topic
--topic: the topic name
List topics

After the topic is created on elk-node1 it is also replicated to the other two brokers (elk-node2 and elk-node3). The following command lists the topics known to a given broker.

[root@elk-node1 ~]# kafka-topics.sh --list --bootstrap-server elk-node1:9092 
test-ken
[root@elk-node1 ~]# kafka-topics.sh --list --bootstrap-server elk-node2:9092 
__consumer_offsets
test-ken
Describe a topic
[root@elk-node3 ~]# kafka-topics.sh --describe --bootstrap-server elk-node1:9092 --topic test-ken
Topic: test-ken TopicId: CMsPBF2XQySuUyr9ekEf7Q PartitionCount: 2       ReplicationFactor: 3    Configs: 
        Topic: test-ken Partition: 0    Leader: 3       Replicas: 3,2,1 Isr: 3,2,1
        Topic: test-ken Partition: 1    Leader: 1       Replicas: 1,3,2 Isr: 1,3,2

Topic: test-ken         # topic name
PartitionCount: 2       # number of partitions
ReplicationFactor: 3    # number of replicas
Produce messages
Send messages to the topic test-ken via the broker with id=1:

[root@elk-node1 ~]# kafka-console-producer.sh --broker-list elk-node1:9092 --topic test-ken
>this is test   
>bye

--broker-list: which broker(s) to use for producing messages
--topic: which topic to produce messages to
Verify message consumption
### Consumer:
### Consume from the beginning (every node should receive the messages)
### Test on elk-node1
[root@elk-node1 ~]# kafka-console-consumer.sh --bootstrap-server elk-node2:9092 --topic test-ken --from-beginning 
this is test
bye
Processed a total of 2 messages
### Test on elk-node2
[root@elk-node2 ~]# kafka-console-consumer.sh --bootstrap-server elk-node1:9092 --topic test-ken --from-beginning     
this is test
bye
Processed a total of 2 messages

### Consumer group:
### One consumer group contains multiple consumer processes, and their number should be less than or equal to the number of partitions
### test-ken has only 2 partitions, so at most two consumer processes can poll messages from it
[root@elk-node1 ~]# kafka-console-consumer.sh --bootstrap-server elk-node1:9092 --topic test-ken --group testgroup_ken
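To see how the two partitions are assigned among the members of the group and whether the group is lagging, the consumer-groups tool can be used (a quick check; run it while or after the group has consumed):

[root@elk-node1 ~]# kafka-consumer-groups.sh --bootstrap-server elk-node1:9092 --describe --group testgroup_ken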
Delete a topic
[root@elk-node1 ~]# kafka-topics.sh --delete --bootstrap-server elk-node1:9092 --topic test-ken
Verify the deletion
[root@elk-node3 ~]# kafka-topics.sh --describe --bootstrap-server elk-node1:9092 --topic test-ken
Error while executing topic command : Topic 'test-ken' does not exist as expected
[2024-01-13 15:14:10,659] ERROR java.lang.IllegalArgumentException: Topic 'test-ken' does not exist as expected
        at kafka.admin.TopicCommand$.kafka$admin$TopicCommand$$ensureTopicExists(TopicCommand.scala:400)
        at kafka.admin.TopicCommand$TopicService.describeTopic(TopicCommand.scala:312)
        at kafka.admin.TopicCommand$.main(TopicCommand.scala:63)
        at kafka.admin.TopicCommand.main(TopicCommand.scala)
 (kafka.admin.TopicCommand$)

The Role of Zookeeper

1. Broker registration in ZK
Each Kafka broker (i.e. each node/machine) registers itself in Zookeeper when it starts: its broker.id is recorded under /brokers/ids. When a node fails, Zookeeper removes that entry, which makes it easy to monitor broker changes across the whole cluster and rebalance in time.

[root@elk-node1 ~]# zkCli.sh -server elk-node1:2181
WatchedEvent state:SyncConnected type:None path:null
[zk: elk-node1:2181(CONNECTED) 0] ls /brokers 
[ids, seqid, topics]
[zk: elk-node1:2181(CONNECTED) 1] ls /brokers/ids
[1, 2, 3]
[zk: elk-node1:2181(CONNECTED) 2]
2. Topic registration in ZK
Kafka can hold many topics, and each topic is divided into multiple partitions. Normally each partition lives on a broker, and the mapping between topics and brokers is maintained by Zookeeper.

The topic was deleted above, so create it again:

[root@elk-node1 ~]# kafka-topics.sh --create --bootstrap-server elk-node1:9092 --replication-factor 3 --partitions 2 --topic test-ken
Created topic test-ken.

[root@elk-node1 ~]# zkCli.sh -server elk-node1:2181
WatchedEvent state:SyncConnected type:None path:null
[zk: elk-node1:2181(CONNECTED) 0] ls /brokers/topics/test-ken/partitions
[0, 1]
3. Consumer registration in ZK
Note: since Kafka 0.9, consumer-group and offset information is no longer stored in Zookeeper but on the broker servers. So once a consumer with a group name (group.id) starts, that group name and the offsets for the topic it consumes are recorded on the brokers. Zookeeper is not well suited to large volumes of reads and writes (writes in particular), so Kafka stores this information in a dedicated internal topic instead: __consumer_offsets.

[zk: elk-node1:2181(CONNECTED) 0] ls /brokers/topics
[__consumer_offsets, test-ken]
[zk: elk-node1:2181(CONNECTED) 1] ls /brokers/topics/__consumer_offsets/partitions
[0, 1, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 2, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 3, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 4, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 5, 6, 7, 8, 9]
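Because the offsets live in the __consumer_offsets topic rather than in Zookeeper, they are inspected with Kafka's own tooling instead of zkCli; listing the groups recorded on the brokers should include the testgroup_ken group used earlier:

[root@elk-node1 ~]# kafka-consumer-groups.sh --bootstrap-server elk-node1:9092 --list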

Beats Log Collection Deployment

Install Beats

[root@elk-node1 ~]# scp filebeat-6.0.0-x86_64.rpm 192.168.112.4:/root
[root@elk-node1 ~]# scp filebeat-6.0.0-x86_64.rpm 192.168.112.5:/root
[root@elk-node1 ~]# rpm -ivh filebeat-6.0.0-x86_64.rpm 
warning: filebeat-6.0.0-x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:filebeat-6.0.0-1                 ################################# [100%]

Beats Configuration

The topic set in each Filebeat output must match the topics list of the corresponding Logstash kafka input configured later.

elk-node1 node
### Edit the configuration file
[root@elk-node1 ~]# > /etc/filebeat/filebeat.yml
[root@elk-node1 ~]# vim /etc/filebeat/filebeat.yml
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /var/log/es_access.log        ### change this to whatever log file you want to monitor
output.kafka:
  enabled: true
  hosts: ["elk-node1:9092","elk-node2:9092","elk-node3:9092"]
  topic: "es_access"                ### the Kafka topic the log lines are published to
  keep_alive: 10s

elk-node2 node
[root@elk-node2 ~]# > /etc/filebeat/filebeat.yml
[root@elk-node2 ~]# vim /etc/filebeat/filebeat.yml 
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /var/log/vmware-network.log
output.kafka:
  enabled: true
  hosts: ["elk-node1:9092","elk-node2:9092","elk-node3:9092"]
  topic: "vmware-network"
  keep_alive: 10s

elk-node3 node
[root@elk-node3 ~]# > /etc/filebeat/filebeat.yml
[root@elk-node3 ~]# vim /etc/filebeat/filebeat.yml
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /var/log/access.log
output.kafka:
  enabled: true
  hosts: ["elk-node1:9092","elk-node2:9092","elk-node3:9092"]
  topic: "access"
  keep_alive: 10s

Start the service

[root@elk-node1 ~]# systemctl enable filebeat --now
Created symlink from /etc/systemd/system/multi-user.target.wants/filebeat.service to /usr/lib/systemd/system/filebeat.service.
[root@elk-node1 ~]# systemctl status filebeat       
● filebeat.service - filebeat
   Loaded: loaded (/usr/lib/systemd/system/filebeat.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2024-01-13 15:43:19 CST; 6s ago
     Docs: https://www.elastic.co/guide/en/beats/filebeat/current/index.html
 Main PID: 55537 (filebeat)
   CGroup: /system.slice/filebeat.service
           └─55537 /usr/share/filebeat/bin/filebeat -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat -path.config /etc/filebeat -path.data /var/lib/filebeat...

Jan 13 15:43:19 elk-node1 systemd[1]: Started filebeat
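Filebeat can also check its configuration and its connection to the Kafka output from the command line (assuming the `test` subcommand is available in this 6.x release; output omitted):

[root@elk-node1 ~]# filebeat test config -c /etc/filebeat/filebeat.yml
[root@elk-node1 ~]# filebeat test output -c /etc/filebeat/filebeat.yml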

Logstash Deployment

Install Logstash

[root@elk-node1 ~]# wget https://artifacts.elastic.co/downloads/logstash/logstash-6.0.0.rpm
[root@elk-node1 ~]# rpm -ivh logstash-6.0.0.rpm 
warning: logstash-6.0.0.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:logstash-1:6.0.0-1               ################################# [100%]
Using provided startup.options file: /etc/logstash/startup.options
Successfully created system startup script for Logstash

Configure Logstash

elk-node1 node
### Edit /etc/logstash/logstash.yml; the non-comment settings are as follows
[root@elk-node1 ~]# grep -v '^#' /etc/logstash/logstash.yml 
http.host: "192.168.112.3"
path.data: /var/lib/logstash
path.config: /etc/logstash/conf.d/*.conf
path.logs: /var/log/logstash
elk-node2 node
### Logstash pipeline that collects the es_access logs
[root@elk-node2 ~]# cat /etc/logstash/conf.d/es_access.conf
input {
  kafka {
    bootstrap_servers => "elk-node1:9092,elk-node2:9092,elk-node3:9092"
    group_id => "logstash"
    auto_offset_reset => "earliest"
    decorate_events => true
    topics => ["es_access"]
    type => "messages"
  }
}
output {
  if [type] == "messages" {
    elasticsearch {
      hosts => ["elk-node1:9200","elk-node2:9200","elk-node3:9200"]
      index => "es_access-%{+YYYY.MM.dd}"
    }
  }
}
### Logstash pipeline that collects the vmware logs
[root@elk-node2 ~]# cat /etc/logstash/conf.d/vmware.conf
input {
  kafka {
    bootstrap_servers => "elk-node1:9092,elk-node2:9092,elk-node3:9092"
    group_id => "logstash"
    auto_offset_reset => "earliest"
    decorate_events => true
    topics => ["vmware"]
    type => "messages"
  }
}
output {
  if [type] == "messages" {
    elasticsearch {
      hosts => ["elk-node1:9200","elk-node2:9200","elk-node3:9200"]
      index => "vmware-%{+YYYY.MM.dd}"
    }
  }
}
### Logstash pipeline that collects the nginx logs
[root@elk-node2 ~]# cat /etc/logstash/conf.d/nginx.conf
input {
  kafka {
    bootstrap_servers => "elk-node1:9092,elk-node2:9092,elk-node3:9092"
    group_id => "logstash"
    auto_offset_reset => "earliest"
    decorate_events => true
    topics => ["nginx"]
    type => "messages"
  }
}
output {
  if [type] == "messages" {
    elasticsearch {
      hosts => ["elk-node1:9200","elk-node2:9200","elk-node3:9200"]
      index => "nginx-%{+YYYY.MM.dd}"
    }
  }
}

Check the configuration files for errors

[root@elk-node2 ~]# ln -s /usr/share/logstash/bin/logstash /usr/bin/

### Check es_access
[root@elk-node2 ~]# logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/es_access.conf --config.test_and_exit
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
Configuration OK
### Check vmware
[root@elk-node2 ~]# logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/vmware.conf --config.test_and_exit   
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
Configuration OK
### Check nginx
[root@elk-node2 ~]# logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/nginx.conf --config.test_and_exit
Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties
Configuration OK
### "Configuration OK" means the file is valid.
### Option explanations:
### --path.settings: directory containing the Logstash settings files
### -f: path of the pipeline configuration file to check
### --config.test_and_exit: test the configuration and exit instead of starting Logstash

Start Logstash

All three nodes need to be started.

### After the configuration files check out, start the Logstash service
[root@elk-node2 ~]# systemctl enable logstash --now
Created symlink from /etc/systemd/system/multi-user.target.wants/logstash.service to /etc/systemd/system/logstash.service.
### Check the process
[root@elk-node2 ~]# ps -ef | grep logstash
logstash  17845      1  0 17:32 ?        00:00:00 /bin/java -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+DisableExplicitGC -Djava.awt.headless=true -Dfile.encoding=UTF-8 -XX:+HeapDumpOnOutOfMemoryError -Xmx1g -Xms256m -Xss2048k -Djffi.boot.library.path=/usr/share/logstash/vendor/jruby/lib/jni -Xbootclasspath/a:/usr/share/logstash/vendor/jruby/lib/jruby.jar -classpath : -Djruby.home=/usr/share/logstash/vendor/jruby -Djruby.lib=/usr/share/logstash/vendor/jruby/lib -Djruby.script=jruby -Djruby.shell=/bin/sh org.jruby.Main /usr/share/logstash/lib/bootstrap/environment.rb logstash/runner.rb --path.settings /etc/logstash
### Check the ports
[root@elk-node2 ~]# netstat -ntpl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      1151/master         
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1020/sshd           
tcp6       0      0 ::1:25                  :::*                    LISTEN      1151/master         
tcp6       0      0 :::9092                 :::*                    LISTEN      15757/java          
tcp6       0      0 :::2181                 :::*                    LISTEN      14812/java          
tcp6       0      0 :::40039                :::*                    LISTEN      15757/java          
tcp6       0      0 :::42696                :::*                    LISTEN      14812/java          
tcp6       0      0 192.168.112.4:3888      :::*                    LISTEN      14812/java          
tcp6       0      0 :::8080                 :::*                    LISTEN      14812/java          
tcp6       0      0 192.168.112.4:9200      :::*                    LISTEN      13070/java          
tcp6       0      0 192.168.112.4:9300      :::*                    LISTEN      13070/java          
tcp6       0      0 :::22                   :::*                    LISTEN      1020/sshd
Fixing a startup error
[root@elk-node2 ~]# systemctl start logstash
Failed to start logstash.service: Unit not found.
[root@elk-node2 ~]# sudo /usr/share/logstash/bin/system-install /etc/logstash/startup.options systemd
which: no java in (/sbin:/bin:/usr/sbin:/usr/bin)
could not find java; set JAVA_HOME or ensure java is in PATH
[root@elk-node2 ~]# ln -s /opt/jdk1.8.0_391/bin/java /usr/bin/java
[root@elk-node2 ~]# sudo /usr/share/logstash/bin/system-install /etc/logstash/startup.options systemd
Using provided startup.options file: /etc/logstash/startup.options
Manually creating startup for specified platform: systemd
Successfully created system startup script for Logstash

If the service is started and the process is running but port 9600 never opens, it is a permissions problem: Logstash was previously run from the terminal as root, so the files it created are owned by root. Fix it as follows.

[root@elk-node2 ~]# cat /var/log/logstash/logstash-plain.log | grep que
[2024-01-13T17:23:56,589][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/var/lib/logstash/queue"}
[2024-01-13T17:23:56,589][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/var/lib/logstash/dead_letter_queue"}
[root@elk-node2 ~]# ll /var/lib/logstash/
total 0
drwxr-xr-x. 2 root root 6 Jan 13 17:23 dead_letter_queue
drwxr-xr-x. 2 root root 6 Jan 13 17:23 queue
### Change the owner of /var/lib/logstash/ to logstash and restart the service
[root@elk-node2 ~]# chown -R logstash /var/lib/logstash/
[root@elk-node2 ~]# ll /var/lib/logstash/               
total 0
drwxr-xr-x. 2 logstash root 6 Jan 13 17:23 dead_letter_queue
drwxr-xr-x. 2 logstash root 6 Jan 13 17:23 queue
[root@elk-node2 ~]# systemctl restart logstash
[root@elk-node2 ~]# netstat -ntpl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      1151/master         
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1020/sshd           
tcp6       0      0 ::1:25                  :::*                    LISTEN      1151/master         
tcp6       0      0 127.0.0.1:9600          :::*                    LISTEN      18707/java          
tcp6       0      0 :::9092                 :::*                    LISTEN      15757/java          
tcp6       0      0 :::2181                 :::*                    LISTEN      14812/java          
tcp6       0      0 :::40039                :::*                    LISTEN      15757/java          
tcp6       0      0 :::42696                :::*                    LISTEN      14812/java          
tcp6       0      0 192.168.112.4:3888      :::*                    LISTEN      14812/java          
tcp6       0      0 :::8080                 :::*                    LISTEN      14812/java          
tcp6       0      0 192.168.112.4:9200      :::*                    LISTEN      13070/java          
tcp6       0      0 192.168.112.4:9300      :::*                    LISTEN      13070/java          
tcp6       0      0 :::22                   :::*                    LISTEN      1020/sshd

Viewing Logs in Kibana

[root@elk-node1 ~]# curl 'elk-node1:9200/_cat/indices?v'
health status index                uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   .kibana              sQtNJsqNQ3mW4Bs62m5hpQ   1   1          1            0     26.1kb           13kb
green  open   nginx-2024.01.13     KVTsisxoRGKs60LYwdlbVA   5   1        424            0    517.9kb        258.9kb
green  open   vmware-2024.01.13    S_uEeLq6TluD4fajPGAz-g   5   1        424            0    549.8kb        274.9kb
green  open   es_access-2024.01.13 -743RqwoQMOBhBOlkOdVWg   5   1        424            0    540.5kb        270.2kb
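To spot-check that documents are really flowing into an index before setting up Kibana, a single document can be pulled from one of the indices (the index name follows the ingestion date shown above):

[root@elk-node1 ~]# curl 'elk-node1:9200/nginx-2024.01.13/_search?pretty&size=1'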

Web UI Configuration

Open 192.168.112.3:5601 in a browser and configure the index patterns in Kibana.

The Index pattern values here come from the index names returned by curl 'elk-node1:9200/_cat/indices?v'.

(screenshots: Kibana index pattern configuration)

Production Deployment Recommendations

In a production cluster the nodes can be divided by role.

It is recommended to have at least 3 dedicated master-eligible nodes (node.master: true, node.data: false). These nodes are only responsible for being elected master and maintaining the cluster state.

Then, sized according to the data volume, add a set of data nodes (node.master: false, node.data: true). These nodes only store data and serve indexing and search requests; if user requests are frequent, they carry a heavy load.

It is therefore also recommended to add a set of client (coordinating) nodes (node.master: false, node.data: false). These nodes only handle user requests, doing request forwarding and load balancing.

Master nodes: ordinary servers are sufficient (moderate CPU and memory usage).

Data nodes: mainly consume disk and memory.

Client nodes: ordinary servers are sufficient (if heavy grouping/aggregation queries are expected, give these nodes more memory as well).
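As a sketch, the role split described above maps to elasticsearch.yml fragments like the following (hypothetical node names; the remaining settings stay as in the earlier per-node configuration):

# Dedicated master-eligible node
node.name: es-master-1
node.master: true
node.data: false

# Data node
node.name: es-data-1
node.master: false
node.data: true

# Client (coordinating) node
node.name: es-client-1
node.master: false
node.data: false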
