Installing ELK 6.7.1 with Docker to Collect Java Logs

0. Topology

192.168.171.130    tomcat logs + filebeat

192.168.171.131    tomcat logs + filebeat

192.168.171.128    redis

192.168.171.129    logstash

192.168.171.128    es1

192.168.171.129    es2

192.168.171.132    kibana

1. Install the ES 6.7.1 cluster and the head plugin with Docker (on 192.168.171.128 = es1 and 192.168.171.129 = es2)

Install ES 6.7.1 and the es-6.7.1-head plugin on 192.168.171.128:

1) Install Docker 19.03.2:

[root@localhost ~]# docker info

.......

Server Version: 19.03.2

[root@localhost ~]# sysctl -w vm.max_map_count=262144  # the default memory-map limit is too small for the elasticsearch user; it needs at least 262144

[root@localhost ~]# sysctl -a |grep vm.max_map_count    # verify

vm.max_map_count = 262144

[root@localhost ~]# vim /etc/sysctl.conf

vm.max_map_count=262144
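
To apply the change from /etc/sysctl.conf without a reboot, sysctl can reload the file (standard sysctl usage; it prints the key/value pairs it loads, including the value set above):

[root@localhost ~]# sysctl -p    # reloads /etc/sysctl.conf; expect vm.max_map_count = 262144 in the output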

2) Install ES 6.7.1:

Upload the ES bundle to the /data directory:

[root@localhost ~]# cd /data/

[root@localhost data]# ls es-6.7.1.tar.gz

es-6.7.1.tar.gz

[root@localhost data]# tar -zxf es-6.7.1.tar.gz

[root@localhost data]# cd es-6.7.1

[root@localhost es-6.7.1]# ls

config  image  scripts

[root@localhost es-6.7.1]# ls config/

es.yml

[root@localhost es-6.7.1]# ls image/

elasticsearch_6.7.1.tar

[root@localhost es-6.7.1]# ls scripts/

run_es_6.7.1.sh

[root@localhost es-6.7.1]# docker load -i image/elasticsearch_6.7.1.tar

[root@localhost es-6.7.1]# docker images |grep elasticsearch

elasticsearch        6.7.1               e2667f5db289        11 months ago       812MB

[root@localhost es-6.7.1]# cat config/es.yml

cluster.name: elasticsearch-cluster

node.name: es-node1

network.host: 0.0.0.0

network.publish_host: 192.168.171.128

http.port: 9200

transport.tcp.port: 9300

http.cors.enabled: true

http.cors.allow-origin: "*"

node.master: true

node.data: true

discovery.zen.ping.unicast.hosts: ["192.168.171.128:9300","192.168.171.129:9300"]

discovery.zen.minimum_master_nodes: 1

#cluster.name: the cluster name. It can be customized, but it must be identical on both es nodes; sharing the same name is what marks nodes as members of one cluster

#node.name: this node's name. It can be customized and does not need hosts resolution or to match the actual hostname

#the two lines below are added on top of the defaults to allow cross-origin access:

#http.cors.enabled: true

#http.cors.allow-origin: '*'

##Note: the container uses two ports: 9200 for communication between ES and external clients, 9300 for communication between ES nodes

[root@localhost es-6.7.1]# cat scripts/run_es_6.7.1.sh

#!/bin/bash

docker run -e ES_JAVA_OPTS="-Xms1024m -Xmx1024m" -d --net=host --restart=always -v /data/es-6.7.1/config/es.yml:/usr/share/elasticsearch/config/elasticsearch.yml -v /data/es6.7.1_data:/usr/share/elasticsearch/data -v /data/es6.7.1_logs:/usr/share/elasticsearch/logs  --name es6.7.1 elasticsearch:6.7.1

#Note: the container uses two ports: 9200 for communication between ES and external clients, 9300 for communication between ES nodes

[root@localhost es-6.7.1]# mkdir /data/es6.7.1_data

[root@localhost es-6.7.1]# mkdir /data/es6.7.1_logs

[root@localhost es-6.7.1]# chmod -R 777 /data/es6.7.1_data/     # the es user in the container must be able to write here, otherwise the bind mount fails

[root@localhost es-6.7.1]# chmod -R 777 /data/es6.7.1_logs/     # the es user in the container must be able to write here, otherwise the bind mount fails

[root@localhost es-6.7.1]# sh scripts/run_es_6.7.1.sh

[root@localhost es-6.7.1]# docker ps

CONTAINER ID        IMAGE                 COMMAND                  CREATED             STATUS              PORTS               NAMES

988abe7eedac        elasticsearch:6.7.1   "/usr/local/bin/dock…"   23 seconds ago      Up 19 seconds                           es6.7.1

[root@localhost es-6.7.1]# netstat -anput |grep 9200

tcp6       0      0 :::9200                 :::*                    LISTEN      16196/java          

[root@localhost es-6.7.1]# netstat -anput |grep 9300

tcp6       0      0 :::9300                 :::*                    LISTEN      16196/java          

[root@localhost es-6.7.1]# cd

Access the ES service in a browser: http://192.168.171.128:9200/
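
The same check works from the command line; a minimal sketch against the standard cluster-health API (IP as configured above):

[root@localhost ~]# curl http://192.168.171.128:9200/_cluster/health?pretty    # expect the cluster name and a green/yellow status once the node is up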

3) Install the es-6.7.1-head plugin:

Upload the es-head plugin bundle to the /data directory:

[root@localhost ~]# cd /data/

[root@localhost data]# ls es-6.7.1-head.tar.gz

es-6.7.1-head.tar.gz

[root@localhost data]# tar -zxf es-6.7.1-head.tar.gz

[root@localhost data]# cd es-6.7.1-head

[root@localhost es-6.7.1-head]# ls

conf  image  scripts

[root@localhost es-6.7.1-head]# ls conf/

app.js  Gruntfile.js

[root@localhost es-6.7.1-head]# ls image/

elasticsearch-head_6.7.1.tar

[root@localhost es-6.7.1-head]# ls scripts/

run_es-head.sh

[root@localhost es-6.7.1-head]# docker load -i image/elasticsearch-head_6.7.1.tar

[root@localhost es-6.7.1-head]# docker images

REPOSITORY           TAG                 IMAGE ID            CREATED             SIZE

elasticsearch        6.7.1               e2667f5db289        11 months ago       812MB

elasticsearch-head   6.7.1               b19a5c98e43b        3 years ago         824MB

[root@localhost es-6.7.1-head]# vim conf/app.js

.....

this.base_uri = this.config.base_uri || this.prefs.get("app-base_uri") || "http://192.168.171.128:9200"; #change to this host's IP

....

[root@localhost es-6.7.1-head]# vim conf/Gruntfile.js

....

                connect: {

                        server: {

                                options: {

                                        hostname: '*',    #added

                                        port: 9100,

                                        base: '.',

                                        keepalive: true

                                }

                        }

....

[root@localhost es-6.7.1-head]# cat scripts/run_es-head.sh

#!/bin/bash

docker run -d --name es-head-6.7.1 --net=host --restart=always -v /data/es-6.7.1-head/conf/Gruntfile.js:/usr/src/app/Gruntfile.js -v /data/es-6.7.1-head/conf/app.js:/usr/src/app/_site/app.js elasticsearch-head:6.7.1

#the container listens on 9100, the es management port of the head plugin

[root@localhost es-6.7.1-head]# sh scripts/run_es-head.sh 

[root@localhost es-6.7.1-head]# docker ps

CONTAINER ID        IMAGE                      COMMAND                  CREATED             STATUS              PORTS               NAMES

c46189c3338b        elasticsearch-head:6.7.1   "/bin/sh -c 'grunt s…"   42 seconds ago      Up 37 seconds                           es-head-6.7.1

988abe7eedac        elasticsearch:6.7.1        "/usr/local/bin/dock…"   9 minutes ago       Up 9 minutes                            es6.7.1

[root@localhost es-6.7.1-head]# netstat -anput |grep 9100

tcp6       0      0 :::9100                 :::*                    LISTEN      16840/grunt         

Access the es-head plugin in a browser: http://192.168.171.128:9100/
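
A quick non-browser check is also possible, since the head plugin is served over plain HTTP (a sketch; expect a 200 response once grunt is listening):

[root@localhost ~]# curl -sI http://192.168.171.128:9100/ | head -1    # expect HTTP/1.1 200 OK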

Install ES 6.7.1 and the es-6.7.1-head plugin on 192.168.171.129:

1) Install Docker 19.03.2:

[root@localhost ~]# docker info

Client:

 Debug Mode: false

Server:

 Containers: 2

  Running: 2

  Paused: 0

  Stopped: 0

 Images: 2

 Server Version: 19.03.2

[root@localhost ~]# sysctl -w vm.max_map_count=262144  # the default memory-map limit is too small for the elasticsearch user; it needs at least 262144

[root@localhost ~]# sysctl -a |grep vm.max_map_count    # verify

vm.max_map_count = 262144

[root@localhost ~]# vim /etc/sysctl.conf

vm.max_map_count=262144

2) Install ES 6.7.1:

Upload the ES bundle to the /data directory:

[root@localhost ~]# cd /data/

[root@localhost data]# ls es-6.7.1.tar.gz

es-6.7.1.tar.gz

[root@localhost data]# tar -zxf es-6.7.1.tar.gz

[root@localhost data]# cd es-6.7.1

[root@localhost es-6.7.1]# ls

config  image  scripts

[root@localhost es-6.7.1]# ls config/

es.yml

[root@localhost es-6.7.1]# ls image/

elasticsearch_6.7.1.tar

[root@localhost es-6.7.1]# ls scripts/

run_es_6.7.1.sh

[root@localhost es-6.7.1]# docker load -i image/elasticsearch_6.7.1.tar

[root@localhost es-6.7.1]# docker images |grep elasticsearch

elasticsearch        6.7.1               e2667f5db289        11 months ago       812MB

[root@localhost es-6.7.1]# vim config/es.yml

cluster.name: elasticsearch-cluster

node.name: es-node2

network.host: 0.0.0.0

network.publish_host: 192.168.171.129

http.port: 9200

transport.tcp.port: 9300

http.cors.enabled: true

http.cors.allow-origin: "*"

node.master: true

node.data: true

discovery.zen.ping.unicast.hosts: ["192.168.171.128:9300","192.168.171.129:9300"]

discovery.zen.minimum_master_nodes: 1

#cluster.name: the cluster name. It can be customized, but it must be identical on both es nodes; sharing the same name is what marks nodes as members of one cluster

#node.name: this node's name. It can be customized and does not need hosts resolution or to match the actual hostname

#the two lines below are added on top of the defaults to allow cross-origin access:

#http.cors.enabled: true

#http.cors.allow-origin: '*'

##Note: the container uses two ports: 9200 for communication between ES and external clients, 9300 for communication between ES nodes

[root@localhost es-6.7.1]# cat scripts/run_es_6.7.1.sh

#!/bin/bash

docker run -e ES_JAVA_OPTS="-Xms1024m -Xmx1024m" -d --net=host --restart=always -v /data/es-6.7.1/config/es.yml:/usr/share/elasticsearch/config/elasticsearch.yml -v /data/es6.7.1_data:/usr/share/elasticsearch/data -v /data/es6.7.1_logs:/usr/share/elasticsearch/logs  --name es6.7.1 elasticsearch:6.7.1

#Note: the container uses two ports: 9200 for communication between ES and external clients, 9300 for communication between ES nodes

[root@localhost es-6.7.1]# mkdir /data/es6.7.1_data

[root@localhost es-6.7.1]# mkdir /data/es6.7.1_logs

[root@localhost es-6.7.1]# chmod -R 777 /data/es6.7.1_data/     # the es user in the container must be able to write here, otherwise the bind mount fails

[root@localhost es-6.7.1]# chmod -R 777 /data/es6.7.1_logs/     # the es user in the container must be able to write here, otherwise the bind mount fails

[root@localhost es-6.7.1]# sh scripts/run_es_6.7.1.sh

[root@localhost es-6.7.1]# docker ps

CONTAINER ID        IMAGE                 COMMAND                  CREATED             STATUS              PORTS               NAMES

a3b0a0187db8        elasticsearch:6.7.1   "/usr/local/bin/dock…"   9 seconds ago       Up 7 seconds                            es6.7.1

[root@localhost es-6.7.1]# netstat -anput |grep 9200

tcp6       0      0 :::9200                 :::*                    LISTEN      14171/java          

[root@localhost es-6.7.1]# netstat -anput |grep 9300

tcp6       0      0 :::9300                 :::*                    LISTEN      14171/java          

[root@localhost es-6.7.1]# cd

Access the ES service in a browser: http://192.168.171.129:9200/

3) Install the es-6.7.1-head plugin:

Upload the es-head plugin bundle to the /data directory:

[root@localhost ~]# cd /data/

[root@localhost data]# ls es-6.7.1-head.tar.gz

es-6.7.1-head.tar.gz

[root@localhost data]# tar -zxf es-6.7.1-head.tar.gz

[root@localhost data]# cd es-6.7.1-head

[root@localhost es-6.7.1-head]# ls

conf  image  scripts

[root@localhost es-6.7.1-head]# ls conf/

app.js  Gruntfile.js

[root@localhost es-6.7.1-head]# ls image/

elasticsearch-head_6.7.1.tar

[root@localhost es-6.7.1-head]# ls scripts/

run_es-head.sh

[root@localhost es-6.7.1-head]# docker load -i image/elasticsearch-head_6.7.1.tar

[root@localhost es-6.7.1-head]# docker images

REPOSITORY           TAG                 IMAGE ID            CREATED             SIZE

elasticsearch        6.7.1               e2667f5db289        11 months ago       812MB

elasticsearch-head   6.7.1               b19a5c98e43b        3 years ago         824MB

[root@localhost es-6.7.1-head]# vim conf/app.js

.....

this.base_uri = this.config.base_uri || this.prefs.get("app-base_uri") || "http://192.168.171.129:9200"; #change to this host's IP

....

[root@localhost es-6.7.1-head]# vim conf/Gruntfile.js

....

                connect: {

                        server: {

                                options: {

                                        hostname: '*',    #added

                                        port: 9100,

                                        base: '.',

                                        keepalive: true

                                }

                        }

....

[root@localhost es-6.7.1-head]# cat scripts/run_es-head.sh

#!/bin/bash

docker run -d --name es-head-6.7.1 --net=host --restart=always -v /data/es-6.7.1-head/conf/Gruntfile.js:/usr/src/app/Gruntfile.js -v /data/es-6.7.1-head/conf/app.js:/usr/src/app/_site/app.js elasticsearch-head:6.7.1

#the container listens on 9100, the es management port of the head plugin

[root@localhost es-6.7.1-head]# sh scripts/run_es-head.sh 

[root@localhost es-6.7.1-head]# docker ps

CONTAINER ID        IMAGE                      COMMAND                  CREATED             STATUS              PORTS               NAMES

f4f5c967754b        elasticsearch-head:6.7.1   "/bin/sh -c 'grunt s…"   12 seconds ago      Up 7 seconds                            es-head-6.7.1

a3b0a0187db8        elasticsearch:6.7.1        "/usr/local/bin/dock…"   7 minutes ago       Up 7 minutes                            es6.7.1

[root@localhost es-6.7.1-head]# netstat -anput |grep 9100

tcp6       0      0 :::9100                 :::*                    LISTEN      14838/grunt         

Access the es-head plugin in a browser: http://192.168.171.129:9100/

The head plugin on 192.168.171.128 shows the same cluster state, since both head instances manage the same cluster:

http://192.168.171.128:9100/
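
To confirm from the shell that both nodes actually joined the cluster, the standard _cat API can be queried on either node (IPs as configured above):

[root@localhost ~]# curl http://192.168.171.128:9200/_cat/nodes?v    # expect both es-node1 and es-node2 in the list

[root@localhost ~]# curl http://192.168.171.128:9200/_cluster/health?pretty |grep number_of_nodes    # expect "number_of_nodes" : 2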

2. Install Redis 4.0.10 with Docker (on 192.168.171.128)

Upload the redis 4.0.10 image:

[root@localhost ~]# ls redis_4.0.10.tar

redis_4.0.10.tar

[root@localhost ~]# docker load -i redis_4.0.10.tar

[root@localhost ~]# docker images |grep redis

gmprd.baiwang-inner.com/redis   4.0.10              f713a14c7f9b        13 months ago       425MB

[root@localhost ~]# mkdir -p /data/redis/conf         # create the config directory

[root@localhost ~]# vim /data/redis/conf/redis.conf    # custom config file

protected-mode no

port 6379

bind 0.0.0.0

tcp-backlog 511

timeout 0

tcp-keepalive 300

supervised no

pidfile "/usr/local/redis/redis_6379.pid"

loglevel notice

logfile "/opt/redis/logs/redis.log"

databases 16

save 900 1

save 300 10

save 60 10000

stop-writes-on-bgsave-error yes

rdbcompression yes

rdbchecksum yes

dbfilename "dump.rdb"

dir "/"

slave-serve-stale-data yes

slave-read-only yes

repl-diskless-sync no

repl-diskless-sync-delay 5

repl-disable-tcp-nodelay no

slave-priority 100

requirepass 123456

appendonly yes

dir "/opt/redis/data"

logfile "/opt/redis/logs/redis.log"

appendfilename "appendonly.aof"

appendfsync everysec

no-appendfsync-on-rewrite no

auto-aof-rewrite-percentage 100

auto-aof-rewrite-min-size 64mb

aof-load-truncated yes

lua-time-limit 5000

slowlog-log-slower-than 10000

slowlog-max-len 128

latency-monitor-threshold 0

notify-keyspace-events ""

hash-max-ziplist-entries 512

hash-max-ziplist-value 64

list-max-ziplist-size -2

list-compress-depth 0

set-max-intset-entries 512

zset-max-ziplist-entries 128

zset-max-ziplist-value 64

hll-sparse-max-bytes 3000

activerehashing yes

client-output-buffer-limit normal 0 0 0

client-output-buffer-limit slave 256mb 64mb 60

client-output-buffer-limit pubsub 32mb 8mb 60

hz 10

aof-rewrite-incremental-fsync yes

maxclients 4064

#appendonly yes enables data persistence

#dir "/opt/redis/data"  # the directory inside the container that data is persisted to

#logfile "/opt/redis/logs/redis.log" # log path inside the container; this must be a file path, a directory path will not work

[root@localhost ~]# docker run -d --net=host --restart=always --name=redis4.0.10 -v /data/redis/conf/redis.conf:/opt/redis/conf/redis.conf -v /data/redis_data:/opt/redis/data -v /data/redis_logs:/opt/redis/logs gmprd.baiwang-inner.com/redis:4.0.10

[root@localhost ~]# docker ps |grep redis

735fb213ee41        gmprd.baiwang-inner.com/redis:4.0.10   "redis-server /opt/r…"   9 seconds ago       Up 8 seconds                            redis4.0.10

[root@localhost ~]# netstat -anput |grep 6379

tcp        0      0 0.0.0.0:6379            0.0.0.0:*               LISTEN      16988/redis-server  

[root@localhost ~]# ls /data/redis_data/

appendonly.aof

[root@localhost ~]# ls /data/redis_logs/

redis.log

[root@localhost ~]# docker exec -it redis4.0.10 bash

[root@localhost /]# redis-cli -a 123456

127.0.0.1:6379> set k1 v1

OK

127.0.0.1:6379> keys *

1) "k1"

127.0.0.1:6379> get k1

"v1"

127.0.0.1:6379> quit

[root@localhost /]# exit

3. Create simulated Tomcat and other Java logs, and install Filebeat 6.7.1 with Docker (Tomcat is not actually installed; its logs are only simulated) (on 192.168.171.130 and 192.168.171.131)

On 192.168.171.130:

Create several types of simulated java logs, ship them into redis with filebeat, then have logstash read them in multiline mode and write them into es:

Note: do not create the logs below in advance. Start filebeat first so collection is running, then write the logs with vim; otherwise filebeat will not read log content that already exists.

a) Create a simulated tomcat log:

[root@localhost ~]# mkdir /data/java-logs

[root@localhost ~]# mkdir /data/java-logs/{tomcat_logs,es_logs,message_logs}

[root@localhost ~]# vim /data/java-logs/tomcat_logs/catalina.out

2020-03-09 13:07:48|ERROR|org.springframework.web.context.ContextLoader:351|Context initialization failed

org.springframework.beans.factory.parsing.BeanDefinitionParsingException: Configuration problem: Unable to locate Spring NamespaceHandler for XML schema namespace [http://www.springframework.org/schema/aop]

Offending resource: URL [file:/usr/local/apache-tomcat-8.0.32/webapps/ROOT/WEB-INF/classes/applicationContext.xml]

at org.springframework.beans.factory.parsing.FailFastProblemReporter.error(FailFastProblemReporter.java:70) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.beans.factory.parsing.ReaderContext.error(ReaderContext.java:85) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.beans.factory.parsing.ReaderContext.error(ReaderContext.java:80) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.beans.factory.xml.BeanDefinitionParserDelegate.error(BeanDefinitionParserDelegate.java:301) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.beans.factory.xml.BeanDefinitionParserDelegate.parseCustomElement(BeanDefinitionParserDelegate.java:1408) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.beans.factory.xml.BeanDefinitionParserDelegate.parseCustomElement(BeanDefinitionParserDelegate.java:1401) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.beans.factory.xml.DefaultBeanDefinitionDocumentReader.parseBeanDefinitions(DefaultBeanDefinitionDocumentReader.java:168) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.beans.factory.xml.DefaultBeanDefinitionDocumentReader.doRegisterBeanDefinitions(DefaultBeanDefinitionDocumentReader.java:138) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.beans.factory.xml.DefaultBeanDefinitionDocumentReader.registerBeanDefinitions(DefaultBeanDefinitionDocumentReader.java:94) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.registerBeanDefinitions(XmlBeanDefinitionReader.java:508) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:392) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:336) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:304) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:181) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:217) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:188) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.web.context.support.XmlWebApplicationContext.loadBeanDefinitions(XmlWebApplicationContext.java:125) ~[spring-web-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.web.context.support.XmlWebApplicationContext.loadBeanDefinitions(XmlWebApplicationContext.java:94) ~[spring-web-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.context.support.AbstractRefreshableApplicationContext.refreshBeanFactory(AbstractRefreshableApplicationContext.java:129) ~[spring-context-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.context.support.AbstractApplicationContext.obtainFreshBeanFactory(AbstractApplicationContext.java:609) ~[spring-context-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:510) ~[spring-context-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.web.context.ContextLoader.configureAndRefreshWebApplicationContext(ContextLoader.java:444) ~[spring-web-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:326) ~[spring-web-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:107) [spring-web-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4812) [catalina.jar:8.0.32]

at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5255) [catalina.jar:8.0.32]

at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:147) [catalina.jar:8.0.32]

at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:725) [catalina.jar:8.0.32]

at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:701) [catalina.jar:8.0.32]

at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:717) [catalina.jar:8.0.32]

at org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1091) [catalina.jar:8.0.32]

at org.apache.catalina.startup.HostConfig$DeployDirectory.run(HostConfig.java:1830) [catalina.jar:8.0.32]

at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_144]

at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_144]

at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_144]

at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_144]

at java.lang.Thread.run(Thread.java:748) [na:1.8.0_144]

13-Oct-2020 13:07:48.990 SEVERE [localhost-startStop-3] org.apache.catalina.core.StandardContext.startInternal One or more listeners failed to start. Full details will be found in the appropriate container log file

13-Oct-2020 13:07:48.991 SEVERE [localhost-startStop-3] org.apache.catalina.core.StandardContext.startInternal Context [] startup failed due to previous errors

2020-03-09 13:07:48|INFO|org.springframework.context.support.AbstractApplicationContext:960|Closing Root WebApplicationContext: startup date [Sun Oct 13 13:07:43 CST 2020]; root of context hierarchy

2020-03-09 13:09:41|INFO|org.springframework.context.support.AbstractApplicationContext:960|Closing Root WebApplicationContext: startup date [Sun Oct 13 13:07:43 CST 2020]; root of context hierarchy error test1

2020-03-09 13:10:41|INFO|org.springframework.context.support.AbstractApplicationContext:960|Closing Root WebApplicationContext: startup date [Sun Oct 13 13:07:43 CST 2020]; root of context hierarchy error test2

2020-03-09 13:11:41|INFO|org.springframework.context.support.AbstractApplicationContext:960|Closing Root WebApplicationContext: startup date [Sun Oct 13 13:07:43 CST 2020]; root of context hierarchy error test3

b) Create a simulated system log (an excerpt taken from /var/log/messages):

[root@localhost ~]# vim /data/java-logs/message_logs/messages

Mar 09 14:19:06 localhost systemd: Removed slice system-selinux\x2dpolicy\x2dmigrate\x2dlocal\x2dchanges.slice.

Mar 09 14:19:06 localhost systemd: Stopping system-selinux\x2dpolicy\x2dmigrate\x2dlocal\x2dchanges.slice.

Mar 09 14:19:06 localhost systemd: Stopped target Network is Online.

Mar 09 14:19:06 localhost systemd: Stopping Network is Online.

Mar 09 14:19:06 localhost systemd: Stopping Authorization Manager...

Mar 09 14:20:38 localhost kernel: Initializing cgroup subsys cpuset

Mar 09 14:20:38 localhost kernel: Initializing cgroup subsys cpu

Mar 09 14:20:38 localhost kernel: Initializing cgroup subsys cpuacct

Mar 09 14:20:38 localhost kernel: Linux version 3.10.0-693.el7.x86_64 (builder@kbuilder.dev.centos.org) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-16) (GCC) ) #1 SMP Tue Aug 22 21:09:27 UTC 2017

Mar 09 14:20:38 localhost kernel: Command line: BOOT_IMAGE=/vmlinuz-3.10.0-693.el7.x86_64 root=/dev/mapper/centos-root ro crashkernel=auto rd.lvm.lv=centos/root rd.lvm.lv=centos/swap rhgb quiet LANG=en_US.UTF-8

c) Create a simulated es log:

[root@localhost ~]# vim /data/java-logs/es_logs/es_log

[2020-03-09T21:44:58,440][ERROR][o.e.b.Bootstrap          ] Exception

java.lang.RuntimeException: can not run elasticsearch as root

        at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:035) ~[elasticsearch-6.2.4.jar:6.2.4]

        at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:172) ~[elasticsearch-6.2.4.jar:6.2.4]

        at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:323) [elasticsearch-6.2.4.jar:6.2.4]

        at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:091) [elasticsearch-6.2.4.jar:6.2.4]

        at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:109) [elasticsearch-6.2.4.jar:6.2.4]

        at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) [elasticsearch-6.2.4.jar:6.2.4]

        at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:094) [elasticsearch-cli-6.2.4.jar:6.2.4]

        at org.elasticsearch.cli.Command.main(Command.java:90) [elasticsearch-cli-6.2.4.jar:6.2.4]

        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92) [elasticsearch-6.2.4.jar:6.2.4]

        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:85) [elasticsearch-6.2.4.jar:6.2.4]

[2020-03-09T21:44:58,549][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [] uncaught exception in thread [main]

org.elasticsearch.bootstrap.StartupException: java.lang.RuntimeException: can not run elasticsearch as root

        at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:095) ~[elasticsearch-6.2.4.jar:6.2.4]

        at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:109) ~[elasticsearch-6.2.4.jar:6.2.4]

        at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) ~[elasticsearch-6.2.4.jar:6.2.4]

        at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:094) ~[elasticsearch-cli-6.2.4.jar:6.2.4]

        at org.elasticsearch.cli.Command.main(Command.java:90) ~[elasticsearch-cli-6.2.4.jar:6.2.4]

        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92) ~[elasticsearch-6.2.4.jar:6.2.4]

        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:85) ~[elasticsearch-6.2.4.jar:6.2.4]

Caused by: java.lang.RuntimeException: can not run elasticsearch as root

        at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:035) ~[elasticsearch-6.2.4.jar:6.2.4]

        at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:172) ~[elasticsearch-6.2.4.jar:6.2.4]

        at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:323) ~[elasticsearch-6.2.4.jar:6.2.4]

        at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:091) ~[elasticsearch-6.2.4.jar:6.2.4]

        ... 6 more

[2020-03-09T21:46:32,174][INFO ][o.e.n.Node               ] [] initializing ...

[2020-03-09T21:46:32,467][INFO ][o.e.e.NodeEnvironment    ] [koccs5f] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [48gb], net total_space [49.9gb], types [rootfs]

[2020-03-09T21:46:32,468][INFO ][o.e.e.NodeEnvironment    ] [koccs5f] heap size [0315.6mb], compressed ordinary object pointers [true]

d) Create a simulated tomcat access log:

[root@localhost ~]# vim /data/java-logs/tomcat_logs/localhost_access_log.2020-03-09.txt 

192.168.171.1 - - [09/Mar/2020:09:07:59 +0800] "GET /favicon.ico HTTP/1.1" 404 -

Caused by: java.lang.RuntimeException: can not run elasticsearch as root

        at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:105) ~[elasticsearch-6.2.4.jar:6.2.4]

        at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:172) ~[elasticsearch-6.2.4.jar:6.2.4]

192.168.171.2 - - [09/Mar/2020:09:07:59 +0800] "GET / HTTP/1.1" 404 -

192.168.171.1 - - [09/Mar/2020:15:09:12 +0800] "GET / HTTP/1.1" 200 11250

Caused by: java.lang.RuntimeException: can not run elasticsearch as root

        at org.elasticsearch.bootstrap.Bootstrap.initializeNatives

192.168.171.2 - - [09/Mar/2020:15:09:12 +0800] "GET /tomcat.png HTTP/1.1" 200 5103

192.168.171.3 - - [09/Mar/2020:15:09:12 +0800] "GET /tomcat.css HTTP/1.1" 200 5576

192.168.171.5 - - [09/Mar/2020:15:09:09 +0800] "GET /bg-nav.png HTTP/1.1" 200 1401

192.168.171.1 - - [09/Mar/2020:15:09:09 +0800] "GET /bg-upper.png HTTP/1.1" 200 3103

Install Filebeat 6.7.1:

[root@localhost ~]# cd /data/

[root@localhost data]# ls filebeat6.7.1.tar.gz

filebeat6.7.1.tar.gz

[root@localhost data]# tar -zxf filebeat6.7.1.tar.gz

[root@localhost data]# cd filebeat6.7.1

[root@localhost filebeat6.7.1]# ls

conf  image  scripts

[root@localhost filebeat6.7.1]# ls conf/

filebeat.yml  filebeat.yml.bak

[root@localhost filebeat6.7.1]# ls image/

filebeat_6.7.1.tar

[root@localhost filebeat6.7.1]# ls scripts/

run_filebeat6.7.1.sh

[root@localhost filebeat6.7.1]# docker load -i image/filebeat_6.7.1.tar 

[root@localhost filebeat6.7.1]# docker images |grep filebeat

docker.elastic.co/beats/filebeat   6.7.1               04fcff75b160        11 months ago       279MB

[root@localhost filebeat6.7.1]# cat conf/filebeat.yml

filebeat.inputs:

#additions start here:——————————————

#system log:

- type: log

  enabled: true

  paths:

    - /usr/share/filebeat/logs/message_logs/messages

  fields:

    log_source: system-171.130

#tomcat catalina log:

- type: log

  enabled: true

  paths:

    - /usr/share/filebeat/logs/tomcat_logs/catalina.out

  fields:

    log_source: catalina-log-171.130

  multiline.pattern: '^[0-9]{4}-(((0[13578]|(10|12))-(0[1-9]|[1-2][0-9]|3[0-1]))|(02-(0[1-9]|[1-2][0-9]))|((0[469]|11)-(0[1-9]|[1-2][0-9]|30)))'

  multiline.negate: true

  multiline.match: after

# the regex above matches lines that start with a date, e.g. 2004-02-29

# log_source: xxx means: everything lands in redis under a single key, so logstash cannot tell the different log types apart on its own; this field gives logstash a marker to identify each event's source and write it to the matching index name in es

#es log:

- type: log

  enabled: true

  paths:

    - /usr/share/filebeat/logs/es_logs/es_log

  fields:

    log_source: es-log-171.130

  multiline.pattern: '^\['

  multiline.negate: true

  multiline.match: after

#the regex above matches lines beginning with '[', where \ is the escape character

#tomcat access log:

- type: log

  enabled: true

  paths:

    - /usr/share/filebeat/logs/tomcat_logs/localhost_access_log.2020-03-09.txt

  fields:

    log_source: tomcat-access-log-171.130

  multiline.pattern: '^((2(5[0-5]|[0-4]\d))|[0-1]?\d{1,2})(\.((2(5[0-5]|[0-4]\d))|[0-1]?\d{1,2})){3}'

  multiline.negate: true

  multiline.match: after

#additions end here:—————————————————————

filebeat.config.modules:

  path: ${path.config}/modules.d/*.yml

  reload.enabled: false

setup.template.settings:

  index.number_of_shards: 3

setup.kibana:

#to write directly into es instead:

#output.elasticsearch:

#  hosts: ["192.168.171.128:9200"]

#write into redis:

#filebeat-common below is a custom key; it must match the key that logstash reads from redis. Filebeat on multiple nodes can all write under this same key, but each input defines a log_source as a marker so that logstash can separate the events into different indices in es

output.redis:

  hosts: ["192.168.171.128"]

  port: 6379

  password: "123456"

  key: "filebeat-common"

  db: 0

  datatype: list

processors:

  - add_host_metadata: ~

  - add_cloud_metadata: ~

#Note: by default the log paths on the host and inside the container differ, so if the config file pointed at host paths the container would not find them

##The workaround: configure the container-side log paths in the config file, then bind-mount the host log directory onto the container log directory

#/usr/share/filebeat/logs/*.log is the log path inside the container
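
Before shipping anything, the catalina multiline pattern can be sanity-checked against the sample log; a sketch assuming GNU grep with PCRE support (-P). Lines counted here are treated as event starts, and everything else is appended to the previous event:

[root@localhost ~]# grep -Pc '^[0-9]{4}-(((0[13578]|(10|12))-(0[1-9]|[1-2][0-9]|3[0-1]))|(02-(0[1-9]|[1-2][0-9]))|((0[469]|11)-(0[1-9]|[1-2][0-9]|30)))' /data/java-logs/tomcat_logs/catalina.out    # count of lines that start new events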

[root@localhost filebeat6.7.1]# cat scripts/run_filebeat6.7.1.sh 

#!/bin/bash

docker run -d --name filebeat6.7.1 --net=host --restart=always --user=root -v /data/filebeat6.7.1/conf/filebeat.yml:/usr/share/filebeat/filebeat.yml -v /data/java-logs:/usr/share/filebeat/logs  docker.elastic.co/beats/filebeat:6.7.1

#Note: by default the log paths on the host and inside the container differ, so if the config file pointed at host paths the container would not find them

#The workaround: configure the container-side log paths in the config file, then bind-mount the host log directory onto the container log directory

[root@localhost filebeat6.7.1]# sh scripts/run_filebeat6.7.1.sh  # once running, it starts collecting logs into redis

[root@localhost filebeat6.7.1]# docker ps |grep filebeat

1f2bbd450e7e        docker.elastic.co/beats/filebeat:6.7.1   "/usr/local/bin/dock…"   8 seconds ago       Up 7 seconds                            filebeat6.7.1

[root@localhost filebeat6.7.1]# cd
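
Since filebeat must be running before the log content is written (per the note above), more events can be generated at any time by appending lines to the monitored files. A minimal sketch; the log line itself is an arbitrary sample:

[root@localhost ~]# echo '2020-03-09 14:30:00|INFO|sample|extra line appended after filebeat started' >> /data/java-logs/tomcat_logs/catalina.out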

On 192.168.171.131:

Create several types of simulated java logs, ship them into redis with filebeat, then have logstash read them in multiline mode and write them into es:

Note: do not create the logs below in advance. Start filebeat first so collection is running, then write the logs with vim; otherwise filebeat will not read log content that already exists.

a) Create a simulated tomcat log:

[root@localhost ~]# mkdir /data/java-logs

[root@localhost ~]# mkdir /data/java-logs/{tomcat_logs,es_logs,message_logs}

[root@localhost ~]# vim /data/java-logs/tomcat_logs/catalina.out

2050-05-09 13:07:48|ERROR|org.springframework.web.context.ContextLoader:351|Context initialization failed

org.springframework.beans.factory.parsing.BeanDefinitionParsingException: Configuration problem: Unable to locate Spring NamespaceHandler for XML schema namespace [http://www.springframework.org/schema/aop]

Offending resource: URL [file:/usr/local/apache-tomcat-8.0.32/webapps/ROOT/WEB-INF/classes/applicationContext.xml]

at org.springframework.beans.factory.parsing.FailFastProblemReporter.error(FailFastProblemReporter.java:70) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.beans.factory.parsing.ReaderContext.error(ReaderContext.java:85) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.beans.factory.parsing.ReaderContext.error(ReaderContext.java:80) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.beans.factory.xml.BeanDefinitionParserDelegate.error(BeanDefinitionParserDelegate.java:301) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.beans.factory.xml.BeanDefinitionParserDelegate.parseCustomElement(BeanDefinitionParserDelegate.java:1408) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.beans.factory.xml.BeanDefinitionParserDelegate.parseCustomElement(BeanDefinitionParserDelegate.java:1401) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.beans.factory.xml.DefaultBeanDefinitionDocumentReader.parseBeanDefinitions(DefaultBeanDefinitionDocumentReader.java:168) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.beans.factory.xml.DefaultBeanDefinitionDocumentReader.doRegisterBeanDefinitions(DefaultBeanDefinitionDocumentReader.java:138) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.beans.factory.xml.DefaultBeanDefinitionDocumentReader.registerBeanDefinitions(DefaultBeanDefinitionDocumentReader.java:94) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.registerBeanDefinitions(XmlBeanDefinitionReader.java:508) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.doLoadBeanDefinitions(XmlBeanDefinitionReader.java:392) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:336) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:304) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:181) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:217) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:188) ~[spring-beans-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.web.context.support.XmlWebApplicationContext.loadBeanDefinitions(XmlWebApplicationContext.java:125) ~[spring-web-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.web.context.support.XmlWebApplicationContext.loadBeanDefinitions(XmlWebApplicationContext.java:94) ~[spring-web-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.context.support.AbstractRefreshableApplicationContext.refreshBeanFactory(AbstractRefreshableApplicationContext.java:129) ~[spring-context-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.context.support.AbstractApplicationContext.obtainFreshBeanFactory(AbstractApplicationContext.java:609) ~[spring-context-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:510) ~[spring-context-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.web.context.ContextLoader.configureAndRefreshWebApplicationContext(ContextLoader.java:444) ~[spring-web-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:326) ~[spring-web-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:107) [spring-web-4.2.6.RELEASE.jar:4.2.6.RELEASE]

at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4812) [catalina.jar:8.0.32]

at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5255) [catalina.jar:8.0.32]

at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:147) [catalina.jar:8.0.32]

at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:725) [catalina.jar:8.0.32]

at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:701) [catalina.jar:8.0.32]

at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:717) [catalina.jar:8.0.32]

at org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1091) [catalina.jar:8.0.32]

at org.apache.catalina.startup.HostConfig$DeployDirectory.run(HostConfig.java:1830) [catalina.jar:8.0.32]

at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_144]

at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_144]

at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_144]

at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_144]

at java.lang.Thread.run(Thread.java:748) [na:1.8.0_144]

13-Oct-2050 13:07:48.990 SEVERE [localhost-startStop-3] org.apache.catalina.core.StandardContext.startInternal One or more listeners failed to start. Full details will be found in the appropriate container log file

13-Oct-2050 13:07:48.991 SEVERE [localhost-startStop-3] org.apache.catalina.core.StandardContext.startInternal Context [] startup failed due to previous errors

2050-05-09 13:07:48|INFO|org.springframework.context.support.AbstractApplicationContext:960|Closing Root WebApplicationContext: startup date [Sun Oct 13 13:07:43 CST 2050]; root of context hierarchy

2050-05-09 13:09:41|INFO|org.springframework.context.support.AbstractApplicationContext:960|Closing Root WebApplicationContext: startup date [Sun Oct 13 13:07:43 CST 2050]; root of context hierarchy error test1

2050-05-09 13:10:41|INFO|org.springframework.context.support.AbstractApplicationContext:960|Closing Root WebApplicationContext: startup date [Sun Oct 13 13:07:43 CST 2050]; root of context hierarchy error test2

2050-05-09 13:11:41|INFO|org.springframework.context.support.AbstractApplicationContext:960|Closing Root WebApplicationContext: startup date [Sun Oct 13 13:07:43 CST 2050]; root of context hierarchy error test3

b) Create a simulated system log (an excerpt taken from /var/log/messages):

[root@localhost ~]# vim /data/java-logs/message_logs/messages

Mar 50 50:50:06 localhost systemd: Removed slice system-selinux\x2dpolicy\x2dmigrate\x2dlocal\x2dchanges.slice.

Mar 50 50:50:06 localhost systemd: Stopping system-selinux\x2dpolicy\x2dmigrate\x2dlocal\x2dchanges.slice.

Mar 50 50:50:06 localhost systemd: Stopped target Network is Online.

Mar 50 50:50:06 localhost systemd: Stopping Network is Online.

Mar 50 50:50:06 localhost systemd: Stopping Authorization Manager...

Mar 50 50:20:38 localhost kernel: Initializing cgroup subsys cpuset

Mar 50 50:20:38 localhost kernel: Initializing cgroup subsys cpu

Mar 50 50:20:38 localhost kernel: Initializing cgroup subsys cpuacct

Mar 50 50:20:38 localhost kernel: Linux version 3.10.0-693.el7.x86_64 (builder@kbuilder.dev.centos.org) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-16) (GCC) ) #1 SMP Tue Aug 22 21:50:27 UTC 2050

Mar 50 50:20:38 localhost kernel: Command line: BOOT_IMAGE=/vmlinuz-3.10.0-693.el7.x86_64 root=/dev/mapper/centos-root ro crashkernel=auto rd.lvm.lv=centos/root rd.lvm.lv=centos/swap rhgb quiet LANG=en_US.UTF-8

c) Create a simulated es log:

[root@localhost ~]# vim /data/java-logs/es_logs/es_log

[2050-50-09T21:44:58,440][ERROR][o.e.b.Bootstrap          ] Exception

java.lang.RuntimeException: can not run elasticsearch as root

        at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:505) ~[elasticsearch-6.2.4.jar:6.2.4]

        at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:172) ~[elasticsearch-6.2.4.jar:6.2.4]

        at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:323) [elasticsearch-6.2.4.jar:6.2.4]

        at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:091) [elasticsearch-6.2.4.jar:6.2.4]

        at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:109) [elasticsearch-6.2.4.jar:6.2.4]

        at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) [elasticsearch-6.2.4.jar:6.2.4]

        at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:094) [elasticsearch-cli-6.2.4.jar:6.2.4]

        at org.elasticsearch.cli.Command.main(Command.java:90) [elasticsearch-cli-6.2.4.jar:6.2.4]

        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92) [elasticsearch-6.2.4.jar:6.2.4]

        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:85) [elasticsearch-6.2.4.jar:6.2.4]

[2050-50-09T21:44:58,549][WARN ][o.e.b.ElasticsearchUncaughtExceptionHandler] [] uncaught exception in thread [main]

org.elasticsearch.bootstrap.StartupException: java.lang.RuntimeException: can not run elasticsearch as root

        at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:095) ~[elasticsearch-6.2.4.jar:6.2.4]

        at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:109) ~[elasticsearch-6.2.4.jar:6.2.4]

        at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86) ~[elasticsearch-6.2.4.jar:6.2.4]

        at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:094) ~[elasticsearch-cli-6.2.4.jar:6.2.4]

        at org.elasticsearch.cli.Command.main(Command.java:90) ~[elasticsearch-cli-6.2.4.jar:6.2.4]

        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92) ~[elasticsearch-6.2.4.jar:6.2.4]

        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:85) ~[elasticsearch-6.2.4.jar:6.2.4]

Caused by: java.lang.RuntimeException: can not run elasticsearch as root

        at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:505) ~[elasticsearch-6.2.4.jar:6.2.4]

        at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:172) ~[elasticsearch-6.2.4.jar:6.2.4]

        at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:323) ~[elasticsearch-6.2.4.jar:6.2.4]

        at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:091) ~[elasticsearch-6.2.4.jar:6.2.4]

        ... 6 more

[2050-50-09T21:46:32,174][INFO ][o.e.n.Node               ] [] initializing ...

[2050-50-09T21:46:32,467][INFO ][o.e.e.NodeEnvironment    ] [koccs5f] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [48gb], net total_space [49.9gb], types [rootfs]

[2050-50-09T21:46:32,468][INFO ][o.e.e.NodeEnvironment    ] [koccs5f] heap size [5015.6mb], compressed ordinary object pointers [true]

d) Create a simulated tomcat access log:

[root@localhost ~]# vim /data/java-logs/tomcat_logs/localhost_access_log.2050-50-09.txt 

192.168.150.1 - - [09/Mar/2050:09:07:59 +0800] "GET /favicon.ico HTTP/1.1" 404 -

Caused by: java.lang.RuntimeException: can not run elasticsearch as root

        at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:105) ~[elasticsearch-6.2.4.jar:6.2.4]

        at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:172) ~[elasticsearch-6.2.4.jar:6.2.4]

192.168.150.2 - - [09/Mar/2050:09:07:59 +0800] "GET / HTTP/1.1" 404 -

192.168.150.1 - - [09/Mar/2050:15:09:12 +0800] "GET / HTTP/1.1" 200 11250

Caused by: java.lang.RuntimeException: can not run elasticsearch as root

        at org.elasticsearch.bootstrap.Bootstrap.initializeNatives

192.168.150.2 - - [09/Mar/2050:15:09:12 +0800] "GET /tomcat.png HTTP/1.1" 200 5103

192.168.150.3 - - [09/Mar/2050:15:09:12 +0800] "GET /tomcat.css HTTP/1.1" 200 5576

192.168.150.5 - - [09/Mar/2050:15:09:09 +0800] "GET /bg-nav.png HTTP/1.1" 200 1401

192.168.150.1 - - [09/Mar/2050:15:09:09 +0800] "GET /bg-upper.png HTTP/1.1" 200 3103

Install Filebeat 6.7.1:

[root@localhost ~]# cd /data/

[root@localhost data]# ls filebeat6.7.1.tar.gz

filebeat6.7.1.tar.gz

[root@localhost data]# tar -zxf filebeat6.7.1.tar.gz

[root@localhost data]# cd filebeat6.7.1

[root@localhost filebeat6.7.1]# ls

conf  image  scripts

[root@localhost filebeat6.7.1]# ls conf/

filebeat.yml  filebeat.yml.bak

[root@localhost filebeat6.7.1]# ls image/

filebeat_6.7.1.tar

[root@localhost filebeat6.7.1]# ls scripts/

run_filebeat6.7.1.sh

[root@localhost filebeat6.7.1]# docker load -i image/filebeat_6.7.1.tar 

[root@localhost filebeat6.7.1]# docker images |grep filebeat

docker.elastic.co/beats/filebeat   6.7.1               04fcff75b160        11 months ago       279MB

[root@localhost filebeat6.7.1]# cat conf/filebeat.yml

filebeat.inputs:

#additions start here:——————————————

#system log:

- type: log

  enabled: true

  paths:

    - /usr/share/filebeat/logs/message_logs/messages

  fields:

    log_source: system-171.131

#tomcat catalina log:

- type: log

  enabled: true

  paths:

    - /usr/share/filebeat/logs/tomcat_logs/catalina.out

  fields:

    log_source: catalina-log-171.131

  multiline.pattern: '^[0-9]{4}-(((0[13578]|(10|12))-(0[1-9]|[1-2][0-9]|3[0-1]))|(02-(0[1-9]|[1-2][0-9]))|((0[469]|11)-(0[1-9]|[1-2][0-9]|30)))'

  multiline.negate: true

  multiline.match: after

# the regex above matches lines that start with a date, e.g. 2004-02-29

# log_source: xxx means: everything lands in redis under a single key, so logstash cannot tell the different log types apart on its own; this field gives logstash a marker to identify each event's source and write it to the matching index name in es

#es log:

- type: log

  enabled: true

  paths:

    - /usr/share/filebeat/logs/es_logs/es_log

  fields:

    log_source: es-log-171.131

  multiline.pattern: '^\['

  multiline.negate: true

  multiline.match: after

#the regex above matches lines beginning with '[', where \ is the escape character

#tomcat access log:

- type: log

  enabled: true

  paths:

    - /usr/share/filebeat/logs/tomcat_logs/localhost_access_log.2050-50-09.txt

  fields:

    log_source: tomcat-access-log-171.131

  multiline.pattern: '^((2(5[0-5]|[0-4]\d))|[0-1]?\d{1,2})(\.((2(5[0-5]|[0-4]\d))|[0-1]?\d{1,2})){3}'

  multiline.negate: true

  multiline.match: after

#additions end here:—————————————————————

filebeat.config.modules:

  path: ${path.config}/modules.d/*.yml

  reload.enabled: false

setup.template.settings:

  index.number_of_shards: 3

setup.kibana:

#to write directly into es instead:

#output.elasticsearch:

#  hosts: ["192.168.171.128:9200"]

#write into redis:

#filebeat-common below is a custom key; it must match the key that logstash reads from redis. Filebeat on multiple nodes can all write under this same key, but each input defines a log_source as a marker so that logstash can separate the events into different indices in es

output.redis:

  hosts: ["192.168.171.128"]

  port: 6379

  password: "123456"

  key: "filebeat-common"

  db: 0

  datatype: list

processors:

  - add_host_metadata: ~

  - add_cloud_metadata: ~

#Note: by default the log paths on the host and inside the container differ, so if the config file pointed at host paths the container would not find them

##The workaround: configure the container-side log paths in the config file, then bind-mount the host log directory onto the container log directory

#/usr/share/filebeat/logs/*.log is the log path inside the container

[root@localhost filebeat6.7.1]# cat scripts/run_filebeat6.7.1.sh

#!/bin/bash

docker run -d --name filebeat6.7.1 --net=host --restart=always --user=root -v /data/filebeat6.7.1/conf/filebeat.yml:/usr/share/filebeat/filebeat.yml -v /data/java-logs:/usr/share/filebeat/logs  docker.elastic.co/beats/filebeat:6.7.1

#Note: by default the log paths on the host and inside the container differ, so if the config file pointed at host paths the container would not find them

#The workaround: configure the container-side log paths in the config file, then bind-mount the host log directory onto the container log directory

[root@localhost filebeat6.7.1]# sh scripts/run_filebeat6.7.1.sh   # once running, it starts collecting logs into redis

[root@localhost filebeat6.7.1]# docker ps |grep filebeat

3cc559a84904        docker.elastic.co/beats/filebeat:6.7.1   "/usr/local/bin/dock…"   8 seconds ago       Up 7 seconds                            filebeat6.7.1

[root@localhost filebeat6.7.1]# cd

Check in redis that the logs have been written (on 192.168.171.128; both hosts write to redis under the same key, so only one key name appears, and the events are told apart by their log_source markers when they are filtered into es):

[root@localhost ~]# docker exec -it redis4.0.10 bash

[root@localhost /]# redis-cli -a 123456

127.0.0.1:6379> KEYS *

1)"filebeat-common"

127.0.0.1:6379> quit

[root@localhost /]# exit
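
Beyond listing the key, the queue length and a sample event can be inspected to confirm the entries are JSON documents carrying the log_source field (standard redis list commands, run through the same container):

[root@localhost ~]# docker exec redis4.0.10 redis-cli -a 123456 LLEN filebeat-common    # number of queued events

[root@localhost ~]# docker exec redis4.0.10 redis-cli -a 123456 LRANGE filebeat-common 0 0    # peek at the first event without consuming it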

4. Install Logstash 6.7.1 with Docker (on 192.168.171.129): read logs out of redis and write them into the es cluster

[root@localhost ~]# cd /data/

[root@localhost data]# ls logstash6.7.1.tar.gz

logstash6.7.1.tar.gz

[root@localhost data]# tar -zxf logstash6.7.1.tar.gz

[root@localhost data]# cd logstash6.7.1

[root@localhost logstash6.7.1]# ls

config  image  scripts

[root@localhost logstash6.7.1]# ls config/

GeoLite2-City.mmdb  log4j2.properties     logstash.yml   pipelines.yml_bak     startup.options

jvm.options         logstash-sample.conf  pipelines.yml  redis_out_es_in.conf

[root@localhost logstash6.7.1]# ls image/

logstash_6.7.1.tar

[root@localhost logstash6.7.1]# ls scripts/

run_logstash6.7.1.sh

[root@localhost logstash6.7.1]# docker load -i image/logstash_6.7.1.tar

[root@localhost logstash6.7.1]# docker images |grep logstash

logstash             6.7.1               1f5e249719fc        11 months ago       778MB

[root@localhost logstash6.7.1]# cat config/pipelines.yml  # confirm the config; it points at the conf directory

# This file is where you define your pipelines. You can define multiple.

# For more information on multiple pipelines, see the documentation:

#   https://www.elastic.co/guide/en/logstash/current/multiple-pipelines.html

- pipeline.id: main

  path.config: "/usr/share/logstash/config/*.conf"   # directory inside the container

  pipeline.workers: 3

[root@localhost logstash6.7.1]# cat config/redis_out_es_in.conf   # review and confirm the config

input {

    redis {

        host => "192.168.171.128"

        port => "6379"

        password => "123456"

        db => "0"

        data_type => "list"

        key => "filebeat-common"

    }

}

#The date filter's default target is @timestamp, so time_local updates @timestamp. Purpose of the date plugin below: on a first collection run, or when cached events are written, the indexing time can lag behind the actual log time; adding the date plugin keeps the indexed time consistent with the real log time.

filter {

    date {

        locale => "en"

        match => ["time_local", "dd/MMM/yyyy:HH:mm:ss Z"]

    }

}

output {

    if [fields][log_source] == 'system-171.130' {

        elasticsearch {

            hosts => ["192.168.171.128:9200"]

            index => "logstash-system-171.130-log-%{+YYYY.MM.dd}"

        }

    }

    if [fields][log_source] == 'system-171.131' {

        elasticsearch {

            hosts => ["192.168.171.128:9200"]

            index => "logstash-system-171.131-log-%{+YYYY.MM.dd}"

        }

    }

    if [fields][log_source] == 'catalina-log-171.130' {

        elasticsearch {

            hosts => ["192.168.171.128:9200"]

            index => "logstash-catalina-171.130-log-%{+YYYY.MM.dd}"

        }        

    }

    if [fields][log_source] == 'catalina-log-171.131' {

        elasticsearch {

            hosts => ["192.168.171.128:9200"]

            index => "logstash-catalina-171.131-log-%{+YYYY.MM.dd}"

        }        

    }

    if [fields][log_source] == 'es-log-171.130' {

        elasticsearch {

            hosts => ["192.168.171.128:9200"]

            index => "logstash-es-log-171.130-%{+YYYY.MM.dd}"

        }

    }

    if [fields][log_source] == 'es-log-171.131' {

        elasticsearch {

            hosts => ["192.168.171.128:9200"]

            index => "logstash-es-log-171.131-%{+YYYY.MM.dd}"

        }

    }

    if [fields][log_source] == 'tomcat-access-log-171.130' {

        elasticsearch {

            hosts => ["192.168.171.128:9200"]

            index => "logstash-tomcat-access-171.130-log-%{+YYYY.MM.dd}"

        }

    }   

    if [fields][log_source] == 'tomcat-access-log-171.131' {

        elasticsearch {

            hosts => ["192.168.171.128:9200"]

            index => "logstash-tomcat-access-171.131-log-%{+YYYY.MM.dd}"

        }

    }   

    stdout { codec=> rubydebug }

    #codec => rubydebug is for debugging; it prints the events to the console

}

[root@localhost logstash6.7.1]# cat scripts/run_logstash6.7.1.sh

#!/bin/bash

docker run -d --name logstash6.7.1 --net=host --restart=always -v /data/logstash6.7.1/config:/usr/share/logstash/config logstash:6.7.1 

[root@localhost logstash6.7.1]# sh scripts/run_logstash6.7.1.sh  # reads logs from redis and writes them into es

[root@localhost logstash6.7.1]# docker ps |grep logstash

980aefbc077e        logstash:6.7.1             "/usr/local/bin/dock…"   9 seconds ago       Up 7 seconds                            logstash6.7.1
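
Because the output section keeps the stdout plugin with the rubydebug codec, the decoded events can be watched live through the container logs (standard docker usage):

[root@localhost logstash6.7.1]# docker logs -f logstash6.7.1    # tails the rubydebug console output; Ctrl+C to stop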

Check the es cluster:
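
The indices can also be listed from the shell with the _cat API; one logstash-* index per log_source per day should appear once events flow (a sketch, IP as configured above):

[root@localhost ~]# curl http://192.168.171.128:9200/_cat/indices?v |grep logstash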

Check redis: the data has been read away and the key is now empty:

[root@localhost ~]# docker exec -it redis4.0.10 bash

[root@localhost /]# redis-cli -a 123456

127.0.0.1:6379> KEYS *

(empty list or set)

127.0.0.1:6379> quit

5. Install Kibana 6.7.1 with Docker (on 192.168.171.132): read logs from es and display them

[root@localhost ~]# cd /data/

[root@localhost data]# ls kibana6.7.1.tar.gz

kibana6.7.1.tar.gz

[root@localhost data]# tar -zxf kibana6.7.1.tar.gz

[root@localhost data]# cd kibana6.7.1

[root@localhost kibana6.7.1]# ls

config  image  scripts

[root@localhost kibana6.7.1]# ls config/

kibana.yml

[root@localhost kibana6.7.1]# ls image/

kibana_6.7.1.tar

[root@localhost kibana6.7.1]# ls scripts/

run_kibana6.7.1.sh

[root@localhost kibana6.7.1]# docker load -i image/kibana_6.7.1.tar

[root@localhost kibana6.7.1]# docker images |grep kibana

kibana              6.7.1               860831fbf9e7        11 months ago       677MB

[root@localhost kibana6.7.1]# cat config/kibana.yml

#

# ** THIS IS AN AUTO-GENERATED FILE **

#

# Default Kibana configuration for docker target

server.name: kibana

server.host: "0"

elasticsearch.hosts: [ "http://192.168.171.128:9200" ]

xpack.monitoring.ui.container.elasticsearch.enabled: true

[root@localhost kibana6.7.1]# cat scripts/run_kibana6.7.1.sh   

#!/bin/bash

docker run -d --name kibana6.7.1 --net=host --restart=always -v /data/kibana6.7.1/config/kibana.yml:/usr/share/kibana/config/kibana.yml kibana:6.7.1

[root@localhost kibana6.7.1]# sh scripts/run_kibana6.7.1.sh  # run it; kibana reads from es and displays the data

[root@localhost kibana6.7.1]# docker ps

CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES

bf16aaeaf4d9        kibana:6.7.1        "/usr/local/bin/kiba…"   16 seconds ago      Up 15 seconds                           kibana6.7.1

[root@localhost kibana6.7.1]# netstat -anput |grep 5601   # kibana port

tcp        0      0 0.0.0.0:5601            0.0.0.0:*               LISTEN      2418/node     

Access kibana in a browser: http://192.168.171.132:5601
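
Kibana's readiness can also be verified from the shell before opening the browser; a sketch against the standard status endpoint (the exact JSON layout may vary by version):

[root@localhost ~]# curl -s http://192.168.171.132:5601/api/status |grep -o '"state":"green"' |head -1    # non-empty output means the overall state is green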

Create the index patterns in kibana one by one (keep them aligned with the index names in es so they are easy to find), then query and display the data held in es.

(1) First create the index pattern logstash-catalina-*. Click Management:

Enter the index name logstash-catalina-* and click Next step:

Select the time field @timestamp and click Create index pattern:

(2) Create the index pattern logstash-es-log-*.

Click Next step:

Select the time field and click Create index pattern:

(3) Create the index pattern logstash-system-*.

Click Next step:

Select the time field and click Create index pattern:

(4) Create the index pattern logstash-tomcat-access-*.

Click Next step:

Click Create index pattern:

To view the logs, click Discover. # Note: the earlier tests produced only a small volume of access logs, so more log lines were written afterwards to make testing easier.

Pick any few entries and click the arrow to expand them.
