Sending Logback Logs to Kafka

Table of Contents

  • Sending Logback Logs to Kafka
    • 1. Sending logs to Kafka with Logback
      • 1.1 Adding the dependencies
      • 1.2 A minimal `logback.xml` demo
      • 1.3 Compatibility
      • 1.4 A complete example
      • 1.5 Running the application to collect logs
      • 1.6 Project Git repository

1. Sending logs to Kafka with Logback

1.1 Adding the dependencies

Skip this step if the dependencies are already present.

pom.xml

<dependency>
    <groupId>com.github.danielwegener</groupId>
    <artifactId>logback-kafka-appender</artifactId>
    <version>0.2.0</version>
    <scope>runtime</scope>
</dependency>
<dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>1.2.3</version>
    <scope>runtime</scope>
</dependency>

1.2 A minimal logback.xml demo

<configuration>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>

    <!-- This is the kafkaAppender -->
    <appender name="kafkaAppender" class="com.github.danielwegener.logback.kafka.KafkaAppender">
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
        <topic>logs</topic>
        <keyingStrategy class="com.github.danielwegener.logback.kafka.keying.NoKeyKeyingStrategy" />
        <deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.AsynchronousDeliveryStrategy" />

        <!-- Optional parameter to use a fixed partition -->
        <!-- <partition>0</partition> -->

        <!-- Optional parameter to include log timestamps in the kafka message -->
        <!-- <appendTimestamp>true</appendTimestamp> -->

        <!-- each <producerConfig> translates to regular kafka-client config (format: key=value) -->
        <!-- producer configs are documented here: https://kafka.apache.org/documentation.html#newproducerconfigs -->
        <!-- bootstrap.servers is the only mandatory producerConfig -->
        <producerConfig>bootstrap.servers=localhost:9092</producerConfig>

        <!-- this is the fallback appender if kafka is not available -->
        <appender-ref ref="STDOUT" />
    </appender>

    <root level="info">
        <appender-ref ref="kafkaAppender" />
    </root>
</configuration>
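With this configuration on the classpath, application code logs through the ordinary SLF4J API; nothing Kafka-specific appears in the code, and the appender publishes each event to the `logs` topic. A minimal sketch (the class name and messages are illustrative):

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class KafkaLoggingDemo {
    private static final Logger log = LoggerFactory.getLogger(KafkaLoggingDemo.class);

    public static void main(String[] args) {
        // Each event below is formatted by the configured pattern encoder
        // and sent to the "logs" topic by the kafkaAppender; if the broker
        // is unreachable, the STDOUT fallback appender prints it instead.
        log.info("application started");
        log.warn("this message also goes to Kafka");
    }
}
```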

1.3 Compatibility

logback-kafka-appender depends on org.apache.kafka:kafka-clients:1.0.0. It can append logs to Kafka brokers of version 0.9.0.0 or later.

The kafka-clients dependency is not shaded, so it can be upgraded to a newer, API-compatible version through a dependency override.
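For example, to override the transitive kafka-clients version, declare it explicitly in your own pom.xml so it takes precedence (the version shown here is an illustrative, API-compatible choice; pick one matching your broker):

```xml
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>2.0.0</version>
</dependency>
```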

1.4 A complete example

<configuration>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <target>System.out</target>
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>

    <appender name="STDERR" class="ch.qos.logback.core.ConsoleAppender">
        <target>System.err</target>
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>

    <!-- This example configuration is probably most unreliable under
         failure conditions but won't block your application at all -->
    <appender name="very-relaxed-and-fast-kafka-appender" class="com.github.danielwegener.logback.kafka.KafkaAppender">
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
        <topic>boring-logs</topic>

        <!-- we don't care how the log messages will be partitioned -->
        <keyingStrategy class="com.github.danielwegener.logback.kafka.keying.NoKeyKeyingStrategy" />

        <!-- use async delivery. the application threads are not blocked by logging -->
        <deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.AsynchronousDeliveryStrategy" />

        <!-- each <producerConfig> translates to regular kafka-client config (format: key=value) -->
        <!-- producer configs are documented here: https://kafka.apache.org/documentation.html#newproducerconfigs -->
        <!-- bootstrap.servers is the only mandatory producerConfig -->
        <producerConfig>bootstrap.servers=localhost:9092</producerConfig>
        <!-- don't wait for a broker to ack the reception of a batch -->
        <producerConfig>acks=0</producerConfig>
        <!-- wait up to 1000ms and collect log messages before sending them as a batch -->
        <producerConfig>linger.ms=1000</producerConfig>
        <!-- even if the producer buffer runs full, do not block the application but start to drop messages -->
        <producerConfig>max.block.ms=0</producerConfig>
        <!-- define a client-id that you use to identify yourself against the kafka broker -->
        <producerConfig>client.id=${HOSTNAME}-${CONTEXT_NAME}-logback-relaxed</producerConfig>

        <!-- there is no fallback <appender-ref>. If this appender cannot deliver, it will drop its messages. -->
    </appender>

    <!-- This example configuration is more restrictive and will try to ensure that every message
         is eventually delivered in an ordered fashion (as long as the logging application stays alive) -->
    <appender name="very-restrictive-kafka-appender" class="com.github.danielwegener.logback.kafka.KafkaAppender">
        <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
        <topic>important-logs</topic>

        <!-- ensure that every message sent by the executing host is partitioned to the same partition -->
        <keyingStrategy class="com.github.danielwegener.logback.kafka.keying.HostNameKeyingStrategy" />

        <!-- block the logging application thread if the kafka appender cannot keep up with sending the log messages -->
        <deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.BlockingDeliveryStrategy">
            <!-- wait indefinitely until the kafka producer was able to send the message -->
            <timeout>0</timeout>
        </deliveryStrategy>

        <!-- each <producerConfig> translates to regular kafka-client config (format: key=value) -->
        <!-- producer configs are documented here: https://kafka.apache.org/documentation.html#newproducerconfigs -->
        <!-- bootstrap.servers is the only mandatory producerConfig -->
        <producerConfig>bootstrap.servers=localhost:9092</producerConfig>
        <!-- restrict the size of the buffered batches to 8MB (default is 32MB) -->
        <producerConfig>buffer.memory=8388608</producerConfig>
        <!-- If the kafka broker is not online when we try to log, just block until it becomes available -->
        <producerConfig>metadata.fetch.timeout.ms=99999999999</producerConfig>
        <!-- define a client-id that you use to identify yourself against the kafka broker -->
        <producerConfig>client.id=${HOSTNAME}-${CONTEXT_NAME}-logback-restrictive</producerConfig>
        <!-- use gzip to compress each batch of log messages. valid values: none, gzip, snappy -->
        <producerConfig>compression.type=gzip</producerConfig>

        <!-- Log every log message that could not be sent to kafka to STDERR -->
        <appender-ref ref="STDERR"/>
    </appender>

    <root level="info">
        <appender-ref ref="very-relaxed-and-fast-kafka-appender" />
        <appender-ref ref="very-restrictive-kafka-appender" />
    </root>
</configuration>

1.5 Running the application to collect logs

  • Create the topic that will receive the logs
  • Start the application; its log events will be sent to the Kafka topic
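The two steps above can be run with the stock Kafka CLI tools (a sketch assuming a local single-broker setup and the `logs` topic from the demo configuration; adjust paths and addresses for your installation):

```shell
# Create the topic that will receive the log messages.
# On Kafka versions before 2.2, use --zookeeper localhost:2181
# instead of --bootstrap-server localhost:9092.
bin/kafka-topics.sh --create --bootstrap-server localhost:9092 \
  --replication-factor 1 --partitions 1 --topic logs

# Start your application, then watch the log events arrive:
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
  --topic logs --from-beginning
```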

1.6 Project Git repository

https://github.com/danielwegener/logback-kafka-appender
