Flume is a framework for collecting log data from external sources and writing it out to a destination.
The agent is Flume's core building block; it consists of a source, a channel, and a sink.
The source receives logs from the outside.
The channel is a memory-like buffer that holds events until they are drained to the sink.
The sink writes the data out: it can write to HDFS, or first to Kafka, and so on.
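All of the configuration files below follow the same property naming pattern: agent name, component kind, component name, property. As a sketch (the angle-bracket names are placeholders):
<agent>.sources.<source-name>.<property> = <value>
<agent>.channels.<channel-name>.<property> = <value>
<agent>.sinks.<sink-name>.<property> = <value>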
Configuration
1.
First download the package, extract it, and add its bin directory to ~/.bash_profile.
Copy the flume-env.sh.template config file to flume-env.sh and set the JDK path (JAVA_HOME) in it.
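A sketch of those steps in shell (the Flume version, install path, and JDK path are assumptions; adjust them to your environment):
# download and extract (version and target directory are assumptions)
tar -zxvf apache-flume-1.9.0-bin.tar.gz -C ~/app
# add Flume's bin directory to the PATH via ~/.bash_profile
echo 'export FLUME_HOME=~/app/apache-flume-1.9.0-bin' >> ~/.bash_profile
echo 'export PATH=$FLUME_HOME/bin:$PATH' >> ~/.bash_profile
source ~/.bash_profile
# copy the env template and point it at the JDK
cp $FLUME_HOME/conf/flume-env.sh.template $FLUME_HOME/conf/flume-env.sh
echo 'export JAVA_HOME=/usr/java/jdk1.8.0' >> $FLUME_HOME/conf/flume-env.sh   # JDK path is an assumption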
2.
Create a new conf file under the conf directory:
# example.conf: A single-node Flume configuration
# Name the components on this agent
a1.sources=r1
a1.sinks=k1
a1.channels=c1
#Describe/configure the source
a1.sources.r1.type=netcat
a1.sources.r1.bind=localhost
a1.sources.r1.port=44444
# Describe the sink
a1.sinks.k1.type=logger
# Use a channel which buffers events in memory
a1.channels.c1.type=memory
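# capacity = max number of events the channel can hold; transactionCapacity = max events per put/take transaction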
a1.channels.c1.capacity=1000
a1.channels.c1.transactionCapacity=100
# Bind the source and sink to the channel
a1.sources.r1.channels=c1
a1.sinks.k1.channel=c1
Start the agent:
flume-ng agent \
--name a1 \
--conf $FLUME_HOME/conf \
--conf-file $FLUME_HOME/conf/example.conf \
-Dflume.root.logger=INFO,console
In another console, run telnet hadoop000 44444 and type some text to test.
The above has Flume listening on a network port; next, monitor a local file in real time:
Just change the three lines under # Describe/configure the source to:
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /home/hadoop/data/data.log
a1.sources.r1.shell = /bin/sh -c
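Assembled, exec-memory-logger.conf is just the netcat example with the source section swapped:
# exec-memory-logger.conf: tail a local file and log the events
a1.sources=r1
a1.sinks=k1
a1.channels=c1
# Describe/configure the source
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /home/hadoop/data/data.log
a1.sources.r1.shell = /bin/sh -c
# Describe the sink
a1.sinks.k1.type=logger
# Use a channel which buffers events in memory
a1.channels.c1.type=memory
a1.channels.c1.capacity=1000
a1.channels.c1.transactionCapacity=100
# Bind the source and sink to the channel
a1.sources.r1.channels=c1
a1.sinks.k1.channel=c1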
Create exec-memory-logger.conf under the conf directory with the configuration above, then start the agent:
flume-ng agent \
--name a1 \
--conf $FLUME_HOME/conf \
--conf-file $FLUME_HOME/conf/exec-memory-logger.conf \
-Dflume.root.logger=INFO,console
In another console, append data to data.log:
echo hello >> data.log
echo world >> data.log
The agent will then print the log events.
Next, collect logs on server A in real time and ship them to server B:
Figure 1: cross-machine log collection (machine A: exec source → memory channel → avro sink; machine B: avro source → memory channel → logger sink)
Machine A's conf (exec-memory-avro.conf):
# exec-memory-avro.conf:
# Name the components on this agent
exec-memory-avro.sources=exec-source
exec-memory-avro.sinks=avro-sink
exec-memory-avro.channels= memory-channel
#Describe/configure the source
exec-memory-avro.sources.exec-source.type=exec
exec-memory-avro.sources.exec-source.command=tail -F /home/hadoop/data/data.log
exec-memory-avro.sources.exec-source.shell= /bin/sh -c
# Describe the sink
exec-memory-avro.sinks.avro-sink.type=avro
exec-memory-avro.sinks.avro-sink.hostname = hadoop000
exec-memory-avro.sinks.avro-sink.port = 44444
# Use a channel which buffers events in memory
exec-memory-avro.channels.memory-channel.type=memory
# Bind the source and sink to the channel
exec-memory-avro.sources.exec-source.channels=memory-channel
exec-memory-avro.sinks.avro-sink.channel=memory-channel
Machine B's conf file (avro-memory-logger.conf):
avro-memory-logger.sources = avro-source
avro-memory-logger.sinks = logger-sink
avro-memory-logger.channels = memory-channel
avro-memory-logger.sources.avro-source.type = avro
avro-memory-logger.sources.avro-source.bind = hadoop000
avro-memory-logger.sources.avro-source.port = 44444
avro-memory-logger.sinks.logger-sink.type = logger
avro-memory-logger.channels.memory-channel.type = memory
avro-memory-logger.sources.avro-source.channels = memory-channel
avro-memory-logger.sinks.logger-sink.channel = memory-channel
Configuration complete. Start machine B first (first console), so that its avro source is already listening when machine A's avro sink connects:
flume-ng agent \
--name avro-memory-logger \
--conf $FLUME_HOME/conf \
--conf-file $FLUME_HOME/conf/avro-memory-logger.conf \
-Dflume.root.logger=INFO,console
Then start machine A (second console):
flume-ng agent \
--name exec-memory-avro \
--conf $FLUME_HOME/conf \
--conf-file $FLUME_HOME/conf/exec-memory-avro.conf \
-Dflume.root.logger=INFO,console
In a third console, append to data.log in the data directory:
echo welcome >> data.log
The first console (machine B) will then display the log events.
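As noted at the start, the sink is not limited to a logger. A minimal sketch of landing machine B's data on HDFS instead, assuming an HDFS namenode at hadoop000:8020 and keeping the component names above for brevity (the path and format settings are assumptions):
# swap machine B's logger sink for an HDFS sink (path is an assumption)
avro-memory-logger.sinks.logger-sink.type = hdfs
avro-memory-logger.sinks.logger-sink.hdfs.path = hdfs://hadoop000:8020/flume/events/%Y%m%d
avro-memory-logger.sinks.logger-sink.hdfs.fileType = DataStream
avro-memory-logger.sinks.logger-sink.hdfs.writeFormat = Text
# needed so the %Y%m%d escapes resolve without a timestamp interceptor
avro-memory-logger.sinks.logger-sink.hdfs.useLocalTimeStamp = true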