Kafka command-line commands: a complete reference

Kafka scripts

connect-distributed.sh
connect-mirror-maker.sh
connect-standalone.sh
kafka-acls.sh
kafka-broker-api-versions.sh
kafka-configs.sh
kafka-console-consumer.sh
kafka-console-producer.sh
kafka-consumer-groups.sh
kafka-consumer-perf-test.sh
kafka-delegation-tokens.sh
kafka-delete-records.sh
kafka-dump-log.sh
kafka-leader-election.sh
kafka-log-dirs.sh
kafka-mirror-maker.sh
kafka-preferred-replica-election.sh
kafka-producer-perf-test.sh
kafka-reassign-partitions.sh
kafka-replica-verification.sh
kafka-run-class.sh
kafka-server-start.sh
kafka-server-stop.sh
kafka-streams-application-reset.sh
kafka-topics.sh
kafka-verifiable-consumer.sh
kafka-verifiable-producer.sh
trogdor.sh
windows
zookeeper-security-migration.sh
zookeeper-server-start.sh
zookeeper-server-stop.sh
zookeeper-shell.sh

Cluster management

# Start ZooKeeper
bin/zookeeper-server-start.sh config/zookeeper.properties &
# Stop ZooKeeper
bin/zookeeper-server-stop.sh
# Start a broker in the foreground (Ctrl + C to stop)
bin/kafka-server-start.sh <path>/server.properties
# Start a broker in the background
bin/kafka-server-start.sh -daemon <path>/server.properties
# Stop the broker
bin/kafka-server-stop.sh
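
After starting a broker it is handy to confirm that it is actually reachable. A minimal sketch, assuming a broker listening on localhost:9092, using kafka-broker-api-versions.sh from the script list above:

# Print the API versions supported by the broker; getting a response means the broker is up and reachable
bin/kafka-broker-api-versions.sh --bootstrap-server localhost:9092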

Topics

# Create a topic (4 partitions, 2 replicas)
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 2 --partitions 4 --topic test
# Kafka >= 2.2
bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic test
# --create: perform the create-topic action
# --topic: name of the topic to create
# --zookeeper: ZooKeeper connection URL, the same value as {zookeeper.connect} in server.properties
# --partitions: number of partitions for the topic, default 1
# --replication-factor: replication factor of each partition, default 1

# Expand partitions. Note: the partition count can only be increased, never decreased
# Kafka < 2.2
bin/kafka-topics.sh --zookeeper localhost:2181 --alter --topic topic1 --partitions 2
# Kafka >= 2.2
bin/kafka-topics.sh --bootstrap-server broker_host:port --alter --topic topic1 --partitions 2

# Delete a topic
bin/kafka-topics.sh --zookeeper localhost:2181 --delete --topic test

# List topics
bin/kafka-topics.sh --zookeeper 127.0.0.1:2181 --list
# List topics (0.9+)
bin/kafka-topics.sh --list --bootstrap-server localhost:9092

# Describe all topics
bin/kafka-topics.sh --zookeeper 127.0.0.1:2181 --describe
# Describe a single topic
bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic topicname
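
Topic-level configuration can be changed with kafka-configs.sh (also in the script list above). A minimal sketch, assuming a topic named test and a local ZooKeeper; newer releases take --bootstrap-server instead of --zookeeper:

# Override the retention of topic "test" to one day (86400000 ms)
bin/kafka-configs.sh --zookeeper localhost:2181 --entity-type topics --entity-name test --alter --add-config retention.ms=86400000
# Show the overrides currently set on the topic
bin/kafka-configs.sh --zookeeper localhost:2181 --entity-type topics --entity-name test --describe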

Consumer groups

There are two commands for listing consumer groups, a new one and an old one: the new command lists new-style consumers (whose metadata is stored in the brokers), while the old command lists old-style consumers (whose metadata is stored in ZooKeeper), so you need to pass either --bootstrap-server or --zookeeper accordingly.

# List new-style consumer groups (0.9+)
bin/kafka-consumer-groups.sh --new-consumer --bootstrap-server localhost:9092 --list
# List consumer groups (0.10+)
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --list

# Show consumption details of a group (only for offsets stored in ZooKeeper)
bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --zookeeper localhost:2181 --group test
# Show consumption details of a new-style group (0.9 up to, but not including, 0.10.1.0)
bin/kafka-consumer-groups.sh --new-consumer --bootstrap-server localhost:9092 --describe --group my-group
# Show consumption details of a group (0.10.1.0+)
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group my-group

# Reset consumer group offsets
# To the earliest offset
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --group groupname --reset-offsets --all-topics --to-earliest --execute
# To the latest offset
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --group groupname --reset-offsets --all-topics --to-latest --execute
# To a specific offset
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --group groupname --reset-offsets --all-topics --to-offset 2000 --execute
# To the earliest offset after a given point in time
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --group groupname --reset-offsets --all-topics --to-datetime 2019-09-15T00:00:00.000

# Delete a consumer group
bin/kafka-consumer-groups.sh --zookeeper localhost:2181 --delete --group groupname
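
An offset reset can be previewed before it is applied. A minimal sketch, with groupname and topicname as placeholders; --dry-run prints the offsets that would be set without changing anything:

# Preview the reset; replace --dry-run with --execute to actually apply it
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --group groupname --reset-offsets --topic topicname --to-earliest --dry-run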

Consumption details of a consumer group

The fields in the output have the following meanings:

TOPIC            name of the topic
PARTITION        partition id
CURRENT-OFFSET   offset the group has consumed up to
LOG-END-OFFSET   latest offset in the partition
LAG              number of messages not yet consumed, i.e. LOG-END-OFFSET minus CURRENT-OFFSET (e.g. 1200 - 1000 = 200)
CONSUMER-ID      consumer id
HOST             consumer host IP
CLIENT-ID        client id

Producing and consuming

# Console producer
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
# Console consumer; "--from-beginning" is optional and means consume from the beginning of the topic
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic topicname --from-beginning
# Specify a group id
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic topicname --from-beginning --consumer-property group.id=old-consumer-group
# Consume a specific partition
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic topicname --from-beginning --partition 0

# New producer (0.9+)
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test --producer.config config/producer.properties
# New consumer (0.9+)
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --new-consumer --from-beginning --consumer.config config/consumer.properties

# kafka-verifiable-consumer.sh (reports consumer events such as offset commits)
bin/kafka-verifiable-consumer.sh --broker-list localhost:9092 --topic test --group-id groupName

# A more advanced usage
bin/kafka-simple-consumer-shell.sh --broker-list localhost:9092 --topic test --partition 0 --offset 1234 --max-messages 10
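
The console consumer can also print message keys, which is useful for checking how records are partitioned. A sketch using the usual --property options of kafka-console-consumer.sh; topicname is a placeholder:

# Print "key<tab>value" for each record and stop after 10 messages
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic topicname --from-beginning --property print.key=true --max-messages 10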

Switching the leader

# Kafka <= 2.4
bin/kafka-preferred-replica-election.sh --zookeeper zk_host:port/chroot
# Newer Kafka versions
bin/kafka-preferred-replica-election.sh --bootstrap-server broker_host:port
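
In recent releases kafka-preferred-replica-election.sh is deprecated in favor of kafka-leader-election.sh (listed above). A minimal sketch, assuming Kafka 2.4+:

# Trigger a preferred-leader election for all topic partitions
bin/kafka-leader-election.sh --bootstrap-server broker_host:port --election-type preferred --all-topic-partitions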

Built-in load-testing command

bin/kafka-producer-perf-test.sh --topic test --num-records 100 --record-size 1 --throughput 100  --producer-props bootstrap.servers=localhost:9092
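
The consumer-side counterpart is kafka-consumer-perf-test.sh (listed above). A sketch, assuming a release that accepts --bootstrap-server (older ones use --broker-list):

# Consume 100 messages from topic "test" and report throughput
bin/kafka-consumer-perf-test.sh --bootstrap-server localhost:9092 --topic test --messages 100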

Sending messages continuously

Continuously send messages to the specified topic; every message that is sent gets a response:

kafka-verifiable-producer.sh --broker-list $(hostname -i):9092 --topic test --max-messages 100000
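
The send rate and acknowledgement level can usually be tuned as well; treat the exact flags as assumptions to check against your release:

# Cap the rate at roughly 1000 messages/sec and wait for acks from all replicas
bin/kafka-verifiable-producer.sh --broker-list $(hostname -i):9092 --topic test --max-messages 100000 --throughput 1000 --acks -1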

zookeeper-shell.sh

If the Kafka cluster's ZooKeeper is configured with a chroot path, append that /path to the connection string:

bin/zookeeper-shell.sh localhost:2181[/path]
ls /brokers/ids
get /brokers/ids/0
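
Other Kafka znodes can be inspected the same way, for example the current controller and the topic list (paths assume no chroot):

get /controller
ls /brokers/topics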

Migrating partitions

  1. Create the reassignment JSON
cat > increase-replication-factor.json <<EOF
{"version":1, "partitions":[
{"topic":"__consumer_offsets","partition":0,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":1,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":2,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":3,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":4,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":5,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":6,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":7,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":8,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":9,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":10,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":11,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":12,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":13,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":14,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":15,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":16,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":17,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":18,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":19,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":20,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":21,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":22,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":23,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":24,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":25,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":26,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":27,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":28,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":29,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":30,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":31,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":32,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":33,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":34,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":35,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":36,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":37,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":38,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":39,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":40,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":41,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":42,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":43,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":44,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":45,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":46,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":47,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":48,"replicas":[0,1]},
{"topic":"__consumer_offsets","partition":49,"replicas":[0,1]}]
}
EOF
  2. Execute
bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file increase-replication-factor.json --execute
  3. Verify
bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file increase-replication-factor.json --verify
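
For large topics the data movement can be throttled; a sketch, with the 50 MB/s value purely illustrative:

# Limit replication traffic for the reassignment to about 50 MB/s
bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file increase-replication-factor.json --execute --throttle 50000000
# Running --verify afterwards also removes the throttle once the reassignment has finished
bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file increase-replication-factor.json --verify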

MirrorMaker: cross-datacenter disaster-recovery tool

bin/kafka-mirror-maker.sh --consumer.config consumer.properties --producer.config producer.properties --whitelist "topicA|topicB"
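
The two properties files point at the source and target clusters. A minimal sketch of what they might contain, with source:9092 and target:9092 as hypothetical hosts:

# consumer.properties (source cluster)
bootstrap.servers=source:9092
group.id=mirror-maker-group
# producer.properties (target cluster)
bootstrap.servers=target:9092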
