Kafka cluster setup and common operations:
1 List consumer groups:
[root@slave3 bin]# kafka-consumer-groups.sh --bootstrap-server master:9092 --list
Note: This will not show information about old Zookeeper-based consumers.

sys
console-consumer-5032
console-consumer-43565
console-consumer-72223
UDISKMONITORLoggroup
2 List topics:
[root@slave2 bin]# kafka-topics.sh --zookeeper master:2181,slave1 --list
mytest
mytest1
mytest2
mytest3
event
app
3 Describe a group's consumption status (Kafka 0.10.1.1+):
[root@master ~]# kafka-consumer-groups.sh --bootstrap-server master:9092 --describe --group sys
Consumer group 'sys' has no active members.

TOPIC   PARTITION   CURRENT-OFFSET   LOG-END-OFFSET   LAG   CONSUMER-ID   HOST   CLIENT-ID
event   0           709              709              0     -             -      -
app     0           820              838              18    -             -      -
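For reference, the LAG column is simply LOG-END-OFFSET minus CURRENT-OFFSET. A minimal sketch that recomputes it from the describe output above (the sample rows are hard-coded here purely for illustration):

```shell
# Rows copied from the --describe output above:
# topic  partition  current-offset  log-end-offset
rows='event 0 709 709
app 0 820 838'

# LAG = LOG-END-OFFSET - CURRENT-OFFSET
echo "$rows" | awk '{print $1, "lag:", $4 - $3}'
# event lag: 0
# app lag: 18
```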
4 Reset a group's offsets to the earliest or a specific position:
# Reset group sys's offsets on topic event back to the earliest position
[root@slave3 ~]# kafka-consumer-groups.sh --bootstrap-server master:9092 --group sys --topic event --reset-offsets --to-earliest --execute

TOPIC   PARTITION   NEW-OFFSET
event   0           519
# Note: the earliest offset is not necessarily 0. In this example, the data before position 519 has already expired, so the earliest available offset is 519.

# Reset the offset to a specific position
[root@slave3 ~]# kafka-consumer-groups.sh --bootstrap-server master:9092 --group sys --topic event --reset-offsets --to-offset 1000 --execute

TOPIC   PARTITION   NEW-OFFSET
event   0           1000

# Shift the offset backward by 100 from the current position; a positive value shifts it forward.
[root@slave3 ~]# kafka-consumer-groups.sh --bootstrap-server master:9092 --group sys --topic event --reset-offsets --shift-by -100 --execute
[2020-06-17 16:26:06,074] WARN New offset (518) is lower than earliest offset. Value will be set to 519 (kafka.admin.ConsumerGroupCommand$)
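The WARN line shows how the tool clamps a requested offset that falls before the earliest retained one. A simplified sketch of that clamping rule (the helper function is ours for illustration, not part of Kafka):

```shell
# Illustrative helper (not a Kafka API): a requested offset below the
# earliest retained offset is raised to the earliest, mirroring the
# "Value will be set to 519" behavior in the WARN line above.
# usage: clamp_offset <requested> <earliest>
clamp_offset() {
  if [ "$1" -lt "$2" ]; then echo "$2"; else echo "$1"; fi
}

clamp_offset 518 519   # prints 519: 518 is below the earliest offset 519
clamp_offset 700 519   # prints 700: already within the valid range
```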
5 Create a topic:
[root@slave3 bin]# kafka-topics.sh --zookeeper slave1:2181,slave2,slave3 --create --topic mytest --partitions 3 --replication-factor 3
# The default ZooKeeper port is 2181 and may be omitted. It is best to list every node: if one node goes down, the client automatically tries the next.
# --topic takes the topic name, --partitions the number of partitions, --replication-factor the number of replicas.

# Increase the partition count: the topic originally had 1 partition, now raised to 3.
[root@slave3 bin]# kafka-topics.sh --zookeeper 127.0.0.1:2181 --alter --partitions 3 --topic mytest
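Once a topic has several partitions, keyed messages are routed to a partition by hashing the key. A toy sketch of the hash(key) % num_partitions idea for the 3-partition topic above — note that Kafka's default partitioner actually uses murmur2 hashing; `cksum` stands in here purely for illustration:

```shell
# Toy key-to-partition mapping (illustrative only; Kafka really uses
# murmur2, not cksum): hash(key) % num_partitions
# usage: partition_for <key> <num_partitions>
partition_for() {
  echo $(( $(printf '%s' "$1" | cksum | cut -d ' ' -f 1) % $2 ))
}

partition_for "user-42" 3   # deterministic value in 0..2
```

The practical point is that the same key always lands on the same partition, which is what preserves per-key ordering.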
6 Start a console producer:
[root@slave3 bin]# ./kafka-console-producer.sh --broker-list slave1:9092,slave2,slave3 --topic mytest
zookeeper is not a recognized option
# If you hit an error like the above, use the command below instead; mind the port.
[root@slave3 bin]# ./kafka-console-producer.sh --broker-list 10.18.1.103:9092 --topic mytest
>123
7 Start a console consumer:
[root@slave3 bin]# ./kafka-console-consumer.sh --zookeeper slave2:2181,slave3 --topic mytest
Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
# The command still works despite the warning above. To avoid the warning, use the command below; note the port change.
[root@slave3 bin]# ./kafka-console-consumer.sh --bootstrap-server 10.18.1.102:9092 --topic test
# To read from the beginning, append --from-beginning
[root@slave3 bin]# ./kafka-console-consumer.sh --bootstrap-server 10.18.1.102:9092 --topic test --from-beginning
8 Describe a topic:
[root@slave2 bin]# kafka-topics.sh --zookeeper slave1,slave2 --describe --topic mytest1
Topic:mytest1   PartitionCount:5   ReplicationFactor:1   Configs:
        Topic: mytest1  Partition: 0  Leader: 116  Replicas: 116  Isr: 116
        Topic: mytest1  Partition: 1  Leader: 117  Replicas: 117  Isr: 117
        Topic: mytest1  Partition: 2  Leader: 118  Replicas: 118  Isr: 118
        Topic: mytest1  Partition: 3  Leader: 119  Replicas: 119  Isr: 119
        Topic: mytest1  Partition: 4  Leader: 116  Replicas: 116  Isr: 116
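The describe output also shows how partition leadership spreads across brokers. A quick awk over the partition/leader pairs above counts how many partitions each broker leads (the pairs are hard-coded from the sample output):

```shell
# partition -> leader-broker-id pairs copied from the --describe output above
leaders='0 116
1 117
2 118
3 119
4 116'

# Count partitions led by each broker id
echo "$leaders" | awk '{count[$2]++} END {for (b in count) print b, count[b]}' | sort
# 116 2
# 117 1
# 118 1
# 119 1
```

With replication-factor 1, a broker that leads a partition is also its only replica, so an uneven spread here means uneven load.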
9 View each partition's minimum and maximum offsets for a topic:
[root@slave2 data]# kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list slave1:9092,slave2:9092 --topic mytest1 --time -1
mytest1:1:932
mytest1:2:893
mytest1:0:868
# --time -1 returns the latest (maximum) offsets; use --time -2 for the earliest (minimum).
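Subtracting each partition's earliest offset from its latest gives the number of messages currently retained in that partition. A sketch of that arithmetic — the latest values come from the `--time -1` run above, but since no `--time -2` run is shown, the earliest values below are made up purely for illustration:

```shell
# topic:partition  latest  earliest
# latest values from the --time -1 output above; the earliest values are
# hypothetical (no --time -2 run is shown), used only to illustrate the math
printf '%s\n' \
  'mytest1:1 932 100' \
  'mytest1:2 893 50' \
  'mytest1:0 868 0' |
awk '{print $1, "retained:", $2 - $3}'
# mytest1:1 retained: 832
# mytest1:2 retained: 843
# mytest1:0 retained: 868
```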
10 Delete a topic:
[root@slave2 data]# kafka-topics.sh --zookeeper master,slave1,slave2 --delete --topic mytest1
# At this point mytest1 is only marked for deletion and can still be used; by default it is actually deleted about a week later.
# Then go to Kafka's data directory and delete the files belonging to mytest1.
[root@slave2 data]# zkCli.sh   # or zookeeper-client
[zk: localhost:2181(CONNECTED) 2] rmr /brokers/topics/mytest1        # delete the topic's metadata in ZooKeeper
[zk: localhost:2181(CONNECTED) 2] rmr /admin/delete_topics/mytest1   # delete the "marked for deletion" entry
[zk: localhost:2181(CONNECTED) 2] ls /config/topics/mytest1          # the topic also appears here; best to delete it too
# Deletion is now complete