The Kafka version used here is 2.8.1; the commands may differ in the 3.x releases.
Service management commands
1. Start the Kafka service
(nohup) ./bin/kafka-server-start.sh config/server.properties &
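If you prefer not to rely on nohup, kafka-server-start.sh also accepts a -daemon switch (present at least in the 2.x scripts) that detaches the broker from the terminal:
./bin/kafka-server-start.sh -daemon config/server.properties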
2. Stop the Kafka service
./bin/kafka-server-stop.sh
Topic commands
3. List all topics
./bin/kafka-topics.sh --list --zookeeper localhost:2181
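In 2.8 the kafka-topics.sh commands in this section also accept --bootstrap-server pointing at a broker instead of --zookeeper, and in 3.x the --zookeeper option has been removed, so the 3.x form of the same listing is:
./bin/kafka-topics.sh --list --bootstrap-server localhost:9092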
4. Show detailed information for all topics
./bin/kafka-topics.sh --zookeeper localhost:2181 --describe
Topic: __consumer_offsets TopicId: 6nKvf848TouaWzEOuhVfzA PartitionCount: 50 ReplicationFactor: 1 Configs: compression.type=producer,cleanup.policy=compact,segment.bytes=104857600
Topic: __consumer_offsets Partition: 0 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 1 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 2 Leader: 0 Replicas: 0 Isr: 0
Topic: __consumer_offsets Partition: 3 Leader: 0 Replicas: 0 Isr: 0
Topic: test TopicId: lBDvWEsRSXq_g0qg1zZ2Zw PartitionCount: 2 ReplicationFactor: 1 Configs:
Topic: test Partition: 0 Leader: 0 Replicas: 0 Isr: 0
Topic: test Partition: 1 Leader: 0 Replicas: 0 Isr: 0
5. Show detailed information for a specific topic
./bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic test
Topic: test TopicId: lBDvWEsRSXq_g0qg1zZ2Zw PartitionCount: 2 ReplicationFactor: 1 Configs:
Topic: test Partition: 0 Leader: 0 Replicas: 0 Isr: 0
Topic: test Partition: 1 Leader: 0 Replicas: 0 Isr: 0
6. Delete a topic
First edit server.properties and add
delete.topic.enable=true
then restart the service (in 2.8 this property already defaults to true, so the change is only needed if it was explicitly disabled)
./bin/kafka-topics.sh --zookeeper localhost:2181 --delete --topic test
7. Create a topic named test with two partitions and 3 replicas per partition
--partitions: number of partitions
--replication-factor: number of replicas
./bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic test --replication-factor 3 --partitions 2
On a single-node Kafka only 1 replica can be created; with a replication factor of 3 the command above fails with:
Error while executing topic command : Replication factor: 3 larger than available brokers: 1.
The working single-node form is:
./bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic test --replication-factor 1 --partitions 2
Message commands
8. Test sending and receiving messages with Kafka (open two terminals)
- Send messages (note: the port is the one set in the configuration file):
./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
>test 111
>test 222
- Consume messages (the port must match the configuration file, i.e. the same port the producer uses)
With --from-beginning, all messages are consumed from the beginning (the topic has two partitions, so messages from different partitions may be interleaved and the order can differ from the send order)
./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
test 222
test 111
Without --from-beginning, consumption starts from the newest message
Produce:
./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
>test 333
Consume:
./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test
test 333
--group: specify the consumer group
./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning --group t1
test 222
test 333
test 111
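To also exercise message keys with the console tools, both commands accept --property options; a minimal sketch (the property names parse.key, print.key and key.separator are standard, but verify them against --help for this build):
./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test --property parse.key=true --property key.separator=:
./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning --property print.key=true --property key.separator=: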
9. Check the number of messages in a topic
--time -1: returns the current latest offset of every partition of the topic (the total number of messages ever written)
./bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list localhost:9092 --topic test --time -1
Sample output:
test:0:1
test:1:2
The first number is the partition, the second is the offset
--time -2: returns the current earliest offset of every partition (messages before it have been removed by log retention, not necessarily consumed); subtracting the -2 result from the -1 result per partition gives the number of messages currently stored, as shown in the sketch after the sample output below
./bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list localhost:9092 --topic test --time -2
Sample output:
test:0:0
test:1:0
The first number is the partition, the second is the offset
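A minimal shell sketch of that subtraction, assuming the same broker address and the topic name test:
latest=$(./bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list localhost:9092 --topic test --time -1 | awk -F: '{s+=$3} END{print s}')
earliest=$(./bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list localhost:9092 --topic test --time -2 | awk -F: '{s+=$3} END{print s}')
echo $((latest - earliest))   # messages currently stored across all partitions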
10. List all consumer groups
./bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --list
console-consumer-27144
t1
11. Show the offsets of consumer group t1 for the topic it is consuming
./bin/kafka-consumer-groups.sh --describe --group t1 --bootstrap-server localhost:9092
GROUP TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
t1 test 0 1 1 0 consumer-t1-1-40e73b59-f822-4c1c-99ce-4a7aa8f63ad8 /192.168.108.137 consumer-t1-1
t1 test 1 2 2 0 consumer-t1-1-40e73b59-f822-4c1c-99ce-4a7aa8f63ad8 /192.168.108.137 consumer-t1-1
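kafka-consumer-groups.sh can also show the members and the state of a group; these --describe sub-options exist in the 2.x tool (check --help if unsure):
./bin/kafka-consumer-groups.sh --describe --group t1 --members --bootstrap-server localhost:9092
./bin/kafka-consumer-groups.sh --describe --group t1 --state --bootstrap-server localhost:9092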
Managing the zookeeper and kafka services with supervisor:
1. The services are normally run as the apache user, so change the owner and group of both services' directories (startup scripts, logs, data) to apache; see the example after this list.
2. Each service should normally get its own configuration file; if both are written in the same configuration file, the supervisor settings have to be adjusted (set foreground = true and restart the service).
3. The zookeeper start command is /usr/local/zookeeper/bin/zkServer.sh start-foreground (supervisor needs the managed process to stay in the foreground).
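For step 1, a possible ownership change (the install paths match the supervisor configs below; the data and log directories configured in zoo.cfg and server.properties need the same treatment):
chown -R apache:apache /usr/local/zookeeper/apache-zookeeper-3.5.10-bin
chown -R apache:apache /usr/local/kafka/kafka_2.12-2.8.1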
zookeeper:
cat /etc/supervisord.d/zookeeper.conf
[program:zookeeper]
command=/usr/local/zookeeper/apache-zookeeper-3.5.10-bin/bin/zkServer.sh start-foreground
directory=/usr/local/zookeeper/apache-zookeeper-3.5.10-bin/
autostart=true
autorestart=true
stdout_logfile=/var/log/supervisor/zookeeper.log
redirect_stderr=true
user=apache
kafka:
cat /etc/supervisord.d/kafka.conf
[program:kafka]
command=/usr/local/kafka/kafka_2.12-2.8.1/bin/kafka-server-start.sh /usr/local/kafka/kafka_2.12-2.8.1/config/server.properties
directory=/usr/local/kafka/kafka_2.12-2.8.1/
autostart=true
autorestart=true
stdout_logfile=/var/log/supervisor/kafka.log
redirect_stderr=true
user=apache
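After placing the two files under /etc/supervisord.d/, make supervisor pick them up and check the result:
supervisorctl reread   # detect new or changed program configs
supervisorctl update   # apply them; autostart=true brings both programs up
supervisorctl status   # both zookeeper and kafka should show RUNNING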