• Topic partition reassignment after scaling out a Kafka cluster


    To increase the partition count of an existing topic (here dftt) after the expansion:

     ./bin/kafka-topics.sh --zookeeper node3:2181,node4:2181,node5:2181 --alter --topic dftt --partitions 4

    After a Kafka cluster is scaled out, no data flows to the new brokers on its own; the new nodes sit idle and only take part in work when a new topic is created, unless existing partitions are migrated onto them.
    Some partitions of existing topics therefore need to be moved to the new brokers.

    kafka-reassign-partitions.sh is the tool Kafka ships for reassigning partitions and replicas across brokers.
    A basic reassignment takes three steps:

    • Generate a reassignment plan (generate)
    • Execute the reassignment (execute)
    • Verify the reassignment status (verify)

    The concrete steps are as follows:

    1. Generate the reassignment plan

    Create the JSON file listing the topics to move:
    vi topics-to-move.json

    with the following content:

    {"topics":
        [{"topic":"event_request"}],
        "version": 1
    }
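    When many topics need moving, the topics-to-move file can also be generated programmatically. A minimal Python sketch (the helper name is my own, not part of Kafka):

```python
import json

def topics_to_move(topics):
    # Build the document kafka-reassign-partitions.sh expects
    # for --topics-to-move-json-file.
    return json.dumps({"topics": [{"topic": t} for t in topics],
                       "version": 1})

doc = topics_to_move(["event_request"])
print(doc)  # {"topics": [{"topic": "event_request"}], "version": 1}
```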

    Run the plan-generation command:

    kafka-reassign-partitions.sh --zookeeper $ZK_CONNECT --topics-to-move-json-file topics-to-move.json --broker-list "5,6,7,8" --generate

    The output looks like this:

    [hadoop@sdf-nimbus-perf topic_reassgin]$ kafka-reassign-partitions.sh --zookeeper $ZK_CONNECT --topics-to-move-json-file topics-to-move.json --broker-list "5,6,7,8" --generate
    Current partition replica assignment
    
    {"version":1,"partitions":[{"topic":"event_request","partition":0,"replicas":[3,4]},{"topic":"event_request","partition":1,"replicas":[4,5]}]}
    Proposed partition reassignment configuration
    
    {"version":1,"partitions":[{"topic":"event_request","partition":0,"replicas":[6,5]},{"topic":"event_request","partition":1,"replicas":[7,6]}]}

    The JSON that follows “Proposed partition reassignment configuration” is the reassignment plan generated for the broker list given on the command line. Copy the proposed configuration into a file, topic-reassignment.json:
    vi topic-reassignment.json

    {"version":1,"partitions":[{"topic":"event_request","partition":0,"replicas":[6,5]},{"topic":"event_request","partition":1,"replicas":[7,6]}]}
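    Before executing, it is easy to sanity-check that the proposed plan only places replicas on the intended brokers. A small sketch (the helper name is my own, not part of Kafka):

```python
import json

def plan_uses_only(plan_json, allowed_brokers):
    # True if every replica in the reassignment plan sits on one
    # of the allowed broker IDs.
    plan = json.loads(plan_json)
    return all(replica in allowed_brokers
               for part in plan["partitions"]
               for replica in part["replicas"])

plan = ('{"version":1,"partitions":['
        '{"topic":"event_request","partition":0,"replicas":[6,5]},'
        '{"topic":"event_request","partition":1,"replicas":[7,6]}]}')
print(plan_uses_only(plan, {5, 6, 7, 8}))  # True: only brokers 5-8 are used
```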

    2. Execute the reassignment (execute)

    Run the reassignment using the plan file topic-reassignment.json generated in step 1:

    kafka-reassign-partitions.sh --zookeeper $ZK_CONNECT --reassignment-json-file topic-reassignment.json --execute

    Partition distribution before execution:

    [hadoop@sdf-nimbus-perf topic_reassgin]$ le-kafka-topics.sh --describe --topic event_request
    Topic:event_request PartitionCount:2    ReplicationFactor:2 Configs:
        Topic: event_request    Partition: 0    Leader: 3   Replicas: 3,4   Isr: 3,4
        Topic: event_request    Partition: 1    Leader: 4   Replicas: 4,5   Isr: 4,5

    Partition distribution after issuing the execute command (while the reassignment is still in progress):

    [hadoop@sdf-nimbus-perf topic_reassgin]$ le-kafka-topics.sh --describe --topic event_request
    Topic:event_request PartitionCount:2    ReplicationFactor:4 Configs:
        Topic: event_request    Partition: 0    Leader: 3   Replicas: 6,5,3,4   Isr: 3,4
        Topic: event_request    Partition: 1    Leader: 4   Replicas: 7,6,4,5   Isr: 4,5
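    The four-entry Replicas lists above can be read as the proposed replicas followed by the original ones. A sketch of that merge (an illustration of the output format only, not Kafka's actual code):

```python
def transient_replicas(new, old):
    # During a reassignment the replica list shows the target
    # replicas first, then the original replicas still holding data.
    merged = list(new)
    merged += [b for b in old if b not in merged]
    return merged

print(transient_replicas([6, 5], [3, 4]))  # [6, 5, 3, 4]
print(transient_replicas([7, 6], [4, 5]))  # [7, 6, 4, 5]
```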

    3. Verify the reassignment status

    Check the reassignment status: still in progress

    [hadoop@sdf-nimbus-perf topic_reassgin]$ kafka-reassign-partitions.sh --zookeeper $ZK_CONNECT --reassignment-json-file topic-reassignment.json --verify
    Status of partition reassignment:
    Reassignment of partition [event_request,0] is still in progress
    Reassignment of partition [event_request,1] is still in progress
    [hadoop@sdf-nimbus-perf topic_reassgin]$ 

    Partition and replica state while the status is still “in progress”:

    Note that Replicas now lists four brokers: during the reassignment both the old and the new replicas serve the partition.

    [hadoop@sdf-nimbus-perf topic_reassgin]$ le-kafka-topics.sh --describe --topic event_request
    Topic:event_request PartitionCount:2    ReplicationFactor:4 Configs:
        Topic: event_request    Partition: 0    Leader: 3   Replicas: 6,5,3,4   Isr: 3,4
        Topic: event_request    Partition: 1    Leader: 4   Replicas: 7,6,4,5   Isr: 4,5

    Check the status again: reassignment completed.

    [hadoop@sdf-nimbus-perf topic_reassgin]$ kafka-reassign-partitions.sh --zookeeper $ZK_CONNECT --reassignment-json-file topic-reassignment.json --verify
    Status of partition reassignment:
    Reassignment of partition [event_request,0] completed successfully
    Reassignment of partition [event_request,1] completed successfully

    Partition and replica state once the status shows “completed successfully”:

    The partitions have been reassigned exactly as laid out in the generated plan.

    [hadoop@sdf-nimbus-perf topic_reassgin]$ le-kafka-topics.sh --describe --topic event_request
    Topic:event_request PartitionCount:2    ReplicationFactor:2 Configs:
        Topic: event_request    Partition: 0    Leader: 6   Replicas: 6,5   Isr: 6,5
        Topic: event_request    Partition: 1    Leader: 7   Replicas: 7,6   Isr: 6,7
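    When running the --verify command repeatedly (for example from a monitoring script), its output can be parsed to decide whether the reassignment has finished. A sketch that works on the sample output above (the function name is my own):

```python
def reassignment_done(verify_output):
    # Parse kafka-reassign-partitions.sh --verify output and report
    # whether every listed partition completed successfully.
    lines = [ln for ln in verify_output.splitlines()
             if ln.startswith("Reassignment of partition")]
    return bool(lines) and all("completed successfully" in ln for ln in lines)

done = """Status of partition reassignment:
Reassignment of partition [event_request,0] completed successfully
Reassignment of partition [event_request,1] completed successfully"""

print(reassignment_done(done))  # True
```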
  • Original post: https://www.cnblogs.com/mylovelulu/p/10283044.html