• Setting up a Kafka cluster on Linux


    Host IP addresses:

    Host IP          ZooKeeper   Kafka
    10.19.85.149     myid=1      broker.id=1
    10.19.15.103     myid=2      broker.id=2
    10.19.189.221    myid=3      broker.id=3
    Configuration file:
    # cat zoo.cfg
    tickTime=2000
    initLimit=10
    syncLimit=5
    dataDir=/data/zookeeper/data
    dataLogDir=/data/zookeeper/log
    clientPort=2181
    server.1=10.19.85.149:2888:3888
    server.2=10.19.15.103:2888:3888
    server.3=10.19.189.221:2888:3888
    #maxClientCnxns=60
    #autopurge.snapRetainCount=3
    #autopurge.purgeInterval=1
    Note: 2888 is the port peers use to communicate with the leader, and 3888 is the port used for leader election.

    Per the table above, write each node's myid:
    # on 10.19.85.149
    echo 1 > /data/zookeeper/data/myid

    # on 10.19.15.103
    echo 2 > /data/zookeeper/data/myid

    # on 10.19.189.221
    echo 3 > /data/zookeeper/data/myid
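    The three echo commands above can be folded into one sketch that picks the myid from the host's IP. The IP-to-id mapping comes from the cluster table; detecting the local IP with `hostname -I` is an assumption about the environment.

```shell
# Map a host IP to its ZooKeeper myid, per the cluster table above.
myid_for() {
  case "$1" in
    10.19.85.149)  echo 1 ;;
    10.19.15.103)  echo 2 ;;
    10.19.189.221) echo 3 ;;
    *) echo "unknown host $1" >&2; return 1 ;;
  esac
}

myid_for 10.19.15.103   # -> 2

# On a real node you would then run (sketch):
# myid_for "$(hostname -I | awk '{print $1}')" > /data/zookeeper/data/myid
```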

    Possible error message:
    Error contacting service. It is probably not running.
    This usually appears when status is checked before all nodes are up. Start the three nodes one after another:
    # zkServer.sh start
    ZooKeeper JMX enabled by default
    Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
    Starting zookeeper ... STARTED

    For error details, check zookeeper.out.

    # zkServer.sh status
    ZooKeeper JMX enabled by default
    Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
    Mode: leader

    # zkServer.sh status
    ZooKeeper JMX enabled by default
    Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
    Mode: follower

    -- Status of a standalone (single-node) instance:
    #zkServer.sh status
    ZooKeeper JMX enabled by default
    Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
    Mode: standalone
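    To check the role of all three nodes at a glance, a small helper can extract the Mode line from the `zkServer.sh status` output shown above. The ssh loop is a sketch and assumes passwordless ssh to the cluster hosts; the parsing itself is demonstrated on sample output.

```shell
# Extract the mode (leader/follower/standalone) from zkServer.sh status output.
zk_mode() { awk '/^Mode:/ {print $2}'; }

# Sketch for querying every node (assumes passwordless ssh):
# for h in 10.19.85.149 10.19.15.103 10.19.189.221; do
#   printf '%s: ' "$h"; ssh "$h" zkServer.sh status 2>/dev/null | zk_mode
# done

printf 'ZooKeeper JMX enabled by default\nMode: leader\n' | zk_mode   # -> leader
```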

    -- Stop ZooKeeper:
    # zkServer.sh stop

    -- Inspect ZooKeeper with the CLI client:
    zkCli.sh -server 127.0.0.1:2181

    Install Kafka:

    -- Configuration:
    # cd /usr/local/kafka/config
    # cat server.properties | grep -v '^#' | uniq | tr -s ' '

    broker.id=1
    host.name=10.19.85.149
    auto.create.topics.enable=true
    delete.topic.enable=true
    message.max.bytes=200000000
    replica.fetch.max.bytes=204857600
    fetch.message.max.bytes=204857600
    num.network.threads=3
    num.io.threads=8
    socket.send.buffer.bytes=102400
    socket.receive.buffer.bytes=102400
    socket.request.max.bytes=1048576000
    log.dirs=/data/kafka/log
    num.partitions=3
    num.recovery.threads.per.data.dir=1
    offsets.topic.replication.factor=1
    transaction.state.log.replication.factor=1
    transaction.state.log.min.isr=1
    log.retention.hours=168
    log.segment.bytes=1073741824
    log.retention.check.interval.ms=300000
    zookeeper.connect=10.19.85.149:2181,10.19.15.103:2181,10.19.189.221:2181
    zookeeper.connection.timeout.ms=6000
    group.initial.rebalance.delay.ms=0
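    The configuration above is for node 1; the other two nodes differ only in broker.id and host.name. A sketch of generating a per-host file from the shared config with sed (here a here-doc stands in for the real /usr/local/kafka/config/server.properties, and the ID/IP values are those of node 2):

```shell
# Rewrite broker.id and host.name for a specific node.
ID=2
IP=10.19.15.103
sed -e "s/^broker.id=.*/broker.id=$ID/" \
    -e "s/^host.name=.*/host.name=$IP/" <<'EOF'
broker.id=1
host.name=10.19.85.149
EOF
# Prints:
# broker.id=2
# host.name=10.19.15.103
```

    On a real node you would redirect the result into that host's server.properties.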

    -- Set environment variables:

    -- Start Kafka in the foreground (backgrounded with &):
    # /usr/local/kafka/bin/kafka-server-start.sh /usr/local/kafka/config/server.properties &
    -- Or start it in daemon mode (no trailing & needed with -daemon):
    # /usr/local/kafka/bin/kafka-server-start.sh -daemon /usr/local/kafka/config/server.properties
    Note: keep each broker's broker.id identical to that host's ZooKeeper myid, per the table above.


    -- After starting, verify the processes with jps:
    # jps
    22851 Kafka
    22884 Jps
    22151 QuorumPeerMain
    QuorumPeerMain is the ZooKeeper process; Kafka is the Kafka broker process.

    -- Common Kafka commands:

    Note: the commands below require the PATH set via:
    # cat /etc/profile.d/kafka.sh
    export PATH=$PATH:/usr/local/kafka/bin

    -- Stop Kafka:
    kafka-server-stop.sh
    -- Start Kafka:
    kafka-server-start.sh -daemon /usr/local/kafka/config/server.properties

    -- Create a topic:
    Description: create a topic named topic_tidb with a replication factor of 2 and 3 partitions.
    kafka-topics.sh --create --zookeeper 10.19.85.149,10.19.15.103,10.19.189.221 --replication-factor 2 --partitions 3 --topic topic_tidb
    WARNING: Due to limitations in metric names, topics with a period ('.') or underscore ('_') could collide. To avoid issues it is best to use either, but not both.
    Created topic "topic_tidb".
    -- Verify that the topic was created:
    kafka-topics.sh --list --zookeeper 10.19.85.149,10.19.15.103,10.19.189.221

    topic_tidb
    -- Show the details of topic_tidb:

    # kafka-topics.sh --describe --zookeeper 10.19.85.149,10.19.15.103,10.19.189.221 --topic topic_tidb
    Topic: topic_tidb   PartitionCount: 3   ReplicationFactor: 2   Configs:
        Topic: topic_tidb   Partition: 0   Leader: 2   Replicas: 2,3   Isr: 2,3
        Topic: topic_tidb   Partition: 1   Leader: 3   Replicas: 3,1   Isr: 3,1
        Topic: topic_tidb   Partition: 2   Leader: 1   Replicas: 1,2   Isr: 1,2
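    For larger topics, the describe output can be summarized to see how partition leadership is spread across brokers. A small awk sketch, run here on sample lines shaped like the output above:

```shell
# Count how many partitions each broker leads in `--describe` output.
parse_leaders() {
  awk '{for (i = 1; i <= NF; i++) if ($i == "Leader:") n[$(i+1)]++}
       END {for (b in n) print "broker " b " leads " n[b] " partition(s)"}' | sort
}

parse_leaders <<'EOF'
Topic: topic_tidb   Partition: 0   Leader: 2   Replicas: 2,3   Isr: 2,3
Topic: topic_tidb   Partition: 1   Leader: 3   Replicas: 3,1   Isr: 3,1
Topic: topic_tidb   Partition: 2   Leader: 1   Replicas: 1,2   Isr: 1,2
EOF
# Prints:
# broker 1 leads 1 partition(s)
# broker 2 leads 1 partition(s)
# broker 3 leads 1 partition(s)
```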
    -- Delete a topic:
    # kafka-topics.sh --delete --zookeeper 10.19.85.149,10.19.15.103,10.19.189.221 --topic topic_tidb

    -- Start a console producer and send messages:
    kafka-console-producer.sh --broker-list 10.19.85.149:9092,10.19.15.103:9092,10.19.189.221:9092 --topic topic_tidb
    >
    >wuhan
    >I Love Java!


    -- Start a console consumer to receive the messages:
    kafka-console-consumer.sh --zookeeper 10.19.85.149:2181,10.19.15.103:2181,10.19.189.221:2181 --from-beginning --topic topic_tidb

    Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].

    wuhan
    I Love Java!
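    Per the deprecation warning above, newer console-consumer versions read from the brokers directly via --bootstrap-server instead of --zookeeper. A non-interactive round trip might look like the sketch below; only the message preparation actually runs here, the producer/consumer lines are shown as comments because they need the live cluster.

```shell
# Prepare the messages to send (same two messages as above).
printf 'wuhan\nI Love Java!\n' > /tmp/msgs.txt

# Sketch: pipe them into the producer, then read them back with the
# newer --bootstrap-server consumer:
# kafka-console-producer.sh --broker-list 10.19.85.149:9092,10.19.15.103:9092,10.19.189.221:9092 \
#     --topic topic_tidb < /tmp/msgs.txt
# kafka-console-consumer.sh --bootstrap-server 10.19.85.149:9092 \
#     --from-beginning --topic topic_tidb

cat /tmp/msgs.txt
```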

  • Original article: https://www.cnblogs.com/a-du/p/11691710.html