• ZooKeeper + Kafka cluster configuration


    Official documentation

    https://zookeeper.apache.org/doc/r3.5.6/zookeeperStarted.html#sc_Prerequisites

    https://www.cnblogs.com/luotianshuai/p/5206662.html
    https://www.cnblogs.com/kevingrace/p/9021508.html

    https://www.cnblogs.com/zgqbky/p/11835780.html 国强

    1. Download the required packages

    wget https://mirrors.tuna.tsinghua.edu.cn/apache/zookeeper/zookeeper-3.4.14/zookeeper-3.4.14.tar.gz

    wget http://mirrors.shu.edu.cn/apache/kafka/1.1.0/kafka_2.11-1.1.0.tgz

    http://kafka.apache.org/downloads

    Older versions can be downloaded from the Kafka downloads page above.

    2. Prepare the ZooKeeper directories on the three nodes: 192.168.1.151, 192.168.1.152, 192.168.1.153. Note: watch the IP addresses and version numbers (the commands below use 192.168.120.x and ZooKeeper 3.4.10).

    mkdir /opt/zookeeper
    mkdir /opt/zookeeper/zkdata
    mkdir /opt/zookeeper/zkdatalog
    cp /root/zookeeper-3.4.10.tar.gz /opt/
    cd /opt/
    tar -zxvf zookeeper-3.4.10.tar.gz -C /opt/zookeeper
    cp /opt/zookeeper/zookeeper-3.4.10/conf/zoo_sample.cfg /opt/zookeeper/zookeeper-3.4.10/conf/zoo.cfg # create zoo.cfg from the bundled sample config
    vi /opt/zookeeper/zookeeper-3.4.10/conf/zoo.cfg   # view the effective settings with: cat /opt/zookeeper/zookeeper-3.4.10/conf/zoo.cfg | egrep -v "^$|^#"
    tickTime=2000
    initLimit=10
    syncLimit=5
    dataDir=/opt/zookeeper/zkdata       # modify: snapshot/data directory
    dataLogDir=/opt/zookeeper/zkdatalog # add: transaction log directory
    clientPort=2181
    server.1=192.168.120.81:2888:3888 # add
    server.2=192.168.120.82:2888:3888 # add
    server.3=192.168.120.83:2888:3888 # add
    # In server.1, the "1" is the server ID (any number can be used); it marks which node this is, and the same ID must be written into the myid file under the snapshot directory (dataDir).
    # 192.168.120.83 is that node's IP address in the cluster; the first port (default 2888) is used for communication between the leader (master) and followers (slaves), and the second port (default 3888) is used for leader election, both when the cluster first starts and when the leader fails and a new election is held.
     
    
    

    Note
    If a separate log location (dataLogDir) is specified, the following parameter in zkServer.sh also needs to be changed.
    cp /opt/zookeeper/zookeeper-3.4.10/bin/zkServer.sh /opt/zookeeper/zookeeper-3.4.10/bin/zkServer.sh.bak

    vi /opt/zookeeper/zookeeper-3.4.10/bin/zkServer.sh

    
    
    ZOO_LOG_DIR="$($GREP "^[[:space:]]*dataLogDir" "$ZOOCFG" | sed -e 's/.*=//')"  # add this line (around line 125)
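     
    A quick way to confirm both changes took effect (plain grep, nothing version-specific assumed):

    grep -n "dataLogDir" /opt/zookeeper/zookeeper-3.4.10/conf/zoo.cfg     # the dataLogDir setting added above
    grep -n "dataLogDir" /opt/zookeeper/zookeeper-3.4.10/bin/zkServer.sh  # the ZOO_LOG_DIR line added above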
     
    3. Copy the configured files to the other hosts
    
    
    scp -r /opt/zookeeper 192.168.120.82:/opt/
    scp -r /opt/zookeeper 192.168.120.83:/opt/
    
    
    # on server 1
    echo "1" > /opt/zookeeper/zkdata/myid
    # on server 2
    echo "2" > /opt/zookeeper/zkdata/myid
    # on server 3
    echo "3" > /opt/zookeeper/zkdata/myid
    
    
    
    
    
    /opt/zookeeper/zookeeper-3.4.10/bin/zkServer.sh start  # start the service
    /opt/zookeeper/zookeeper-3.4.10/bin/zkServer.sh status # check status
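
    Once all three nodes are started, zkServer.sh status should report Mode: leader on one node and Mode: follower on the other two. If nc (netcat) is installed and the four-letter-word commands are enabled on the client port (they generally are by default in 3.4.x), a remote sketch of the same check:

    echo ruok | nc 192.168.120.81 2181               # a healthy node answers "imok"
    echo stat | nc 192.168.120.81 2181 | grep Mode   # shows leader or follower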
    
    
    Prepare the Kafka cluster directories
    
    
    mkdir /opt/kafka
    mkdir /opt/kafka/kafkalogs

    cd /opt/kafka
    wget http://mirrors.shu.edu.cn/apache/kafka/1.1.0/kafka_2.11-1.1.0.tgz
    tar -zxvf kafka_2.11-1.1.0.tgz
    
    

    vi /opt/kafka/kafka_2.11-1.1.0/config/server.properties

    
    
    listeners=PLAINTEXT://192.168.120.81:9092
    log.dirs=/opt/kafka/kafkalogs/
    zookeeper.connect=192.168.120.81:2181,192.168.120.82:2181,192.168.120.83:2181
    
    
    
    
    

    [root@81server ~]# cat /opt/kafka/kafka_2.11-1.1.0/config/server.properties | egrep -v "^$|^#"  # view all effective parameters. Note: this listing is for reference only; the file actually edited is the one shown above.

     
    broker.id=0
    listeners=PLAINTEXT://192.168.120.81:9092
    num.network.threads=3
    num.io.threads=8
    socket.send.buffer.bytes=102400
    socket.receive.buffer.bytes=102400
    socket.request.max.bytes=104857600
    log.dirs=/opt/kafka/kafkalogs/
    num.partitions=1
    num.recovery.threads.per.data.dir=1
    offsets.topic.replication.factor=1
    transaction.state.log.replication.factor=1
    transaction.state.log.min.isr=1
    log.flush.interval.messages=10000
    log.flush.interval.ms=1000
    log.retention.hours=168
    log.retention.bytes=1073741824
    log.segment.bytes=1073741824
    log.retention.check.interval.ms=300000
    zookeeper.connect=192.168.120.81:2181,192.168.120.82:2181,192.168.120.83:2181
    zookeeper.connection.timeout.ms=6000
    group.initial.rebalance.delay.ms=0


    Copy the configured files and the program to the other nodes

    
    
    scp -r /opt/kafka 192.168.120.82:/opt/
    scp -r /opt/kafka 192.168.120.83:/opt/
    
    

    On the other nodes, only the following two settings need to be changed
    vi /opt/kafka/kafka_2.11-1.1.0/config/server.properties

    
    
    broker.id=1 # must not be repeated; unique on each node
    ......
    listeners=PLAINTEXT://192.168.120.82:9092
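
    As an alternative to editing by hand, the two per-node values can be patched with sed; a sketch for the 192.168.120.82 node (adjust broker.id and IP accordingly on 192.168.120.83):

    cd /opt/kafka/kafka_2.11-1.1.0/config
    sed -i 's/^broker.id=.*/broker.id=1/' server.properties
    sed -i 's#^listeners=.*#listeners=PLAINTEXT://192.168.120.82:9092#' server.properties
    egrep "^broker.id|^listeners" server.properties   # verify the two changes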
    
    

    Start the Kafka service on all three nodes

    
    
    nohup /opt/kafka/kafka_2.11-1.1.0/bin/kafka-server-start.sh /opt/kafka/kafka_2.11-1.1.0/config/server.properties >/dev/null 2>&1 &
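
    A quick sanity check that each broker came up (the log path below is Kafka's default logs/ directory under the install dir):

    ss -lntp | grep 9092                                    # the broker should be listening on port 9092
    tail -n 20 /opt/kafka/kafka_2.11-1.1.0/logs/server.log  # look for a "started" line from kafka.server.KafkaServer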
     

    Run the following on any one of the nodes

    /opt/kafka/kafka_2.11-1.1.0/bin/kafka-topics.sh --create --zookeeper 192.168.120.81:2181,192.168.120.82:2181,192.168.120.83:2181 --replication-factor 1 --partitions 1 --topic test
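
    Note that --replication-factor 1 keeps only a single copy of the test topic; on a three-node cluster a replication factor of up to 3 can be used for fault tolerance.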
    
    /opt/kafka/kafka_2.11-1.1.0/bin/kafka-topics.sh --list --zookeeper 192.168.120.81:2181,192.168.120.82:2181,192.168.120.83:2181 # list the topic just created

    Check the topic status

    /opt/kafka/kafka_2.11-1.1.0/bin/kafka-topics.sh --describe --zookeeper 192.168.120.81:2181,192.168.120.82:2181,192.168.120.83:2181 
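
    To confirm the cluster end to end, a message can be produced on one node and consumed on another; a sketch using the console tools shipped with Kafka 1.1.0 (hosts and topic are the values used above):

    # On one node: type a few lines, then Ctrl+C
    /opt/kafka/kafka_2.11-1.1.0/bin/kafka-console-producer.sh --broker-list 192.168.120.81:9092 --topic test

    # On another node: the lines typed above should appear
    /opt/kafka/kafka_2.11-1.1.0/bin/kafka-console-consumer.sh --bootstrap-server 192.168.120.82:9092 --topic test --from-beginning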

    Query Kafka consumer groups

    
    
    /opt/kafka/kafka_2.11-1.1.0/bin/kafka-consumer-groups.sh --bootstrap-server 192.168.1.151:9092,192.168.1.152:9092,192.168.1.153:9092 --list   # list all consumer group IDs

    /opt/kafka/kafka_2.11-1.1.0/bin/kafka-consumer-groups.sh --bootstrap-server 192.168.1.151:9092,192.168.1.152:9092,192.168.1.153:9092 --group orderdy030_refund --describe
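
    For a consumer group, --describe reports, among other things, the current offset, log-end offset and lag per partition, which is the usual way to check whether a group such as orderdy030_refund above is keeping up.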
    
    
    
    
    
    
    
    

    vi zookeeper.sh

    
    
    /opt/zookeeper/zookeeper-3.4.10/bin/zkServer.sh start
    
    

    vi kafka.sh

    
    
    nohup /opt/kafka/kafka_2.11-1.1.0/bin/kafka-server-start.sh /opt/kafka/kafka_2.11-1.1.0/config/server.properties >/dev/null 2>&1 &
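
    The two helper scripts above are bare commands; a slightly fuller sketch that starts both services on a node (paths as above; the ordering and the sleep are assumptions, not from the original):

    #!/bin/bash
    # start ZooKeeper first, then the Kafka broker, on this node
    /opt/zookeeper/zookeeper-3.4.10/bin/zkServer.sh start
    sleep 5   # give ZooKeeper a moment before the broker connects
    nohup /opt/kafka/kafka_2.11-1.1.0/bin/kafka-server-start.sh /opt/kafka/kafka_2.11-1.1.0/config/server.properties >/dev/null 2>&1 &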
    
    
    
    
    
    
    
    
    /opt/zookeeper/zookeeper-3.4.10/bin/zkCli.sh -server 127.0.0.1:2181
    
    ls /
    get /brokers/ids/0

    get /brokers/topics/test/partitions/0
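
    Two related checks inside the same zkCli session (standard Kafka paths in ZooKeeper):

    ls /brokers/ids      # expect [0, 1, 2] once all three brokers have registered
    ls /brokers/topics   # topics known to the cluster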

     
     
     
     


     
• Original article: https://www.cnblogs.com/wwtao/p/11836250.html