• Building ZooKeeper and Kafka clusters with Docker


    ZooKeeper cluster setup

    https://www.cnblogs.com/znicy/p/7717426.html     #Building a ZooKeeper cluster in Docker, author: zni.feng

    https://www.cnblogs.com/luotianshuai/p/5206662.html  #Parameter reference for a bare-metal setup

    -------------------------------------------------------------------------------------------

    docker inspect docker.io/zookeeper

    "Cmd": [
        "zkServer.sh",
        "start-foreground"
    ],
    "Volumes": {
        "/data": {},
        "/datalog": {}
    },
    "WorkingDir": "/zookeeper-3.4.10",
    "Entrypoint": [
        "/docker-entrypoint.sh"
    ],                                   #The image metadata shows the container runs this script on startup
    The script's contents: https://www.cnblogs.com/znicy/p/7717426.html 
    Based on the above, here is a script to create and start the ZooKeeper cluster containers. It uses a user-defined container network, since the ZooKeeper nodes only need to communicate with each other internally; using the host network produced errors (cause not yet found).
    #!/bin/bash
    #Get zookeeper image
    zkimage=`docker images | grep zookeeper | awk '{print $1}'`
    if [ -n "$zkimage" ]
    then
        echo 'The zookeeper image already exists.'
    else
        echo 'Pull the latest zookeeper image.'
        docker pull zookeeper
    fi
    
    #Create network for zookeeper containers
    zknet=`docker network ls | grep yapi_net | awk '{print $2}'`
    if [ -n "$zknet" ]
    then
        echo 'The zknetwork already exists.'
    else
        echo 'Create zknetwork.'
        docker network create --subnet 172.30.0.0/16 yapi_net
    fi
    
    #Start zookeeper cluster
    echo 'Start 3 zookeeper servers.'
    rm -rf   /opt/zookeeper_1/data  /opt/zookeeper_1/datalog  /var/log/zookeeper_1/log
    rm -rf   /opt/zookeeper_2/data  /opt/zookeeper_2/datalog  /var/log/zookeeper_2/log
    rm -rf   /opt/zookeeper_3/data  /opt/zookeeper_3/datalog  /var/log/zookeeper_3/log
    
    mkdir -p  /opt/zookeeper_1/data  /opt/zookeeper_1/datalog  /var/log/zookeeper_1/log
    mkdir -p  /opt/zookeeper_2/data  /opt/zookeeper_2/datalog  /var/log/zookeeper_2/log
    mkdir -p  /opt/zookeeper_3/data  /opt/zookeeper_3/datalog  /var/log/zookeeper_3/log
    ZOO_SERVERS="server.1=zookeeper_1:2888:3888 server.2=zookeeper_2:2888:3888 server.3=zookeeper_3:2888:3888"
    
    docker run --network yapi_net --ip 172.30.0.31 -d --restart always -v /opt/zookeeper_1/data:/data -v /opt/zookeeper_1/datalog:/datalog -v /var/log/zookeeper_1/log:/logs  -e ZOO_SERVERS="$ZOO_SERVERS" -e ZOO_MY_ID=1 --name zookeeper_1 -p 2182:2181 docker.io/zookeeper
    docker run --network yapi_net --ip 172.30.0.32 -d --restart always -v /opt/zookeeper_2/data:/data -v /opt/zookeeper_2/datalog:/datalog -v /var/log/zookeeper_2/log:/logs  -e ZOO_SERVERS="$ZOO_SERVERS" -e ZOO_MY_ID=2 --name zookeeper_2 -p 2183:2181 docker.io/zookeeper
    docker run --network yapi_net --ip 172.30.0.33 -d --restart always -v /opt/zookeeper_3/data:/data -v /opt/zookeeper_3/datalog:/datalog -v /var/log/zookeeper_3/log:/logs  -e ZOO_SERVERS="$ZOO_SERVERS" -e ZOO_MY_ID=3 --name zookeeper_3 -p 2184:2181 docker.io/zookeeper
    
    #docker run --network host -d --restart always -v /opt/zookeeper_1/data:/data -v /opt/zookeeper_1/datalog:/datalog -v /var/log/zookeeper_1/log:/logs  -e ZOO_SERVERS="$ZOO_SERVERS" -e ZOO_MY_ID=1 --name zookeeper_1 -p 2182:2181 docker.io/zookeeper 
    #docker run --network host -d --restart always -v /opt/zookeeper_2/data:/data -v /opt/zookeeper_2/datalog:/datalog -v /var/log/zookeeper_2/log:/logs  -e ZOO_SERVERS="$ZOO_SERVERS" -e ZOO_MY_ID=2 --name zookeeper_2 -p 2183:2181 docker.io/zookeeper 
    #docker run --network host -d --restart always -v /opt/zookeeper_3/data:/data -v /opt/zookeeper_3/datalog:/datalog -v /var/log/zookeeper_3/log:/logs  -e ZOO_SERVERS="$ZOO_SERVERS" -e ZOO_MY_ID=3 --name zookeeper_3 -p 2184:2181 docker.io/zookeeper 

    Run the script and you're done.
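    A quick way to check that the ensemble actually formed (a sketch; it assumes the container names from the script above, and relies on zkServer.sh being on PATH in the official 3.4.10 image, which the inspect output above suggests):

```shell
# One node should report "Mode: leader", the other two "Mode: follower"
for i in 1 2 3; do
    echo "--- zookeeper_$i ---"
    docker exec zookeeper_$i zkServer.sh status
done
```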

    Notes

    1. (When verifying with `telnet ip port`, the connection was refused. The cause was an incomplete config file, so the script's environment variables must be adjusted; the Entrypoint script shows how the variables map to settings.) The complete config file is:

    clientPort=2181
    dataDir=/data
    dataLogDir=/datalog
    tickTime=2000
    initLimit=5
    syncLimit=2
    autopurge.snapRetainCount=3
    autopurge.purgeInterval=0
    maxClientCnxns=60
    server.1=zookeeper_1:2888:3888
    server.2=zookeeper_2:2888:3888
    server.3=zookeeper_3:2888:3888
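    Once the config is complete, the client ports can be probed with ZooKeeper's four-letter commands instead of a bare telnet; `srvr` prints each node's mode. A sketch using the host-mapped ports from the run script above:

```shell
# "Mode: leader" / "Mode: follower" in the output means the node joined the quorum
for port in 2182 2183 2184; do
    echo srvr | nc 127.0.0.1 $port
done
```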

     2. Standalone (non-cluster) configuration file

    clientPort=2181
    dataDir=/data
    dataLogDir=/datalog
    tickTime=2000
    initLimit=5
    syncLimit=2
    autopurge.snapRetainCount=3
    autopurge.purgeInterval=0
    maxClientCnxns=60

    Kafka cluster

    Configuration notes:

    Theory

    Number of partitions in Kafka: reference 1  reference 2

      Advantage of more partitions: in theory, the more partitions, the greater the total throughput the cluster can achieve.

      Partitions are not "the more the better": 1) clients and brokers consume more memory; 2) file-handle overhead (ulimit -n); 3) lower availability, since partition-replica leader elections must take place.

      Number of partitions = Tt / max(Tp, Tc). (Create a topic with a single partition and benchmark its producer and consumer throughput; call these Tp and Tc, e.g. in MB/s. Tt is the total target throughput.)

    Ideally, the number of consumer threads = the number of partitions.
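    As a worked example of the formula (the throughput numbers are hypothetical, not measurements from this setup): with Tp = 20 MB/s, Tc = 50 MB/s and a target of Tt = 200 MB/s, you need Tt / max(Tp, Tc) = 200 / 50 = 4 partitions. A sketch in shell:

```shell
#!/bin/bash
# Hypothetical single-partition benchmark results, in MB/s
Tp=20    # producer throughput
Tc=50    # consumer throughput
Tt=200   # total target throughput

# max(Tp, Tc)
m=$(( Tp > Tc ? Tp : Tc ))
# Round up: a fractional result still needs one whole extra partition
partitions=$(( (Tt + m - 1) / m ))
echo "partitions needed: $partitions"   # -> partitions needed: 4
```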

    The difference between listeners and advertised.listeners: https://www.colabug.com/6020170.html    http://www.devtalking.com/articles/kafka-practice-16/
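    The distinction in short: listeners is what the broker binds to; advertised.listeners is what it tells clients to connect to. A minimal server.properties fragment (addresses are placeholders matching this document's host-network setup):

```properties
# Bind on all interfaces inside the container
listeners=PLAINTEXT://0.0.0.0:9092
# Clients are told to connect to the host's address, not a container-internal IP
advertised.listeners=PLAINTEXT://192.168.0.128:9092
```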

     In-depth analysis of Kafka internals 1: storage + throughput       detailed reference 1         detailed reference 2

    Kafka configuration explained: config reference 1    config reference 2

    Detailed config reference 3 (official)

    Config reference 4

     Based on the startup script above, the following script was written. Kafka uses the host network; otherwise other machines cannot send messages, because the advertised IP would be the container-internal one.

    #!/bin/bash
    #Get kafka image
    kfkimage=`docker images | grep 'docker.io/wurstmeister/kafka' | awk '{print $1}'`
    if [ -n "$kfkimage" ]
    then
        echo 'The docker.io/wurstmeister/kafka image already exists.'
    else
        echo 'Pull the image.'
        docker pull docker.io/wurstmeister/kafka
    fi
    
    #Create network for kafka containers
    kfknet=`docker network ls | grep yapi_net | awk '{print $2}'`
    if [ -n "$kfknet" ]
    then
        echo 'The kfknetwork already exists.'
    else
        echo 'Create kfknetwork.'
        docker network create --subnet 172.30.0.0/16 yapi_net
    fi
    
    #Start the 3-node kafka cluster
    echo 'Start 3 kafka servers.'
    
    rm -rf  /opt/kafka_1/logdata
    rm -rf  /opt/kafka_2/logdata
    rm -rf  /opt/kafka_3/logdata
    
    mkdir -p  /opt/kafka_1/logdata
    mkdir -p  /opt/kafka_2/logdata
    mkdir -p  /opt/kafka_3/logdata
    
    #kafka ip
    kfk_1_ip='172.30.0.41'
    kfk_2_ip='172.30.0.42'
    kfk_3_ip='172.30.0.43'
    zk_jiqun_ip='172.30.0.31:2181'
    #zk_jiqun_ip='172.30.0.31:2181,172.30.0.32:2181,172.30.0.33:2181'
    #zk_jiqun_ip='192.168.0.128:2181'
    
    #docker run --restart always -d --name kafka_1 --network yapi_net --ip ${kfk_1_ip} -e KAFKA_ZOOKEEPER_CONNECT=${zk_jiqun_ip} -e KAFKA_LISTENERS='PLAINTEXT://0.0.0.0:9092' -e KAFKA_BROKER_ID='41' -e KAFKA_ADVERTISED_LISTENERS='PLAINTEXT://192.168.181.130:9092' -v /opt/kafka_1/logdata:/kafka -p 9092:9092  docker.io/wurstmeister/kafka
    #docker run --restart always -d --name kafka_2 --network yapi_net --ip ${kfk_2_ip} -e KAFKA_ZOOKEEPER_CONNECT=${zk_jiqun_ip} -e KAFKA_LISTENERS='PLAINTEXT://0.0.0.0:9093' -e KAFKA_BROKER_ID='42' -e KAFKA_ADVERTISED_LISTENERS='PLAINTEXT://192.168.181.130:9093' -v /opt/kafka_2/logdata:/kafka -p 9093:9092  docker.io/wurstmeister/kafka
    #docker run --restart always -d --name kafka_3 --network yapi_net --ip ${kfk_3_ip} -e KAFKA_ZOOKEEPER_CONNECT=${zk_jiqun_ip} -e KAFKA_LISTENERS='PLAINTEXT://0.0.0.0:9094' -e KAFKA_BROKER_ID='43' -e KAFKA_ADVERTISED_LISTENERS='PLAINTEXT://192.168.181.130:9094' -v /opt/kafka_3/logdata:/kafka -p 9094:9092  docker.io/wurstmeister/kafka
    
    #docker run --restart always -d --name kafka_1 --network yapi_net --ip ${kfk_1_ip} -e KAFKA_ZOOKEEPER_CONNECT=${zk_jiqun_ip} -e KAFKA_LISTENERS='PLAINTEXT://0.0.0.0:9092' -e KAFKA_BROKER_ID='41' -e KAFKA_ADVERTISED_LISTENERS='PLAINTEXT://172.30.0.41:9092' -v /opt/kafka_1/logdata:/kafka -p 9092:9092  05cef8845b3d 
    #docker run --restart always -d --name kafka_2 --network yapi_net --ip ${kfk_2_ip} -e KAFKA_ZOOKEEPER_CONNECT=${zk_jiqun_ip} -e KAFKA_LISTENERS='PLAINTEXT://0.0.0.0:9093' -e KAFKA_BROKER_ID='42' -e KAFKA_ADVERTISED_LISTENERS='PLAINTEXT://172.30.0.42:9092' -v /opt/kafka_2/logdata:/kafka -p 9093:9092  05cef8845b3d
    #docker run --restart always -d --name kafka_3 --network yapi_net --ip ${kfk_3_ip} -e KAFKA_ZOOKEEPER_CONNECT=${zk_jiqun_ip} -e KAFKA_LISTENERS='PLAINTEXT://0.0.0.0:9094' -e KAFKA_BROKER_ID='43' -e KAFKA_ADVERTISED_LISTENERS='PLAINTEXT://172.30.0.43:9092' -v /opt/kafka_3/logdata:/kafka -p 9094:9092  05cef8845b3d
    
    #docker run --restart always -d --name kafka_1 --network yapi_net --ip ${kfk_1_ip} -e KAFKA_ZOOKEEPER_CONNECT=${zk_jiqun_ip} -e KAFKA_LISTENERS='PLAINTEXT://0.0.0.0:9092' -e KAFKA_BROKER_ID='41' -e KAFKA_ADVERTISED_LISTENERS='PLAINTEXT://'${kfk_1_ip}':9092' -v /opt/kafka_1/logdata:/kafka -p 9092:9092  docker.io/wurstmeister/kafka
    #docker run --restart always -d --name kafka_2 --network yapi_net --ip ${kfk_2_ip} -e KAFKA_ZOOKEEPER_CONNECT=${zk_jiqun_ip} -e KAFKA_LISTENERS='PLAINTEXT://0.0.0.0:9092' -e KAFKA_BROKER_ID='42' -e KAFKA_ADVERTISED_LISTENERS='PLAINTEXT://'${kfk_2_ip}':9092' -v /opt/kafka_2/logdata:/kafka -p 9093:9092  docker.io/wurstmeister/kafka
    #docker run --restart always -d --name kafka_3 --network yapi_net --ip ${kfk_3_ip} -e KAFKA_ZOOKEEPER_CONNECT=${zk_jiqun_ip} -e KAFKA_LISTENERS='PLAINTEXT://0.0.0.0:9092' -e KAFKA_BROKER_ID='43' -e KAFKA_ADVERTISED_LISTENERS='PLAINTEXT://'${kfk_3_ip}':9092' -v /opt/kafka_3/logdata:/kafka -p 9094:9092  docker.io/wurstmeister/kafka
    
    #Note: with --network host the -p mappings are ignored; the brokers bind 9092-9094 directly on the host
    docker run --restart always -d --name kafka_1 --network host -e KAFKA_ZOOKEEPER_CONNECT=${zk_jiqun_ip} -e KAFKA_LISTENERS='PLAINTEXT://0.0.0.0:9092' -e KAFKA_BROKER_ID='41' -e KAFKA_ADVERTISED_LISTENERS='PLAINTEXT://192.168.0.128:9092' -v /opt/kafka_1/logdata:/kafka -p 9092:9092  docker.io/wurstmeister/kafka
    docker run --restart always -d --name kafka_2 --network host -e KAFKA_ZOOKEEPER_CONNECT=${zk_jiqun_ip} -e KAFKA_LISTENERS='PLAINTEXT://0.0.0.0:9093' -e KAFKA_BROKER_ID='42' -e KAFKA_ADVERTISED_LISTENERS='PLAINTEXT://192.168.0.128:9093' -v /opt/kafka_2/logdata:/kafka -p 9093:9092  docker.io/wurstmeister/kafka
    docker run --restart always -d --name kafka_3 --network host -e KAFKA_ZOOKEEPER_CONNECT=${zk_jiqun_ip} -e KAFKA_LISTENERS='PLAINTEXT://0.0.0.0:9094' -e KAFKA_BROKER_ID='43' -e KAFKA_ADVERTISED_LISTENERS='PLAINTEXT://192.168.0.128:9094' -v /opt/kafka_3/logdata:/kafka -p 9094:9092  docker.io/wurstmeister/kafka

    Run the script and you're done.
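    A smoke test for the cluster, using the console scripts bundled in the wurstmeister image (a sketch; it assumes the host IP 192.168.0.128 and the container names and ZooKeeper address from the script above, and a hypothetical topic name smoke_test):

```shell
# Create a 3-partition, 3-replica topic across the brokers
docker exec kafka_1 kafka-topics.sh --create --zookeeper 172.30.0.31:2181 \
    --replication-factor 3 --partitions 3 --topic smoke_test

# Produce one message, then read it back
echo hello | docker exec -i kafka_1 kafka-console-producer.sh \
    --broker-list 192.168.0.128:9092 --topic smoke_test
docker exec kafka_1 kafka-console-consumer.sh \
    --bootstrap-server 192.168.0.128:9092 --topic smoke_test \
    --from-beginning --max-messages 1
```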

    Kafka stress testing        Kafka produce/consume testing
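    As a starting point for the stress test, the producer perf tool shipped with Kafka can be run inside a broker container (a sketch; the topic name and record counts are placeholders, and the topic must already exist or topic auto-creation must be enabled):

```shell
# Push 100k 1KB records as fast as possible and report MB/s and latency
docker exec kafka_1 kafka-producer-perf-test.sh \
    --topic perf_test --num-records 100000 --record-size 1000 \
    --throughput -1 --producer-props bootstrap.servers=192.168.0.128:9092
```

The reported MB/s can be plugged in as Tp in the partition-count formula above.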

  • Original article: https://www.cnblogs.com/fanever/p/10769784.html