• Deploying and installing a Kafka cluster


    First, here is the environment we are working with:
    10.19.18.88 zk1
    10.19.16.84 zk2
    10.19.11.44 zk3
    

    The company needs Kafka here so that Zipkin can trace call chains.

    Kafka: http://kafka.apache.org/

    Zipkin (environment variables): https://github.com/openzipkin/zipkin/tree/master/zipkin-server#environment-variables

    kafka-manager: https://github.com/yahoo/kafka-manager
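
    Since the whole point of this cluster is to feed Zipkin, note that zipkin-server picks up its Kafka collector settings from environment variables (see the link above). A minimal sketch, assuming a zipkin build with the Kafka 0.10+ collector and assuming all three brokers listen on port 1092 as configured below:

    # point zipkin-server at the Kafka cluster (sketch; check the env var list in the zipkin docs for your version)
    KAFKA_BOOTSTRAP_SERVERS=10.19.18.88:1092,10.19.16.84:1092,10.19.11.44:1092 java -jar zipkin.jar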

    First we download the Kafka release. The downloads page is http://kafka.apache.org/downloads; we use https://archive.apache.org/dist/kafka/0.11.0.0/kafka_2.11-0.11.0.0.tgz
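
    A sketch of fetching and unpacking it into the directory layout used later in this post (/data/package/kafka matches the log.dirs and script paths below):

    cd /data/package
    wget https://archive.apache.org/dist/kafka/0.11.0.0/kafka_2.11-0.11.0.0.tgz
    tar -zxf kafka_2.11-0.11.0.0.tgz
    # rename so the paths used below (/data/package/kafka/...) line up
    mv kafka_2.11-0.11.0.0 kafka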


    After downloading we simply extract the package and use it. ZooKeeper is already configured in our environment,

    so there is no need to touch kafka/config/zookeeper.properties.

    Instead we edit Kafka's own config file, config/server.properties, directly:

    # The id of the broker. This must be set to a unique integer for each broker.  
    broker.id=1
    ############################# Socket Server Settings #############################  
    listeners=PLAINTEXT://10.19.18.88:1092
    port=1092
    host.name=10.19.18.88
    # The number of threads handling network requests  
    num.network.threads=8
    # The number of threads doing disk I/O  
    num.io.threads=8
    # The send buffer (SO_SNDBUF) used by the socket server  
    socket.send.buffer.bytes=1048576
    # The receive buffer (SO_RCVBUF) used by the socket server  
    socket.receive.buffer.bytes=1048576
    # The maximum size of a request that the socket server will accept (protection against OOM)  
    socket.request.max.bytes=104857600
    # The number of queued requests allowed before blocking the network threads  
    queued.max.requests=100
    # The purge interval (in number of requests) of the fetch request purgatory  
    fetch.purgatory.purge.interval.requests=200
    # The purge interval (in number of requests) of the producer request purgatory  
    producer.purgatory.purge.interval.requests=200
      
    ############################# Log Basics #############################  
    # A comma separated list of directories under which to store log files  
    log.dirs=/data/package/kafka/kafka-logs
    # The default number of log partitions per topic.   
    num.partitions=24
    # The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.  
    num.recovery.threads.per.data.dir=2
    # The maximum size of message that the server can receive  
    message.max.bytes=1000000
    # Enable auto creation of topic on the server  
    auto.create.topics.enable=true
    # The interval with which we add an entry to the offset index  
    log.index.interval.bytes=4096
    # The maximum size in bytes of the offset index  
    log.index.size.max.bytes=10485760
    # Allow to delete topics
    delete.topic.enable=true
    ############################# Log Flush Policy #############################  
    # The number of messages to accept before forcing a flush of data to disk  
    log.flush.interval.messages=20000
    # The maximum amount of time a message can sit in a log before we force a flush  
    log.flush.interval.ms=10000
    # The frequency in ms that the log flusher checks whether any log needs to be flushed to disk  
    log.flush.scheduler.interval.ms=2000
    ############################# Log Retention Policy #############################  
    # The minimum age of a log file to be eligible for deletion  
    log.retention.hours=168
    # A size-based retention policy for logs.   
    log.retention.bytes=1073741824
    # The maximum size of a log segment file. When this size is reached a new log segment will be created.  
    log.segment.bytes=1073741824
    # The interval at which log segments are checked to see if they can be deleted according  
    # to the retention policies  
    log.retention.check.interval.ms=300000
    # The maximum time before a new log segment is rolled out (in hours)  
    log.roll.hours=168
    ############################# Zookeeper #############################  
    # Zookeeper connection string (see zookeeper docs for details).  
    zookeeper.connect=10.19.18.88:12081,10.19.16.84:12081,10.19.11.44:12081
    # Timeout in ms for connecting to zookeeper  
    zookeeper.connection.timeout.ms=6000
    # How far a ZK follower can be behind a ZK leader  
    zookeeper.sync.time.ms=2000
      
    ############################# Replication configurations ################  
    # default replication factors for automatically created topics  
    default.replication.factor=3
    # Number of fetcher threads used to replicate messages from a source broker.  
    num.replica.fetchers=4
    # The number of bytes of messages to attempt to fetch for each partition.  
    replica.fetch.max.bytes=1048576
    # max wait time for each fetcher request issued by follower replicas.   
    replica.fetch.wait.max.ms=500
    # The frequency with which the high watermark is saved out to disk  
    replica.high.watermark.checkpoint.interval.ms=5000
    # The socket timeout for network requests.  
    replica.socket.timeout.ms=30000
    # The socket receive buffer for network requests  
    replica.socket.receive.buffer.bytes=65536
    # If a follower hasn't sent any fetch requests or hasn't consumed up to the leaders log end offset for at least this time, the leader will remove the follower from isr  
    replica.lag.time.max.ms=10000
    # The socket timeout for controller-to-broker channels  
    controller.socket.timeout.ms=30000
    controller.message.queue.size=10

    Across our three-machine Kafka cluster, the only values that differ per broker are the ones called out above: broker.id, listeners, port, and host.name. Everything else is identical.
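
    For example, on the second machine (10.19.16.84) those lines would become (a sketch; the broker.id values for the other two nodes are our choice, the rest of server.properties stays the same):

    broker.id=2
    listeners=PLAINTEXT://10.19.16.84:1092
    port=1092
    host.name=10.19.16.84

    and likewise broker.id=3 with 10.19.11.44 on the third machine.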

    vim bin/kafka-server-start.sh
    Change the last line:
    
    #exec $base_dir/kafka-run-class.sh $EXTRA_ARGS kafka.Kafka "$@"
    exec $base_dir/kafka-run-class.sh $EXTRA_ARGS kafka.Kafka "/data/package/kafka/config/server.properties"
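
    Hardcoding the path works; as an aside, the stock script also accepts the properties file as a command-line argument, so an alternative that leaves the script untouched would be:

    # start the broker with an explicit config path; -daemon runs it in the background
    /data/package/kafka/bin/kafka-server-start.sh -daemon /data/package/kafka/config/server.properties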
    

     In the same way, we update the config files on the other two servers.

    Then we can start Kafka on each node in turn.
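
    Once all three brokers are up, a quick smoke test with the CLI tools that ship with Kafka (the topic name here is just an example):

    cd /data/package/kafka
    # create a test topic replicated across all three brokers...
    bin/kafka-topics.sh --zookeeper 10.19.18.88:12081,10.19.16.84:12081,10.19.11.44:12081 --create --topic test --partitions 3 --replication-factor 3
    # ...and check its partition/replica layout
    bin/kafka-topics.sh --zookeeper 10.19.18.88:12081 --describe --topic test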

    Finally, we install kafka-manager.

    Features:
    To simplify the work of developers and service engineers who maintain Kafka clusters, Yahoo built a web-based tool called Kafka Manager. It makes it easy to spot topics that are distributed unevenly across the cluster, or partitions that are spread unevenly across brokers. It supports managing multiple clusters, preferred replica election, replica reassignment, and topic creation. It is also a very convenient way to get a quick overview of a cluster, with the following features:
    
    1. Manage multiple Kafka clusters
    2. Easy inspection of cluster state (topics, brokers, replica distribution, partition distribution)
    3. Run preferred replica election
    4. Generate partition assignments based on the current state of the cluster
    5. Create topics with configurable topic settings (0.8.1.1 and 0.8.2 use different configs)
    6. Delete topics (only supported on 0.8.2+, and delete.topic.enable=true must be set in the broker config)
    7. The topic list indicates which topics are marked for deletion (0.8.2+ only)
    8. Add partitions to an existing topic
    9. Update the configuration of an existing topic
    10. Batch-run partition reassignment across multiple topics
    11. Batch-generate partition assignments for multiple topics (with the option to choose which brokers host the partitions)
    Installation steps
    
    1. Get the kafka-manager source and build the distribution package
    # cd /usr/local
    # git clone https://github.com/yahoo/kafka-manager
    # cd kafka-manager
    # ./sbt clean dist
    Note: the sbt build can take a very long time. If it appears to hang, set the logLevel parameter in project/plugins.sbt to logLevel := Level.Debug (the default is Warn) so you can see what it is doing.
    
    2. Install and configure
    A successful build produces a zip package under target/universal
    
    # cd /usr/local/kafka-manager/target/universal
    # unzip kafka-manager-1.3.3.7.zip  
    Set the value of kafka-manager.zkhosts in conf/application.conf to your ZooKeeper address,
    e.g.: kafka-manager.zkhosts="172.16.218.201:2181,172.16.218.202:2181,172.16.218.203:2181"
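    In our environment this points at the same ZooKeeper ensemble the brokers use (the zookeeper.connect value above):

    kafka-manager.zkhosts="10.19.18.88:12081,10.19.16.84:12081,10.19.11.44:12081"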
    3. Start it, specifying the config file location and the HTTP port (default 9000)
    To start it directly:
    
    # cd kafka-manager-1.3.3.7/bin 
    # ./kafka-manager -Dconfig.file=../conf/application.conf
    To run it in the background:
    
    # ./kafka-manager -h 
    # nohup ./kafka-manager -Dconfig.file=../conf/application.conf &
    To specify a port, for example:
    
    # nohup bin/kafka-manager -Dconfig.file=conf/application.conf -Dhttp.port=9001 &
    The first time you open the web UI you need to add your Kafka cluster there, filling in your own cluster's details.
    
