• Kafka: Installing the Kafka Message Queue with Docker


    Before installing, first review the basic architecture and terminology below.

    Kafka Basic Architecture and Terminology

     Basic Kafka Components

    Kafka cluster: the Kafka message queue (the component that stores the messages)

    Zookeeper: the registry (the Kafka cluster relies on ZooKeeper to store the cluster's metadata and keep the system available)

    Producer: the producer (the program or code that puts data onto the queue)

    Consumer: the consumer (the program or code that takes data off the queue)

    Kafka Cluster Components
        Broker: a broker is a Kafka instance. Each server runs one or more Kafka instances; for simplicity, treat each broker as one server. Every broker within a Kafka cluster has a unique ID, e.g. broker-0, broker-1, and so on.
        Topic: the subject of a message, which you can think of as a message category. Kafka's data is stored in topics, and multiple topics can be created on each broker.
        Partition: a partition of a topic. Each topic can have multiple partitions, which spread the load and raise Kafka's throughput. The data of a topic is never duplicated across its partitions, and on disk each partition is simply a directory.
        Replication: every partition has multiple replicas that act as standbys. When the leader partition fails, a follower is elected to become the new leader. In Kafka the default maximum number of replicas is 10, the number of replicas cannot exceed the number of brokers, a follower and its leader always live on different machines, and a single machine holds at most one replica of any given partition.
        Message: the body of each message that is sent.

    Consumer group: multiple consumers can be grouped into a consumer group. In Kafka's design, the data in a given partition can be consumed by only one consumer within a consumer group, while consumers in the same group can consume different partitions of the same topic; this, again, raises Kafka's throughput. (A quick illustration with the stock command-line tools follows below.)
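
    These terms map directly onto Kafka's stock command-line tools. A minimal sketch (run it once the broker from the install steps below is up; demo-topic is a hypothetical name, and older Kafka versions need --zookeeper instead of --bootstrap-server):

    # create a topic with 2 partitions; with a single broker the replication
    # factor must be 1, because replicas cannot outnumber brokers
    kafka-topics.sh --create --bootstrap-server 10.9.44.11:9092 --topic demo-topic --partitions 2 --replication-factor 1
    # show the leader and replica assignment of each partition
    kafka-topics.sh --describe --bootstrap-server 10.9.44.11:9092 --topic demo-topic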

    Installing Zookeeper

    # pull the zookeeper image
    docker pull wurstmeister/zookeeper:latest
    # create and start the zookeeper container
    docker run -d --name zookeeper -p 2181:2181 -v /etc/localtime:/etc/localtime wurstmeister/zookeeper:latest


    Configuration details

    • -v /etc/localtime:/etc/localtime    # keeps the container's clock in sync with the host's
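
    Before moving on, it is worth confirming that ZooKeeper is actually up. A quick check (a sketch: it assumes nc is available on the host and that ZooKeeper's four-letter commands are enabled, which is the default in the 3.4.x line this image typically ships):

    # is the container running with the port mapped?
    docker ps --filter name=zookeeper
    # ask ZooKeeper for its status via the four-letter "stat" command
    echo stat | nc localhost 2181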

    Installing Kafka

    # pull the kafka image
    docker pull wurstmeister/kafka:latest
    # create and start the kafka container
    docker run -d --name kafka -p 9092:9092 -e KAFKA_BROKER_ID=0 -e KAFKA_ZOOKEEPER_CONNECT=10.9.44.11:2181 -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://10.9.44.11:9092 -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092 -t wurstmeister/kafka:latest


    Configuration details

    • -e KAFKA_BROKER_ID=0    # each broker in a Kafka cluster distinguishes itself by its BROKER_ID
    • -e KAFKA_ZOOKEEPER_CONNECT=10.9.44.11:2181         # the ZooKeeper address Kafka registers under; an optional chroot such as 10.9.44.11:2181/kafka may be appended
    • -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://10.9.44.11:9092         # the address and port the broker advertises (registers in ZooKeeper) for producers and consumers
    • -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092      # the address and port Kafka listens on
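
    With both containers running, a quick end-to-end smoke test can be done from inside the Kafka container (a sketch: the wurstmeister image typically puts the Kafka scripts on the PATH under /opt/kafka, and demo-topic is a hypothetical topic name):

    # open a shell in the kafka container
    docker exec -it kafka bash
    # produce a few messages (type them, then Ctrl+C to quit)
    kafka-console-producer.sh --broker-list 10.9.44.11:9092 --topic demo-topic
    # in a second shell: consume everything from the beginning
    kafka-console-consumer.sh --bootstrap-server 10.9.44.11:9092 --topic demo-topic --from-beginning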

     Complete server.properties configuration file

     Path: /etc/kafka/
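
     Note that the path varies by image: inside the wurstmeister/kafka container the file usually lives at /opt/kafka/config/server.properties (an assumption based on that image's layout; verify before editing):

    # print the broker's effective configuration file (path assumed for the wurstmeister image)
    docker exec kafka cat /opt/kafka/config/server.properties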

    # Licensed to the Apache Software Foundation (ASF) under one or more
    # contributor license agreements.  See the NOTICE file distributed with
    # this work for additional information regarding copyright ownership.
    # The ASF licenses this file to You under the Apache License, Version 2.0
    # (the "License"); you may not use this file except in compliance with
    # the License.  You may obtain a copy of the License at
    #
    #    http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
    # see kafka.server.KafkaConfig for additional details and defaults
    
    ############################# Server Basics #############################
    
    ##################################################################################
    #  A broker is a deployed Kafka instance. In a Kafka cluster, every instance
    #  must have a broker.id, and that id must be a unique integer.
    ##################################################################################
    broker.id=10
    
    ############################# Socket Server Settings #############################
    
    # The address the socket server listens on. It will get the value returned from 
    # java.net.InetAddress.getCanonicalHostName() if not configured.
    #   FORMAT:
    #     listeners = security_protocol://host_name:port
    #   EXAMPLE:
    #     listeners = PLAINTEXT://your.host.name:9092
    #listeners=PLAINTEXT://:9092
    
    # Hostname and port the broker will advertise to producers and consumers. If not set, 
    # it uses the value for "listeners" if configured.  Otherwise, it will use the value
    # returned from java.net.InetAddress.getCanonicalHostName().
    #advertised.listeners=PLAINTEXT://your.host.name:9092
    
    ##################################################################################
    # The number of threads handling network requests (default: 3)
    ##################################################################################
    num.network.threads=3
    ##################################################################################
    # The number of threads doing disk I/O (default: 8)
    ##################################################################################
    num.io.threads=8
    
    ##################################################################################
    # The send buffer (SO_SNDBUF) used by the socket server
    # The size of the buffer the socket server uses to send data (default: 100 KB)
    ##################################################################################
    socket.send.buffer.bytes=102400
    
    ##################################################################################
    # The receive buffer (SO_RCVBUF) used by the socket server
    # The size of the buffer the socket server uses to receive data (default: 100 KB)
    ##################################################################################
    socket.receive.buffer.bytes=102400
    
    ##################################################################################
    # The maximum size of a request that the socket server will accept (protection against OOM)
    # (i.e. the maximum size of a single request, protecting against OOM
    # out-of-memory errors; default: 100 MB)
    ##################################################################################
    socket.request.max.bytes=104857600
    
    ############################# Log Basics (data settings; Kafka's stored data is called the log) #############################
    
    ##################################################################################
    # A comma-separated list of directories under which to store log files
    # (the directories where Kafka stores the data it receives)
    ##################################################################################
    log.dirs=/home/uplooking/data/kafka
    
    ##################################################################################
    # The default number of log partitions per topic. More partitions allow greater
    # parallelism for consumption, but this will also result in more files across
    # the brokers.
    # The number of log partitions per topic (default: 1). More partitions allow greater
    # consumption parallelism, but also mean more files spread across the brokers.
    # (A partition splits a topic's data into chunks for distributed storage.)
    ##################################################################################
    num.partitions=1
    
    ##################################################################################
    # The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
    # This value is recommended to be increased for installations with data dirs located in RAID array.
    # The number of threads per data directory used to recover data at startup and to
    # flush data at shutdown. A larger value is recommended when the data directories
    # sit on a RAID array.
    ##################################################################################
    num.recovery.threads.per.data.dir=1
    
    ############################# Log Flush Policy #############################
    
    # Messages are immediately written to the filesystem but by default we only fsync() to sync
    # the OS cache lazily. The following configurations control the flush of data to disk.
    # There are a few important trade-offs here:
    #    1. Durability: Unflushed data may be lost if you are not using replication.
    #    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
    #    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
    # The settings below allow one to configure the flush policy to flush data after a period of time or
    # every N messages (or both). This can be done globally and overridden on a per-topic basis.
    # Kafka's flush policy can be driven only by message count and by time interval;
    # there is no size-based option. Either or both of the options below may be set.
    # In this default file both are commented out, leaving flushing to the OS.
    
    # The number of messages to accept before forcing a flush of data to disk
    # The message-count threshold that forces a flush to disk
    #log.flush.interval.messages=10000
    
    # The maximum amount of time a message can sit in a log before we force a flush
    # The maximum time a message may sit in the log before a flush to disk is forced
    #log.flush.interval.ms=1000
    
    ############################# Log Retention Policy #############################
    
    # The following configurations control the disposal of log segments. The policy can
    # be set to delete segments after a period of time, or after a given size has accumulated.
    # A segment will be deleted whenever *either* of these criteria is met. Deletion always
    # happens from the end of the log.
    
    # The minimum age of a log file to be eligible for deletion
    # Time-based policy: the minimum age before log data may be deleted (default: keep 7 days, i.e. 168 hours)
    log.retention.hours=168
    
    # A size-based retention policy for logs. Segments are pruned from the log as long as
    # the remaining segments don't drop below log.retention.bytes (here 1 GB).
    #log.retention.bytes=1073741824
    
    # The maximum size of a log segment file. When this size is reached a new log segment will be created.
    # (the data segmentation policy: roll a new segment once the current one reaches 1 GB)
    log.segment.bytes=1073741824
    
    # The interval at which log segments are checked to see whether they can be deleted
    # according to the retention policies (300000 ms = 5 minutes)
    log.retention.check.interval.ms=300000
    
    ############################# Zookeeper #############################
    
    # Zookeeper connection string (see zookeeper docs for details).
    # This is a comma separated host:port pairs, each corresponding to a zk
    # server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
    # You can also append an optional chroot string to the urls to specify the
    # root directory for all kafka znodes.
    zookeeper.connect=uplooking01:2181,uplooking02:2181,uplooking03:2181
    
    # Timeout in ms for connecting to zookeeper
    zookeeper.connection.timeout.ms=6000
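
    When Kafka runs in Docker as above, this file rarely needs editing by hand: the wurstmeister image turns environment variables of the form KAFKA_XXX_YYY into the property xxx.yyy when the container starts. A sketch of overriding a few of the settings above at container creation (the two extra values are examples only):

    # KAFKA_LOG_RETENTION_HOURS=72  ->  log.retention.hours=72
    # KAFKA_NUM_PARTITIONS=3        ->  num.partitions=3
    docker run -d --name kafka -p 9092:9092 \
      -e KAFKA_BROKER_ID=0 \
      -e KAFKA_ZOOKEEPER_CONNECT=10.9.44.11:2181 \
      -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://10.9.44.11:9092 \
      -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092 \
      -e KAFKA_LOG_RETENTION_HOURS=72 \
      -e KAFKA_NUM_PARTITIONS=3 \
      wurstmeister/kafka:latest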

     This article consolidates material from:
     https://www.cnblogs.com/panpanwelcome/p/12580506.html
     https://blog.csdn.net/qq_22041375/article/details/106180415
     https://www.cnblogs.com/toutou/p/linux_install_kafka.html

  • Original article: https://www.cnblogs.com/nhdlb/p/13933808.html