• Kafka installation tutorial


    Today I need to install a Kafka cluster on some new machines. I have actually installed Kafka well over ten times, but never with production in mind, which is a bit embarrassing. So today I want to properly walk through Kafka installation and configuration.

    1. Kafka version selection

    As I write this post, the latest Kafka release is 1.1.0. If the newest version were known to be stable I would use it directly, but that is not guaranteed, so I am holding off for now. Kafka downloads: http://kafka.apache.org/downloads
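
    For reference, a download sketch; the exact archive name depends on which Scala build you pick, and both the kafka_2.11-1.1.0.tgz name and the archive.apache.org path are assumptions here:

    wget https://archive.apache.org/dist/kafka/1.1.0/kafka_2.11-1.1.0.tgz -P /root/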

    2. ZooKeeper version selection

    I looked over the versions on the ZooKeeper website without reaching any firm conclusion. I was about to download 3.4.10 when I noticed the download mirror has a directory holding the stable release: http://mirror.bit.edu.cn/apache/zookeeper/stable/, so I simply went with that: version 3.4.12.
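
    A download sketch, assuming the stable directory on that mirror still serves the 3.4.12 tarball referenced in the steps below:

    wget http://mirror.bit.edu.cn/apache/zookeeper/stable/zookeeper-3.4.12.tar.gz -P /root/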

    3. Server environment preparation

    The machines had no JDK installed, so install one.

    Turn off the firewall for now.

    Disable SELinux.
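
    A minimal sketch of these three steps, assuming a CentOS/RHEL-style host with yum and systemd (adjust for your distribution):

    yum install -y java-1.8.0-openjdk                              # install a JDK
    systemctl stop firewalld && systemctl disable firewalld        # stop the firewall now and on reboot
    setenforce 0                                                   # turn SELinux off until the next reboot
    sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config   # keep it off after reboot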

    4. ZooKeeper installation

    I put the ZooKeeper tarball in the /root directory. First change to the /opt directory, then:

    Run: tar -zxvf /root/zookeeper-3.4.12.tar.gz to extract the archive.

    Run: mv zookeeper-3.4.12 zookeeper to rename the directory, mainly for convenience.

    Run: cd /opt/zookeeper/conf to enter ZooKeeper's configuration directory.

    Run: mv zoo_sample.cfg zoo.cfg to rename the sample configuration file.

    Run: vi zoo.cfg to configure ZooKeeper. There is actually very little to change; the defaults are largely sufficient. If anything, raise the maximum number of allowed client connections; I set it to 300. The full file is below. I configured three ZooKeeper servers here; if you have more, just add your own machines and change the addresses accordingly:

    # The number of milliseconds of each tick
    tickTime=2000
    # The number of ticks that the initial
    # synchronization phase can take
    initLimit=10
    # The number of ticks that can pass between
    # sending a request and getting an acknowledgement
    syncLimit=5
    # the directory where the snapshot is stored.
    # do not use /tmp for storage, /tmp here is just
    # example sakes.
    dataDir=/data1/zookeeper
    # the port at which the clients will connect
    clientPort=2181
    # the maximum number of client connections.
    # increase this if you need to handle more clients
    maxClientCnxns=300
    #
    # Be sure to read the maintenance section of the
    # administrator guide before turning on autopurge.
    #
    # http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
    #
    # The number of snapshots to retain in dataDir
    #autopurge.snapRetainCount=3
    # Purge task interval in hours
    # Set to "0" to disable auto purge feature
    #autopurge.purgeInterval=1
    server.1=10.16.26.110:2888:3888
    server.2=10.16.26.116:2888:3888
    server.3=10.16.26.127:2888:3888

    Run: mkdir -p /data1/zookeeper to create ZooKeeper's data directory first (the -p covers the case where /data1 does not exist yet).

    Run: cd /data1/zookeeper to enter that directory.

    Run: vi myid to create a file named myid; it holds this machine's server id, which for our cluster will be 1, 2, or 3, matching the server.N entries above.
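
    A minimal sketch, using the id-to-host mapping from the zoo.cfg above (run the matching line on the matching host):

    echo 1 > /data1/zookeeper/myid   # on 10.16.26.110 (server.1)
    echo 2 > /data1/zookeeper/myid   # on 10.16.26.116 (server.2)
    echo 3 > /data1/zookeeper/myid   # on 10.16.26.127 (server.3)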

    That completes the ZooKeeper setup; repeat these steps on each machine in the cluster.

    5. Kafka parameter configuration

    Recent Kafka releases work well out of the box, so not much needs to be configured. Here is my full server.properties for the first broker:

    # Licensed to the Apache Software Foundation (ASF) under one or more
    # contributor license agreements.  See the NOTICE file distributed with
    # this work for additional information regarding copyright ownership.
    # The ASF licenses this file to You under the Apache License, Version 2.0
    # (the "License"); you may not use this file except in compliance with
    # the License.  You may obtain a copy of the License at
    #
    #    http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.

    # see kafka.server.KafkaConfig for additional details and defaults

    ############################# Server Basics #############################

    # The id of the broker. This must be set to a unique integer for each broker.
    broker.id=0

    ############################# Socket Server Settings #############################

    # The address the socket server listens on. It will get the value returned from
    # java.net.InetAddress.getCanonicalHostName() if not configured.
    #   FORMAT:
    #     listeners = listener_name://host_name:port
    #   EXAMPLE:
    #     listeners = PLAINTEXT://your.host.name:9092
    listeners=PLAINTEXT://10.16.26.110:9092

    # Hostname and port the broker will advertise to producers and consumers. If not set,
    # it uses the value for "listeners" if configured.  Otherwise, it will use the value
    # returned from java.net.InetAddress.getCanonicalHostName().
    #advertised.listeners=PLAINTEXT://your.host.name:9092

    # Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
    #listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

    # The number of threads that the server uses for receiving requests from the network and sending responses to the network
    num.network.threads=3

    # The number of threads that the server uses for processing requests, which may include disk I/O
    num.io.threads=8

    # The send buffer (SO_SNDBUF) used by the socket server
    socket.send.buffer.bytes=102400

    # The receive buffer (SO_RCVBUF) used by the socket server
    socket.receive.buffer.bytes=102400

    # The maximum size of a request that the socket server will accept (protection against OOM)
    socket.request.max.bytes=104857600


    ############################# Log Basics #############################

    # A comma separated list of directories under which to store log files
    log.dirs=/data1/kafka-logs

    # The default number of log partitions per topic. More partitions allow greater
    # parallelism for consumption, but this will also result in more files across
    # the brokers.
    num.partitions=12

    delete.topic.enable=true

    default.replication.factor=2

    # The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
    # This value is recommended to be increased for installations with data dirs located in RAID array.
    num.recovery.threads.per.data.dir=1

    ############################# Internal Topic Settings  #############################
    # The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
    # For anything other than development testing, a value greater than 1 is recommended for to ensure availability such as 3.
    offsets.topic.replication.factor=1
    transaction.state.log.replication.factor=1
    transaction.state.log.min.isr=1

    ############################# Log Flush Policy #############################

    # Messages are immediately written to the filesystem but by default we only fsync() to sync
    # the OS cache lazily. The following configurations control the flush of data to disk.
    # There are a few important trade-offs here:
    #    1. Durability: Unflushed data may be lost if you are not using replication.
    #    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
    #    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
    # The settings below allow one to configure the flush policy to flush data after a period of time or
    # every N messages (or both). This can be done globally and overridden on a per-topic basis.

    # The number of messages to accept before forcing a flush of data to disk
    #log.flush.interval.messages=10000

    # The maximum amount of time a message can sit in a log before we force a flush
    #log.flush.interval.ms=1000

    ############################# Log Retention Policy #############################

    # The following configurations control the disposal of log segments. The policy can
    # be set to delete segments after a period of time, or after a given size has accumulated.
    # A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
    # from the end of the log.

    # The minimum age of a log file to be eligible for deletion due to age
    log.retention.hours=168

    # A size-based retention policy for logs. Segments are pruned from the log unless the remaining
    # segments drop below log.retention.bytes. Functions independently of log.retention.hours.
    #log.retention.bytes=1073741824

    # The maximum size of a log segment file. When this size is reached a new log segment will be created.
    log.segment.bytes=1073741824

    # The interval at which log segments are checked to see if they can be deleted according
    # to the retention policies
    log.retention.check.interval.ms=300000

    ############################# Zookeeper #############################

    # Zookeeper connection string (see zookeeper docs for details).
    # This is a comma separated host:port pairs, each corresponding to a zk
    # server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
    # You can also append an optional chroot string to the urls to specify the
    # root directory for all kafka znodes.
    zookeeper.connect=10.16.26.110:2181,10.16.26.126:2181,10.16.26.127:2181

    # Timeout in ms for connecting to zookeeper
    zookeeper.connection.timeout.ms=6000


    ############################# Group Coordinator Settings #############################

    # The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
    # The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
    # The default value for this is 3 seconds.
    # We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
    # However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
    group.initial.rebalance.delay.ms=0
    Kafka configuration (server.properties for the first broker)
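
    Only a couple of values need to differ between brokers; a hedged sketch for the second broker (the host IP here is an assumption, use whatever machine that broker actually runs on):

    # server.properties overrides on the second broker (hypothetical host 10.16.26.116)
    broker.id=1
    listeners=PLAINTEXT://10.16.26.116:9092
    # everything else, including zookeeper.connect, stays identical on every broker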

    6. Kafka installation

    Much the same as ZooKeeper: extract the archive and then apply the configuration from the previous step; a sketch is below.
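
    A minimal sketch, assuming the Scala 2.11 build of Kafka 1.1.0 was downloaded to /root (adjust the archive name to whatever you actually downloaded):

    cd /opt
    tar -zxvf /root/kafka_2.11-1.1.0.tgz       # extract the Kafka archive
    mv kafka_2.11-1.1.0 kafka                  # rename for convenience, same as with ZooKeeper
    vi /opt/kafka/config/server.properties     # apply the settings from the previous step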

    7. Starting ZooKeeper and Kafka

    To start ZooKeeper, go into its bin directory and run: ./zkServer.sh start ; then do the same on the other two machines.
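
    Once all three are running, a quick sanity check with the status command from the same bin directory:

    cd /opt/zookeeper/bin
    ./zkServer.sh status    # should report Mode: leader on one node and Mode: follower on the other two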

    To start Kafka, go into its bin directory and run: ./kafka-server-start.sh -daemon ../config/server.properties (the -daemon flag already puts it in the background, so the trailing & is unnecessary); repeat on every broker.
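
    As a quick smoke test once all brokers are up, you can create and list a topic with the tools shipped in the same bin directory ("test" is just a hypothetical topic name; the 2 replicas and 12 partitions mirror the defaults set in server.properties above):

    cd /opt/kafka/bin
    ./kafka-topics.sh --create --zookeeper 10.16.26.110:2181 --replication-factor 2 --partitions 12 --topic test
    ./kafka-topics.sh --list --zookeeper 10.16.26.110:2181    # the new topic should appear in the list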
