• Kafka 0.10.2 cluster deployment on CentOS 6.5


    Install ZooKeeper: http://www.cnblogs.com/xiaojf/p/6572351.html
    Install Scala: http://www.cnblogs.com/xiaojf/p/6568432.html
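    Before unpacking Kafka, it is worth confirming the prerequisites are in place. A minimal sanity check (assuming the JDK, Scala, and ZooKeeper were installed per the links above, ZooKeeper listens on its default port 2181, and nc is available):

    java -version                    # Kafka 0.10.x needs JDK 1.7+
    scala -version                   # 2.11.x to match the kafka_2.11 build
    echo ruok | nc localhost 2181    # a healthy ZooKeeper answers "imok"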
    [root@m1 jar]# tar zxvf kafka_2.11-0.10.2.0.tgz -C ../
    [root@m1 jar]# cd ..
    [root@m1 soft]# ll
    total 24
    drwxr-xr-x.  2 root root 4096 Mar 17 18:18 jar
    drwxr-xr-x.  8 uucp  143 4096 Dec 12 16:50 jdk
    drwxr-xr-x.  6 root root 4096 Feb 14 09:29 kafka_2.11-0.10.2.0
    drwxrwxr-x.  6 1001 1001 4096 Mar  4  2016 scala-2.11.8
    drwxr-xr-x.  3 root root 4096 Mar 17 18:30 tmp
    drwxr-xr-x. 10 1001 1001 4096 Aug 23  2016 zookeeper-3.4.9
    [root@m1 soft]# mv kafka_2.11-0.10.2.0 kafka
    [root@m1 soft]# cd kafka/config/
    [root@m1 config]# ll
    total 60
    -rw-r--r--. 1 root root  906 Feb 14 09:26 connect-console-sink.properties
    -rw-r--r--. 1 root root  909 Feb 14 09:26 connect-console-source.properties
    -rw-r--r--. 1 root root 2760 Feb 14 09:26 connect-distributed.properties
    -rw-r--r--. 1 root root  883 Feb 14 09:26 connect-file-sink.properties
    -rw-r--r--. 1 root root  881 Feb 14 09:26 connect-file-source.properties
    -rw-r--r--. 1 root root 1074 Feb 14 09:26 connect-log4j.properties
    -rw-r--r--. 1 root root 2061 Feb 14 09:26 connect-standalone.properties
    -rw-r--r--. 1 root root 1199 Feb 14 09:26 consumer.properties
    -rw-r--r--. 1 root root 4369 Feb 14 09:26 log4j.properties
    -rw-r--r--. 1 root root 1900 Feb 14 09:26 producer.properties
    -rw-r--r--. 1 root root 5631 Feb 14 09:26 server.properties
    -rw-r--r--. 1 root root 1032 Feb 14 09:26 tools-log4j.properties
    -rw-r--r--. 1 root root 1023 Feb 14 09:26 zookeeper.properties
    [root@m1 config]# vi server.properties 

    # Licensed to the Apache Software Foundation (ASF) under one or more
    # contributor license agreements. See the NOTICE file distributed with
    # this work for additional information regarding copyright ownership.
    # The ASF licenses this file to You under the Apache License, Version 2.0
    # (the "License"); you may not use this file except in compliance with
    # the License. You may obtain a copy of the License at
    #
    # http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.

    # see kafka.server.KafkaConfig for additional details and defaults

    ############################# Server Basics #############################

    # The id of the broker. This must be set to a unique integer for each broker.
    broker.id=0

    # Switch to enable topic deletion or not, default value is false
    #delete.topic.enable=true

    ############################# Socket Server Settings #############################

    # The address the socket server listens on. It will get the value returned from
    # java.net.InetAddress.getCanonicalHostName() if not configured.
    # FORMAT:
    # listeners = listener_name://host_name:port
    # EXAMPLE:
    # listeners = PLAINTEXT://your.host.name:9092
    #listeners=PLAINTEXT://:9092

    # Hostname and port the broker will advertise to producers and consumers. If not set,
    # it uses the value for "listeners" if configured. Otherwise, it will use the value
    # returned from java.net.InetAddress.getCanonicalHostName().
    # NOTE: set this to the real IP address of the current node; otherwise
    # external Java clients cannot reach port 9092.
    advertised.listeners=PLAINTEXT://192.168.59.130:9092

    # Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
    #listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

    # The number of threads handling network requests
    num.network.threads=3

    # The number of threads doing disk I/O
    num.io.threads=8

    # The send buffer (SO_SNDBUF) used by the socket server
    socket.send.buffer.bytes=102400

    # The receive buffer (SO_RCVBUF) used by the socket server
    socket.receive.buffer.bytes=102400

    # The maximum size of a request that the socket server will accept (protection against OOM)
    socket.request.max.bytes=104857600


    ############################# Log Basics #############################

    # A comma separated list of directories under which to store log files
    log.dirs=/usr/local/soft/tmp/kafka/logs

    # The default number of log partitions per topic. More partitions allow greater
    # parallelism for consumption, but this will also result in more files across
    # the brokers.
    num.partitions=1

    # The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
    # This value is recommended to be increased for installations with data dirs located in RAID array.
    num.recovery.threads.per.data.dir=1

    ############################# Log Flush Policy #############################

    # Messages are immediately written to the filesystem but by default we only fsync() to sync
    # the OS cache lazily. The following configurations control the flush of data to disk.
    # There are a few important trade-offs here:
    # 1. Durability: Unflushed data may be lost if you are not using replication.
    # 2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
    # 3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
    # The settings below allow one to configure the flush policy to flush data after a period of time or
    # every N messages (or both). This can be done globally and overridden on a per-topic basis.

    # The number of messages to accept before forcing a flush of data to disk
    #log.flush.interval.messages=10000

    # The maximum amount of time a message can sit in a log before we force a flush
    #log.flush.interval.ms=1000

    ############################# Log Retention Policy #############################

    # The following configurations control the disposal of log segments. The policy can
    # be set to delete segments after a period of time, or after a given size has accumulated.
    # A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
    # from the end of the log.

    # The minimum age of a log file to be eligible for deletion due to age
    log.retention.hours=168

    # A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
    # segments don't drop below log.retention.bytes. Functions independently of log.retention.hours.
    #log.retention.bytes=1073741824

    # The maximum size of a log segment file. When this size is reached a new log segment will be created.
    log.segment.bytes=1073741824

    # The interval at which log segments are checked to see if they can be deleted according
    # to the retention policies
    log.retention.check.interval.ms=300000

    ############################# Zookeeper #############################

    # Zookeeper connection string (see zookeeper docs for details).
    # This is a comma separated list of host:port pairs, each corresponding to a zk
    # server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
    # You can also append an optional chroot string to the urls to specify the
    # root directory for all kafka znodes.
    zookeeper.connect=m1:2181,s1:2181,s2:2181

    # Timeout in ms for connecting to zookeeper
    zookeeper.connection.timeout.ms=6000
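    Since each broker registers itself in ZooKeeper at startup, it helps to confirm that every address in the zookeeper.connect line above is reachable before going further. A quick sketch (assuming m1/s1/s2 resolve via /etc/hosts and nc is installed):

    for zk in m1 s1 s2; do
        echo "$zk: $(echo ruok | nc $zk 2181)"    # "imok" means that node is serving
    done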

    Create the log directory /usr/local/soft/tmp/kafka/logs

    [root@m1 config]# mkdir -p /usr/local/soft/tmp/kafka/logs

    Distribute the installation to the other servers and change broker.id on each node (scripted below):

    s1: broker.id=1

    s2: broker.id=2
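    One way to script this is scp plus sed. The sketch below assumes passwordless SSH to s1 and s2; remember that advertised.listeners must also be edited on each node to carry that node's own IP:

    id=1
    for host in s1 s2; do
        scp -qr /usr/local/soft/kafka root@$host:/usr/local/soft/
        # give each broker a unique id
        ssh root@$host "sed -i 's/^broker.id=0/broker.id=$id/' /usr/local/soft/kafka/config/server.properties"
        id=$((id+1))
    done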

    Set the environment variables

    [root@s2 config]# vi /etc/profile
    export KAFKA_HOME=/usr/local/soft/kafka/
    export PATH=$PATH:$KAFKA_HOME/bin
    source /etc/profile
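    To confirm the variables took effect in the current shell (output should resemble the following):

    [root@s2 config]# echo $KAFKA_HOME
    /usr/local/soft/kafka/
    [root@s2 config]# which kafka-server-start.sh
    /usr/local/soft/kafka/bin/kafka-server-start.sh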

    Start the Kafka server. First confirm that every node has created the /usr/local/soft/tmp/kafka/logs directory.

    [root@s2 config]# kafka-server-start.sh /usr/local/soft/kafka/config/server.properties &
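    Starting with & ties the broker to the current shell; 0.10.x also accepts a -daemon flag that detaches it and writes to $KAFKA_HOME/logs/server.log. Once all three brokers are up, their registration can be checked in ZooKeeper (expect [0, 1, 2]):

    kafka-server-start.sh -daemon /usr/local/soft/kafka/config/server.properties
    zookeeper-shell.sh localhost:2181 ls /brokers/ids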

    Stop Kafka

    [root@s2 config]# kafka-server-stop.sh 

    If the command above has no effect, run kill -9 on the Kafka process ID directly (sketched below).
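    kafka-server-stop.sh itself finds the broker by matching the kafka.Kafka main class, so the same pattern can be reused manually as a last resort:

    PID=$(ps ax | grep -i 'kafka\.Kafka' | grep java | grep -v grep | awk '{print $1}')
    kill -9 $PID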

    Create a topic

    [root@s2 config]# kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
    Created topic "test".
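    The new topic can be checked before producing to it; --list should now include it:

    [root@s2 config]# kafka-topics.sh --list --zookeeper localhost:2181
    test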

    Start a console producer

    [root@s2 config]# kafka-console-producer.sh --broker-list localhost:9092 --topic test
    xiaojf

    Start a console consumer

    [root@s1 ~]# kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning

    [2017-03-21 07:15:32,611] INFO [Group Metadata Manager on Broker 1]: Loading offsets and group metadata from __consumer_offsets-42 (kafka.coordinator.GroupMetadataManager)
    [2017-03-21 07:15:32,649] INFO [Group Metadata Manager on Broker 1]: Finished loading offsets from __consumer_offsets-42 in 38 milliseconds. (kafka.coordinator.GroupMetadataManager)
    [2017-03-21 07:15:32,649] INFO [Group Metadata Manager on Broker 1]: Loading offsets and group metadata from __consumer_offsets-4 (kafka.coordinator.GroupMetadataManager)
    [2017-03-21 07:15:32,681] INFO [Group Metadata Manager on Broker 1]: Finished loading offsets from __consumer_offsets-4 in 32 milliseconds. (kafka.coordinator.GroupMetadataManager)
    [2017-03-21 07:15:32,681] INFO [Group Metadata Manager on Broker 1]: Loading offsets and group metadata from __consumer_offsets-48 (kafka.coordinator.GroupMetadataManager)
    [2017-03-21 07:15:32,718] INFO [Group Metadata Manager on Broker 1]: Finished loading offsets from __consumer_offsets-48 in 37 milliseconds. (kafka.coordinator.GroupMetadataManager)
    [2017-03-21 07:15:32,718] INFO [Group Metadata Manager on Broker 1]: Loading offsets and group metadata from __consumer_offsets-10 (kafka.coordinator.GroupMetadataManager)
    [2017-03-21 07:15:32,757] INFO [Group Metadata Manager on Broker 1]: Finished loading offsets from __consumer_offsets-10 in 39 milliseconds. (kafka.coordinator.GroupMetadataManager)
    [2017-03-21 07:15:32,757] INFO [Group Metadata Manager on Broker 1]: Loading offsets and group metadata from __consumer_offsets-16 (kafka.coordinator.GroupMetadataManager)
    [2017-03-21 07:15:32,805] INFO [Group Metadata Manager on Broker 1]: Finished loading offsets from __consumer_offsets-16 in 48 milliseconds. (kafka.coordinator.GroupMetadataManager)
    xiaojf

    At this point the installation is complete. Next, test an example with multiple brokers.

    Create a topic. The replication factor is limited by the number of live Kafka brokers in the cluster, as the error below shows.

    [root@s2 config]# kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic my-replicated-topic
    Error while executing topic command : replication factor: 3 larger than available brokers: 2
    [2017-03-21 07:18:02,515] ERROR org.apache.kafka.common.errors.InvalidReplicationFactorException: replication factor: 3 larger than available brokers: 2
     (kafka.admin.TopicCommand$)
    [root@s2 config]# kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 2 --partitions 2 --topic my-replicated-topic
    Created topic "my-replicated-topic".

    查看topic状态

    [root@s2 config]# kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-replicated-topic
    Topic:my-replicated-topic    PartitionCount:2    ReplicationFactor:2    Configs:
        Topic: my-replicated-topic    Partition: 0    Leader: 2    Replicas: 2,1    Isr: 2,1
        Topic: my-replicated-topic    Partition: 1    Leader: 1    Replicas: 1,2    Isr: 1,2
    partitions is 2, so there are two rows of partition information.
    Replicas lists the Kafka nodes that hold copies of the partition; the numbers are broker.id values.
    Leader is the broker currently leading the partition.
    Isr is the set of in-sync replicas; if the current leader dies, a new leader is elected from this set, in order.
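    A quick way to watch the Isr mechanics is to stop the leader of partition 0 (broker 2 above) and describe the topic again; leadership should fail over to the surviving replica. A sketch, run on the node hosting broker 2 (restart that broker before continuing):

    kafka-server-stop.sh
    kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-replicated-topic
    # expect partition 0 to show Leader: 1 and a shrunken Isr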

    Start a console producer

    [root@s2 config]# kafka-console-producer.sh --broker-list localhost:9092 --topic my-replicated-topic
    xiaojf multi topic

    Start a console consumer

    [root@s2 ~]# kafka-console-consumer.sh --bootstrap-server localhost:9092 --from-beginning --topic my-replicated-topic
    xiaojf multi topic
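    To clean up test topics afterwards, topic deletion must be enabled: the delete.topic.enable switch commented out in server.properties above has to be set to true on every broker, followed by a restart. A delete then looks like this; without the switch the topic is merely "marked for deletion":

    kafka-topics.sh --delete --zookeeper localhost:2181 --topic test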

    Done.
