• kafka_2.9.2-0.8.1.1 distributed cluster setup and client code walkthrough


    Prepare 3 virtual machines running RHEL 6.4 Server Edition. 1) Configure every machine as follows:
    $ cat /etc/hosts
        # zookeeper hostnames:
        192.168.8.182   zk1
        192.168.8.183   zk2
        192.168.8.184   zk3
    2) Install the JDK, ZooKeeper, and Kafka on every machine, configured as follows:
    $ vi /etc/profile
        # jdk, zookeeper, kafka
        export KAFKA_HOME=/usr/local/lib/kafka/kafka_2.9.2-0.8.1.1
        export ZK_HOME=/usr/local/lib/zookeeper/zookeeper-3.4.6
        export CLASSPATH=.:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib/dt.jar
        export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$KAFKA_HOME/bin:$ZK_HOME/bin:$PATH
    3) Run on every machine:
    $ source /etc/profile
    $ mkdir -p /var/lib/zookeeper
    $ cd $ZK_HOME/conf
    $ cp zoo_sample.cfg zoo.cfg
    $ vi zoo.cfg
        dataDir=/var/lib/zookeeper

        # the port at which the clients will connect
        clientPort=2181

        # zookeeper cluster
        server.1=zk1:2888:3888
        server.2=zk2:2888:3888
        server.3=zk3:2888:3888
    4) Generate the myid file on each machine:
    zk1:
    $ echo "1" > /var/lib/zookeeper/myid
    zk2:
    $ echo "2" > /var/lib/zookeeper/myid
    zk3:
    $ echo "3" > /var/lib/zookeeper/myid
    5) Run setup on every machine and disable the firewall:
    Firewall:
    [   ] enabled
    6) Start ZooKeeper on every machine:
    $ zkServer.sh start
    Check the status:
    $ zkServer.sh status
    1) Download Kafka:
        $ wget http://apache.fayea.com/apache-mirror/kafka/0.8.1.1/kafka_2.9.2-0.8.1.1.tgz
    For installation and configuration details, see the previous article:
    http://blog.csdn.net/ubuntu64fan/article/details/26678877
    2) Configure $KAFKA_HOME/config/server.properties
    We install 3 brokers, one per VM: zk1, zk2, zk3.
    zk1:
    $ vi /etc/sysconfig/network
        NETWORKING=yes
        HOSTNAME=zk1
    $ vi $KAFKA_HOME/config/server.properties
        broker.id=0
        port=9092
        host.name=zk1
        advertised.host.name=zk1
        ...
        num.partitions=2
        ...
        zookeeper.connect=zk1:2181,zk2:2181,zk3:2181
    zk2:
    $ vi /etc/sysconfig/network
        NETWORKING=yes
        HOSTNAME=zk2
    $ vi $KAFKA_HOME/config/server.properties
        broker.id=1
        port=9092
        host.name=zk2
        advertised.host.name=zk2
        ...
        num.partitions=2
        ...
        zookeeper.connect=zk1:2181,zk2:2181,zk3:2181
    zk3:
    $ vi /etc/sysconfig/network
        NETWORKING=yes
        HOSTNAME=zk3
    $ vi $KAFKA_HOME/config/server.properties
        broker.id=2
        port=9092
        host.name=zk3
        advertised.host.name=zk3
        ...
        num.partitions=2
        ...
        zookeeper.connect=zk1:2181,zk2:2181,zk3:2181
    3) Start the ZooKeeper service; run on zk1, zk2, and zk3:
    $ zkServer.sh start
    4) Start the Kafka service; run on zk1, zk2, and zk3:
    $ kafka-server-start.sh $KAFKA_HOME/config/server.properties
    5) Create a topic (replication-factor = number of brokers):
    $ kafka-topics.sh --create --topic test --replication-factor 3 --partitions 2 --zookeeper zk1:2181
    6) On zk2, open a terminal and send messages to Kafka (zk2 acts as the producer):
    $ kafka-console-producer.sh --broker-list zk1:9092 --sync --topic test
    In the producer terminal, type: Hello Kafka
    7) On zk3, open a terminal to display the consumed messages (zk3 acts as the consumer):
    $ kafka-console-consumer.sh --zookeeper zk1:2181 --topic test --from-beginning
    The consumer terminal prints: Hello Kafka
    Project setup and development
    The project is built with Maven. Frankly, the Kafka Java client is rough to work with, and setting up the build environment runs into plenty of trouble. The pom.xml below is a recommended starting point; the versions of the individual dependencies must be mutually consistent. If the Kafka client version does not match the Kafka server version, you will hit many exceptions such as "broker id not exists", because the wire protocol between client and server changed when Kafka moved from 0.7 to 0.8 (note that the 2.8.x in artifact names such as kafka_2.8.2 is the Scala version, not the Kafka version).
      

    <dependencies>
        <dependency>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
            <version>1.2.14</version>
        </dependency>
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka_2.8.2</artifactId>
            <version>0.8.0</version>
            <exclusions>
                <exclusion>
                    <groupId>log4j</groupId>
                    <artifactId>log4j</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-library</artifactId>
            <version>2.8.2</version>
        </dependency>
        <dependency>
            <groupId>com.yammer.metrics</groupId>
            <artifactId>metrics-core</artifactId>
            <version>2.2.0</version>
        </dependency>
        <dependency>
            <groupId>com.101tec</groupId>
            <artifactId>zkclient</artifactId>
            <version>0.3</version>
        </dependency>
    </dependencies>

     
    Producer-side code
        1) kafka-producer.properties: place this file under the /resources directory (the name must match the default location field in the code below)
      

    #partitioner.class=
    ## the broker list may be a subset of the full kafka cluster, since the producer only needs it to fetch metadata
    ## although every broker can serve metadata, it is still recommended to list all brokers here
    ## this value can also be injected via spring
    ##metadata.broker.list=127.0.0.1:9092,127.0.0.1:9093
    ## sync or async; async is recommended
    producer.type=sync
    compression.codec=0
    serializer.class=kafka.serializer.StringEncoder
    ## effective only when producer.type=async
    #batch.num.messages=100

     
    2) KafkaProducerClient.java sample code
      

    import java.util.ArrayList;
    import java.util.Collection;
    import java.util.List;
    import java.util.Properties;

    import kafka.javaapi.producer.Producer;
    import kafka.producer.KeyedMessage;
    import kafka.producer.ProducerConfig;

    public class KafkaProducerClient {

        private Producer<String, String> inner;

        private String brokerList;// for metadata discovery, spring setter
        private String location = "kafka-producer.properties";// spring setter

        private String defaultTopic;// spring setter

        public void setBrokerList(String brokerList) {
            this.brokerList = brokerList;
        }

        public void setLocation(String location) {
            this.location = location;
        }

        public void setDefaultTopic(String defaultTopic) {
            this.defaultTopic = defaultTopic;
        }

        public KafkaProducerClient() {}

        public void init() throws Exception {
            Properties properties = new Properties();
            properties.load(Thread.currentThread().getContextClassLoader().getResourceAsStream(location));

            if (brokerList != null) {
                properties.put("metadata.broker.list", brokerList);
            }

            ProducerConfig config = new ProducerConfig(properties);
            inner = new Producer<String, String>(config);
        }

        public void send(String message) {
            send(defaultTopic, message);
        }

        public void send(Collection<String> messages) {
            send(defaultTopic, messages);
        }

        public void send(String topicName, String message) {
            if (topicName == null || message == null) {
                return;
            }
            KeyedMessage<String, String> km = new KeyedMessage<String, String>(topicName, message);
            inner.send(km);
        }

        public void send(String topicName, Collection<String> messages) {
            if (topicName == null || messages == null) {
                return;
            }
            if (messages.isEmpty()) {
                return;
            }
            List<KeyedMessage<String, String>> kms = new ArrayList<KeyedMessage<String, String>>();
            int i = 0;
            for (String entry : messages) {
                KeyedMessage<String, String> km = new KeyedMessage<String, String>(topicName, entry);
                kms.add(km);
                i++;
                if (i % 20 == 0) {// flush every 20 messages to bound the batch size
                    inner.send(kms);
                    kms.clear();
                }
            }

            if (!kms.isEmpty()) {
                inner.send(kms);
            }
        }

        public void close() {
            inner.close();
        }

        /**
         * @param args
         */
        public static void main(String[] args) {
            KafkaProducerClient producer = null;
            try {
                producer = new KafkaProducerClient();
                // producer.setBrokerList("");
                producer.init();// load the config and create the underlying Producer before sending
                int i = 0;
                while (true) {
                    producer.send("test-topic", "this is a sample" + i);
                    i++;
                    Thread.sleep(2000);
                }
            } catch (Exception e) {
                e.printStackTrace();
            } finally {
                if (producer != null) {
                    producer.close();
                }
            }
        }
    }
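    The flush-every-20 loop in send(String topicName, Collection<String> messages) is worth isolating. Below is a minimal, Kafka-free sketch of that batching pattern; the Batcher class, the BatchSink interface, and the batch size are illustrative assumptions, not part of the Kafka API:

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

// Minimal sketch of the producer's batching logic: buffer items and
// flush every `batchSize` items, plus a final flush for the remainder.
public class Batcher {

    // Hypothetical sink standing in for Producer.send(List<KeyedMessage>)
    public interface BatchSink<T> {
        void sendBatch(List<T> batch);
    }

    // Returns the number of flushes performed.
    public static <T> int send(Collection<T> items, int batchSize, BatchSink<T> sink) {
        List<T> buffer = new ArrayList<T>();
        int flushes = 0;
        for (T item : items) {
            buffer.add(item);
            if (buffer.size() == batchSize) {
                sink.sendBatch(new ArrayList<T>(buffer));
                buffer.clear();
                flushes++;
            }
        }
        if (!buffer.isEmpty()) { // trailing flush, exactly like the producer code above
            sink.sendBatch(new ArrayList<T>(buffer));
            flushes++;
        }
        return flushes;
    }

    public static void main(String[] args) {
        List<String> messages = new ArrayList<String>();
        for (int i = 0; i < 45; i++) messages.add("m" + i);
        final int[] delivered = {0};
        int flushes = send(messages, 20, new BatchSink<String>() {
            public void sendBatch(List<String> batch) {
                delivered[0] += batch.size();
            }
        });
        System.out.println(flushes + " flushes, " + delivered[0] + " messages");
    }
}
```

    Flushing bounded batches keeps memory use predictable while still amortizing per-send overhead; the trailing flush ensures no remainder is lost.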

      Consumer side
         1) kafka-consumer.properties: place this file under the /resources directory (the name must match the default location field in the code below)

    ## this value can be set here or injected via spring
    ##zookeeper.connect=127.0.0.1:2181,127.0.0.1:2182,127.0.0.1:2183
    # timeout in ms for connecting to zookeeper
    zookeeper.connectiontimeout.ms=1000000
    #consumer group id
    group.id=test-group
    #consumer timeout
    #consumer.timeout.ms=5000
    auto.commit.enable=true
    auto.commit.interval.ms=60000

     
    2) KafkaConsumerClient.java sample code
      

    package com.test.kafka;

    import java.nio.ByteBuffer;
    import java.nio.CharBuffer;
    import java.nio.charset.Charset;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.Properties;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    import kafka.consumer.Consumer;
    import kafka.consumer.ConsumerConfig;
    import kafka.consumer.ConsumerIterator;
    import kafka.consumer.KafkaStream;
    import kafka.javaapi.consumer.ConsumerConnector;
    import kafka.message.Message;
    import kafka.message.MessageAndMetadata;

    public class KafkaConsumerClient {

        private String groupid; // can be set by spring
        private String zkConnect;// can be set by spring
        private String location = "kafka-consumer.properties";// config file location
        private String topic;
        private int partitionsNum = 1;
        private MessageExecutor executor; // message listener
        private ExecutorService threadPool;

        private ConsumerConnector connector;

        private Charset charset = Charset.forName("utf8");

        public void setGroupid(String groupid) {
            this.groupid = groupid;
        }

        public void setZkConnect(String zkConnect) {
            this.zkConnect = zkConnect;
        }

        public void setLocation(String location) {
            this.location = location;
        }

        public void setTopic(String topic) {
            this.topic = topic;
        }

        public void setPartitionsNum(int partitionsNum) {
            this.partitionsNum = partitionsNum;
        }

        public void setExecutor(MessageExecutor executor) {
            this.executor = executor;
        }

        public KafkaConsumerClient() {}

        // init consumer, and start connection and listener
        public void init() throws Exception {
            if (executor == null) {
                throw new RuntimeException("KafkaConsumer: executor can't be null!");
            }
            Properties properties = new Properties();
            properties.load(Thread.currentThread().getContextClassLoader().getResourceAsStream(location));

            if (groupid != null) {
                properties.put("group.id", groupid);// the property name is "group.id", not "groupid"
            }
            if (zkConnect != null) {
                properties.put("zookeeper.connect", zkConnect);
            }
            ConsumerConfig config = new ConsumerConfig(properties);

            connector = Consumer.createJavaConsumerConnector(config);
            Map<String, Integer> topics = new HashMap<String, Integer>();
            topics.put(topic, partitionsNum);
            Map<String, List<KafkaStream<byte[], byte[]>>> streams = connector.createMessageStreams(topics);
            List<KafkaStream<byte[], byte[]>> partitions = streams.get(topic);
            threadPool = Executors.newFixedThreadPool(partitionsNum * 2);

            // start one runner per stream
            for (KafkaStream<byte[], byte[]> partition : partitions) {
                threadPool.execute(new MessageRunner(partition));
            }
        }

        public void close() {
            try {
                threadPool.shutdownNow();
            } catch (Exception e) {
                //
            } finally {
                connector.shutdown();
            }
        }

        class MessageRunner implements Runnable {
            private KafkaStream<byte[], byte[]> partition;

            MessageRunner(KafkaStream<byte[], byte[]> partition) {
                this.partition = partition;
            }

            public void run() {
                ConsumerIterator<byte[], byte[]> it = partition.iterator();
                while (it.hasNext()) {
                    // connector.commitOffsets(); commit offsets manually when auto.commit.enable=false
                    MessageAndMetadata<byte[], byte[]> item = it.next();
                    try {
                        executor.execute(new String(item.message(), charset));// UTF-8; beware of exceptions thrown here
                    } catch (Exception e) {
                        //
                    }
                }
            }

            public String getContent(Message message) {
                ByteBuffer buffer = message.payload();
                if (buffer.remaining() == 0) {
                    return null;
                }
                CharBuffer charBuffer = charset.decode(buffer);
                return charBuffer.toString();
            }
        }

        public static interface MessageExecutor {

            public void execute(String message);
        }

        /**
         * @param args
         */
        public static void main(String[] args) {
            KafkaConsumerClient consumer = null;
            try {
                MessageExecutor executor = new MessageExecutor() {

                    public void execute(String message) {
                        System.out.println(message);
                    }
                };
                consumer = new KafkaConsumerClient();

                consumer.setTopic("test-topic");
                consumer.setPartitionsNum(2);
                consumer.setExecutor(executor);
                consumer.init();
                Thread.sleep(Long.MAX_VALUE);// keep consuming; otherwise the finally block shuts the consumer down immediately
            } catch (Exception e) {
                e.printStackTrace();
            } finally {
                if (consumer != null) {
                    consumer.close();
                }
            }
        }
    }
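    The getContent() helper above decodes a message payload with Charset.decode(ByteBuffer). Here is a self-contained sketch of that decode path with no Kafka types involved; the PayloadDecoder class name and the sample payload are assumptions for illustration:

```java
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.Charset;

// Stand-alone version of the consumer's getContent(): decode a message
// payload (a ByteBuffer) into a String using an explicit charset.
public class PayloadDecoder {

    private static final Charset UTF8 = Charset.forName("UTF-8");

    public static String decode(ByteBuffer payload) {
        if (payload == null || payload.remaining() == 0) {
            return null; // empty payload, as in getContent()
        }
        CharBuffer chars = UTF8.decode(payload); // consumes the buffer
        return chars.toString();
    }

    public static void main(String[] args) {
        ByteBuffer payload = ByteBuffer.wrap("Hello Kafka".getBytes(UTF8));
        System.out.println(decode(payload)); // prints: Hello Kafka
    }
}
```

    Note that Charset.decode() advances the buffer's position to its limit, so decoding the same ByteBuffer twice yields an empty string the second time; always hold on to the decoded String rather than the buffer.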

     
    Note that the KafkaConsumerClient above pays little attention to error handling; you must decide what should happen when MessageExecutor.execute() throws an exception.
    When testing, start the consumer first and only then the producer, so that the newest messages can be observed in real time.
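    One way to address the exception case is to wrap the callback so that a failing execute() is retried and, if it keeps failing, the message is kept rather than silently dropped. This is a sketch under assumptions: the SafeMessageExecutor wrapper, its retry count, and its dead-letter list are not part of the original code, and the nested MessageExecutor interface merely mirrors the one declared in KafkaConsumerClient:

```java
import java.util.ArrayList;
import java.util.List;

// Defensive wrapper around the MessageExecutor callback: a throwing
// execute() is retried a fixed number of times, then the message is
// appended to a dead-letter list instead of being lost.
public class SafeMessageExecutor {

    public interface MessageExecutor { // same shape as the interface above
        void execute(String message);
    }

    private final MessageExecutor delegate;
    private final int maxAttempts;
    private final List<String> deadLetters = new ArrayList<String>();

    public SafeMessageExecutor(MessageExecutor delegate, int maxAttempts) {
        this.delegate = delegate;
        this.maxAttempts = maxAttempts;
    }

    public void execute(String message) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                delegate.execute(message);
                return; // success, nothing more to do
            } catch (Exception e) {
                System.err.println("attempt " + attempt + " failed: " + e.getMessage());
            }
        }
        deadLetters.add(message); // keep the message for later inspection
    }

    public List<String> getDeadLetters() {
        return deadLetters;
    }

    public static void main(String[] args) {
        MessageExecutor failing = new MessageExecutor() {
            public void execute(String message) { throw new RuntimeException("handler failed"); }
        };
        SafeMessageExecutor safe = new SafeMessageExecutor(failing, 2);
        safe.execute("Hello Kafka");
        System.out.println("dead letters: " + safe.getDeadLetters());
    }
}
```

    In MessageRunner.run() above, the bare try/catch would then be replaced by a call to this wrapper; whether to retry, skip, or stop consuming on failure is an application-level decision.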

  • Original article: https://www.cnblogs.com/tonychai/p/4528372.html