• Kafka → Using Kafka Streams (1)


      A simple introduction to Kafka Streams, following the examples from the official documentation.

    Basic Kafka usage

    1. Start the Kafka server

    huhx@gohuhx:~/server/kafka_2.11-1.1.0$ bin/zookeeper-server-start.sh config/zookeeper.properties
    huhx@gohuhx:~/server/kafka_2.11-1.1.0$ bin/kafka-server-start.sh config/server.properties

    2. Create two topics: streams-plaintext-input and streams-wordcount-output

    huhx@gohuhx:~/server/kafka_2.11-1.1.0$ bin/kafka-topics.sh --create \
        --zookeeper localhost:2181 \
        --replication-factor 1 \
        --partitions 1 \
        --topic streams-plaintext-input
    
    huhx@gohuhx:~/server/kafka_2.11-1.1.0$ bin/kafka-topics.sh --create \
        --zookeeper localhost:2181 \
        --replication-factor 1 \
        --partitions 1 \
        --topic streams-wordcount-output \
        --config cleanup.policy=compact

    You can run bin/kafka-topics.sh --zookeeper localhost:2181 --describe to view the descriptions of the created topics.
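
    The topics can also be created programmatically. Below is a minimal sketch using Kafka's AdminClient API, assuming a broker on localhost:9092; the class name CreateTopics is illustrative and not part of the original example.

    package com.linux.huhx.stream;

    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;

    import java.util.Arrays;
    import java.util.Collections;
    import java.util.Properties;

    public class CreateTopics {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address

            try (AdminClient admin = AdminClient.create(props)) {
                // One partition, replication factor 1, matching the CLI commands above.
                NewTopic input = new NewTopic("streams-plaintext-input", 1, (short) 1);
                NewTopic output = new NewTopic("streams-wordcount-output", 1, (short) 1)
                        .configs(Collections.singletonMap("cleanup.policy", "compact"));

                // Block until the broker has acknowledged both topics.
                admin.createTopics(Arrays.asList(input, output)).all().get();
            }
        }
    }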

    3. Run the WordCount demo that ships with Kafka

    huhx@gohuhx:~/server/kafka_2.11-1.1.0$ bin/kafka-run-class.sh org.apache.kafka.streams.examples.wordcount.WordCountDemo

    Start a console producer to write some records, and a console consumer to receive the processed messages.

    huhx@gohuhx:~/server/kafka_2.11-1.1.0$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic streams-plaintext-input
    
    huhx@gohuhx:~/server/kafka_2.11-1.1.0$ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
        --topic streams-wordcount-output \
        --from-beginning \
        --formatter kafka.tools.DefaultMessageFormatter \
        --property print.key=true \
        --property print.value=true \
        --property key.deserializer=org.apache.kafka.common.serialization.StringDeserializer \
        --property value.deserializer=org.apache.kafka.common.serialization.LongDeserializer

    Type some data into the kafka-console-producer.sh window:

    all streams lead to kafka

    You should see output like the following in the kafka-console-consumer.sh window:

    all     1
    streams 1
    lead    1
    to      1
    kafka   1

    Keep entering data in the producer window and the corresponding output appears in the consumer window. Press Ctrl-C to stop the programs.
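
    For reference, the topology that WordCountDemo builds can be expressed with the Streams DSL roughly as follows. This is a sketch modeled on the official documentation rather than the exact bundled source; the application id, store name, and broker address (localhost:9092) are assumptions.

    package com.linux.huhx.stream;

    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.common.utils.Bytes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.KStream;
    import org.apache.kafka.streams.kstream.Materialized;
    import org.apache.kafka.streams.kstream.Produced;
    import org.apache.kafka.streams.state.KeyValueStore;

    import java.util.Arrays;
    import java.util.Properties;

    public class WordCountSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "streams-wordcount-sketch"); // assumed application id
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");        // assumed broker address
            props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
            props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

            final StreamsBuilder builder = new StreamsBuilder();
            KStream<String, String> source = builder.stream("streams-plaintext-input");

            source.flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+"))) // split each line into words
                  .groupBy((key, word) -> word)                                             // re-key each record by the word
                  .count(Materialized.<String, Long, KeyValueStore<Bytes, byte[]>>as("counts-store")) // assumed store name
                  .toStream()
                  .to("streams-wordcount-output", Produced.with(Serdes.String(), Serdes.Long()));

            final KafkaStreams streams = new KafkaStreams(builder.build(), props);
            streams.start();
            Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
        }
    }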

    4. A Pipe example

    package com.linux.huhx.stream;
    
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.Topology;
    
    import java.util.Properties;
    import java.util.concurrent.CountDownLatch;
    
    /**
     * user: huxhu
     * date: 2018/8/12 8:59 PM
     **/
    public class PipeStream {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "streams-pipe");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.1.101:9092");
            props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
            props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
    
            final StreamsBuilder builder = new StreamsBuilder();
    
            builder.stream("streams-plaintext-input").to("streams-pipe-output");
    
            final Topology topology = builder.build();
    
            final KafkaStreams streams = new KafkaStreams(topology, props);
            final CountDownLatch latch = new CountDownLatch(1);
    
            // attach shutdown handler to catch control-c
            Runtime.getRuntime().addShutdownHook(new Thread("streams-shutdown-hook") {
                @Override
                public void run() {
                    streams.close();
                    latch.countDown();
                }
            });
    
            try {
                streams.start();
                latch.await();
            } catch (Throwable e) {
                System.exit(1);
            }
            System.exit(0);
        }
    }

    Run the following commands to build and execute the program:

    mvn clean package
    mvn exec:java -Dexec.mainClass=com.linux.huhx.stream.PipeStream

    Output like the following indicates the program started correctly. Note that it does not exit on its own; press Ctrl-C to terminate it.

    [WARNING]
    [WARNING] Some problems were encountered while building the effective settings
    [WARNING] Unrecognised tag: 'snapshotPolicy' (position: START_TAG seen ...</layout>
              <snapshotPolicy>... @267:27)  @ /usr/local/Cellar/maven/3.5.4/libexec/conf/settings.xml, line 267, column 27
    [WARNING] Unrecognised tag: 'snapshotPolicy' (position: START_TAG seen ...
              <snapshotPolicy>... @203:27)  @ /Users/huxhu/.m2/settings.xml, line 203, column 27
    [WARNING]
    [INFO] Scanning for projects...
    [WARNING]
    [WARNING] Some problems were encountered while building the effective model for com.linux.huhx:KafkaLearn:jar:1.0-SNAPSHOT
    [WARNING] 'build.plugins.plugin.version' for org.apache.maven.plugins:maven-compiler-plugin is missing. @ line 42, column 21
    [WARNING]
    [WARNING] It is highly recommended to fix these problems because they threaten the stability of your build.
    [WARNING]
    [WARNING] For this reason, future Maven versions might no longer support building such malformed projects.
    [WARNING]
    [INFO]
    [INFO] ---------------------< com.linux.huhx:KafkaLearn >----------------------
    [INFO] Building KafkaLearn 1.0-SNAPSHOT
    [INFO] --------------------------------[ jar ]---------------------------------
    [INFO]
    [INFO] --- exec-maven-plugin:1.6.0:java (default-cli) @ KafkaLearn ---

    We need to subscribe to the streams-pipe-output topic declared above.

    huhx@gohuhx:~/server/kafka_2.11-1.1.0$ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
        --topic streams-pipe-output \
        --from-beginning

    Type data into the streams-plaintext-input producer window and the corresponding output appears in the streams-pipe-output consumer window.
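
    Since the pipe topology simply forwards each record unchanged, an input line such as "all streams lead to kafka" (illustrative) should appear verbatim in the consumer window.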

    5. A LineSplit example

    package com.linux.huhx.stream;
    
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.Topology;
    import org.apache.kafka.streams.kstream.KStream;
    
    import java.util.Arrays;
    import java.util.Properties;
    import java.util.concurrent.CountDownLatch;
    
    public class LineSplit {
     
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "streams-linesplit");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.1.101:9092");
            props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
            props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
     
            final StreamsBuilder builder = new StreamsBuilder();
     
            KStream<String, String> source = builder.stream("streams-plaintext-input");
            source.flatMapValues(value -> Arrays.asList(value.split("\\W+"))).to("streams-linesplit-output");
     
            final Topology topology = builder.build();
            final KafkaStreams streams = new KafkaStreams(topology, props);
            final CountDownLatch latch = new CountDownLatch(1);
    
            // attach shutdown handler to catch control-c
            Runtime.getRuntime().addShutdownHook(new Thread("streams-shutdown-hook") {
                @Override
                public void run() {
                    streams.close();
                    latch.countDown();
                }
            });
    
            try {
                streams.start();
                latch.await();
            } catch (Throwable e) {
                System.exit(1);
            }
            System.exit(0);
        }
    }

    Run the program directly in IDEA, then subscribe to the topic declared above:

    huhx@gohuhx:~/server/kafka_2.11-1.1.0$ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
        --topic streams-linesplit-output \
        --from-beginning

    In the producer and consumer windows you can see the corresponding input and output; an illustrative run is sketched below.
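
    For example (illustrative input and output, assuming the setup above), typing a line into the producer window:

    all streams lead to kafka

    should yield one record per word in the consumer window, since each line is split on non-word characters:

    all
    streams
    lead
    to
    kafka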
