• kafka producer consumer demo (Part 3)


      In the previous posts we set up the cluster and configured the Kafka brokers. Now let's use some code to have a client send messages to Kafka and a consumer read them back. Don't worry about

    the various configuration parameters yet; just get a feel for the overall flow. We will study and explore each of them in depth later.

    1. Maven dependency

    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka-clients</artifactId>
        <version>0.11.0.1</version>
    </dependency>

    2. Create the topic

    bin/kafka-topics.sh --create --topic hello_kafka --partitions 3 --replication-factor 3 --zookeeper 192.168.1.11:2181/kafka 
    #Create the hello_kafka topic under the /kafka chroot in the ZooKeeper ensemble, with 3 partitions and 3 replicas
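    With 3 partitions, Kafka's default partitioner decides which partition each record lands in: records with a key are assigned by hashing the key, so the same key always goes to the same partition. The following is a simplified, stdlib-only sketch of that keyed path. Note the real default partitioner uses murmur2 over the serialized key bytes, not the plain polynomial hash used here, so actual assignments will differ; this only illustrates the hash-modulo idea.

    ```java
    import java.nio.charset.StandardCharsets;

    public class PartitionSketch {
        // Simplified stand-in for the default partitioner's keyed path:
        // hash the key bytes, then take the result modulo the partition count.
        static int partitionFor(String key, int numPartitions) {
            byte[] keyBytes = key.getBytes(StandardCharsets.UTF_8);
            int hash = 0;
            for (byte b : keyBytes) {
                hash = 31 * hash + b;          // plain polynomial hash, NOT Kafka's murmur2
            }
            return (hash & 0x7fffffff) % numPartitions;  // mask the sign bit so the index is non-negative
        }

        public static void main(String[] args) {
            // The same key always maps to the same partition, which is what
            // gives Kafka per-key ordering within a topic.
            System.out.println(partitionFor("user-42", 3));
            System.out.println(partitionFor("user-42", 3));
        }
    }
    ```

    Because the mapping is deterministic, all records sharing a key are appended to one partition in send order; keyless records are instead spread across partitions.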

    3. Producer code

    import java.util.Properties;
    import java.util.concurrent.ExecutionException;
    import java.util.concurrent.Future;

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.clients.producer.RecordMetadata;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class ProducerDemo {
        
        private static final Logger LOG = LoggerFactory.getLogger(ProducerDemo.class);
        
        public static void main(String[] args) throws InterruptedException, ExecutionException {
            //1. Load the configuration
            Properties prop = loadProperties();
            
            //2. Create the producer
            KafkaProducer<String,String> producer = new KafkaProducer<>(prop);
            
            String sendContent = "hello_kafka";
            ProducerRecord<String,String> record = new ProducerRecord<>("hello_kafka", sendContent);
            
            //3. send() is asynchronous; get() blocks until the broker acknowledges the record
            Future<RecordMetadata> future = producer.send(record);
            
            RecordMetadata recordMetadata = future.get();
            
            LOG.info("sent message {}, offset is {}", sendContent, recordMetadata.offset());
            
            //4. Flush buffered records and release network resources
            producer.close();
        }
        
        //Build the producer configuration
        public static Properties loadProperties() {
            Properties prop = new Properties();
            prop.put("bootstrap.servers", "192.168.1.11:9092,192.168.1.12:9092,192.168.1.13:9092");
            prop.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            prop.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            
            return prop;
        }
    }

    // 2019-08-05 23:17:22|INFO |com.zzh.hello.ProducerDemo 39|sent message hello_kafka, offset is 1
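    The broker list and serializer classes above are hard-coded in loadProperties(); in practice they are usually externalized to a file. A small sketch of reading the same keys with java.util.Properties follows. The producer.properties file name is an assumption for illustration, not something from this post; a StringReader stands in for the file so the sketch is self-contained.

    ```java
    import java.io.IOException;
    import java.io.Reader;
    import java.io.StringReader;
    import java.util.Properties;

    public class ConfigLoader {
        // Parse producer settings from any Reader (a FileReader over
        // producer.properties in real use; a StringReader here).
        public static Properties load(Reader source) throws IOException {
            Properties prop = new Properties();
            prop.load(source);
            return prop;
        }

        public static void main(String[] args) throws IOException {
            String fileContent =
                    "bootstrap.servers=192.168.1.11:9092,192.168.1.12:9092,192.168.1.13:9092\n"
                  + "key.serializer=org.apache.kafka.common.serialization.StringSerializer\n"
                  + "value.serializer=org.apache.kafka.common.serialization.StringSerializer\n";
            Properties prop = load(new StringReader(fileContent));
            System.out.println(prop.getProperty("bootstrap.servers"));
        }
    }
    ```

    The resulting Properties object can be passed straight to the KafkaProducer constructor, exactly like the hand-built one in loadProperties().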

    4. Consumer code

    import java.util.Collections;
    import java.util.Properties;

    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class ConsumerDemo {
        
        private static final Logger LOG = LoggerFactory.getLogger(ConsumerDemo.class);
        
        public static void main(String[] args) {
            //1. Load the configuration
            Properties prop = loadProperties();
            
            //2. Create the consumer
            KafkaConsumer<String, String> consumer = new KafkaConsumer<>(prop);
            
            //3. Subscribe to the topic
            consumer.subscribe(Collections.singletonList("hello_kafka"));
            
            //4. Poll for records; poll(long) is the 0.11 API, poll(Duration) only exists from 2.0 on
            for(;;) {
                
                ConsumerRecords<String, String> records = consumer.poll(100);
                records.forEach(item ->
                    LOG.info("===============> offset:{}, value:{}", item.offset(), item.value())
                );
            }
        }
        
        private static Properties loadProperties() {
            Properties prop = new Properties();
            prop.put("bootstrap.servers", "192.168.1.11:9092,192.168.1.12:9092,192.168.1.13:9092");
            prop.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            prop.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            prop.put("group.id", "hello_2");
            prop.put("client.id", "demo-consumer-client");
            prop.put("auto.offset.reset", "earliest");        // earliest or latest
            
            return prop;
        }
    }
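    The for(;;) loop above never exits, so the demo process has to be killed. A common pattern is a running flag flipped from a JVM shutdown hook, which lets the loop fall through to cleanup code; with a real KafkaConsumer you would also call consumer.wakeup() to interrupt a blocked poll and then consumer.close() after the loop. The sketch below shows only the stdlib loop-control part, with the Kafka calls indicated in comments.

    ```java
    import java.util.concurrent.atomic.AtomicBoolean;

    public class PollLoopSketch {
        private final AtomicBoolean running = new AtomicBoolean(true);
        private int polls = 0;

        // Stand-in for the consumer loop: keeps "polling" until stop() is
        // called, then falls through to the cleanup point.
        public int run(int stopAfter) {
            while (running.get()) {
                polls++;                       // here: consumer.poll(100) + record handling
                if (polls == stopAfter) {
                    stop();                    // in real code a shutdown hook calls this
                }
            }
            return polls;                      // cleanup point: consumer.close() would go here
        }

        public void stop() {
            running.set(false);                // with a real consumer, also consumer.wakeup()
        }

        public static void main(String[] args) {
            int polls = new PollLoopSketch().run(5);
            System.out.println("exited cleanly after " + polls + " polls");
        }
    }
    ```

    An AtomicBoolean is used because the shutdown hook runs on a different thread than the poll loop, so the flag change must be visible across threads.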
  • Original post: https://www.cnblogs.com/MrRightZhao/p/11161334.html