• 8. Spring-Kafka Receiving Messages


    Record Listeners

    The @KafkaListener annotation provides a mechanism for simple POJO listeners. The following example shows how to use it:

    public class Listener {
    
        @KafkaListener(id = "foo", topics = "myTopic", clientIdPrefix = "myClientId")
        public void listen(String data) {
            ...
        }
    
    }
    

    This mechanism requires an @EnableKafka annotation on one of your @Configuration classes and a listener container factory, which is used to configure the underlying ConcurrentMessageListenerContainer. By default, a bean with name kafkaListenerContainerFactory is expected. The following example shows how to use ConcurrentMessageListenerContainer:

    @Configuration
    @EnableKafka
    public class KafkaConfig {
    
        @Bean
        KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<Integer, String>>
                            kafkaListenerContainerFactory() {
            ConcurrentKafkaListenerContainerFactory<Integer, String> factory =
                                    new ConcurrentKafkaListenerContainerFactory<>();
            factory.setConsumerFactory(consumerFactory());
            factory.setConcurrency(3);
            factory.getContainerProperties().setPollTimeout(3000);
            return factory;
        }
    
        @Bean
        public ConsumerFactory<Integer, String> consumerFactory() {
            return new DefaultKafkaConsumerFactory<>(consumerConfigs());
        }
    
        @Bean
        public Map<String, Object> consumerConfigs() {
            Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, embeddedKafka.getBrokersAsString());
            ...
            return props;
        }
    }
    

    Notice that, to set container properties, you must use the getContainerProperties() method on the factory.
    It is used as a template for the actual properties injected into the container.

    Starting with version 2.1.1, you can now set the client.id property for consumers created by the annotation.
    The clientIdPrefix is suffixed with -n, where n is an integer representing the container number when using concurrency.
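
    As a rough sketch (the listener id below is a placeholder, and a factory concurrency of 3 is assumed), such a listener would get consumers with client.id values myClientId-0, myClientId-1, and myClientId-2:

    @KafkaListener(id = "prefixed", topics = "myTopic", clientIdPrefix = "myClientId")
    public void listen(String data) {
        // with factory.setConcurrency(3), the three underlying consumers are assigned
        // client.id myClientId-0, myClientId-1, and myClientId-2
        ...
    }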

    Starting with version 2.2, you can now override the container factory’s concurrency and autoStartup properties by using properties on the annotation itself.
    The properties can be simple values, property placeholders, or SpEL expressions. The following example shows how to do so:

    @KafkaListener(id = "myListener", topics = "myTopic",
            autoStartup = "${listen.auto.start:true}", concurrency = "${listen.concurrency:3}")
    public void listen(String data) {
        ...
    }
    

    You can also configure POJO listeners with explicit topics and partitions (and, optionally, their initial offsets).
    The following example shows how to do so:

    @KafkaListener(id = "thing2", topicPartitions =
            { @TopicPartition(topic = "topic1", partitions = { "0", "1" }),
              @TopicPartition(topic = "topic2", partitions = "0",
                 partitionOffsets = @PartitionOffset(partition = "1", initialOffset = "100"))
            })
    public void listen(ConsumerRecord<?, ?> record) {
        ...
    }
    

    You can specify each partition in the partitions or partitionOffsets attribute but not both.

    When using manual AckMode, you can also provide the listener with the Acknowledgment.
    The following example also shows how to use a different container factory.

    @KafkaListener(id = "cat", topics = "myTopic",
              containerFactory = "kafkaManualAckListenerContainerFactory")
    public void listen(String data, Acknowledgment ack) {
        ...
        ack.acknowledge();
    }
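
    The kafkaManualAckListenerContainerFactory bean is not shown here; a minimal sketch (assuming the consumerFactory() bean from the earlier configuration) sets a manual AckMode on the container properties:

    @Bean
    public KafkaListenerContainerFactory<?> kafkaManualAckListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<Integer, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        // MANUAL (or MANUAL_IMMEDIATE) AckMode makes the Acknowledgment argument available
        factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL_IMMEDIATE);
        return factory;
    }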
    

    Finally, metadata about the message is available from message headers.
    You can use the following header names to retrieve the headers of the message:

    KafkaHeaders.RECEIVED_MESSAGE_KEY
    
    KafkaHeaders.RECEIVED_TOPIC
    
    KafkaHeaders.RECEIVED_PARTITION_ID
    
    KafkaHeaders.RECEIVED_TIMESTAMP
    
    KafkaHeaders.TIMESTAMP_TYPE
    

    The following example shows how to use the headers:

    @KafkaListener(id = "qux", topicPattern = "myTopic1")
    public void listen(@Payload String foo,
            @Header(KafkaHeaders.RECEIVED_MESSAGE_KEY) Integer key,
            @Header(KafkaHeaders.RECEIVED_PARTITION_ID) int partition,
            @Header(KafkaHeaders.RECEIVED_TOPIC) String topic,
            @Header(KafkaHeaders.RECEIVED_TIMESTAMP) long ts
            ) {
        ...
    }
    
    Batch Listeners

    Starting with version 1.1, you can configure @KafkaListener methods to receive the entire batch of consumer records received from the consumer poll.
    To configure the listener container factory to create batch listeners, you can set the batchListener property. The following example shows how to do so:

    @Bean
    public KafkaListenerContainerFactory<?> batchFactory() {
        ConcurrentKafkaListenerContainerFactory<Integer, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        factory.setBatchListener(true);  // the key setting: this factory creates batch listeners
        return factory;
    }

    The following example shows how to receive a list of payloads:
    
    @KafkaListener(id = "list", topics = "myTopic", containerFactory = "batchFactory")
    public void listen(List<String> list) {
        ...
    }

    The topic, partition, offset, and so on are available in headers that parallel the payloads. The following example shows how to use the headers:
    
    @KafkaListener(id = "list", topics = "myTopic", containerFactory = "batchFactory")
    public void listen(List<String> list,
            @Header(KafkaHeaders.RECEIVED_MESSAGE_KEY) List<Integer> keys,
            @Header(KafkaHeaders.RECEIVED_PARTITION_ID) List<Integer> partitions,
            @Header(KafkaHeaders.RECEIVED_TOPIC) List<String> topics,
            @Header(KafkaHeaders.OFFSET) List<Long> offsets) {
        ...
    }
    

    Alternatively, you can receive a List of Message objects with each offset and other details in each message, but it must be the only parameter (aside from optional Acknowledgment, when using manual commits, and/or Consumer parameters) defined on the method.
    The following example shows how to do so:

    @KafkaListener(id = "listMsg", topics = "myTopic", containerFactory = "batchFactory")
    public void listen14(List<Message<?>> list) {
        ...
    }
    
    @KafkaListener(id = "listMsgAck", topics = "myTopic", containerFactory = "batchFactory")
    public void listen15(List<Message<?>> list, Acknowledgment ack) {
        ...
    }
    
    @KafkaListener(id = "listMsgAckConsumer", topics = "myTopic", containerFactory = "batchFactory")
    public void listen16(List<Message<?>> list, Acknowledgment ack, Consumer<?, ?> consumer) {
        ...
    }
    

    No conversion is performed on the payloads in this case.

    If the BatchMessagingMessageConverter is configured with a RecordMessageConverter, you can also add a generic type to the Message parameter and the payloads are converted. See Payload Conversion with Batch Listeners for more information.
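
    As a hedged sketch of that setup (the JSON converter choice is illustrative, not mandated by the original), the batch factory can be given a BatchMessagingMessageConverter that delegates to a StringJsonMessageConverter, so a listener can declare List<Message<SomeType>> and receive converted payloads:

    @Bean
    public KafkaListenerContainerFactory<?> batchJsonFactory() {
        ConcurrentKafkaListenerContainerFactory<Integer, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        factory.setBatchListener(true);
        // the delegate record converter performs per-record payload conversion
        factory.setMessageConverter(
                new BatchMessagingMessageConverter(new StringJsonMessageConverter()));
        return factory;
    }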

    You can also receive a list of ConsumerRecord objects, but it must be the only parameter (aside from optional Acknowledgment, when using manual commits, and/or Consumer parameters) defined on the method. The following example shows how to do so:

    @KafkaListener(id = "listCRs", topics = "myTopic", containerFactory = "batchFactory")
    public void listen(List<ConsumerRecord<Integer, String>> list) {
        ...
    }
    
    @KafkaListener(id = "listCRsAck", topics = "myTopic", containerFactory = "batchFactory")
    public void listen(List<ConsumerRecord<Integer, String>> list, Acknowledgment ack) {
        ...
    }
    

    Starting with version 2.2, the listener can receive the complete ConsumerRecords object returned by the poll() method,
    letting the listener access additional methods, such as partitions() (which returns the TopicPartition instances in the list) and records(TopicPartition) (which gets selective records).
    Again, this must be the only parameter (aside from optional Acknowledgment, when using manual commits or Consumer parameters) on the method.
    The following example shows how to do so:

    @KafkaListener(id = "pollResults", topics = "myTopic", containerFactory = "batchFactory")
    public void pollResults(ConsumerRecords<?, ?> records) {
        ...
    }
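
    To illustrate the partitions() and records(TopicPartition) accessors mentioned above, here is a sketch of a fuller listener body (the listener id and the processing are placeholders):

    @KafkaListener(id = "pollResultsDetail", topics = "myTopic", containerFactory = "batchFactory")
    public void pollResultsDetail(ConsumerRecords<?, ?> records) {
        for (TopicPartition partition : records.partitions()) {
            // records(TopicPartition) returns only the records fetched from that partition
            for (ConsumerRecord<?, ?> record : records.records(partition)) {
                // process record.key(), record.value(), record.offset(), ...
            }
        }
    }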
    

    If the container factory has a RecordFilterStrategy configured, it is ignored for ConsumerRecords listeners,
    with a WARN log message emitted. Records can only be filtered with a batch listener if the <List<?>> form of listener is used.
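
    As a minimal sketch (the filter predicate is illustrative), a batch factory with a RecordFilterStrategy removes matching records from the List before the listener is invoked:

    @Bean
    public KafkaListenerContainerFactory<?> filteringBatchFactory() {
        ConcurrentKafkaListenerContainerFactory<Integer, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        factory.setBatchListener(true);
        // returning true discards the record; this applies only to List<?> batch listeners
        factory.setRecordFilterStrategy(record -> record.value().contains("ignore"));
        return factory;
    }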

    Annotation Properties

    Starting with version 2.0, the id property (if present) is used as the Kafka consumer group.id property,
    overriding the configured property in the consumer factory, if present.
    You can also set groupId explicitly or set idIsGroup to false to restore the previous behavior of using the consumer factory group.id.
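
    For example (the ids, group names, and topic are placeholders), the following declarations illustrate the id used as the group, an explicit groupId, and idIsGroup = false falling back to the consumer factory's group.id:

    @KafkaListener(id = "idAsGroup", topics = "myTopic")  // group.id = idAsGroup
    public void listen1(String data) {
        ...
    }

    @KafkaListener(id = "two", groupId = "explicitGroup", topics = "myTopic")  // group.id = explicitGroup
    public void listen2(String data) {
        ...
    }

    @KafkaListener(id = "three", idIsGroup = false, topics = "myTopic")  // group.id from the consumer factory
    public void listen3(String data) {
        ...
    }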

    You can use property placeholders or SpEL expressions within most annotation properties, as the following example shows:

    @KafkaListener(topics = "${some.property}")
    
    @KafkaListener(topics = "#{someBean.someProperty}",
        groupId = "#{someBean.someProperty}.group")
    

    Starting with version 2.1.2, the SpEL expressions support a special token: __listener.
    It is a pseudo bean name that represents the current bean instance within which this annotation exists.

    Consider the following example:

    @Bean
    public Listener listener1() {
        return new Listener("topic1");
    }
    
    @Bean
    public Listener listener2() {
        return new Listener("topic2");
    }

    Given the beans in the previous example, we can then use the following:
    
    public class Listener {
    
        private final String topic;
    
        public Listener(String topic) {
            this.topic = topic;
        }
    
        @KafkaListener(topics = "#{__listener.topic}",
            groupId = "#{__listener.topic}.group")
        public void listen(...) {
            ...
        }
    
        public String getTopic() {
            return this.topic;
        }
    
    }
    

    In the unlikely event that you have an actual bean called __listener,
    you can change the expression token by using the beanRef attribute. The following example shows how to do so:

    @KafkaListener(beanRef = "__x", topics = "#{__x.topic}",
        groupId = "#{__x.topic}.group")
    

    Starting with version 2.2.4, you can specify Kafka consumer properties directly on the annotation; these override any properties with the same name configured in the consumer factory. You cannot specify the group.id and client.id properties this way; they are ignored. Use the groupId and clientIdPrefix annotation properties for those.

    The properties are specified as individual strings with the normal Java Properties file format: foo:bar, foo=bar, or foo bar.

    @KafkaListener(topics = "myTopic", groupId="group", properties= {
        "max.poll.interval.ms:60000",
        ConsumerConfig.MAX_POLL_RECORDS_CONFIG + "=100"
    })
    
• Original article: https://www.cnblogs.com/xidianzxm/p/10736474.html