• Reading and Writing Kafka with Python


    This post shows how to read from and write to Kafka in Python, covering both the producer and the consumer.

    The examples below use the kafka-python client.

    Producer

    Crawlers mostly sit on the producing side. After a message is sent, it is best to record which partition it landed in and at what offset; in many situations these records help you locate problems quickly. So attach callbacks to send(), one for success and one for failure.

    # -*- coding: utf-8 -*-
    
    '''
    Callbacks also preserve per-partition ordering: if message a is sent before message b to the same partition, a's callback fires before b's.
    '''
    
    import json
    from kafka import KafkaProducer
    
    topic = 'demo'
    
    
    def on_send_success(record_metadata):
        print(record_metadata.topic)
        print(record_metadata.partition)
        print(record_metadata.offset)
    
    
    def on_send_error(excp):
        print('I am an errback: {}'.format(excp))
    
    
    def main():
        producer = KafkaProducer(
            bootstrap_servers='localhost:9092'
        )
        # the error handler must be registered with add_errback, not add_callback
        producer.send(topic, value=b'{"test_msg":"hello world"}').add_callback(
            on_send_success).add_errback(on_send_error)
        # close() blocks until all previously queued sends complete, then closes the KafkaProducer
        producer.close()
    
    
    def main2():
        '''
        Send a JSON-formatted message.
        :return:
        '''
        producer = KafkaProducer(
            bootstrap_servers='localhost:9092',
            value_serializer=lambda m: json.dumps(m).encode('utf-8')
        )
        producer.send(topic, value={"test_msg": "hello world"}).add_callback(on_send_success).add_callback(
            on_send_error)
        # close() 方法会阻塞等待之前所有的发送请求完成后再关闭 KafkaProducer
        producer.close()
    
    
    if __name__ == '__main__':
        # main()
        main2()
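
    If you need the send metadata inline rather than in a callback, you can block on the future returned by send(). A minimal synchronous-send sketch (the 10-second timeout is an illustrative choice, not from the original post):

    # -*- coding: utf-8 -*-

    '''
    Synchronous send: block on the future returned by send().
    '''

    from kafka import KafkaProducer
    from kafka.errors import KafkaError

    topic = 'demo'


    def main():
        producer = KafkaProducer(
            bootstrap_servers='localhost:9092'
        )
        future = producer.send(topic, value=b'{"test_msg":"hello world"}')
        try:
            # get() blocks until the broker acknowledges the send (or the timeout expires)
            record_metadata = future.get(timeout=10)
            print(record_metadata.topic)
            print(record_metadata.partition)
            print(record_metadata.offset)
        except KafkaError as e:
            print('send failed: {}'.format(e))
        producer.close()


    if __name__ == '__main__':
        main()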
    
    

    Consumer

    Kafka's consumption model is fairly involved, so I will walk through the cases below one by one.

    1. Without a consumer group (group_id=None)

    Without a consumer group you can start any number of consumers, no longer limited by the partition count: even when the number of consumers exceeds the number of partitions, every consumer still receives all messages.

    # -*- coding: utf-8 -*-
    
    '''
    Consumer: group_id=None
    '''
    
    from kafka import KafkaConsumer
    
    topic = 'demo'
    
    
    def main():
        consumer = KafkaConsumer(
            topic,
            bootstrap_servers='localhost:9092',
            auto_offset_reset='latest',
            # auto_offset_reset='earliest',
        )
        for msg in consumer:
            print(msg)
            print(msg.value)
    
        consumer.close()
    
    
    if __name__ == '__main__':
        main()
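
    A variation on the same no-group setup (not in the original post): instead of subscribing in the constructor, you can take ownership of specific partitions by hand with assign(). A minimal sketch, where partition 0 is an arbitrary choice:

    # -*- coding: utf-8 -*-

    '''
    Consumer without a group, manually assigned to one partition.
    (Illustrative variant; not from the original post.)
    '''

    from kafka import KafkaConsumer, TopicPartition

    topic = 'demo'


    def main():
        consumer = KafkaConsumer(bootstrap_servers='localhost:9092')
        tp = TopicPartition(topic, 0)
        # assign() claims the partition directly; no group coordination takes place
        consumer.assign([tp])
        # start from the end, mirroring auto_offset_reset='latest' above
        consumer.seek_to_end(tp)
        for msg in consumer:
            print(msg.partition, msg.offset, msg.value)


    if __name__ == '__main__':
        main()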
    
    

    2. With a consumer group (an explicit group_id)

    The following uses the poll() method to fetch messages.

    • Each poll() call only fetches messages from a single partition; with 2 partitions and 1 consumer, covering both takes 2 calls.
    • poll() returns immediately when messages are available; if nothing new arrives within timeout_ms it returns an empty dict. So one call may return a single message while another returns max_records messages.
    # -*- coding: utf-8 -*-
    
    '''
    Consumer: with an explicit group_id
    '''
    
    from kafka import KafkaConsumer
    
    topic = 'demo'
    group_id = 'test_id'
    
    
    def main():
        consumer = KafkaConsumer(
            topic,
            bootstrap_servers='localhost:9092',
            auto_offset_reset='latest',
            group_id=group_id,
    
        )
        while True:
            try:
                # poll() returns a dict: {TopicPartition: [ConsumerRecord, ...]}
                batch_msgs = consumer.poll(timeout_ms=1000, max_records=2)
                if not batch_msgs:
                    continue
                '''
                {TopicPartition(topic='demo', partition=0): [ConsumerRecord(topic='demo', partition=0, offset=42, timestamp=1576425111411, timestamp_type=0, key=None, value=b'74', headers=[], checksum=None, serialized_key_size=-1, serialized_value_size=2, serialized_header_size=-1)]}
                '''
                for tp, msgs in batch_msgs.items():
                    print('topic: {}, partition: {}, received: {}'.format(tp.topic, tp.partition, len(msgs)))
                    for msg in msgs:
                        print(msg.value)
            except KeyboardInterrupt:
                break
    
        consumer.close()
    
    
    if __name__ == '__main__':
        main()
    
    

    About consumer groups

    The behavior breaks down by configuration into the cases below (a sketch for inspecting a group's committed offset follows the list):

    • group_id=None
      • auto_offset_reset='latest': every start consumes from the newest offset, so messages produced while the consumer was down are lost after a restart
      • auto_offset_reset='earliest': every start consumes the full stream from the beginning, regardless of what has already been consumed
    • With a group_id
      • Brand-new group_id
        • auto_offset_reset='latest': consumes only data received after startup; after a restart, resumes from the last committed offset
        • auto_offset_reset='earliest': consumes the full data set from the beginning
      • Existing group_id (the cluster still holds committed offsets for this group)
        • auto_offset_reset='latest': resumes from the last committed offset
        • auto_offset_reset='earliest': resumes from the last committed offset
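
    To see which case applies to a running group, you can query the cluster for the group's committed offset. A minimal sketch with kafka-python (topic/group names are the ones used in this post; a committed offset of None means the group is brand new for that partition):

    # -*- coding: utf-8 -*-

    '''
    Inspect the committed offset for a group/partition pair.
    '''

    from kafka import KafkaConsumer, TopicPartition

    topic = 'demo'
    group_id = 'test_id'


    def main():
        consumer = KafkaConsumer(
            bootstrap_servers='localhost:9092',
            group_id=group_id,
            enable_auto_commit=False,  # only inspecting; do not move the offset
        )
        tp = TopicPartition(topic, 0)
        # committed() returns the last committed offset, or None if none exists
        print('committed offset for partition 0: {}'.format(consumer.committed(tp)))
        consumer.close()


    if __name__ == '__main__':
        main()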

    Performance testing

    The tests below were run locally. If you plan to use Kafka in production, run your own performance tests beforehand.

    • producer
    # -*- coding: utf-8 -*-
    
    '''
    producer performance
    
    environment:
        mac
        python3.7
        broker 1
        partition 2
    '''
    
    import json
    import time
    from kafka import KafkaProducer
    
    topic = 'demo'
    nums = 1000000
    
    
    def main():
        producer = KafkaProducer(
            bootstrap_servers='localhost:9092',
            value_serializer=lambda m: json.dumps(m).encode('utf-8')
        )
        st = time.time()
        cnt = 0
        for _ in range(nums):
            producer.send(topic, value=_)
            cnt += 1
            if cnt % 10000 == 0:
                print(cnt)
    
        producer.flush()
    
        et = time.time()
        cost_time = et - st
        print('send nums: {}, cost time: {}, rate: {}/s'.format(nums, cost_time, nums // cost_time))
    
    
    if __name__ == '__main__':
        main()
    
    '''
    send nums: 1000000, cost time: 61.89236712455749, rate: 16157.0/s
    send nums: 1000000, cost time: 61.29534196853638, rate: 16314.0/s
    '''
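
    Producer throughput depends heavily on batching and acknowledgment settings, which the test above leaves at their defaults. The values below are illustrative assumptions to sweep in your own tests, not tuned recommendations:

    # -*- coding: utf-8 -*-

    '''
    Producer knobs worth sweeping in a performance test.
    All values here are illustrative, not tuned results.
    '''

    import json
    from kafka import KafkaProducer

    producer = KafkaProducer(
        bootstrap_servers='localhost:9092',
        value_serializer=lambda m: json.dumps(m).encode('utf-8'),
        acks=1,                   # 0/1/'all': durability vs. latency trade-off
        linger_ms=5,              # wait up to 5 ms so more records share one request
        batch_size=32 * 1024,     # bytes per partition batch (default 16384)
        compression_type='gzip',  # gzip ships with kafka-python; lz4/snappy need extra packages
    )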
    
    
    • consumer
    # -*- coding: utf-8 -*-
    
    '''
    consumer performance
    '''
    
    import time
    from kafka import KafkaConsumer
    
    topic = 'demo'
    group_id = 'test_id'
    
    
    def main1():
        nums = 0
        st = time.time()
    
        consumer = KafkaConsumer(
            topic,
            bootstrap_servers='localhost:9092',
            auto_offset_reset='latest',
            group_id=group_id
        )
        for msg in consumer:
            nums += 1
            if nums >= 500000:
                break
        consumer.close()
    
        et = time.time()
        cost_time = et - st
        print('one_by_one: consume nums: {}, cost time: {}, rate: {}/s'.format(nums, cost_time, nums // cost_time))
    
    
    def main2():
        nums = 0
        st = time.time()
    
        consumer = KafkaConsumer(
            topic,
            bootstrap_servers='localhost:9092',
            auto_offset_reset='latest',
            group_id=group_id
        )
        running = True
        batch_pool_nums = 1
        while running:
            batch_msgs = consumer.poll(timeout_ms=1000, max_records=batch_pool_nums)
            if not batch_msgs:
                continue
            for tp, msgs in batch_msgs.items():
                nums += len(msgs)
                if nums >= 500000:
                    running = False
                    break
    
        consumer.close()
    
        et = time.time()
        cost_time = et - st
        print('batch_pool: max_records: {} consume nums: {}, cost time: {}, rate: {}/s'.format(batch_pool_nums, nums,
                                                                                               cost_time,
                                                                                               nums // cost_time))
    
    
    if __name__ == '__main__':
        # main1()
        main2()
    
    '''
    one_by_one: consume nums: 500000, cost time: 8.018627166748047, rate: 62354.0/s
    one_by_one: consume nums: 500000, cost time: 7.698841094970703, rate: 64944.0/s
    
    
    batch_pool: max_records: 1 consume nums: 500000, cost time: 17.975456953048706, rate: 27815.0/s
    batch_pool: max_records: 1 consume nums: 500000, cost time: 16.711708784103394, rate: 29919.0/s
    
    batch_pool: max_records: 500 consume nums: 500369, cost time: 6.654940843582153, rate: 75187.0/s
    batch_pool: max_records: 500 consume nums: 500183, cost time: 6.854053258895874, rate: 72976.0/s
    
    batch_pool: max_records: 1000 consume nums: 500485, cost time: 6.504687070846558, rate: 76942.0/s
    batch_pool: max_records: 1000 consume nums: 500775, cost time: 7.047331809997559, rate: 71058.0/s
    '''
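
    The numbers above suggest that per-call overhead dominates: max_records=1 is the slowest path, while larger batches beat even the one-by-one iterator. On the consumer side the analogous knobs are the fetch settings; again, the values below are illustrative assumptions rather than tuned results:

    # -*- coding: utf-8 -*-

    '''
    Consumer fetch knobs worth sweeping; values are illustrative.
    '''

    from kafka import KafkaConsumer

    consumer = KafkaConsumer(
        'demo',
        bootstrap_servers='localhost:9092',
        group_id='test_id',
        fetch_min_bytes=1,                  # raise to trade latency for fuller fetches
        fetch_max_wait_ms=500,              # how long the broker may wait to fill fetch_min_bytes
        max_partition_fetch_bytes=1048576,  # per-partition fetch cap (default 1 MB)
        max_poll_records=500,               # upper bound per poll(), as swept in the tests above
    )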
    
    

    I maintain this public account personally; it started as a way to back up my own notes, and I hope it can help more people over time. Posts come out on no fixed schedule, and each one tries to get straight to the point rather than take up your valuable time. If you find it useful, you are welcome to follow and reach out.
