Kafka basic operations

    I. Kafka basic concepts:

    • Producer: the producer; it creates data and is the source that sends messages into Kafka.
    • Consumer: the consumer; it pulls data from Kafka and processes it.
    • Consumer Group: a group of consumers; within one group the messages of a topic are divided among the members (round-robin across partitions), while different groups each receive a full copy (broadcast).
    • Broker: a single Kafka server instance.
    • Topic: the named category (label) that data is published to and read from; it tells you what kind of data this is.
    • Partition: partitions split a topic for load balancing, so data of the same topic can be spread across several partitions (see the topic-creation sketch after this list).
    • Replica: replicas hold redundant copies of partition data so that data is not lost; they are the backups.
    • Offset: the sequence number that uniquely identifies a message within a partition; the producer appends at increasing offsets, and the consumer uses the offset to record how far it has consumed.
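
    To make topics, partitions, and replicas concrete, here is a minimal sketch that creates the "Test" topic used later in this article. It assumes the Confluent.Kafka NuGet package and the single-broker address from the producer example; the partition count of 3 is an arbitrary choice for illustration.

    using System.Threading.Tasks;
    using Confluent.Kafka;
    using Confluent.Kafka.Admin;

    class TopicSetup
    {
        static async Task Main()
        {
            var config = new AdminClientConfig { BootstrapServers = "192.168.1.201:9092" };
            using (var admin = new AdminClientBuilder(config).Build())
            {
                // 3 partitions let one consumer group spread the load over up to 3 consumers;
                // ReplicationFactor = 1 because the example cluster has only one broker.
                await admin.CreateTopicsAsync(new[]
                {
                    new TopicSpecification { Name = "Test", NumPartitions = 3, ReplicationFactor = 1 }
                });
            }
        }
    }

    With more brokers, a higher ReplicationFactor keeps a copy of each partition on several brokers, which is what the Replica concept above refers to.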

    2. Producer:

    When sending, the producer does not push each message straight to the broker. Messages are first buffered in an in-memory queue, and a background thread watches how much data has accumulated and decides, based on configuration, when to flush the batch to the broker (for example after a given number of messages, a given number of bytes, or a maximum wait time).
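
    As a small illustration of how this maps to configuration in the C# client (the full producer example in section II sets the same properties), the batching thresholds live on ProducerConfig; the numbers below are example values only, not recommendations:

    using Confluent.Kafka;

    var config = new ProducerConfig
    {
        BootstrapServers = "192.168.1.201:9092",
        LingerMs = 100,          // flush buffered messages after at most 100 ms, even if the batch is small
        BatchNumMessages = 1000  // ...or as soon as 1000 messages have accumulated, whichever comes first
    };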

    3. RabbitMQ vs. Kafka

      RabbitMQ handles high concurrency well because it is written in Erlang, a language originally built for telephone switches and very good at concurrency. However, RabbitMQ cannot load-balance by itself (external tools are needed), and its clustering is awkward to use, so mirrored queues are typically relied on.

      RabbitMQ's delivery acknowledgment happens on the consumer side, while Kafka's acknowledgment (the acks setting) happens on the producer side; in both systems the consistency/availability trade-off can be tuned along the lines of the CAP theorem.
      Kafka achieves high throughput through (1) batched sends, (2) zero-copy transfer that relies on kernel-level I/O, and (3) sequential disk reads and writes; it can also balance load across partitions by itself.
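
      To tie that to the C# client used below: the producer-side confirmation corresponds to the Acks property of ProducerConfig, and the consumer records its progress by committing offsets explicitly; both appear in the full examples in section II. A minimal sketch of just these two knobs (the broker address and group name are example values):

      using Confluent.Kafka;

      // Producer side: wait until all in-sync replicas have acknowledged the write.
      var producerConfig = new ProducerConfig
      {
          BootstrapServers = "192.168.1.201:9092",
          Acks = Acks.All
      };

      // Consumer side: disable auto-commit and commit after processing,
      // as done in Run_Consume below via consumer.Commit(consumeResult).
      var consumerConfig = new ConsumerConfig
      {
          BootstrapServers = "192.168.1.201:9092",
          GroupId = "demo-group", // hypothetical group name for this sketch
          EnableAutoCommit = false
      };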

    4. ZooKeeper (zk)

      ZooKeeper is mainly used to manage Kafka: it performs leader election and, thanks to its strong consistency, coordinates the work of the brokers.

    II. Code usage:

    Producer code:

      

    // Requires: using System; using System.Threading.Tasks; using Confluent.Kafka;
    static async System.Threading.Tasks.Task Main(string[] args)
            {
                Console.WriteLine("Hello World!");
                while (true)
                {
                    Console.WriteLine("Enter the message to send:");
                    var message = Console.ReadLine();

                    // For a multi-broker cluster use a comma-separated list, e.g.
                    // "192.168.1.201:9092,192.168.1.202:9092,192.168.1.203:9092"
                    string brokerList = "192.168.1.201:9092";
                    await ConfulentKafka.Produce(brokerList, "Test", message);
                }
            }
    class ConfulentKafka
        {
            /// <summary>
            /// Produce a single message to the given topic.
            /// </summary>
            /// <param name="brokerlist">Comma-separated broker addresses.</param>
            /// <param name="topicname">Topic to produce to.</param>
            /// <param name="content">Message content (used as both key and value in this demo).</param>
            public static async Task Produce(string brokerlist, string topicname, string content)
            {
                string brokerList = brokerlist;
                string topicName = topicname;
                var config = new ProducerConfig
                {
                    BootstrapServers = brokerList,
                    //EnableIdempotence = true, // idempotence applies per partition
                    Acks = Acks.All, // wait for all in-sync replicas to acknowledge the write
                    // Batching is a latency/throughput trade-off; tune it to your needs.
                    //LingerMs = 10000, // how long buffered messages may wait before being sent to the broker
                    BatchNumMessages = 2, // send as soon as this many messages are buffered
                    // As soon as either threshold above is reached, the background thread pushes the batch to the broker.
                    MessageSendMaxRetries = 3, // retry if a send fails
                    //Partitioner = Partitioner.Random
                };

                using (var producer = new ProducerBuilder<string, string>(config)
                    //.SetValueSerializer(new CustomStringSerializer<string>())
                    //.SetStatisticsHandler((o, json) => Console.WriteLine(json))
                    .Build())
                {
                    Console.WriteLine("\n-----------------------------------------------------------------------");
                    Console.WriteLine($"Producer {producer.Name} producing on topic {topicName}.");
                    Console.WriteLine("-----------------------------------------------------------------------");
                    try
                    {
                        // A message is a key/value pair.
                        // The key decides which partition the message is routed to (load balancing).
                        // For this demo the key equals the value; in real code pick the key to match your routing needs.
                        var deliveryReport = await producer.ProduceAsync(
                            topicName, new Message<string, string> { Key = content, Value = content });
                        Console.WriteLine($"delivered to: {deliveryReport.TopicPartitionOffset}");
                    }
                    catch (ProduceException<string, string> e)
                    {
                        Console.WriteLine($"failed to deliver message: {e.Message} [{e.Error.Code}]");
                    }
                }
            }
        }

    Consumer code:

    // Requires: using System; using System.Collections.Generic; using System.Linq;
    //           using System.Threading; using Confluent.Kafka;
    static void Main(string[] args)
            {
                Console.WriteLine("Hello World!");

                // For a multi-broker cluster use e.g. "localhost:9092,localhost:9093"
                string brokerList = "localhost:9092";
                var topics = new List<string> { "Test" };

                Console.WriteLine("Enter the consumer group name:");
                string groupname = Console.ReadLine();
                ConfulentKafka.Consumer(brokerList, topics, groupname);
            }
    class ConfulentKafka
        {
            /// <summary>
            /// Subscribe-based consumer: after processing each message it commits the offset,
            /// telling Kafka the message has been consumed.
            /// </summary>
            /// <param name="brokerList">Comma-separated broker addresses.</param>
            /// <param name="topics">Topics to subscribe to.</param>
            /// <param name="group">Consumer group name.</param>
            public static void Run_Consume(string brokerList, List<string> topics, string group)
            {
                var config = new ConsumerConfig
                {
                    BootstrapServers = brokerList,
                    GroupId = group,

                    // Commit offsets manually in code instead of letting a background thread auto-commit.
                    EnableAutoCommit = false,
                    //StatisticsIntervalMs = 5000,
                    // When the group has no committed offset, start from the latest message.
                    AutoOffsetReset = AutoOffsetReset.Latest,
                    //EnablePartitionEof = true,
                    //PartitionAssignmentStrategy = PartitionAssignmentStrategy.Range,
                    //FetchMaxBytes = ...,
                    //FetchWaitMaxMs = 1,

                    // Session timeout: if no heartbeat arrives within this window the consumer is considered dead.
                    SessionTimeoutMs = 6000,
                    // Maximum time allowed between two Consume calls before the consumer is kicked out of the group.
                    MaxPollIntervalMs = 6000,
                };
    
                const int commitPeriod = 1;
                // Offsets can also be committed in batches (here: commit every commitPeriod messages).
                using (var consumer = new ConsumerBuilder<Ignore, string>(config)
                    .SetErrorHandler((_, e) => Console.WriteLine($"Error: {e.Reason}"))
                    .SetPartitionsAssignedHandler((c, partitions) =>
                    {
                        // Called when partitions are assigned to this consumer.
                        // To store offsets yourself:
                        //   1. After each message is processed, write the partition id and offset to e.g. MySQL.
                        //   2. On assignment, start fetching from the stored partition/offset.
                        Console.WriteLine($"Assigned partitions: [{string.Join(", ", partitions)}]");
                        #region Consume from specific offsets
                        // Returning explicit offsets here overrides the automatic start position.
                        //List<TopicPartitionOffset> topics = new List<TopicPartitionOffset>();
                        //// Read every assigned partition starting from offset 10.
                        //foreach (var item in partitions)
                        //{
                        //    topics.Add(new TopicPartitionOffset(item.Topic, item.Partition, new Offset(10)));
                        //}
                        //return topics;
                        #endregion
                    })
                    .SetPartitionsRevokedHandler((c, partitions) =>
                    {
                        // Called when partitions are revoked, e.g. during a rebalance when another consumer joins the group.
                        Console.WriteLine($"Revoking assignment: [{string.Join(", ", partitions)}]");
                    })
                    .Build())
                {
                    // Partitions are rebalanced (re-assigned) when a new consumer joins the same group.
                    consumer.Subscribe(topics);
                    try
                    {
                        while (true)
                        {
                            try
                            {
                                var consumeResult = consumer.Consume();

                                if (consumeResult.IsPartitionEOF)
                                {
                                    continue;
                                }

                                // consumeResult.TopicPartitionOffset tells you the topic, partition id and offset
                                // of this message; you could persist it in MySQL or Redis for custom offset storage.
                                Console.WriteLine($": {consumeResult.TopicPartitionOffset}::{consumeResult.Message.Value}");

                                // Without committing, the same messages would be consumed again after a restart.
                                if (consumeResult.Offset % commitPeriod == 0)
                                {
                                    try
                                    {
                                        // If the commit never happens (e.g. a long Thread.Sleep here),
                                        // the message is redelivered once the consumer rejoins.
                                        consumer.Commit(consumeResult);
                                        Console.WriteLine("Committed");
                                    }
                                    catch (KafkaException e)
                                    {
                                        Console.WriteLine($"Commit error: {e.Error.Reason}");
                                    }
                                }
                            }
                            catch (ConsumeException e)
                            {
                                Console.WriteLine($"Consume error: {e.Error.Reason}");
                            }
                        }
                    }
                    catch (OperationCanceledException)
                    {
                        Console.WriteLine("Closing consumer.");
                        consumer.Close();
                    }
                }
            }
    
            /// <summary>
            ///     In this example
            ///         - consumer group functionality (i.e. .Subscribe + offset commits) is not used.
            ///         - the consumer is manually assigned to a partition and always starts consumption
            ///           from a specific offset (here partition 1, offset 6).
            /// </summary>
            public static void Run_ManualAssign(string brokerList, List<string> topics, CancellationToken cancellationToken)
            {
                var config = new ConsumerConfig
                {
                    // the group.id property must be specified when creating a consumer, even
                    // if you do not intend to use any consumer group functionality.
                    GroupId = Guid.NewGuid().ToString(),
                    BootstrapServers = brokerList,
                    // partition offsets can be committed to a group even by consumers not
                    // subscribed to the group. in this example, auto commit is disabled
                    // to prevent this from occurring.
                    EnableAutoCommit = false
                };

                using (var consumer =
                    new ConsumerBuilder<Ignore, string>(config)
                        .SetErrorHandler((_, e) => Console.WriteLine($"Error: {e.Reason}"))
                        .SetPartitionsAssignedHandler((c, partitions) =>
                        {
                            Console.WriteLine($"Assigned partitions: [{string.Join(", ", partitions)}]");
                        })
                        .Build())
                {
                    consumer.Assign(
                        // Explicitly pick which partition and which offset to read from for each topic.
                        topics.Select(topic => new TopicPartitionOffset(topic, 1, 6)).ToList());
                    try
                    {
                        while (true)
                        {
                            try
                            {
                                var consumeResult = consumer.Consume(cancellationToken);
                                Console.WriteLine($"Received message at {consumeResult.TopicPartitionOffset}: {consumeResult.Message.Value}");
                            }
                            catch (ConsumeException e)
                            {
                                Console.WriteLine($"Consume error: {e.Error.Reason}");
                            }
                        }
                    }
                    catch (OperationCanceledException)
                    {
                        Console.WriteLine("Closing consumer.");
                        consumer.Close();
                    }
                }
            }
    
            private static void PrintUsage()
                => Console.WriteLine("Usage: .. <subscribe|manual> <broker,broker,..> <topic> [topic..]");
    
            public static void Consumer(string brokerlist, List<string> topicname, string groupname)
            {
    
                var mode = "subscribe";
                var brokerList = brokerlist;
                var topics = topicname;
                Console.WriteLine($"Started consumer, Ctrl-C to stop consuming");
                CancellationTokenSource cts = new CancellationTokenSource();
                Console.CancelKeyPress += (_, e) =>
                {
                    e.Cancel = true; // prevent the process from terminating.
                    cts.Cancel();
                };
    
                switch (mode)
                {
                    case "subscribe":
                        Run_Consume(brokerList, topics, groupname);
                        break;
                    case "manual":
                        Run_ManualAssign(brokerList, topics, cts.Token);
                        break;
                    default:
                        PrintUsage();
                        break;
                }
            }
        }

    Reference (to be verified): https://www.jianshu.com/p/6b6696233f47#comments

    If any of the details are unclear, feel free to leave a comment.

    Like, follow, and let's improve together!

  • Original article: https://www.cnblogs.com/wangjinya/p/15522404.html