• Kafka consumer configuration


    The consumer configuration class is org.apache.kafka.clients.consumer.ConsumerConfig

    It defines the following configuration parameters:

    1. GROUP_ID_CONFIG = "group.id";

       The consumer group ID. Within a group, each message is consumed only once; consumers in different groups can each consume the same message. In other words, a message is delivered to every group, and whether one group consumes it has no effect on the others, but within a group the message is consumed by exactly one consumer. Kafka uses consumer groups to implement both unicast and multicast delivery.
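    As a sketch of how group.id drives unicast vs. multicast semantics (a minimal, self-contained example using plain java.util.Properties; the group names "billing" and "audit" are hypothetical, and a real consumer would also need deserializers and a running broker):

```java
import java.util.Properties;

public class GroupIdSketch {
    // Build a minimal consumer config for the given (hypothetical) group.
    static Properties consumerProps(String groupId) {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");
        props.setProperty("group.id", groupId);
        return props;
    }

    public static void main(String[] args) {
        // Two consumers sharing group "billing" form one group: each record on
        // a subscribed topic is delivered to exactly one of them (unicast).
        Properties billingA = consumerProps("billing");
        Properties billingB = consumerProps("billing");
        // A consumer in a different group ("audit") receives its own copy of
        // every record, independently of the billing group (multicast).
        Properties audit = consumerProps("audit");
        System.out.println(billingA.getProperty("group.id")); // billing
        System.out.println(billingB.getProperty("group.id")); // billing
        System.out.println(audit.getProperty("group.id"));    // audit
    }
}
```

    With a real KafkaConsumer, each Properties object would be passed to its own consumer instance; the group membership itself is coordinated by the broker, not by the client-side config.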

    2. MAX_POLL_RECORDS_CONFIG = "max.poll.records";
      The maximum number of records returned in a single call to poll().

    3. MAX_POLL_INTERVAL_MS_CONFIG = "max.poll.interval.ms";
    The maximum delay between invocations of poll() when using consumer group management. This places an upper bound on the amount of time that the consumer can be idle before fetching more records. If poll() is not called before expiration of this timeout, then the consumer is considered failed and the group will rebalance in order to reassign the partitions to another member.

    4. SESSION_TIMEOUT_MS_CONFIG = "session.timeout.ms"
    The timeout used to detect consumer failures when using Kafka's group management facility. The consumer sends periodic heartbeats to indicate its liveness to the broker. If no heartbeats are received by the broker before the expiration of this session timeout, then the broker will remove this consumer from the group and initiate a rebalance. Note that the value must be in the allowable range as configured in the broker configuration by <code>group.min.session.timeout.ms</code> and <code>group.max.session.timeout.ms</code>.

    5. HEARTBEAT_INTERVAL_MS_CONFIG = "heartbeat.interval.ms";
    The expected time between heartbeats to the consumer coordinator when using Kafka's group management facilities. Heartbeats are used to ensure that the consumer's session stays active and to facilitate rebalancing when new consumers join or leave the group. The value must be set lower than <code>session.timeout.ms</code>, but typically should be set no higher than 1/3 of that value. It can be adjusted even lower to control the expected time for normal rebalances.
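    The relationship above (heartbeat.interval.ms lower than session.timeout.ms, typically no more than 1/3 of it) can be sketched as a small up-front validation; the concrete millisecond values are illustrative assumptions, not Kafka defaults:

```java
import java.util.Properties;

public class HeartbeatSketch {
    // Reject configs whose heartbeat interval is not comfortably below the
    // session timeout (the docs suggest at most 1/3 of it).
    static void validate(Properties props) {
        long session = Long.parseLong(props.getProperty("session.timeout.ms"));
        long heartbeat = Long.parseLong(props.getProperty("heartbeat.interval.ms"));
        if (heartbeat * 3 > session) {
            throw new IllegalArgumentException(
                "heartbeat.interval.ms should be <= session.timeout.ms / 3");
        }
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("session.timeout.ms", "10000");   // 10 s failure-detection window
        props.setProperty("heartbeat.interval.ms", "3000"); // 3000 * 3 = 9000 <= 10000, accepted
        validate(props);
        System.out.println("ok");
    }
}
```

    Keeping the heartbeat well below the session timeout gives the consumer several chances to heartbeat before the broker declares it dead and triggers a rebalance.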

    6. BOOTSTRAP_SERVERS_CONFIG = "bootstrap.servers";
    A list of broker addresses in the form "host1:port1,host2:port2", e.g. "localhost:9092,localhost:9093".
    7. CLIENT_DNS_LOOKUP_CONFIG = "client.dns.lookup";

    8. ENABLE_AUTO_COMMIT_CONFIG = "enable.auto.commit";
      If true, the consumer's offsets are committed automatically in the background.

    9. AUTO_COMMIT_INTERVAL_MS_CONFIG = "auto.commit.interval.ms";
    The interval (in milliseconds) at which offsets are auto-committed when enable.auto.commit is true.
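    A hedged sketch of the two auto-commit settings above (the 5-second interval is an illustrative value; an application that commits manually via commitSync() would instead set enable.auto.commit to false):

```java
import java.util.Properties;

public class AutoCommitSketch {
    // Enable background offset commits every 5 seconds (illustrative values).
    static Properties autoCommitProps() {
        Properties props = new Properties();
        props.setProperty("enable.auto.commit", "true");
        props.setProperty("auto.commit.interval.ms", "5000");
        return props;
    }

    public static void main(String[] args) {
        Properties props = autoCommitProps();
        System.out.println(props.getProperty("enable.auto.commit"));      // true
        System.out.println(props.getProperty("auto.commit.interval.ms")); // 5000
    }
}
```

    Note that auto-commit trades simplicity for at-most-once-ish semantics on crash: offsets committed for records not yet fully processed can be lost, which is why manual commits are common in delivery-sensitive applications.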

    10. PARTITION_ASSIGNMENT_STRATEGY_CONFIG = "partition.assignment.strategy";
    The class name of the partition assignment strategy that the client will use to distribute partition ownership amongst consumer instances when group management is used.

    11. AUTO_OFFSET_RESET_CONFIG = "auto.offset.reset";
    Offset reset policy: what to do when there is no initial committed offset, or the current offset no longer exists on the server (e.g. because the data has been deleted):
    earliest: automatically reset the offset to the earliest offset
    latest: automatically reset the offset to the latest offset
    none: throw an exception to the consumer if no previous offset is found for the consumer's group
    anything else: throw an exception to the consumer
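    A small sketch of the accepted policy values above (a hypothetical validation helper, not Kafka's own parser; the real check lives inside the client's config validation):

```java
import java.util.Arrays;
import java.util.List;

public class OffsetResetSketch {
    private static final List<String> VALID = Arrays.asList("earliest", "latest", "none");

    // Mirror the documented behaviour: earliest/latest/none are accepted,
    // anything else is rejected up front.
    static String checkOffsetReset(String value) {
        if (!VALID.contains(value)) {
            throw new IllegalArgumentException("Invalid auto.offset.reset: " + value);
        }
        return value;
    }

    public static void main(String[] args) {
        System.out.println(checkOffsetReset("earliest")); // accepted
        try {
            checkOffsetReset("newest"); // not a valid policy
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: newest");
        }
    }
}
```

    Note that "none" is still a valid *setting*: it tells the consumer to fail fast rather than silently jump to either end of the log, which is often preferable in pipelines where skipping or re-reading data would be a bug.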

    12. FETCH_MIN_BYTES_CONFIG = "fetch.min.bytes";
    The minimum amount of data the server should return for a fetch request. If insufficient data is available the request will wait for that much data to accumulate before answering the request. The default setting of 1 byte means that fetch requests are answered as soon as a single byte of data is available or the fetch request times out waiting for data to arrive. Setting this to something greater than 1 will cause the server to wait for larger amounts of data to accumulate which can improve server throughput a bit at the cost of some additional latency.

    13. FETCH_MAX_BYTES_CONFIG = "fetch.max.bytes";
    The maximum amount of data the server should return for a fetch request. Records are fetched in batches by the consumer, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that the consumer can make progress. As such, this is not an absolute maximum. The maximum record batch size accepted by the broker is defined via <code>message.max.bytes</code> (broker config) or <code>max.message.bytes</code> (topic config). Note that the consumer performs multiple fetches in parallel.

    14. DEFAULT_FETCH_MAX_BYTES = 52428800; (50 MB)

    15. FETCH_MAX_WAIT_MS_CONFIG = "fetch.max.wait.ms";
    The maximum amount of time the server will block before answering the fetch request if there isn't sufficient data to immediately satisfy the requirement given by fetch.min.bytes.

    16. METADATA_MAX_AGE_CONFIG = "metadata.max.age.ms";

    17. MAX_PARTITION_FETCH_BYTES_CONFIG = "max.partition.fetch.bytes";
    The maximum amount of data per-partition the server will return. Records are fetched in batches by the consumer. If the first record batch in the first non-empty partition of the fetch is larger than this limit, the batch will still be returned to ensure that the consumer can make progress. The maximum record batch size accepted by the broker is defined via <code>message.max.bytes</code> (broker config) or <code>max.message.bytes</code> (topic config). See fetch.max.bytes for limiting the consumer request size.

    18. DEFAULT_MAX_PARTITION_FETCH_BYTES = 1048576; (1 MB)

    19. SEND_BUFFER_CONFIG = "send.buffer.bytes";

    20. RECEIVE_BUFFER_CONFIG = "receive.buffer.bytes";

    21. CLIENT_ID_CONFIG = "client.id";

    22. RECONNECT_BACKOFF_MS_CONFIG = "reconnect.backoff.ms";

    23. RECONNECT_BACKOFF_MAX_MS_CONFIG = "reconnect.backoff.max.ms";

    24. RETRY_BACKOFF_MS_CONFIG = "retry.backoff.ms";

    25. METRICS_SAMPLE_WINDOW_MS_CONFIG = "metrics.sample.window.ms";

    26. METRICS_NUM_SAMPLES_CONFIG = "metrics.num.samples";

    27. METRICS_RECORDING_LEVEL_CONFIG = "metrics.recording.level";

    28. METRIC_REPORTER_CLASSES_CONFIG = "metric.reporters";

    29. CHECK_CRCS_CONFIG = "check.crcs";
    Automatically check the CRC32 of the records consumed. This ensures no on-the-wire or on-disk corruption to the messages occurred. This check adds some overhead, so it may be disabled in cases seeking extreme performance.

    30. KEY_DESERIALIZER_CLASS_CONFIG = "key.deserializer";
    Deserializer class for record keys.

    31. VALUE_DESERIALIZER_CLASS_CONFIG = "value.deserializer";
    Deserializer class for record values.

    32. CONNECTIONS_MAX_IDLE_MS_CONFIG = "connections.max.idle.ms";

    33. REQUEST_TIMEOUT_MS_CONFIG = "request.timeout.ms";
    The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted.

    34. DEFAULT_API_TIMEOUT_MS_CONFIG = "default.api.timeout.ms";
    Specifies the timeout (in milliseconds) for consumer APIs that could block. This configuration is used as the default timeout for all consumer operations that do not explicitly accept a <code>timeout</code> parameter.

    35. INTERCEPTOR_CLASSES_CONFIG = "interceptor.classes";
    A list of classes to use as interceptors. Implementing the <code>org.apache.kafka.clients.consumer.ConsumerInterceptor</code> interface allows you to intercept (and possibly mutate) records received by the consumer. By default, there are no interceptors.

    36. EXCLUDE_INTERNAL_TOPICS_CONFIG = "exclude.internal.topics";
    Whether records from internal topics (such as offsets) should be exposed to the consumer. If set to <code>true</code> the only way to receive records from an internal topic is subscribing to it.

    37. DEFAULT_EXCLUDE_INTERNAL_TOPICS = true;

    38. LEAVE_GROUP_ON_CLOSE_CONFIG = "internal.leave.group.on.close";

    39. ISOLATION_LEVEL_CONFIG = "isolation.level";
    <p>Controls how to read messages written transactionally. If set to <code>read_committed</code>, consumer.poll() will only return transactional messages which have been committed. If set to <code>read_uncommitted</code> (the default), consumer.poll() will return all messages, even transactional messages which have been aborted. Non-transactional messages will be returned unconditionally in either mode.</p> <p>Messages will always be returned in offset order. Hence, in <code>read_committed</code> mode, consumer.poll() will only return messages up to the last stable offset (LSO), which is the one less than the offset of the first open transaction. In particular any messages appearing after messages belonging to ongoing transactions will be withheld until the relevant transaction has been completed. As a result, <code>read_committed</code> consumers will not be able to read up to the high watermark when there are in flight transactions.</p><p> Further, when in <code>read_committed</code> the seekToEnd method will return the LSO.</p>

    40. DEFAULT_ISOLATION_LEVEL;
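    Pulling the parameters above together, a hedged end-to-end sketch (plain java.util.Properties so it runs without the Kafka client jar; every concrete value here is an illustrative assumption, not a default, and a real application would pass these to new KafkaConsumer<>(props)):

```java
import java.util.Properties;

public class ConsumerConfigSketch {
    // A consolidated consumer configuration using the keys discussed above.
    static Properties build() {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092,localhost:9093");
        props.setProperty("group.id", "demo-group");            // hypothetical group
        props.setProperty("max.poll.records", "500");
        props.setProperty("enable.auto.commit", "false");       // commit manually instead
        props.setProperty("auto.offset.reset", "earliest");     // start from the log's beginning
        props.setProperty("fetch.min.bytes", "1");              // answer fetches immediately
        props.setProperty("fetch.max.wait.ms", "500");
        props.setProperty("isolation.level", "read_committed"); // only committed txn records
        props.setProperty("key.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        props.setProperty("value.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        return props;
    }

    public static void main(String[] args) {
        Properties props = build();
        System.out.println(props.getProperty("group.id"));        // demo-group
        System.out.println(props.getProperty("isolation.level")); // read_committed
    }
}
```

    The pairing of enable.auto.commit=false with read_committed reflects a common delivery-sensitive setup: offsets are committed explicitly after processing, and aborted transactional records are never surfaced by poll().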
  • Original article: https://www.cnblogs.com/lgjlife/p/10570671.html