
Warning: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use/share it, but you must attribute it to the original authors (not me): StackOverflow. Original question: http://stackoverflow.com/questions/37320643/

Date: 2020-08-11 19:12:16  Source: igfitidea

Kafka consumer offsets out of range with no configured reset policy for partitions

java, apache-kafka, kafka-consumer-api

Asked by basit raza

I'm receiving an exception when starting my Kafka consumer:

org.apache.kafka.clients.consumer.OffsetOutOfRangeException: Offsets out of range with no configured reset policy for partitions{test-0=29898318}


I'm using Kafka version 0.9.0.0 with Java 7.

Answer by avr

So you are trying to access offset 29898318 in topic test, partition 0, which is not available right now.

There could be two causes for this:

  1. Your topic partition 0 may not have that many messages yet
  2. The message at offset 29898318 might have already been deleted because the retention period passed

To avoid this you can do one of the following:

  1. Set the auto.offset.reset config to either smallest or largest. You can find more info regarding this here
  2. You can get the smallest available offset for a topic partition by running the following Kafka command-line tool

command:


bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list <broker-ip:9092> --topic <topic-name> --time -2
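Applied to a plain Java consumer, option 1 looks roughly like the sketch below. It only builds the configuration; the broker address, group id, and deserializer choices are placeholders, not taken from the question. Note that on the new consumer API (the one that throws this `OffsetOutOfRangeException`, Kafka 0.9+), the accepted values for `auto.offset.reset` are `earliest`, `latest`, and `none` rather than the old consumer's `smallest`/`largest`:

```java
import java.util.Properties;

public class ConsumerConfigSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder connection settings -- replace with your own.
        props.put("bootstrap.servers", "broker-ip:9092");
        props.put("group.id", "my-consumer-group");
        // "earliest" makes the consumer fall back to the smallest available
        // offset instead of throwing OffsetOutOfRangeException when the
        // committed offset no longer exists.
        props.put("auto.offset.reset", "earliest");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        // These properties would then be passed to new KafkaConsumer<>(props).
        System.out.println("auto.offset.reset=" + props.getProperty("auto.offset.reset"));
    }
}
```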

Hope this helps!


Answer by Tim Van Laer

I hit this SO question when running a Kafka Streams state store with a specific changelog topic config:


  • cleanup.policy=compact,delete
  • retention of 4 days

If Kafka Streams still has a snapshot file pointing to an offset that doesn't exist anymore, the restore consumer is configured to fail. It doesn't fall back to the earliest offset. This scenario can happen when very little data comes in or when the application is down. In both cases, when there's no commit within the changelog retention period, the snapshot file won't be updated. (This happens on a per-partition basis.)

The easiest way to resolve this issue is to stop your Kafka Streams application, remove its local state directory, and restart the application.
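As a sketch, that cleanup boils down to the shell session below. The path is an assumption: by default Kafka Streams keeps local state under `${state.dir}/${application.id}` with `state.dir` defaulting to `/tmp/kafka-streams`, and the application id here is a placeholder, so adjust both before running:

```shell
# 1. Stop the Kafka Streams application first (however you manage its process).

# 2. Remove the local state. Placeholder path: state.dir defaults to
#    /tmp/kafka-streams, and "my-streams-app" stands in for your application.id.
STATE_DIR="/tmp/kafka-streams/my-streams-app"

# Deleting the state directory also removes the .checkpoint snapshot files,
# which forces the restore consumer to rebuild the stores from whatever
# offsets still exist in the changelog topics.
rm -rf "$STATE_DIR"

# 3. Restart the application; it recreates the directory and restores
#    its state stores from the changelog topics.
```

Alternatively, calling `KafkaStreams#cleanUp()` before starting the application performs the same local-state wipe programmatically.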