This exception occurred today when submitting a Storm topology to the cluster: storm.kafka.UpdateOffsetException
java.lang.RuntimeException: storm.kafka.UpdateOffsetException
    at backtype.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:135)
    at backtype.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:106)
    at backtype.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:80)
    at backtype.storm.daemon.executor$fn__5694$fn__5707$fn__5758.invoke(executor.clj:819)
    at backtype.storm.util$async_loop$fn__545.invoke(util.clj:479)
    at clojure.lang.AFn.run(AFn.java:22)
    at java.lang.Thread.run(Thread.java:745)
Caused by: storm.kafka.UpdateOffsetException
    at storm.kafka.KafkaUtils.fetchMessages(KafkaUtils.java:186)
    at storm.kafka.trident.TridentKafkaEmitter.fetchMessages(TridentKafkaEmitter.java:132)
    at storm.kafka.trident.TridentKafkaEmitter.doEmitNewPartitionBatch(TridentKafkaEmitter.java:113)
    at storm.kafka.trident.TridentKafkaEmitter.failFastEmitNewPartitionBatch(TridentKafkaEmitter.java:72)
    at storm.kafka.trident.TridentKafkaEmitter.emitNewPartitionBatch(TridentKafkaEmitter.java:79)
    at storm.kafka.trident.TridentKafkaEmitter.access$000(TridentKafkaEmitter.java:)
    at storm.kafka.trident.TridentKafkaEmitter$1.emitPartitionBatch(TridentKafkaEmitter.java:204)
    at storm.kafka.trident.TridentKafkaEmitter$1.emitPartitionBatch(TridentKafkaEmitter.java:194)
    at storm.trident.spout.OpaquePartitionedTridentSpoutExecutor$Emitter.emitBatch(OpaquePartitionedTridentSpoutExecutor.java:127)
    at storm.trident.spout.TridentSpoutExecutor.execute(TridentSpoutExecutor.java:82)
    at storm.trident.topology.TridentBoltExecutor.execute(TridentBoltExecutor.java:370)
    at backtype.storm.daemon.executor$fn__5694$tuple_action_fn__5696.invoke(executor.clj:690)
    at backtype.storm.daemon.executor$mk_task_receiver$fn__5615.invoke(executor.clj:436)
    at backtype.storm.disruptor$clojure_handler$reify__5189.onEvent(disruptor.clj:58)
    at backtype.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:132)
    ... 6 more
This means the message being fetched no longer exists. It is likely related to the Kafka configuration: for example, messages were deleted after the retention period expired, or the topic grew so large that older messages were removed. When the stored offset is out of range, the spout starts reading from the offset determined by useStartOffsetTimeIfOffsetOutOfRange.
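In storm-kafka this fallback is controlled by two public fields on KafkaConfig, which TridentKafkaConfig inherits. The following is a minimal sketch of such a spout configuration; the ZooKeeper connect string and topic name are hypothetical placeholders and must be replaced with your own values.

    import backtype.storm.spout.SchemeAsMultiScheme;
    import kafka.api.OffsetRequest;
    import storm.kafka.BrokerHosts;
    import storm.kafka.StringScheme;
    import storm.kafka.ZkHosts;
    import storm.kafka.trident.TridentKafkaConfig;

    public class SpoutConfigSketch {
        public static TridentKafkaConfig build() {
            // Hypothetical ZooKeeper connect string and topic name.
            BrokerHosts hosts = new ZkHosts("zk1:2181,zk2:2181");
            TridentKafkaConfig config = new TridentKafkaConfig(hosts, "my-topic");
            config.scheme = new SchemeAsMultiScheme(new StringScheme());

            // If the offset stored in ZooKeeper is out of range (e.g. the
            // messages were already deleted by retention), fall back to
            // startOffsetTime instead of failing repeatedly.
            config.useStartOffsetTimeIfOffsetOutOfRange = true;
            // Restart from the earliest offset still available on the broker.
            config.startOffsetTime = OffsetRequest.EarliestTime();
            return config;
        }
    }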
However, the exception can persist, because ZooKeeper already contains the corresponding directory (znode) holding the stale transactional state for this spout.
Workaround: start zkCli.sh, go into the /transactional node, and delete the child node whose name corresponds to the stream (the Trident spout's transactional id); the topology can then be resubmitted.
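Instead of zkCli.sh, the same cleanup can be done programmatically with the ZooKeeper Java client. This is a minimal sketch; the connect string and the transactional id under /transactional are hypothetical and must match your topology.

    import java.util.concurrent.CountDownLatch;
    import org.apache.zookeeper.Watcher.Event.KeeperState;
    import org.apache.zookeeper.ZooKeeper;

    public class CleanTransactionalState {
        public static void main(String[] args) throws Exception {
            CountDownLatch connected = new CountDownLatch(1);
            // Hypothetical connect string; use your cluster's ZooKeeper ensemble.
            ZooKeeper zk = new ZooKeeper("zk1:2181", 30000, event -> {
                if (event.getState() == KeeperState.SyncConnected) {
                    connected.countDown();
                }
            });
            connected.await(); // wait until the session is established

            // Hypothetical transactional id: the id the Trident spout was
            // registered under when the topology was built.
            deleteRecursive(zk, "/transactional/my-trident-spout");
            zk.close();
        }

        // ZooKeeper only deletes empty znodes, so remove the children first.
        private static void deleteRecursive(ZooKeeper zk, String path) throws Exception {
            for (String child : zk.getChildren(path, false)) {
                deleteRecursive(zk, path + "/" + child);
            }
            zk.delete(path, -1); // version -1 means "any version"
        }
    }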
This exception occurred in the kafka-storm-hbase example.