Pipeline restarting automatically with Kafka Multitopic Consumer

asked 2020-06-04 10:21:01 -0500

srinath_222

Hi,

I have a pipeline with Kafka Multitopic Consumer -> Jython Evaluator -> HTTP Client. The pipeline runs fine for a long time, but then it suddenly restarts automatically with the stack trace below:

org.apache.kafka.common.errors.InterruptException: java.lang.InterruptedException
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.closeHeartbeatThread(AbstractCoordinator.java:344)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.close(AbstractCoordinator.java:697)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.close(ConsumerCoordinator.java:499)
    at org.apache.kafka.clients.consumer.KafkaConsumer.close(KafkaConsumer.java:1737)
    at org.apache.kafka.clients.consumer.KafkaConsumer.close(KafkaConsumer.java:1705)
    at org.apache.kafka.clients.consumer.KafkaConsumer.close(KafkaConsumer.java:1680)
    at com.streamsets.pipeline.stage.origin.multikafka.v0_10.loader.Kafka0_10ConsumerLoader$WrapperKafkaConsumer.close(Kafka0_10ConsumerLoader.java:175)
    at com.streamsets.pipeline.stage.origin.multikafka.MultiKafkaSource$MultiTopicCallable.call(MultiKafkaSource.java:183)
    at com.streamsets.pipeline.stage.origin.multikafka.MultiKafkaSource$MultiTopicCallable.call(MultiKafkaSource.java:82)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.InterruptedException
    at java.lang.Object.wait(Native Method)
    at java.lang.Thread.join(Thread.java:1249)
    at java.lang.Thread.join(Thread.java:1323)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.closeHeartbeatThread(AbstractCoordinator.java:341)
    ... 12 more

A similar issue was reported in https://issues.streamsets.com/browse/..., but that issue only occurs while validating the pipeline or changing a Groovy evaluator. In my scenario the errors are thrown for a continuously running pipeline. We receive around 700-800k Kafka messages per day (just wanted to check whether it could be because of the volume).
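
In case it helps frame the volume question, here is a minimal standalone consumer sketch (plain Java Kafka client, not our actual StreamSets configuration; the broker, group id, topic name and timeout values are only placeholders, and it assumes a 2.x Kafka client). It shows the group-management settings that are usually involved when a consumer's heartbeat thread gets shut down because the next poll() is delayed by slow downstream processing (e.g. the HTTP call):

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class ConsumerTimeoutCheck {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");   // placeholder broker
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "example-group");          // placeholder group id
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                      "org.apache.kafka.common.serialization.StringDeserializer");
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                      "org.apache.kafka.common.serialization.StringDeserializer");

            // Group-management timeouts that commonly matter when downstream
            // processing delays the next poll(); values here are illustrative.
            props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "500");
            props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, "300000");  // max time allowed between polls
            props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "30000");     // heartbeat-based session timeout
            props.put(ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG, "3000");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("example-topic"));  // placeholder topic
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                    for (ConsumerRecord<String, String> record : records) {
                        System.out.println(record.topic() + " @ " + record.offset());
                    }
                }
            }
        }
    }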

Thanks, Srinath

Comments

Can you update your question and include the full stack trace?

iamontheinet ( 2020-06-09 10:02:43 -0500 )