
javax.management.InstanceAlreadyExistsException

asked 2018-05-31 20:31:24 -0500 by casel.chen

updated 2018-06-01 14:03:40 -0500 by metadaddy

When I use SDC I encounter the following exception and SDC exits. What's wrong?

There are two almost identical data pipelines running. The second one was created by exporting the original, changing some environment parameters, and importing it again. I'm afraid that some ID, such as the pipelineId or UUID, was not changed. Could that be the reason?

2018-05-31 15:14:05,751 [user:admin] [pipeline:dev_RISK_INVOCATION_HISTORY/devRISKINVOCATIONHISTORY2b3586ce-4229-4d65-a410-eb07528a80e9] [runner:0] [thread:ProductionPipelineRunnable-devRISKINVOCATIONHISTORY2b3586ce-4229-4d65-a410-eb07528a80e9-dev_RISK_INVOCATION_HISTORY] WARN AppInfoParser - Error registering AppInfo mbean
javax.management.InstanceAlreadyExistsException: kafka.producer:type=app-info,id=producer-1
    at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
    at org.apache.kafka.common.utils.AppInfoParser.registerAppInfo(AppInfoParser.java:58)
    at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:328)
    at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:188)
    at com.streamsets.pipeline.kafka.impl.KafkaProducer09.createKafkaProducer(KafkaProducer09.java:69)
    at com.streamsets.pipeline.kafka.impl.BaseKafkaProducer09.init(BaseKafkaProducer09.java:43)

1 Answer

answered 2018-06-03 23:09:11 -0500 by Mufy

This message is harmless and you don't have to worry about it.

This is unfortunate behavior of the Kafka client: it always registers a JMX metrics MBean when a new producer or consumer starts, and when two clients in the same JVM end up registering under the same name (here kafka.producer:type=app-info,id=producer-1), the second registration fails with this warning. It's safe to ignore because SDC doesn't expose the metrics that Kafka itself generates; it exposes its own metrics, which are properly tied to the individual pipeline.
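To see why the warning appears, here is a minimal sketch using only the JDK (no Kafka dependency) of the underlying JMX behavior: registering a second MBean under an ObjectName that is already taken, which is what happens when two Kafka producers in one JVM end up with the same client id. The class and interface names are illustrative, not part of Kafka.

```java
import java.lang.management.ManagementFactory;
import javax.management.InstanceAlreadyExistsException;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class JmxCollisionDemo {
    // A standard MBean needs an interface named <ImplClass>MBean.
    public interface AppInfoMBean { String getVersion(); }

    public static class AppInfo implements AppInfoMBean {
        public String getVersion() { return "demo"; }
    }

    // Registers the same ObjectName twice; returns true if the second
    // registration fails the same way Kafka's AppInfoParser does.
    public static boolean collide() throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        // Same name pattern Kafka builds from the producer's client id.
        ObjectName name = new ObjectName("kafka.producer:type=app-info,id=producer-1");
        server.registerMBean(new AppInfo(), name);       // first producer: succeeds
        try {
            server.registerMBean(new AppInfo(), name);   // second producer, same id
            return false;
        } catch (InstanceAlreadyExistsException e) {
            return true;                                 // the collision from the log above
        } finally {
            server.unregisterMBean(name);                // clean up
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("collision: " + collide());
    }
}
```

The id in the MBean name comes from the producer's client.id, so if you do want to silence the warning, giving each producer a distinct client.id (if your SDC version lets you pass extra Kafka configuration properties on the stage) avoids the collision; as noted above, though, it is safe to simply ignore.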
