MongoDB to Hive error

asked 2018-06-13 03:48:11 -0500

I'm trying to read MongoDB data into Hive in real time, and I get this error when I use MongoDB Oplog mode:

com.streamsets.pipeline.api.base.OnRecordErrorException: HIVE_19 - Unsupported Type: MAP
    at com.streamsets.pipeline.stage.processor.hive.HiveMetadataProcessor.process(
    at com.streamsets.pipeline.api.base.RecordProcessor.process(
    at com.streamsets.pipeline.api.base.configurablestage.DProcessor.process(
    at com.streamsets.datacollector.runner.StageRuntime.lambda$execute$2(
    at com.streamsets.datacollector.runner.StageRuntime.execute(
    at com.streamsets.datacollector.runner.StageRuntime.execute(
    at com.streamsets.datacollector.runner.StagePipe.process(
    at com.streamsets.datacollector.runner.preview.PreviewPipelineRunner.lambda$runSourceLessBatch$0(
    at com.streamsets.datacollector.runner.PipeRunner.executeBatch(
    at com.streamsets.datacollector.runner.preview.PreviewPipelineRunner.runSourceLessBatch(
    at com.streamsets.datacollector.runner.preview.PreviewPipelineRunner.runPollSource(
    at com.streamsets.datacollector.execution.preview.sync.SyncPreviewer.start(
    at com.streamsets.datacollector.execution.preview.async.AsyncPreviewer.lambda$start$0(
    at com.streamsets.pipeline.lib.executor.SafeScheduledExecutorService$SafeCallable.lambda$call$0(
    at com.streamsets.pipeline.lib.executor.SafeScheduledExecutorService$
    at com.streamsets.pipeline.lib.executor.SafeScheduledExecutorService$SafeCallable.lambda$call$0(
    at com.streamsets.pipeline.lib.executor.SafeScheduledExecutorService$
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(
    at java.util.concurrent.ScheduledThreadPoolExecutor$
    at com.streamsets.datacollector.metrics.MetricSafeScheduledExecutorService$
    at java.util.concurrent.ThreadPoolExecutor.runWorker(
    at java.util.concurrent.ThreadPoolExecutor$
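For context on why HIVE_19 fires here: an oplog entry carries the changed document nested under its "o" key, and Data Collector represents that nested document as a MAP field, which the Hive Metadata processor does not support. A hypothetical sketch of the record shape (field values invented for illustration):

```python
# Hypothetical shape of a MongoDB oplog insert entry. The inserted
# document arrives nested under "o", which Data Collector reads as a
# MAP-typed field - the type the Hive Metadata processor rejects.
oplog_entry = {
    "ts": 1528879691,    # operation timestamp
    "op": "i",           # "i" = insert
    "ns": "mydb.users",  # namespace: database.collection
    "o": {               # the inserted document itself (a nested map)
        "_id": "5b20...",
        "name": "Alice",
    },
}

# The nested document is a map, not a primitive column value:
print(type(oplog_entry["o"]).__name__)
```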
1 Answer


answered 2018-06-13 20:55:00 -0500

metadaddy

updated 2018-06-14 09:05:01 -0500

It looks like you need to flatten your input record to be able to write it to Hive. Use preview, or write the data as JSON to the Local FS destination to see how it's arriving from the MongoDB oplog origin, then use Field Flattener and the other processors to get it into the shape you need.
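To illustrate the transformation Field Flattener performs inside Data Collector, here is a minimal Python sketch (field names and separator are hypothetical, not the processor's implementation): nested maps are promoted to top-level fields with a separator in the name, so Hive sees only primitive columns rather than a MAP.

```python
def flatten(record, sep=".", prefix=""):
    """Recursively flatten nested dicts into a single-level dict,
    joining nested keys with `sep` - roughly what Field Flattener
    does to a record with nested map fields."""
    flat = {}
    for key, value in record.items():
        name = f"{prefix}{sep}{key}" if prefix else key
        if isinstance(value, dict):
            flat.update(flatten(value, sep, name))
        else:
            flat[name] = value
    return flat

# A record shaped like one read from MongoDB (hypothetical data):
record = {"_id": "42", "address": {"city": "Paris", "zip": "75001"}}
print(flatten(record))
# {'_id': '42', 'address.city': 'Paris', 'address.zip': '75001'}
```

After flattening, every field is a primitive at the top level, so the Hive Metadata processor no longer sees a MAP-typed field.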

This article might be helpful: Transform Data in StreamSets Data Collector

See also the tutorial and video: Ingesting Drifting Data into Hive and Impala


Thank you for your help! I tried using the MongoDB origin (not MongoDB Oplog mode), following "Drift Synchronization Solution for Hive". I can see input and output data in the Hive Metadata processor, but no input data reaches the Hadoop FS destination or Hive Metastore, and I don't know what's wrong.

supersujj ( 2018-06-13 23:09:30 -0500 )

Do you have an example?

supersujj ( 2018-06-13 23:10:20 -0500 )

I added another useful link to the answer. You could export your pipeline to JSON, remove any passwords, and post it in a question to the Google Group at

metadaddy ( 2018-06-14 09:06:18 -0500 )