Using Hive Metastore, the scale/precision assigned are incorrect

asked 2018-05-30 09:53:54 -0500

updated 2018-05-30 10:36:06 -0500 by metadaddy

My pipeline gets data from a JDBC consumer and stores it in Hive and HDFS. It also stores the AVRO schema in HDFS.

My Hive table is defined as DECIMAL(3,2), whereas my source was DECIMAL(5,0). The Avro schema is stored correctly as (5,0), and in data preview the records have the right scale/precision too. For some reason the Hive Metastore processor generates the wrong table definition. The pipeline executes correctly the first time, but when I re-run it, it triggers an error for a wrong column definition.

Scale is configured as ${record:attribute(str:concat(str:concat('jdbc.', field:field()), '.scale'))<'0'?'0':record:attribute(str:concat(str:concat('jdbc.', field:field()), '.scale'))}

Precision is configured the same way, just replacing scale with precision.
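For reference, the intent of that expression is: read the JDBC-reported scale from the record header attribute named jdbc.&lt;field path&gt;.scale, and clamp negative values (which some drivers report) to 0. A minimal Python sketch of that logic, assuming the EL comparison is meant numerically (the helper name and attribute keys below are illustrative, not SDC API):

```python
def effective_scale(header_attrs, field_path):
    """Mimic the scale expression: look up 'jdbc.<field>.scale' in the
    record header attributes and fall back to '0' when it is negative.
    Hypothetical helper for illustration, not StreamSets API."""
    key = 'jdbc.' + field_path + '.scale'
    value = header_attrs.get(key, '0')
    return '0' if int(value) < 0 else value
```

One thing worth verifying: the EL compares the attribute against the string '0'. If that comparison is lexicographic rather than numeric in your SDC version, it may not behave the way the numeric sketch above does.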



How have you configured scale and precision in the Hive Metastore processor?

metadaddy ( 2018-05-30 10:04:12 -0500 )

Yes: SCALE is ${record:attribute(str:concat(str:concat('jdbc.', field:field()), '.scale'))<'0'?'0':record:attribute(str:concat(str:concat('jdbc.', field:field()), '.scale'))} and PRECISION is the same, just replacing scale with precision. ( 2018-05-30 10:17:35 -0500 )

Preview the pipeline with 'Show Record/Field Header' enabled and you should see what the database is giving you for scale and precision. You could also use an Expression Evaluator to set a couple of fields to the scale/precision expressions just for debugging.
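For example, the Expression Evaluator could write the raw header attributes into two throwaway fields so they show up in preview (the field names and the /MYCOL column path below are illustrative; substitute one of your actual column paths):

```
/debugScale      ${record:attribute('jdbc./MYCOL.scale')}
/debugPrecision  ${record:attribute('jdbc./MYCOL.precision')}
```

Explicit attribute keys are used here because field:field() inside an Expression Evaluator would resolve to the debug output field itself, not the source column.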

metadaddy ( 2018-05-30 10:38:05 -0500 )

Yes, I did that, and it shows the right scale and precision, but for some reason, when it creates the Hive table, it assigns the wrong scale/precision. The Avro file generated is also correct. ( 2018-05-30 10:42:39 -0500 )