Avro to Parquet conversion
I am working on creating a pipeline with the Hive Drift Solution using Snappy compression. I verified the Hive table details, but the table is being created without Snappy compression, and the HDFS file is created as /.avro/sdc-fa8c663d-b55a-11e9-b76d-d31fde076fc7_5150dff1-eada-499d-b226-49170a3ef6c7. Please help me with the questions below.
Why is the table not being created with parquet.compression = SNAPPY?
When I create a Hive table directly in Hive with Parquet, it stores the data in an HDFS file such as /hivetabledir/000000_0, whereas through StreamSets the file is /.avro/sdc-fa8c663d-b55a-11e9-b76d-d31fde076fc7_5150dff1-eada-499d-b226-49170a3ef6c7. Why are the HDFS files created in these two different ways?
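To narrow down where the compression setting is being lost, it may help to compare the table properties of the StreamSets-created table against a manually created one. A minimal sketch in Hive SQL, assuming a hypothetical table name `mytable` (the real table name comes from your pipeline's Hive Metadata processor config):

```sql
-- Inspect what the Drift Solution actually set on the table;
-- look for parquet.compression in the output.
SHOW TBLPROPERTIES mytable;

-- For comparison: a manually created Parquet table with Snappy
-- compression declared explicitly via TBLPROPERTIES.
CREATE TABLE mytable_manual (id INT, name STRING)
STORED AS PARQUET
TBLPROPERTIES ('parquet.compression' = 'SNAPPY');
```

If `parquet.compression` is missing from the StreamSets-created table's properties, the issue is upstream in the pipeline's table-creation stage rather than in Hive itself.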
Can you post a screenshot of the Hadoop FS destination config?
Please find the requested details attached.
I don't see an attachment
Please let me know if you can see them now.
I see the uploads - I edited the question to make them visible as images.