JDBC Multitable Consumer - incremental load not picking up new records

asked 2017-11-01 11:35:14 -0600 by Roh

updated 2017-12-18 13:19:13 -0600 by metadaddy

I'm using a timestamp column as the offset column, with an initial offset value set in milliseconds. On the first run the pipeline reads the correct records and counts, but on subsequent runs it doesn't pick up any new records. The source (Redshift) gets new records every 15 minutes. My query interval is {15 * MINUTES}, and the pipeline summary confirms that a job runs on that schedule.

Surprisingly, if I stop and restart the pipeline, it picks up the rows from the last offset value, and then the same behavior repeats. My table configuration is below; I've tried transaction isolation levels of default, read committed, and serializable. I'm happy to provide other details if required.

Table Properties

(screenshots: table configuration)
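To make the expected behavior concrete, the incremental read I'm describing is equivalent to this sketch (using sqlite3 as a stand-in for Redshift; the `events` table and `ts_millis` column are placeholder names, not my actual schema):

```python
import sqlite3

# In-memory stand-in for the Redshift source table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (ts_millis INTEGER, payload TEXT)")
conn.executemany("INSERT INTO events VALUES (?, ?)",
                 [(1000, "a"), (2000, "b"), (3000, "c")])

def read_batch(conn, last_offset):
    """Fetch rows strictly newer than the committed offset, ordered by it."""
    rows = conn.execute(
        "SELECT ts_millis, payload FROM events "
        "WHERE ts_millis > ? ORDER BY ts_millis",
        (last_offset,)).fetchall()
    # The new committed offset is the largest offset value seen.
    new_offset = rows[-1][0] if rows else last_offset
    return rows, new_offset

# First run: starts from the initial offset and reads everything.
batch, offset = read_batch(conn, 0)

# New data arrives between runs; the next run should pick it up
# from the committed offset without a restart.
conn.execute("INSERT INTO events VALUES (4000, 'd')")
batch, offset = read_batch(conn, offset)
```

That second `read_batch` call is the step that returns nothing in my pipeline until I restart it.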



Are you using partitioning? If so, then that 15 minute query interval will be per partition per table, so you might want to try reducing the query interval to see if the data starts flowing.

— metadaddy (2017-11-01 11:43:23 -0600)

I'm not using partitioning, and the Multitable Consumer is configured with only one table.

— Roh (2017-11-01 11:48:00 -0600)

Do you see the query running in sdc.log? You'll need to set log level to DEBUG.

— metadaddy (2017-11-01 12:00:41 -0600)

I set the log level to DEBUG and checked the logs; I don't see any entries appearing other than at query run time.

— Roh (2017-11-01 12:17:47 -0600)
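For anyone else following the DEBUG suggestion above, one way to scope the extra logging to just the JDBC stages is a logger override (a sketch; the file location and logger package are assumptions based on a typical Data Collector install, so verify them against your own `$SDC_CONF` and SDC version):

```
# Append to $SDC_CONF/sdc-log4j.properties (assumed location), then
# restart Data Collector and watch sdc.log for the generated queries.
log4j.logger.com.streamsets.pipeline.lib.jdbc=DEBUG
```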

What is the offset committed value after it successfully runs once (before you restart)? Should be visible in pipeline history.

— jeff (2017-11-01 12:29:07 -0600)