
How to copy tables from one redshift database to another?

Hello,

Sorry if this question has already been asked; I couldn't find the solution I'm looking for. Maybe there's a better way to achieve this.

Here's what I am trying to do: I have a couple of tables in a source database that I want to copy to 4 different databases on a different Redshift cluster. On the destination Redshift side, I don't want to touch the existing tables; instead, I'm trying to create staging tables, load the data into them, and then flip them at the end to make them the main tables.
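To illustrate the staging idea, a minimal sketch of the per-table DDL I have in mind (table names and the `_stage` suffix here are hypothetical, not my actual schema), generated in Python:

```python
# Hypothetical sketch: build the CREATE TABLE statement for a staging copy
# of each table. Redshift supports CREATE TABLE ... (LIKE ...), which
# inherits the source table's column definitions.

def stage_table_ddl(table: str, suffix: str = "_stage") -> str:
    """Build a CREATE TABLE ... (LIKE ...) statement for a staging copy."""
    return f"CREATE TABLE {table}{suffix} (LIKE {table});"

for table in ["orders", "customers"]:  # placeholder table names
    print(stage_table_ddl(table))
```

The same statement would be run once per destination database before the load starts.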

Here's what I have so far: one main SDC pipeline with a "JDBC Multitable Consumer" origin, a "JDBC Producer" destination, and a "Pipeline Finisher Executor" to stop the pipeline at the end and reset the origin. On this pipeline, I have also created a pipeline start event and a stop event, each of which should kick off a separate pipeline to do the following:

  1. Pipeline start event: it should kick off another pipeline that creates the stage tables in all the destination databases.
  2. Pipeline stop event: it should kick off another pipeline that flips the stage tables to main tables and drops the old tables.
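For step 2, the flip I'm picturing is a rename swap followed by a drop. A sketch of the statements (again with hypothetical names; in practice the renames would go inside a transaction):

```python
# Hypothetical sketch of the "flip" step: rename the live table out of
# the way, promote the staging table, then drop the old copy.

def flip_statements(table: str, suffix: str = "_stage") -> list[str]:
    """Return the DDL statements that swap a staging table into place."""
    return [
        f"ALTER TABLE {table} RENAME TO {table}_old;",
        f"ALTER TABLE {table}{suffix} RENAME TO {table};",
        f"DROP TABLE {table}_old;",
    ]

for stmt in flip_statements("orders"):  # placeholder table name
    print(stmt)
```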

Am I doing this the correct way?

Then I tried to test this solution, but it gave me an error saying it couldn't connect to the start event pipeline / stop event pipeline because they were not running.

Instead of creating the start event pipeline / stop event pipeline, what other options do I have so that I don't have to create a separate pipeline for the tasks mentioned above?

Please let me know if I can provide any additional details.

Thank you, Parth
