The first pipeline (origin is S3) will store the files in S3, partitioned by year, month, and day based on the value of a date column. The second pipeline (origin is S3) will then perform the ETL and load the data into the DW.
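To show the layout I am aiming for, here is a minimal sketch of the first pipeline expressed with pandas and boto3; the bucket name, column name, and CSV format are placeholders, not my actual pipeline configuration.

```python
# Sketch of the intended first pipeline: write each day's records under a
# year=/month=/day= prefix in S3. Bucket, column name, and format are assumptions.
import io

import boto3
import pandas as pd

BUCKET = "my-data-lake"   # hypothetical bucket
DATE_COLUMN = "date"      # column the records are partitioned on


def write_partitioned(df: pd.DataFrame) -> None:
    """Write each day's records under a year/month/day prefix in S3."""
    s3 = boto3.client("s3")
    df[DATE_COLUMN] = pd.to_datetime(df[DATE_COLUMN])
    for day, part in df.groupby(df[DATE_COLUMN].dt.date):
        key = f"year={day.year}/month={day.month:02d}/day={day.day:02d}/part-000.csv"
        buf = io.StringIO()
        part.to_csv(buf, index=False)
        s3.put_object(Bucket=BUCKET, Key=key, Body=buf.getvalue())
```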
I have managed to build the first pipeline; the only problem is that it renames the output files, so the second pipeline cannot read or ingest the data.
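To make the mismatch concrete, the second pipeline currently expects a fixed object name, while the first pipeline writes generated filenames. A sketch of what I think the read side would need instead is to list everything under a day's partition prefix rather than a specific key (same placeholder bucket and prefix layout as above):

```python
# Sketch: read a whole partition prefix instead of a fixed filename, so the
# generated object names produced by the first pipeline no longer matter.
import boto3

BUCKET = "my-data-lake"  # same hypothetical bucket as above


def list_partition(year: int, month: int, day: int) -> list[str]:
    """Return every object key under one day's partition, whatever it was named."""
    s3 = boto3.client("s3")
    prefix = f"year={year}/month={month:02d}/day={day:02d}/"
    resp = s3.list_objects_v2(Bucket=BUCKET, Prefix=prefix)
    return [obj["Key"] for obj in resp.get("Contents", [])]


print(list_partition(2023, 5, 14))
```

Is something like this the right approach, or is there a way to control the filenames the first pipeline writes?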