The Local FS destination can generate events that you can use in an event stream. When you enable event generation, the destination generates event records each time the destination closes a file or completes streaming a whole file.
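For context, a file-closed event record carries the closed file's details. A rough sketch of what to expect (field names here follow the documented file closure event; verify against your Data Collector version):

```
sdc.event.type = file-closed          (record header attribute)
/filepath = /data/out/sdc-...         full path of the closed file
/filename = sdc-...                   name of the closed file
/length   = 1048576                   size of the file in bytes
```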
The event stream can be connected to the Pipeline Finisher executor. In the executor, add a precondition (on its General tab) to allow only the no-more-data event into the stage to trigger the executor. You can use the following expression:
`${record:eventType() == 'no-more-data'}`
Tip: records dropped by a precondition are handled according to the stage's error handling configuration. So, to avoid racking up error records, you might also configure the Pipeline Finisher executor to discard error records.
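Concretely, that means two settings on the Pipeline Finisher (a sketch of its General tab as it appears in the Data Collector UI):

```
Preconditions:    ${record:eventType() == 'no-more-data'}
On Record Error:  Discard
```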
Use this method when pipeline logic allows you to discard other event types generated by the origin.
How can we know whether all the files have been loaded or not?
I'm not completely sure how you can monitor this without checking the file count or going to the UI, but you can trigger an email on pipeline failure/error while it's writing to HDFS; that way you will at least be notified of issues.
Condition to add in the Email executor:
`${record:eventType() == 'ERROR'}`
This can be the email body:
Pipeline ${pipeline:title()} encountered an error.
At ${time:millisecondsToDateTime(record:eventCreation() * 1000)}, writing to HDFS failed: ${record:value('/id')}
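For example, a slightly fuller body could also pull in the pipeline name alongside the event's creation time (these are standard EL functions; `/filepath` is an assumption about what your event records carry, so adjust it to your data):

```
Pipeline ${pipeline:title()} (${pipeline:name()}) encountered an error.
Event created at ${time:millisecondsToDateTime(record:eventCreation() * 1000)}.
Failed while writing ${record:value('/filepath')} to HDFS.
```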
You can always customize the email body based on the information you need. Please follow [this] for more on the Email executor.
More on how the Pipeline Finisher works: when it receives an event that passes its precondition, it stops the pipeline and transitions it to a Finished state.
Your end product should look something like this:
