At present there is no way to automatically partition directory contents across multiple data collectors.

You could run similar pipelines on multiple data collectors and manually partition the data in the origin by using different character ranges in the File Name Pattern configurations. For example, if you had two data collectors and your file names were distributed across the alphabet, the first instance might process [a-m]* and the second [n-z]*.

One way to do this is to set File Name Pattern to a runtime parameter, for example ${FileNamePattern}. You would then set the value for the pattern on the pipeline's Parameters tab, or when starting the pipeline via the CLI, API, UI, or Control Hub.
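
As a rough illustration, here is a minimal sketch of starting the same pipeline on two data collectors with different FileNamePattern values via the Data Collector REST API. The hostnames, pipeline ID, and credentials below are placeholders, and you should verify the start endpoint and runtime-parameter payload against your SDC version before relying on it:

```python
# Sketch: start one pipeline on two SDC instances, each with its own
# FileNamePattern runtime parameter value.
import requests

SDC_INSTANCES = {
    "http://sdc-1.example.com:18630": "[a-m]*",   # first collector handles a-m
    "http://sdc-2.example.com:18630": "[n-z]*",   # second collector handles n-z
}
PIPELINE_ID = "directoryPipeline"                 # hypothetical pipeline ID

for sdc_url, pattern in SDC_INSTANCES.items():
    resp = requests.post(
        f"{sdc_url}/rest/v1/pipeline/{PIPELINE_ID}/start",
        json={"FileNamePattern": pattern},        # runtime parameters as JSON body
        auth=("admin", "admin"),                  # default SDC credentials; change in production
        headers={"X-Requested-By": "sdc"},        # header expected by the SDC REST API
    )
    resp.raise_for_status()
    print(f"Started {PIPELINE_ID} on {sdc_url} with pattern {pattern}")
```

The same idea applies if you start the pipelines from Control Hub or the CLI: each data collector runs an identical pipeline, and only the runtime parameter value differs per instance.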