The pipes-and-filters pattern is well suited for this:
- The acquisition filter needs to run serially, in order.
- The processing filter can run in parallel.
- The transport-to-robot filter needs to run serially, in order.
To accomplish this with pre-existing technology, consider Intel's Threading Building Blocks (TBB); I have seen realtime applications that process large amounts of data use it for exactly this. In the Threading Building Blocks Tutorial, the "Working on the Assembly Line: pipeline" section describes a similar problem:
A simple text processing example will be used to demonstrate the usage of pipeline and filter to perform parallel formatting. The example reads a text file, squares each decimal numeral in the text, and writes the modified text to a new file. [...] Assume that the raw file I/O is sequential. The squaring filter can be done in parallel. That is, if you can serially read n chunks very quickly, you can transform each of the n chunks in parallel, as long as they are written in the proper order to the output file.
And the accompanying code:
#include "tbb/pipeline.h"

void RunPipeline( int ntoken, FILE* input_file, FILE* output_file ) {
    tbb::parallel_pipeline(
        ntoken,
        // Read chunks from the input file, one at a time, in order.
        tbb::make_filter<void,TextSlice*>(
            tbb::filter::serial_in_order, MyInputFunc(input_file) )
        // Square the numerals in each chunk; chunks are processed in parallel.
        & tbb::make_filter<TextSlice*,TextSlice*>(
            tbb::filter::parallel, MyTransformFunc() )
        // Write chunks to the output file in their original order.
        & tbb::make_filter<TextSlice*,void>(
            tbb::filter::serial_in_order, MyOutputFunc(output_file) ) );
}
Whether or not you end up using TBB, it serves as a great implementation reference for a pipes-and-filters design that decouples the pattern from the algorithms, while still letting you control data ordering and threading per filter.
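Adapting that structure to the acquisition/processing/transport pipeline above might look something like the sketch below. The Frame type and the AcquireFrame, ProcessFrame, and SendToRobot stubs are placeholders invented for illustration; only the tbb::parallel_pipeline scaffolding around them is what TBB actually provides.

#include "tbb/pipeline.h"

// Placeholder data unit and stage functions; substitute your real
// acquisition, processing, and transport code here.
struct Frame { /* sensor data */ };

static Frame* AcquireFrame()            { return nullptr; } // stub: real code blocks on the source
static Frame* ProcessFrame( Frame* f )  { return f; }       // stub: CPU-heavy, parallel-safe work
static void   SendToRobot( Frame* f )   { delete f; }       // stub: ordered transmit to the robot

void RunRobotPipeline( int ntoken ) {
    tbb::parallel_pipeline(
        ntoken,
        // Acquisition: one token at a time, in arrival order.
        tbb::make_filter<void,Frame*>(
            tbb::filter::serial_in_order,
            []( tbb::flow_control& fc ) -> Frame* {
                Frame* f = AcquireFrame();
                if( !f ) fc.stop();   // no more data: drain in-flight tokens and finish
                return f;
            } )
        // Processing: runs on multiple tokens concurrently.
        & tbb::make_filter<Frame*,Frame*>(
            tbb::filter::parallel,
            []( Frame* f ) { return ProcessFrame(f); } )
        // Transport to the robot: results re-serialized in input order.
        & tbb::make_filter<Frame*,void>(
            tbb::filter::serial_in_order,
            []( Frame* f ) { SendToRobot(f); } ) );
}

The ntoken argument caps how many frames are in flight at once, which is how you bound memory use and latency while still keeping the parallel processing stage busy.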