I have been experimenting with running streams in parallel and monitoring their behaviour, based on the API documentation and other supporting material I have read.
I create two parallel streams and run distinct() on each, one where the stream is ordered and one where it is unordered. I then print the results using forEachOrdered() (to ensure I see the resulting encounter order of the stream after distinct() has run). I can clearly see that the unordered version does not maintain the original ordering, which, with a large dataset, should improve parallel performance.
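For reference, the distinct() test looks roughly like this (a simplified sketch; the class name and input data are mine, just for illustration):

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class DistinctOrderingDemo {
    public static void main(String[] args) {
        // Input with duplicates in a known encounter order: 0..9 repeated ten times.
        List<Integer> data = IntStream.range(0, 100)
                .map(i -> i % 10)
                .boxed()
                .collect(Collectors.toList());

        System.out.println("Ordered parallel distinct:");
        data.parallelStream()
                .distinct()
                .forEachOrdered(n -> System.out.print(n + " "));

        System.out.println("\nUnordered parallel distinct:");
        data.parallelStream()
                .unordered()   // drop the encounter-order constraint
                .distinct()
                .forEachOrdered(n -> System.out.print(n + " "));
        System.out.println();
    }
}
```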
There are API notes suggesting that the limit() and skip() operations should also run more efficiently in parallel when the stream is unordered, since rather than having to retrieve the first n elements, the implementation can take any n elements. I have tried to simulate this in the same way as above, but when run in parallel, the result is always the same for both ordered and unordered streams: when I print the output after running limit(), even for an unordered (parallel) stream, it has still always picked the first n elements.
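The limit() test is along these lines (again a sketch; the concrete source size and value of n are arbitrary, and as noted below I varied both):

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class LimitOrderingDemo {
    public static void main(String[] args) {
        int n = 10;

        // Ordered parallel stream: limit() must return the first n elements.
        List<Integer> ordered = IntStream.range(0, 1_000_000)
                .parallel()
                .limit(n)
                .boxed()
                .collect(Collectors.toList());

        // Unordered parallel stream: limit() is free to return ANY n elements,
        // yet in my runs it still returns the first n.
        List<Integer> unordered = IntStream.range(0, 1_000_000)
                .parallel()
                .unordered()
                .limit(n)
                .boxed()
                .collect(Collectors.toList());

        System.out.println("Ordered:   " + ordered);
        System.out.println("Unordered: " + unordered);
    }
}
```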
Can anyone explain this? I tried varying the size of my input dataset and the value of n, and it made no difference. I would have expected the unordered stream to grab any n elements and optimise for parallel performance. Has anyone actually seen this happen in practice, and could you provide an example that showcases this behaviour consistently?