Note that in most cases you can retrace what the built-in collectors do and provide these functions directly to a stream of primitive values, e.g.:
Map<Long, Long> map = LongStream.rangeClosed(1, limit).parallel()
    .collect(HashMap::new, (m,l) -> m.put(l, properDivsSum(l)), Map::putAll);
This only differs in the treatment of key collisions, but since we know that there won’t be any collisions, that’s irrelevant here.
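To make that difference concrete, here is a small standalone sketch (the class name CollisionDemo and the string data are made up for illustration): the accumulator-based collect silently keeps the last value for a duplicate key, whereas Collectors.toMap without a merge function throws.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class CollisionDemo {
    public static void main(String[] args) {
        // Keys are string lengths; "bb" and "cc" collide on key 2.
        // The accumulator-based collect overwrites, so the last value wins:
        Map<Integer, String> byLength = Stream.of("a", "bb", "cc")
            .collect(HashMap::new, (m, s) -> m.put(s.length(), s), Map::putAll);
        System.out.println(byLength); // {1=a, 2=cc}

        // Collectors.toMap without a merge function rejects duplicates instead:
        try {
            Stream.of("a", "bb", "cc")
                .collect(Collectors.toMap(String::length, s -> s));
        } catch (IllegalStateException e) {
            System.out.println("toMap rejected the duplicate key");
        }
    }
}
```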
However, you should ask yourself: why are you storing long values in a Map? That's a really bad data structure for this task. Instead, consider:
import java.util.stream.IntStream;
import java.util.stream.LongStream;

public class AmicablePairs {
    public static void main(String[] args) {
        final int limit = 20_000;
        long[] map = LongStream.rangeClosed(1, limit).parallel()
            .map(AmicablePairs::properDivsSum).toArray();
        IntStream.rangeClosed(1, limit).parallel()
            .forEach(n -> {
                long m = map[n-1];
                if(m > n && m <= limit && map[(int)m-1] == n)
                    System.out.printf("%s %s %n", n, m);
            });
    }
    // return long, not Long, to avoid needless boxing
    public static long properDivsSum(long n) {
        return LongStream.rangeClosed(1, (n+1)/2).filter(i -> n%i == 0).sum();
    }
}
Note that, since the range streams have a predictable size, the array generation will be much more efficient than the toMap collector, which doesn't know the expected size. That's especially relevant for parallel processing: with a known size, the toArray operation doesn't require intermediate storage that has to be merged afterwards. Plus, there's no boxing conversion required.
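The size information the toArray operation relies on is exposed through the stream's spliterator. A minimal sketch (the class name SizedDemo is made up) showing that a range stream knows its exact size up front and reports the SIZED and SUBSIZED characteristics:

```java
import java.util.Spliterator;
import java.util.stream.LongStream;

public class SizedDemo {
    public static void main(String[] args) {
        Spliterator.OfLong s = LongStream.rangeClosed(1, 20_000).spliterator();
        // The exact element count is known before any element is produced:
        System.out.println(s.estimateSize()); // 20000
        // SIZED: the stream knows its size; SUBSIZED: so do all its splits,
        // which is what lets a parallel toArray write into one preallocated array:
        System.out.println(s.hasCharacteristics(Spliterator.SIZED | Spliterator.SUBSIZED)); // true
    }
}
```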
By the way, the second operation, which prints the values, is unlikely to be accelerated by parallel processing, as the internal synchronization of System.out.printf will negate most of the potential benefit. I'd remove the .parallel() from it.
Another option is to separate the arithmetic, which can benefit from parallel processing, from the printing:
long[] map = LongStream.rangeClosed(1, limit).parallel()
    .map(AmicablePairs::properDivsSum).toArray();
int[] found = IntStream.rangeClosed(1, limit).parallel()
    .filter(n -> {
        long m = map[n-1];
        return m > n && m <= limit && map[(int)m-1] == n;
    }).toArray();
Arrays.stream(found).forEach(n -> System.out.printf("%s %s %n", n, map[n-1]));
but I don't know whether it will improve the performance, as the operations of the second stream may be too simple to compensate for the initial overhead of parallel processing.
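If in doubt, measure. Below is a rough timing sketch (the class name TimingSketch is made up, and System.nanoTime is only a crude indicator; a proper benchmark would use JMH and warm-up iterations) that runs the pair search sequentially and in parallel and reports the elapsed time for each:

```java
import java.util.stream.IntStream;
import java.util.stream.LongStream;

public class TimingSketch {
    static long properDivsSum(long n) {
        return LongStream.rangeClosed(1, (n + 1) / 2).filter(i -> n % i == 0).sum();
    }

    public static void main(String[] args) {
        final int limit = 20_000;
        // Precompute the divisor sums in parallel, as in the answer above:
        long[] map = LongStream.rangeClosed(1, limit).parallel()
            .map(TimingSketch::properDivsSum).toArray();

        // Run the cheap second pass both ways and compare wall-clock time:
        for (boolean parallel : new boolean[] { false, true }) {
            long start = System.nanoTime();
            IntStream s = IntStream.rangeClosed(1, limit);
            if (parallel) s = s.parallel();
            long pairs = s.filter(n -> {
                long m = map[n - 1];
                return m > n && m <= limit && map[(int) m - 1] == n;
            }).count();
            System.out.printf("parallel=%b pairs=%d time=%dms%n",
                parallel, pairs, (System.nanoTime() - start) / 1_000_000);
        }
    }
}
```

Whatever the timings show on a given machine, the pair count itself must of course be identical in both runs.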