I'd say the most natural solution is to first group the list by equal times, then sum up each group's values. In Haskell:
tradesAccum = sortBy (compare`on`time)
          >>> groupBy ((==)`on`time)
          >>> map (map value >>> sum)
In case you try this and don't know where to find the necessary standard functions:
import Data.List (sortBy, groupBy)
import Data.Function (on)
import Control.Arrow ((>>>))
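To see it run end to end, here's a minimal self-contained sketch; the `Trade` record with its `time` and `value` fields is a hypothetical stand-in for whatever your actual trade type looks like:

```haskell
import Data.List (sortBy, groupBy)
import Data.Function (on)
import Control.Arrow ((>>>))

-- Hypothetical trade type; only `time` and `value` accessors are assumed.
data Trade = Trade { time :: Int, value :: Double }

-- Sort by time, group equal times together, sum each group's values.
tradesAccum :: [Trade] -> [Double]
tradesAccum = sortBy (compare`on`time)
          >>> groupBy ((==)`on`time)
          >>> map (map value >>> sum)

main :: IO ()
main = print $ tradesAccum [Trade 2 1.0, Trade 1 5.0, Trade 2 3.0]
-- prints [5.0,4.0]
```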
We can also make this nicely parallelisable and as efficient as with Map, while still using only lists. It is basically a variation of the above, but completely implemented as a prune-enabled, parallel merge sort:
import Control.Arrow ((***))
import Control.Parallel.Strategies

uniqueFstFoldSnd :: (Ord a, Semigroup b) => [(a, b)] -> [(a, b)]
uniqueFstFoldSnd [] = []
uniqueFstFoldSnd [x] = [x]
uniqueFstFoldSnd l = uncurry merge
       . withStrategy
           (if len > 100  -- only spark parallel evaluation for longer lists
              then parTuple2 (evalList r0) (evalList r0)
              else r0)
       $ uniqueFstFoldSnd *** uniqueFstFoldSnd
       $ splitAt (len `quot` 2) l
 where merge [] ys = ys
       merge xs [] = xs
       merge ((k, u):xs) ((k', v):ys)
         | k < k'    = (k , u   ) : merge xs ((k',v):ys)
         | k > k'    = (k', v   ) : merge ((k,u):xs) ys
         | otherwise = (k , u<>v) : merge xs ys
       len = length l
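To apply this to the trades example, the values need a `Semigroup` instance that combines by addition; the `Sum` wrapper from `Data.Semigroup` does exactly that. Here's a runnable sketch using a sequential variant of the function (the parallel strategy is dropped so it needs only `base`; the algorithm is otherwise the same):

```haskell
import Control.Arrow ((***))
import Data.Semigroup (Sum(..))

-- Sequential variant of the merge sort above (no parallel strategy).
uniqueFstFoldSnd :: (Ord a, Semigroup b) => [(a, b)] -> [(a, b)]
uniqueFstFoldSnd [] = []
uniqueFstFoldSnd [x] = [x]
uniqueFstFoldSnd l = uncurry merge
                   $ uniqueFstFoldSnd *** uniqueFstFoldSnd
                   $ splitAt (length l `quot` 2) l
 where merge [] ys = ys
       merge xs [] = xs
       merge ((k, u):xs) ((k', v):ys)
         | k < k'    = (k , u   ) : merge xs ((k',v):ys)
         | k > k'    = (k', v   ) : merge ((k,u):xs) ys
         | otherwise = (k , u<>v) : merge xs ys

main :: IO ()
main = print . map (fmap getSum)
     $ uniqueFstFoldSnd [(2, Sum 1), (1, Sum 5), (2, Sum 3), (1, Sum 2)]
-- prints [(1,7),(2,4)]
```

Note that the input doesn't need to be pre-sorted: the merge sort itself puts the keys in order while it folds equal ones together.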
Note that the parallelism doesn't yet give a significant performance improvement; I'm still experimenting with Strategies...