When I need to parallelize tasks I normally use Spark. Reading articles about parallel processing in Akka, such as http://blog.knoldus.com/2011/09/19/power-of-parallel-processing-in-akka/, it seems that Akka operates at a lower level. Spark abstracts away lower-level concepts such as map-reduce and instead provides high-level operations for grouping and filtering data. Is Akka a competitor to Spark for parallelizing tasks, or are they solving different problems?
Before deciding which one to use, what considerations should I make?