If `a` and `b` are methods that return futures and perform some computationally expensive operation, the `applicativeCombine` version will run them in parallel while `monadCombine` won't, which means `|@|` could be close to twice as fast.
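To make the parallelism point concrete without pulling in Scalaz, here's a minimal sketch using plain standard-library futures; the `slow`, `sequentialMillis`, and `parallelMillis` names are mine, and starting both futures before combining them approximates what an applicative combinator does for you:

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

object ParallelVsSequential {
  private def slow(x: Int): Int = { Thread.sleep(100); x }

  private def timeMillis(body: => Unit): Long = {
    val start = System.nanoTime()
    body
    (System.nanoTime() - start) / 1000000
  }

  // Monadic style: the second future isn't even created until the first
  // completes, because flatMap needs a value for `a` before running its body.
  def sequentialMillis(): Long = timeMillis {
    val result = for {
      a <- Future(slow(1))
      b <- Future(slow(2))
    } yield a + b
    Await.result(result, 2.seconds)
  }

  // Applicative style (approximated by starting both futures up front):
  // the two computations run concurrently and are then combined.
  def parallelMillis(): Long = timeMillis {
    val fa = Future(slow(1))
    val fb = Future(slow(2))
    Await.result(fa.zipWith(fb)(_ + _), 2.seconds)
  }

  def main(args: Array[String]): Unit =
    println(s"sequential: ${sequentialMillis()}ms, parallel: ${parallelMillis()}ms")
}
```

With two 100ms tasks, the sequential version takes roughly twice as long as the parallel one.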
If `a` and `b` are options, on the other hand (or even futures that aren't doing computationally expensive work), it's possible that the `for` version may be faster, since its desugared form involves a couple fewer allocations:
```scala
scala> import scalaz._, Scalaz._
import scalaz._
import Scalaz._

scala> import scala.reflect.runtime.universe._
import scala.reflect.runtime.universe._

scala> showCode(reify((Option(1) |@| Option(2))(_ + _)).tree)
res0: String = Scalaz.ToApplyOps(Option.apply(1))(Scalaz.optionInstance).|@|(Option.apply(2)).apply(((x$1, x$2) => x$1.+(x$2)))(Scalaz.optionInstance)

scala> showCode(reify(for { a <- Option(1); b <- Option(2) } yield a + b).tree)
res1: String = Option.apply(1).flatMap(((a) => Option.apply(2).map(((b) => a.+(b)))))
```
(But this is unlikely to make a meaningful difference in most programs.)
If this is Cats and the `a` and `b` are eithers and you have the "wrong" imports, so that you're getting `FlatMapSyntax` and `EitherInstances` instead of `EitherSyntax`, the `|@|` version might again be faster, since the `for` version would result in at least a couple of extra `Unapply` instances and some additional indirection.
So the short answer is yes, it depends, and if you're really concerned about performance you'll need to do careful benchmarking for your particular use case.
The best reason to prefer the applicative version doesn't have anything to do with performance, though: it's that it's a more precise representation of the computation you want to perform. Your `a` and `b` don't depend on each other, but the `for`-comprehension says they do. See my answer here for more discussion.
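To illustrate that last point with nothing but the standard library, here's a sketch using `Option#zip` as a stand-in for `|@|`, since it likewise combines two values without implying a dependency between them (the object and value names are mine):

```scala
object IndependentValues {
  val a: Option[Int] = Option(1)
  val b: Option[Int] = Option(2)

  // Applicative-style combination: `a` and `b` are visibly independent,
  // and the function at the end just combines the two results.
  val combined: Option[Int] = a.zip(b).map { case (x, y) => x + y }

  // Monadic combination: the nesting claims that `b` is computed after
  // (and could depend on) `a`, even though here it doesn't.
  val nested: Option[Int] = a.flatMap(x => b.map(y => x + y))

  def main(args: Array[String]): Unit =
    println(s"combined = $combined, nested = $nested")
}
```

Both produce `Some(3)`, but only the first form tells the reader (and the library) that the two values are independent.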