There's a general solution to this problem: function memoization. For a pure function (one with no side effects; memoization won't work correctly for non-pure functions), the result of a call is always the same for the same argument values. An obvious optimization, therefore, is to cache the result on the first call and return the cached value on subsequent calls.
You can achieve this with something like the following (a memoization class for pure functions with a single argument, updated—see comment below—to make it thread-safe):
/** Memoize a pure function `f(A): R`.
 *
 * @constructor Create a new memoized function.
 * @tparam A Type of argument passed to function.
 * @tparam R Type of result received from function.
 * @param f Pure function to be memoized.
 */
final class Memoize1[A, R](f: A => R) extends (A => R) {

  // Cached function call results.
  private val result = scala.collection.mutable.Map.empty[A, R]

  /** Call memoized function.
   *
   * If the function has not been called with the specified argument value, then the
   * function is called and the result cached; otherwise the previously cached
   * result is returned.
   *
   * @param a Argument value to be passed to `f`.
   * @return Result of `f(a)`.
   */
  def apply(a: A): R = synchronized(result.getOrElseUpdate(a, f(a)))
}

/** Memoization companion. */
object Memoize1 {

  /** Memoize a specific function.
   *
   * @tparam A Type of argument passed to function.
   * @tparam R Type of result received from function.
   * @param f Pure function to be memoized.
   */
  def apply[A, R](f: A => R): Memoize1[A, R] = new Memoize1(f)
}
Assuming that the function you're memoizing is `hydrateImpl`, you can then define and use `runOnce` as follows (note that it becomes a `val`, not a `def`):
val runOnce = Memoize1(hydrateImpl)
runOnce(someRequest) // Executed on first call with new someRequest value, cached result subsequently.
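For a concrete (purely hypothetical) illustration of the caching behaviour, assuming for the sake of example that the request and result are both `String`s and that hydration is expensive:

// Hypothetical stand-in for hydrateImpl; Thread.sleep simulates expensive (but pure) work.
def hydrateImpl(request: String): String = {
  Thread.sleep(1000)
  request.toUpperCase
}

val runOnce = Memoize1(hydrateImpl)
runOnce("someRequest")  // Slow: hydrateImpl runs and its result is cached.
runOnce("someRequest")  // Fast: the previously cached result is returned.
runOnce("otherRequest") // Slow again: a new argument value, so hydrateImpl runs.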
UPDATE: Regarding thread-safety.
In reply to the comment from user1913596, the answer is "no"; `scala.collection.mutable.Map.getOrElseUpdate` is not thread-safe. However, it's fairly trivial to synchronize access, and I have updated the original code accordingly (embedding the call within `synchronized(...)`).
The performance hit of locking access should be outweighed by the improved execution time (assuming that `f` is nontrivial).
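If the single `synchronized` lock ever does become a measurable bottleneck, another option (not part of the code above, just a sketch) is to back the cache with `java.util.concurrent.ConcurrentHashMap`, whose `computeIfAbsent` invokes the mapping function at most once per key and locks at a much finer granularity than a single monitor:

import java.util.concurrent.ConcurrentHashMap
import java.util.function.{Function => JFunction}

// Sketch of a finer-grained variant; used the same way as Memoize1.
// Caveat: ConcurrentHashMap does not permit null keys or values.
final class ConcurrentMemoize1[A, R](f: A => R) extends (A => R) {
  private val cache = new ConcurrentHashMap[A, R]()
  private val compute = new JFunction[A, R] { def apply(a: A): R = f(a) }
  def apply(a: A): R = cache.computeIfAbsent(a, compute)
}

Whether this is worth the extra machinery depends on how contended the cache actually is; for most uses the synchronized version above is perfectly adequate.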