
I am confused about the difference between client.persist() and client.compute(): both seem (in some cases) to start my calculations, and both return asynchronous objects. Yet they behave differently in my simple example:

In this example

from dask.distributed import Client
from dask import delayed
client = Client()

def f(*args):
    return args

result = [delayed(f)(x) for x in range(1000)]

x1 = client.compute(result)
x2 = client.persist(result)

Here x1 and x2 are different, but in a less trivial calculation where result is also a list of Delayed objects, client.persist(result) starts the calculation just as client.compute(result) does.

johnbaltis

1 Answer


The relevant doc page is here: http://distributed.readthedocs.io/en/latest/manage-computation.html#dask-collections-to-futures

As you say, both Client.compute and Client.persist take lazy Dask collections and start them running on the cluster. They differ in what they return.

  1. Client.persist returns a copy of each dask collection with its previously lazy computations now submitted to run on the cluster. The task graphs of these collections now just point to the currently running Future objects.

    So if you persist a dask dataframe with 100 partitions you get back a dask dataframe with 100 partitions, with each partition pointing to a future currently running on the cluster.

  2. Client.compute returns a single Future for each collection. This future refers to a single Python object result collected on one worker. This is typically used for small results.

    So if you compute a dask.dataframe with 100 partitions you get back a Future pointing to a single Pandas dataframe that holds all of the data.

More pragmatically, I recommend using persist when your result is large and needs to be spread among many computers and using compute when your result is small and you want it on just one computer.

In practice I rarely use Client.compute, preferring instead to use persist for intermediate staging and dask.compute to pull down final results:

import dask.dataframe as dd

df = dd.read_csv('...')
df = df[df.name == 'alice']
df = df.persist()  # compute up to here, keep results in memory

>>> df.value.max().compute()
100

>>> df.value.min().compute()
0

When using delayed

Delayed objects only have one "partition" regardless, so compute and persist are more interchangeable. Persist will give you back a lazy dask.delayed object while compute will give you back an immediate Future object.
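A short sketch of that distinction, assuming dask.distributed is installed; an in-process Client (processes=False) is used here only to keep the example self-contained, and the inc function is a made-up illustration:

```python
from dask import delayed
from dask.distributed import Client

# In-process scheduler so the sketch runs without an external cluster.
client = Client(processes=False)

@delayed
def inc(x):
    return x + 1

d = inc(41)

p = client.persist(d)   # still a lazy Delayed object
f = client.compute(d)   # an immediate Future

p_type = type(p).__name__   # 'Delayed'
f_type = type(f).__name__   # 'Future'
value = f.result()          # 42: block until the result arrives
persisted_value = p.compute()  # 42: the Delayed still computes normally

print(p_type, f_type, value, persisted_value)
client.close()
```

So with a single delayed task the computed values are identical; only the handle you get back differs.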

MRocklin
  • is there any difference between df.persist() and client.persist(df)? – jerrytim Feb 11 '20 at 04:15
  • No. I recommend using `df.persist` today – MRocklin Feb 12 '20 at 00:00
  • I found this video by @MRocklin very useful. persist() returns a Dask object (e.g. a Dask DataFrame or Array). compute() returns a non-Dask object (like a NumPy array or pandas DataFrame). Use compute() for small results and persist() for large, intermediate results. Link: https://youtu.be/MsnzpzFZAoQ – Arnab Biswas Dec 16 '20 at 12:45
  • @MRocklin In terms of memory usage, does this mean that there is no difference between `.persist()` and `.compute()` when using a *single machine*? – Michael Jun 28 '22 at 19:28