How can I transform my resulting dask.DataFrame into a pandas.DataFrame (let's say I am done with the heavy lifting and just want to apply sklearn to my aggregate result)?
3 Answers
You can call the `.compute()` method to transform a `dask.dataframe` into a pandas DataFrame:
df = df.compute()
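
For the sklearn use case in the question, a minimal sketch (the glob path, column names, and estimator are hypothetical placeholders):

```python
import dask.dataframe as dd
import pandas as pd
from sklearn.linear_model import LinearRegression  # hypothetical choice of estimator

# Hypothetical heavy lifting: any Dask pipeline that shrinks the data works here.
ddf = dd.read_csv("data/*.csv")
agg = ddf.groupby("key")[["x", "y"]].mean()

pdf = agg.compute()  # materialises the result as a regular pandas.DataFrame
assert isinstance(pdf, pd.DataFrame)

# From here on it's plain pandas, so sklearn works as usual.
model = LinearRegression().fit(pdf[["x"]], pdf["y"])
```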

- Would it be possible to rename it to something more intuitive, e.g. `to_pandas()`? – Alexey Grigorev Aug 18 '16 at 08:16
- Probably not, no. `.compute()` is pretty standard among all dask collections. – MRocklin Aug 18 '16 at 12:07
- `.compute()` is actually quite intuitive to anyone working with dask. – NirIzr Aug 18 '16 at 13:20
- @MRocklin, I am reading all the CSVs from a folder, so I cannot explicitly give each column name and dtype, and I am merging them into a single DataFrame on a common column. When I call `df.compute()`, I get `ValueError: The columns in the computed data do not match the columns in the provided metadata`. How do I handle this? – Pyd Nov 26 '18 at 15:45
- @Pyd, check the `meta` in `read_csv`, which can be provided by a normal `pandas.read_csv()`; but you need to make sure such `meta` info is consistent across all the files you are reading in (see the sketch below). – sunt05 Mar 20 '19 at 14:12
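
One way to keep the inferred schema consistent is to pin the column dtypes at read time; a minimal sketch, assuming the files share these (hypothetical) columns:

```python
import dask.dataframe as dd

# Forcing dtypes up front avoids per-file inference disagreeing with the
# metadata Dask guessed from its sample. Path and column names are hypothetical.
ddf = dd.read_csv(
    "data/*.csv",
    dtype={"id": "int64", "value": "float64"},  # forwarded to pandas.read_csv
)
```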
MRocklin's answer is correct, and this answer gives more detail on when it's appropriate to convert from a Dask DataFrame to a Pandas DataFrame (and how to predict when it'll cause problems).
Each partition in a Dask DataFrame is a Pandas DataFrame. Running `df.compute()` will coalesce all the underlying partitions in the Dask DataFrame into a single Pandas DataFrame. That'll cause problems if the size of the Pandas DataFrame is bigger than the RAM on your machine.
If `df` has 30 GB of data and your computer has 16 GB of RAM, then `df.compute()` will blow up with a memory error. If `df` only has 1 GB of data, then you'll be fine.
You can run `df.memory_usage(deep=True).sum()` to compute the amount of memory that your DataFrame is using (on a Dask DataFrame the result is itself lazy, so call `.compute()` on it). This'll let you know if your DataFrame is sufficiently small to be coalesced into a single Pandas DataFrame.
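
A minimal sketch of that size check as a guard before converting (the input path and the threshold are hypothetical; pick a limit comfortably below your free RAM):

```python
import dask.dataframe as dd

ddf = dd.read_csv("data/*.csv")  # hypothetical input

# The size estimate is lazy too, and far cheaper than materialising the data.
nbytes = ddf.memory_usage(deep=True).sum().compute()
print(f"DataFrame is ~{nbytes / 1e9:.1f} GB")

if nbytes < 4e9:  # hypothetical threshold
    pdf = ddf.compute()  # safe to coalesce into a single pandas DataFrame
```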
Repartitioning changes the number of underlying partitions in a Dask DataFrame. `df.repartition(1).partitions[0]` is conceptually similar to `df.compute()`.
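
A sketch of that repartitioning pattern (input path is hypothetical); note the result is still lazy until you compute it:

```python
import dask.dataframe as dd

ddf = dd.read_csv("data/*.csv")  # hypothetical input

# Squeeze everything into one partition, then take that partition.
one_part = ddf.repartition(npartitions=1).partitions[0]

# Still a lazy, single-partition Dask DataFrame at this point; computing it
# is what actually materialises the pandas DataFrame.
pdf = one_part.compute()
```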
Converting to a Pandas DataFrame is especially sensible after performing a big filtering operation. If you filter a 100 billion row dataset down to 10 thousand rows, then you can probably just switch to the Pandas API.
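
For example, a filter-then-convert sketch (the path, column, and predicate are hypothetical):

```python
import dask.dataframe as dd

ddf = dd.read_csv("data/*.csv")  # hypothetical input

# Cut the data down while it is still lazy, then convert the small result.
recent = ddf[ddf["year"] >= 2020]  # hypothetical column and predicate
pdf = recent.compute()             # fine if the filtered result fits in RAM
```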

pd_df = pd.DataFrame(dsk_df)

Here you go. It's faster than `dsk_df.compute()`.

- In my experience this just returns a DataFrame with only the column names, transposed into a single row. – closedloop Jun 22 '21 at 13:41