When performing the operation dask.dataframe.to_parquet(data), if data was read via Dask with a given number of partitions and you try to save it in parquet format after having removed some columns, it fails with an error like the following:
FileNotFoundError: [Errno 2] No such file or directory: 'part.0.parquet'
Has anyone encountered the same issue?
Here is a minimal example - note that way 1 works as expected, while way 2 does NOT:
import numpy as np
import pandas as pd
import dask.dataframe as dd
# -------------
# way 1 - works
# -------------
print('way 1 - start')
A = np.random.rand(200,300)
cols = np.arange(0, A.shape[1])
cols = [str(col) for col in cols]
df = pd.DataFrame(A, columns=cols)
ddf = dd.from_pandas(df, npartitions=11)
# drop some columns and resave
ddf = ddf.drop(cols[0:11], axis=1)
dd.to_parquet(
    ddf, 'error.parquet', engine='auto', compression='default',
    write_index=True, overwrite=True, append=False)
print('way 1 - end')
# ----------------------
# way 2 - does NOT work
# ----------------------
print('way 2 - start')
ddf = dd.read_parquet('error.parquet')
# drop a further set of columns (the first 11 were already dropped in way 1) and resave
ddf = ddf.drop(cols[11:22], axis=1)
dd.to_parquet(
    ddf, 'error.parquet', engine='auto', compression='default',
    write_index=True, overwrite=True, append=False)
print('way 2 - end')
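For what it's worth, my guess is that overwrite=True removes the 'error.parquet' directory before the lazily-read partitions are actually loaded, so the write step can no longer find the source files. Writing to a different path seems to avoid the error; below is a sketch of that workaround, continuing from the example above (the output name 'error_v2.parquet' is just an arbitrary path I picked):
# way 2 variant - write to a new path instead of overwriting the
# directory the data was read from; this seems to avoid the
# FileNotFoundError, though I would like to understand why
ddf = dd.read_parquet('error.parquet')
ddf = ddf.drop(cols[11:22], axis=1)
dd.to_parquet(
    ddf, 'error_v2.parquet', engine='auto', compression='default',
    write_index=True, append=False)
print('way 2 (workaround) - end')
Is overwriting the source path in place simply unsupported, or am I missing a step (e.g. persisting/computing the data before the overwrite)?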