19

I am trying to convert one column of my dataframe to datetime. Following the discussion in https://github.com/dask/dask/issues/863, I tried the following code:

import pandas as pd
import dask.dataframe as dd

df['time'].map_partitions(pd.to_datetime, columns='time').compute()

But I am getting the following error message:

ValueError: Metadata inference failed, please provide `meta` keyword

What exactly should I put under meta? Should I supply a dictionary of ALL the columns in df, or only the 'time' column? And what type should I use? I have tried dtype and datetime64, but neither works so far.

Thank you, and I appreciate your guidance.

Update

I will include here the new error messages:

1) Using Timestamp

df['trd_exctn_dt'].map_partitions(pd.Timestamp).compute()

TypeError: Cannot convert input to Timestamp

2) Using datetime and meta

meta = ('time', pd.Timestamp)
df['time'].map_partitions(pd.to_datetime, meta=meta).compute()

TypeError: to_datetime() got an unexpected keyword argument 'meta'

3) Just using to_datetime: gets stuck at 2%

    In [14]: df['trd_exctn_dt'].map_partitions(pd.to_datetime).compute()
[                                        ] | 2% Completed |  2min 20.3s

Also, I would like to be able to specify the date format, as I would in pandas:

pd.to_datetime(df['time'], format='%m%d%Y')

Update 2

After updating to Dask 0.11, I no longer have problems with the meta keyword. Still, I can't get it past 2% on a 2GB dataframe.

df['trd_exctn_dt'].map_partitions(pd.to_datetime, meta=meta).compute()
    [                                        ] | 2% Completed |  30min 45.7s

Update 3

It worked better this way:

def parse_dates(df):
    return pd.to_datetime(df['time'], format='%m/%d/%Y')

meta = ('time', 'datetime64[ns]')  # name and dtype of the resulting Series
df.map_partitions(parse_dates, meta=meta)

I'm not sure whether this is the right approach or not.

dleal
  • what dask version are you on? – MRocklin Sep 20 '16 at 22:39
  • MRocklin, you were right. I updated to version 0.11 and no longer get any problems with the meta keyword. Still, it reaches 1-2% in less than 30 seconds but then gets stuck there for an hour. Any suggestions? – dleal Sep 21 '16 at 02:17
  • I think I semi-solved it by defining a function to parse the dates and applying it with map_partitions – dleal Sep 22 '16 at 17:31
  • In our experience, using the `format` keyword always results in enhanced performance. – skibee May 11 '19 at 19:32

5 Answers

24

Use astype

You can use the astype method to convert the dtype of a series to a NumPy dtype:

df.time.astype('M8[us]')

There is probably a way to specify a pandas-style dtype as well (edits welcome).
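
For instance, a minimal sketch of the call (the small frame below is invented for illustration; the pandas-style string 'datetime64[ns]' is equivalent to the NumPy spelling 'M8[ns]'):

import dask.dataframe as dd
import pandas as pd

# toy data, purely for illustration
pdf = pd.DataFrame({'time': ['2016-09-20 10:00:00', '2016-09-21 11:30:00']})
ddf = dd.from_pandas(pdf, npartitions=2)

ddf['time'] = ddf['time'].astype('datetime64[ns]')  # pandas-style spelling of 'M8[ns]'
print(ddf['time'].compute())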

Use map_partitions and meta

When using black-box methods like map_partitions, dask.dataframe needs to know the types and names of the output. There are a few ways to do this, listed in the docstring for map_partitions.

You can supply an empty Pandas object with the right dtype and name:

meta = pd.Series([], name='time', dtype=pd.Timestamp)

Or you can provide a tuple of (name, dtype) for a Series, or a dict of {name: dtype} for a DataFrame:

meta = ('time', pd.Timestamp)

Then everything should be fine:

df.time.map_partitions(pd.to_datetime, meta=meta)

If you were calling map_partitions on df instead, then you would need to provide the dtypes for everything. That isn't the case in your example, though.
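
For completeness, here is a minimal sketch of that DataFrame case (the toy frame and its 'price' column are invented for illustration):

import dask.dataframe as dd
import pandas as pd

pdf = pd.DataFrame({'time': ['09/20/2016', '09/21/2016'], 'price': [1.5, 2.5]})
ddf = dd.from_pandas(pdf, npartitions=2)

# meta must describe every column of the output, not just the converted one
meta = {'time': 'datetime64[ns]', 'price': 'float64'}
out = ddf.map_partitions(lambda part: part.assign(time=pd.to_datetime(part['time'])), meta=meta)
print(out.dtypes)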

MRocklin
  • Thank you MRocklin! Please see my updates in the question – dleal Sep 20 '16 at 21:58
  • This does not work for me anymore with pandas 0.20; I get `dtype not understood`. It works, however, with `meta = ('time', np.datetime64)` – architectonic May 31 '17 at 13:52
  • It worked for me, with no future-deprecation warning, with `meta = ('time', 'datetime64[ns]')` – FlorianGD Jul 19 '17 at 16:35
  • Here is a link to the [numpy datetime documentation](https://docs.scipy.org/doc/numpy/reference/arrays.datetime.html) – skibee May 13 '19 at 07:49
20

Dask also comes with dd.to_datetime, so this should work as well:

df['time'] = dd.to_datetime(df.time, unit='ns')

The values that unit accepts are the same as for pd.to_datetime in pandas; see the pandas documentation for the full list.
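
As a short sketch (the sample strings are invented), keyword arguments such as format appear to be forwarded to pd.to_datetime, which also covers the format requirement from the question:

import dask.dataframe as dd
import pandas as pd

pdf = pd.DataFrame({'time': ['09/20/2016', '09/21/2016']})  # invented sample
ddf = dd.from_pandas(pdf, npartitions=2)

# dd.to_datetime passes extra keyword arguments through to pd.to_datetime
ddf['time'] = dd.to_datetime(ddf['time'], format='%m/%d/%Y')
print(ddf['time'].compute())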

Arundathi
7

I'm not sure if this is the right approach, but mapping the column worked for me:

df['time'] = df['time'].map(lambda x: pd.to_datetime(x, errors='coerce'))
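
Note that errors='coerce' turns unparseable values into NaT instead of raising. A quick illustration (the sample values are invented):

import pandas as pd

print(pd.to_datetime(pd.Series(['09/20/2016', 'not a date']), errors='coerce'))
# 0   2016-09-20
# 1          NaT
# dtype: datetime64[ns]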
tmsss
6

This worked for me:

ddf["Date"] = ddf["Date"].map_partitions(pd.to_datetime,format='%d/%m/%Y',meta = ('datetime64[ns]'))

citynorman
1

If the datetime is in a non-ISO format, then map_partitions with an explicit format yields better results:

import dask
import pandas as pd
from dask.distributed import Client
client = Client()

ddf = dask.datasets.timeseries()
ddf = ddf.assign(datetime=ddf.index.astype(object))
# build a non-ISO string column ('HH:MM:SS YYYY-MM-DD') for the comparison
ddf = ddf.assign(datetime_nonISO=ddf['datetime'].astype(str).str.split(' ')
                 .apply(lambda x: x[1] + ' ' + x[0], meta=('object')))

%%timeit
ddf['datetime'] = ddf['datetime'].astype('M8[s]')
ddf.compute()

11.3 s ± 719 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

ddf = dask.datasets.timeseries()
ddf = ddf.assign(datetime=ddf.index.astype(object))
ddf = ddf.assign(datetime_nonISO=ddf['datetime'].astype(str).str.split(' ')
                 .apply(lambda x: x[1] + ' ' + x[0], meta=('object')))


%%timeit
ddf['datetime_nonISO'] = ddf['datetime_nonISO'].map_partitions(
    pd.to_datetime, format='%H:%M:%S %Y-%m-%d', meta=('datetime64[s]'))
ddf.compute()

8.78 s ± 599 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

ddf = dask.datasets.timeseries()
ddf = ddf.assign(datetime=ddf.index.astype(object))
ddf = ddf.assign(datetime_nonISO=ddf['datetime'].astype(str).str.split(' ')
                 .apply(lambda x: x[1] + ' ' + x[0], meta=('object')))

%%timeit
ddf['datetime_nonISO'] = ddf['datetime_nonISO'].astype('M8[s]')
ddf.compute()

1min 8s ± 3.65 s per loop (mean ± std. dev. of 7 runs, 1 loop each)

skibee