
I have a CSV file with "Date" and "Time" columns.

        Date      Time    Asset  Qty     Price Operation           Order   Fee
0  09.08.2020  10:26:11  Si-6.20    1  68675.00       Buy  26010327752252  1.06
1  09.08.2020  10:28:34  BR-7.20    2     40.80      Sell  26010327909139  2.48
2  09.08.2020  10:31:10  BR-7.20    2     40.68      Sell  26010328155020  2.48
3  09.08.2020  13:01:42  Si-6.20    4  68945.00      Sell  26010337903445  4.24
4  09.08.2020  13:01:48  BR-7.20    1     40.04       Buy  26010337907162  1.24

What I am trying to do is combine the Date and Time columns into one DateTime column.

            DateTime    Asset  Qty     Price Operation           Order   Fee
0 2020-09-08 10:26:11  Si-6.20    1  68675.00       Buy  26010327752252  1.06
1 2020-09-08 10:28:34  BR-7.20    2     40.80      Sell  26010327909139  2.48
2 2020-09-08 10:31:10  BR-7.20    2     40.68      Sell  26010328155020  2.48
3 2020-09-08 13:01:42  Si-6.20    4  68945.00      Sell  26010337903445  4.24
4 2020-09-08 13:01:48  BR-7.20    1     40.04       Buy  26010337907162  1.24

Here is the code that I used:

    df = pd.read_csv('table.csv', sep=';', dtype=dtypes)
    dt = pd.to_datetime(df['Date'] + ' ' + df['Time'])
    df.drop(['Date','Time'], axis=1, inplace=True)
    df.insert(0, 'DateTime', dt)

Is there a more elegant way to do this? I mean, converting the date and time columns into one datetime column while reading the CSV file.

iamgm

2 Answers


You could use an `apply` + `lambda` combo, which is a popular pattern in pandas (though, as the comments below point out, row-wise `apply` is usually slower than vectorized operations).

I also used an f-string, which I find more compact and readable, but f-strings are only available in Python 3.6+.

df = pd.read_csv('table.csv', sep=';', dtype=dtypes)
df["DateTime"] = df.apply(lambda row: pd.to_datetime(f'{row["Date"]} {row["Time"]}'), axis="columns")
df.drop(['Date','Time'], axis=1, inplace=True)

And if you want to get extra fancy you could chain them via `assign` (chaining `.drop` directly onto the result of `apply` would act on the returned Series, not the DataFrame, and raise an error):

df = pd.read_csv('table.csv', sep=';', dtype=dtypes)
df = (df.assign(DateTime=df.apply(lambda row: pd.to_datetime(f'{row["Date"]} {row["Time"]}'),
                                  axis="columns"))
        .drop(['Date', 'Time'], axis=1))
wfgeo
  • This is a worse solution than what OP suggested, albeit with method chaining. Why would you use `apply` here? – Umar.H Nov 02 '20 at 16:23
  • Would you like to perhaps elaborate on why you think it is worse? From my understanding, `apply` is generally the preferred method for performing bulk operations on dataframes. – wfgeo Nov 03 '20 at 12:14
  • See [this answer](https://stackoverflow.com/questions/54432583/when-should-i-not-want-to-use-pandas-apply-in-my-code#:~:text=apply%20is%20usually%20fine%20here,may%20still%20offer%20reasonable%20performance.) which goes over `apply` in great detail, covering when it should and shouldn't be used. `apply` is preferred only if there is no other option, and there is no need for a row-level operation here. – Umar.H Nov 03 '20 at 13:17
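To make the comparison in this comment thread concrete, here is a minimal sketch (data inlined from the sample in the question) showing that the vectorized concatenation from the question and the row-wise `apply` produce the same result; the vectorized form makes a single `pd.to_datetime` call over the whole column instead of one call per row, which is why it scales better:

```python
import io
import pandas as pd

csv_text = (
    "Date;Time;Asset\n"
    "09.08.2020;10:26:11;Si-6.20\n"
    "09.08.2020;10:28:34;BR-7.20\n"
)
df = pd.read_csv(io.StringIO(csv_text), sep=";")

# Vectorized: one pd.to_datetime call over the whole concatenated column.
vectorized = pd.to_datetime(df["Date"] + " " + df["Time"])

# Row-wise: one pd.to_datetime call per row via apply.
row_wise = df.apply(
    lambda row: pd.to_datetime(f'{row["Date"]} {row["Time"]}'),
    axis="columns",
)

# Both approaches yield identical timestamps.
assert (vectorized == row_wise).all()
```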

Since you're not sure what order your columns come in, we can just use a simple `assign` after reading the CSV, then drop and rename.

import pandas as pd

df = pd.read_csv(file, sep=";")
df = (df.assign(Date=pd.to_datetime(df["Date"] + " " + df["Time"]))
        .drop(columns="Time")
        .rename(columns={"Date": "DateTime"}))

print(df)
             DateTime    Asset  Qty     Price Operation           Order   Fee
0 2020-09-08 10:26:11  Si-6.20    1  68675.00       Buy  26010327752252  1.06
1 2020-09-08 10:28:34  BR-7.20    2     40.80      Sell  26010327909139  2.48
2 2020-09-08 10:31:10  BR-7.20    2     40.68      Sell  26010328155020  2.48
3 2020-09-08 13:01:42  Si-6.20    4  68945.00      Sell  26010337903445  4.24
4 2020-09-08 13:01:48  BR-7.20    1     40.04       Buy  26010337907162  1.24
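As a further sketch (not part of either answer above): `DataFrame.pop` removes a column and returns it in one step, so no separate `drop` call is needed, and passing an explicit `format` to `pd.to_datetime` skips per-element format inference, which is noticeably faster on large files. The month-first `%m.%d.%Y` format here is an assumption read off the sample output. pandas could also merge columns at read time via `parse_dates={'DateTime': ['Date', 'Time']}`, but that column-combining form is deprecated as of pandas 2.0.

```python
import io
import pandas as pd

csv_text = (
    "Date;Time;Asset;Qty;Price\n"
    "09.08.2020;10:26:11;Si-6.20;1;68675.00\n"
    "09.08.2020;13:01:42;Si-6.20;4;68945.00\n"
)
df = pd.read_csv(io.StringIO(csv_text), sep=";")

# pop() removes each column from df and returns it, so no separate drop().
# The explicit format avoids inference (month-first is assumed from the sample).
df.insert(0, "DateTime",
          pd.to_datetime(df.pop("Date") + " " + df.pop("Time"),
                         format="%m.%d.%Y %H:%M:%S"))
```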
Umar.H