I've got a dataframe with the following information:

    filename    val1    val2
t                   
1   file1.csv   5       10
2   file1.csv   NaN     NaN
3   file1.csv   15      20
6   file2.csv   NaN     NaN
7   file2.csv   10      20
8   file2.csv   12      15
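
For reference, a minimal sketch that reconstructs this example frame (assuming the t column is set as the index):

import numpy as np
import pandas as pd

# NaN marks the values that should be interpolated
df = pd.DataFrame({
    't':        [1, 2, 3, 6, 7, 8],
    'filename': ['file1.csv'] * 3 + ['file2.csv'] * 3,
    'val1':     [5, np.nan, 15, np.nan, 10, 12],
    'val2':     [10, np.nan, 20, np.nan, 20, 15],
}).set_index('t')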

I would like to interpolate the values in the dataframe based on the indices, but only within each file group.

To interpolate, I would normally do

df = df.interpolate(method="index")

And to group, I do

grouped = df.groupby("filename")

I would like the interpolated dataframe to look like this:

    filename    val1    val2
t                   
1   file1.csv   5       10
2   file1.csv   10      15
3   file1.csv   15      20
6   file2.csv   NaN     NaN
7   file2.csv   10      20
8   file2.csv   12      15

The NaNs should still be present at t = 6, since they are the first items in the file2 group.

I suspect I need to use "apply", but haven't been able to figure out exactly how...

grouped.apply(interp1d)
...
TypeError: __init__() takes at least 3 arguments (2 given)

Any help would be appreciated.

R. W.

3 Answers

>>> df.groupby('filename').apply(lambda group: group.interpolate(method='index'))
    filename  val1  val2
t                       
1  file1.csv     5    10
2  file1.csv    10    15
3  file1.csv    15    20
6  file2.csv   NaN   NaN
7  file2.csv    10    20
8  file2.csv    12    15
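
Note: depending on your pandas version, apply may prepend the group key (filename) to the result's index. If you want to keep the original t index, passing group_keys=False is one option (a sketch, assuming you do not need the group keys in the index):

df.groupby('filename', group_keys=False).apply(lambda group: group.interpolate(method='index'))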
Alexander
    This seems to work on a small dataframe, but takes a *long* time on a large one (orders of magnitude more time than just using .interpolate()). Would you expect the efficiency to be that much worse? The number of groups is small compared to the number of rows (roughly 1000 groups compared to 10^7 rows). – R. W. May 06 '16 at 17:57

I ran into this as well. Instead of using apply, you can use transform, which will reduce your run time by more than 25% if you have on the order of 1000 groups:

import numpy as np
import pandas as pd

np.random.seed(500)
test_df = pd.DataFrame({
    'a': np.random.randint(low=0, high=1000, size=10000),
    'b': np.random.choice([1, 2, 4, 7, np.nan], size=10000, p=([0.2475]*4 + [0.01]))
})

Tests:

%timeit test_df.groupby('a').transform(pd.DataFrame.interpolate)

Output: 566 ms ± 27.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

%timeit test_df.groupby('a').apply(pd.DataFrame.interpolate)

Output: 788 ms ± 10.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

%timeit test_df.groupby('a').apply(lambda group: group.interpolate())

Output: 787 ms ± 17.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

%timeit test_df.interpolate()

Output: 918 µs ± 16.9 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
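
The %timeit lines above only measure the calls; to actually keep the group-wise result, one option (a sketch using the toy test_df above) is to assign the transformed column back:

# interpolate column 'b' within each group of 'a' and write it back
test_df['b'] = test_df.groupby('a')['b'].transform(pd.Series.interpolate)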

You will still see a significant increase in run-time compared to a fully vectorized call to interpolate on the full DataFrame, but I don't think you can do much better in pandas.
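
One possible workaround, if the groups are contiguous in the index (as in the question's example) and trailing NaNs inside a group are not a concern, is a single interpolate over the whole frame followed by re-blanking anything that sits before a group's first valid value. This is only a sketch using the question's column names (filename, val1, val2), not a drop-in replacement for the per-group call:

# one vectorized interpolation over the whole frame
filled = df[['val1', 'val2']].interpolate(method='index')
for col in ['val1', 'val2']:
    # True from a group's first non-NaN value onward, False before it
    seen_valid = (df[col].notna()
                         .astype(int)
                         .groupby(df['filename'])
                         .cummax()
                         .astype(bool))
    # keep interpolated values only where the group has already had data
    df[col] = filled[col].where(seen_valid)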

PMende

Considering the long running times of the methods above, I suggest using a for loop with interpolate(). It is only a few lines of code and runs much faster.

for fname in df['filename'].unique():
    # interpolate only this file's rows; .loc avoids chained-assignment issues
    mask = df['filename'] == fname
    df.loc[mask, ['val1', 'val2']] = df.loc[mask, ['val1', 'val2']].interpolate(method='index')
WkCui
  • I just ran a comparison of the `groupby` approach and the for loop approach and they actually seem similarly performant on a 6000 row DataFrame with 130 groups. Both of my interpolations were running on the linear method though, I admit. – emigre459 Jan 10 '20 at 15:54