12

Given a data frame df like this:

id_      val     
11111    12
12003    22
88763    19
43721    77
...

I wish to add a column diff to df, where each row equals the val in that row, minus the diff in the previous row, multiplied by 0.4, plus the diff in the previous row:

diff = (val - diff_previous) * 0.4 + diff_previous

The diff in the first row equals val * 0.4 in that row. That is, the expected df should be:

id_      val     diff   
11111    12      4.8
12003    22      11.68
88763    19      14.608
43721    77      ...
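
The expected values can be checked by unrolling the recurrence by hand for the first few rows:

```python
# Worked check of the recurrence against the expected table:
diff0 = 12 * 0.4                      # first row seed: val * 0.4 -> 4.8
diff1 = (22 - diff0) * 0.4 + diff0    # -> 11.68
diff2 = (19 - diff1) * 0.4 + diff1    # -> 14.608
print(diff0, diff1, diff2)
```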

And I have tried:

mul = 0.4
df['diff'] = df.apply(lambda row: (row['val'] - df.loc[row.name, 'diff']) * mul + df.loc[row.name, 'diff'] if int(row.name) > 0 else row['val'] * mul, axis=1) 

But got this error:

TypeError: ("unsupported operand type(s) for -: 'float' and 'NoneType'", 'occurred at index 1')

Do you know how to solve this problem? Thank you in advance!

jezrael
user5779223

4 Answers

10

You can use:

df.loc[0, 'diff'] = df.loc[0, 'val'] * 0.4

for i in range(1, len(df)):
    df.loc[i, 'diff'] = (df.loc[i, 'val'] - df.loc[i-1, 'diff']) * 0.4  + df.loc[i-1, 'diff']

print (df)
     id_  val     diff
0  11111   12   4.8000
1  12003   22  11.6800
2  88763   19  14.6080
3  43721   77  39.5648

The iterative nature of the calculation, where each step's input depends on the result of the previous step, makes vectorization difficult. You could use apply with a function that does the same calculation as the loop, but behind the scenes that would also be a loop.
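
That said, this particular recurrence rearranges to diff[i] = 0.4 * val[i] + 0.6 * diff[i-1], which is exactly an exponentially weighted mean, so pandas' `ewm` can compute it without an explicit Python loop. A sketch, re-creating the sample df from the question; prepending a single zero reproduces the first-row seed diff[0] = val[0] * 0.4:

```python
import pandas as pd

# Sample frame from the question.
df = pd.DataFrame({'id_': [11111, 12003, 88763, 43721],
                   'val': [12, 22, 19, 77]})

# diff[i] = (val[i] - diff[i-1]) * 0.4 + diff[i-1]
#         = 0.4 * val[i] + 0.6 * diff[i-1]
# i.e. an exponentially weighted mean with alpha=0.4, adjust=False.
# A leading 0 makes the first real step produce 0.4 * val[0].
padded = pd.concat([pd.Series([0.0]), df['val'].astype(float)],
                   ignore_index=True)
df['diff'] = padded.ewm(alpha=0.4, adjust=False).mean().iloc[1:].to_numpy()
print(df)
```

This produces the same 4.8, 11.68, 14.608, 39.5648 column as the loop above.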

Charles R
jezrael
  • I think it is the only solution, given that the vectorization method is not available. But surprisingly the speed is quite fast :) – user5779223 Jun 27 '16 at 02:01
  • 3
    I'm sorry @user5779223, but it's not fast! I have a 1,7 M rows x 11 columns dataset, which I need to `groupby` on a column with about 80k distinct values, and `apply` this kind of running aggregate (with a little `if` inside). `cumsum` and `cumcount` run in 800 and 300 microseconds respectively. The applied callback, doing `iterrows` on the `GroupByDataframe` runs in 4 minutes. I'm currently checking if `numba` can help me out here. – Tomasz Gandor Apr 15 '19 at 19:57
  • @TomaszGandor asking 1 year late, but did numba work for you? I have 70 M rows and I am attempting to generate new variables based on recursive values and conditional statements. I want to know how can I speed the process as much as I can. – Pleastry Aug 11 '20 at 10:07
  • @Turtle - I don't remember how it ended ;) Faced with this today, I'd install modin (https://modin.readthedocs.io/en/latest/using_modin.html) and check whether it helps. – Tomasz Gandor Aug 12 '20 at 11:10
  • @TomaszGandor oh might have a look at it for later projects. I tried numba on big nested loops and time is reduced to less than half the execution time of the regular python code. Converting all inputs from pandas to numpy was a bit annoying though – Pleastry Aug 13 '20 at 12:44
5

Recursive functions are not easily vectorisable. However, you can optimize your algorithm with numba. This should be preferable to a regular loop.

import numpy as np
from numba import jit

@jit(nopython=True)
def foo(val):
    diff = np.zeros(val.shape)
    diff[0] = val[0] * 0.4
    for i in range(1, diff.shape[0]):
        diff[i] = (val[i] - diff[i-1]) * 0.4 + diff[i-1]
    return diff

df['diff'] = foo(df['val'].values)

print(df)

     id_  val     diff
0  11111   12   4.8000
1  12003   22  11.6800
2  88763   19  14.6080
3  43721   77  39.5648
jpp
1

If you are using apply in pandas, you should not reference the dataframe itself inside the lambda function.

The only object you should work with inside the lambda, in all cases, is 'row'.

Michael Tamillow
  • But how can I extract data from the row before the current one? – user5779223 Jul 24 '16 at 15:02
  • You can't in an apply with axis=1. Each row is treated as an isolated data structure, and the order of the rows is not important. If you want to use a previous value, you can create a new column using .shift() and then apply across the new row and subtract within the row. – Michael Tamillow Jul 25 '16 at 16:22
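
A small sketch of the `.shift()` idea from the comment (the column names `prev_val` and `val_change` are illustrative, not from the question). Note that this only works when the "previous" value comes from a column that already exists; it cannot express this question's diff, which depends on its own previous output:

```python
import pandas as pd

df = pd.DataFrame({'val': [12, 22, 19, 77]})

# shift() exposes the previous row's value on the current row...
df['prev_val'] = df['val'].shift()

# ...so the subtraction becomes plain row-wise arithmetic
# (NaN in the first row, where there is no previous value).
df['val_change'] = df['val'] - df['prev_val']
print(df)
```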
1

I just want to add another alternative to jezrael's answer. My answer is similar, but I found it to be much faster:

def calc_diff(val: pd.Series) -> pd.Series:
    diff = pd.Series(0.0, index=range(len(val)))
    diff[0] = val[0] * 0.4
    for i in range(1, len(val)):
        diff[i] = (val[i] - diff[i-1]) * 0.4 + diff[i-1]
    return diff

df['diff'] = calc_diff(df['val'])

I tested using 10,000 rows of random numbers and the result is 194ms vs 4s for jezrael's method.

Abang F.