
I have the following dataframe:

Index_Date     A    B     C    D
================================
2015-01-31    10   10   NaN   10
2015-02-01     2    3   NaN   22
2015-02-02    10   60   NaN  280
2015-02-03    10  100   NaN  250

Require:

Index_Date    A   B    C     D
================================
2015-01-31    10   10    10   10
2015-02-01     2    3    23   22
2015-02-02    10   60   290  280
2015-02-03    10  100  3000  250

Column C for 2015-01-31 is derived by taking the value of D.

Then I need to take the value of C for 2015-01-31, multiply it by the value of A on 2015-02-01 and add B.

I have attempted an `apply` and a `shift` using an if/else, but this gives a KeyError.

    This is a good question. I have a similar need for a vectorized solution. It would be nice if pandas provided a version of `apply()` where the user's function is able to access one or more values from the previous row as part of its calculation, or at least return a value that is then passed 'to itself' on the next iteration. Wouldn't this allow some efficiency gains compared to a for loop? – Bill Oct 22 '18 at 19:41

7 Answers


First, create the derived value:

df.loc[0, 'C'] = df.loc[0, 'D']

Then iterate through the remaining rows and fill the calculated values:

for i in range(1, len(df)):
    df.loc[i, 'C'] = df.loc[i-1, 'C'] * df.loc[i, 'A'] + df.loc[i, 'B']


  Index_Date   A    B     C    D
0 2015-01-31  10   10    10   10
1 2015-02-01   2    3    23   22
2 2015-02-02  10   60   290  280
3 2015-02-03  10  100  3000  250
    is there a function in pandas to do this without the loop? – ctrl-alt-delete Jan 18 '16 at 15:03
  • 6
    The iterative nature of the calculation where the inputs depend on results of previous steps complicates vectorization. You could perhaps use `apply` with a function that does the same calculation as the loop, but behind the scenes this would also be a loop. http://pandas.pydata.org/pandas-docs/version/0.17.1/generated/pandas.DataFrame.apply.html – Stefan Jan 18 '16 at 15:13
  • If I use this loop and calculate on a merged dataframe and it finds a NaN it works, but only up to the row with the NaN. No errors are thrown. If I try a fillna I get AttributeError: 'numpy.float64' object has no attribute 'fillna'. Is there any way to skip the row with NaN or set values to zero? – ctrl-alt-delete Jan 18 '16 at 16:04
  • Do you mean missing values in columns other than `C`? – Stefan Jan 18 '16 at 16:08
  • 1
    Yes, your solution is fine. I just ensure I fill the NaNs in the dataframe before the loop. – ctrl-alt-delete Jan 18 '16 at 16:53
  • I'm trying to implement this solution, however, this only applies to dataframes with sequential index (0,1,2,3). My DataFrame has no sequential index (0,3,4,9). Any ideas how to handle it? ..After I post this question, I found that this can be solved by iloc instead of loc. – Sodanetworks May 18 '20 at 23:35
  • This is a great solution that I needed for doing a mortgage calculation table. I was thinking of a shift as below, but this works very well, and with mortgages the max amount of rows is 360, so speed will never be an issue. – Bryan Butler Dec 09 '20 at 15:13
  • @Sodanetworks , I got also issues with no sequential index. I couldn't just replace loc by iloc. I finally ended by regenerating the index using df.reset_index(drop=True, inplace=True) – luney Dec 17 '20 at 09:31
  • @Stefan it is true that behind the scene if you use apply(func....) you'd still have a loop. However I'd argue that this makes the code much cleaner, easier to read & easier to maintain, especially if one has a bunch of columns to compute this way. – logicOnAbstractions Nov 16 '21 at 01:57
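Putting the comment thread above together, a minimal sketch (with hypothetical data and a deliberately non-sequential index) that fills NaNs before the loop and uses positional `iloc` access, so the index values no longer matter:

```python
import pandas as pd

# Hypothetical data mirroring the question, with a non-sequential index
# like the one described in the comments.
df = pd.DataFrame({'A': [10, 2, 10, 10],
                   'B': [10, 3, 60, 100],
                   'C': [float('nan')] * 4,
                   'D': [10, 22, 280, 250]},
                  index=[0, 3, 4, 9])

# Fill NaNs in the input columns before the loop.
df[['A', 'B', 'D']] = df[['A', 'B', 'D']].fillna(0)

# Positional access via iloc sidesteps the non-sequential index.
c = df.columns.get_loc('C')
df.iloc[0, c] = df.iloc[0]['D']
for i in range(1, len(df)):
    df.iloc[i, c] = df.iloc[i - 1, c] * df.iloc[i]['A'] + df.iloc[i]['B']
```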

Given a column of numbers:

import pandas as pd

lst = []
cols = ['A']
for a in range(100, 105):
    lst.append([a])
df = pd.DataFrame(lst, columns=cols, index=range(5))
df
df

    A
0   100
1   101
2   102
3   103
4   104

You can reference the previous row with shift:

df['Change'] = df.A - df.A.shift(1)
df

    A   Change
0   100 NaN
1   101 1.0
2   102 1.0
3   103 1.0
4   104 1.0

You can fill the missing value with the `fill_value` parameter:

df['Change'] = df.A - df.A.shift(1, fill_value=df.A[0])  # fills in the missing value, e.g. 100
df

    A   Change
0   100 0.0
1   101 1.0
2   102 1.0
3   103 1.0
4   104 1.0
    This won't help in this situation because the value from the previous row is not known at the beginning. It has to be computed each iteration and then used in the next iteration. – Bill Oct 22 '18 at 19:27
  • 19
    I still am grateful for this answer because I stumbled across this, looking for a case where I do know the value from the previous row. So thanks @kztd – Kevin Pauli Apr 05 '20 at 22:33
  • 3
    Exactly what I was looking for. This also works faster because it has array operation instead of looping as suggested on other answers. – Dimanjan Feb 02 '22 at 18:23
  • 2
    `shift` is definitely the way to go. Use the `fill_value` parameter to provide a default value for that first row. – maccaroo Aug 02 '22 at 00:32
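Picking up on the comment about a value being "computed each iteration and then used in the next": the standard library's `itertools.accumulate` (with `initial=`, Python 3.8+) can carry the running result through the rows without hand-writing the loop. A sketch on the question's data; note this is still a Python-level loop under the hood, just like `apply`:

```python
from itertools import accumulate
import pandas as pd

df = pd.DataFrame({'A': [10, 2, 10, 10],
                   'B': [10, 3, 60, 100],
                   'D': [10, 22, 280, 250]})

# Each step receives the previous result: next = prev * A + B,
# seeded with the first row's D value.
df['C'] = list(accumulate(
    zip(df['A'].iloc[1:], df['B'].iloc[1:]),
    lambda prev, ab: prev * ab[0] + ab[1],
    initial=df['D'].iloc[0],
))
```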

numba

For recursive calculations which are not vectorisable, numba, which uses JIT-compilation and works with lower-level objects, often yields large performance improvements. You need only define a regular for loop and use the decorator `@njit` or (for older versions) `@jit(nopython=True)`.

For a reasonably sized dataframe, this gives a ~30x performance improvement versus a regular for loop:

import numpy as np
import pandas as pd
from numba import jit

@jit(nopython=True)
def calculator_nb(a, b, d):
    res = np.empty(d.shape)
    res[0] = d[0]
    for i in range(1, res.shape[0]):
        res[i] = res[i-1] * a[i] + b[i]
    return res

df['C'] = calculator_nb(*df[list('ABD')].values.T)

n = 10**5
df = pd.concat([df]*n, ignore_index=True)

# benchmarking on Python 3.6.0, Pandas 0.19.2, NumPy 1.11.3, Numba 0.30.1
# calculator() is same as calculator_nb() but without @jit decorator
%timeit calculator_nb(*df[list('ABD')].values.T)  # 14.1 ms per loop
%timeit calculator(*df[list('ABD')].values.T)     # 444 ms per loop
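For reference, the plain-Python `calculator` baseline in the timings above is only described, not shown; a reconstruction (per the benchmark comment, the same body minus the decorator) would be:

```python
import numpy as np

def calculator(a, b, d):
    # Identical to calculator_nb above, just without the @jit decorator,
    # so it runs as an ordinary interpreted Python loop.
    res = np.empty(d.shape)
    res[0] = d[0]
    for i in range(1, res.shape[0]):
        res[i] = res[i-1] * a[i] + b[i]
    return res
```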
  • 1
    It is wonderful! I have accelerated my function, which counts values from previous values. Thanks! – Artem Malikov Apr 13 '20 at 16:28
  • How can I use `@jit(nopython=True)` in jupyter-notebook? – sergzemsk Jan 06 '21 at 14:06
  • 1
    @sergzemsk, Just as you've written it (and in my answer), it's called a [decorator](https://www.python.org/dev/peps/pep-0318/). Note later versions of numba support the shortcut `@njit`. – jpp Jan 06 '21 at 14:10
  • @jpp i have `if` condition so this improvement failed. I got an error "TypingError: Failed in nopython mode pipeline (step: nopython frontend)" – sergzemsk Jan 06 '21 at 14:29
  • @sergzemsk, I suggest you ask a new question, not clear to me where the `if` statement sits, why it's not being vectorised by numba. – jpp Jan 06 '21 at 14:30

Applying the recursive function on numpy arrays will be faster than the current answer.

import numpy as np
import pandas as pd

df = pd.DataFrame(np.repeat(np.arange(1, 6), 3).reshape(5, 3), columns=['A', 'B', 'D'])
new = [df.D.values[0]]
for i in range(1, len(df.index)):
    new.append(new[i-1] * df.A.values[i] + df.B.values[i])
df['C'] = new

Output

      A  B  D    C
   0  1  1  1    1
   1  2  2  2    4
   2  3  3  3   15
   3  4  4  4   64
   4  5  5  5  325
  • 3
    This answer works perfectly for me with a similar calculation. I tried using a combination of cumsum and shift but this solution works much better. Thanks. – Simon Apr 16 '17 at 20:06

Although it has been a while since this question was asked, I will post my answer hoping it helps somebody.

Disclaimer: I know this solution is not standard, but I think it works well.

import pandas as pd
import numpy as np

data = np.array([[10, 2, 10, 10],
                 [10, 3, 60, 100],
                 [np.nan] * 4,
                 [10, 22, 280, 250]]).T
idx = pd.date_range('20150131', end='20150203')
df = pd.DataFrame(data=data, columns=list('ABCD'), index=idx)
df
               A    B     C    D
 =================================
 2015-01-31    10   10    NaN  10
 2015-02-01    2    3     NaN  22 
 2015-02-02    10   60    NaN  280
 2015-02-03    10   100   NaN  250

def calculate(mul, add):
    global value
    value = value * mul + add
    return value

value = df.loc['2015-01-31', 'D']
df.loc['2015-01-31', 'C'] = value
df.loc['2015-02-01':, 'C'] = df.loc['2015-02-01':].apply(lambda row: calculate(*row[['A', 'B']]), axis=1)
df
               A    B     C     D
 =================================
 2015-01-31    10   10    10    10
 2015-02-01    2    3     23    22 
 2015-02-02    10   60    290   280
 2015-02-03    10   100   3000  250

So basically we use `apply` from pandas with the help of a global variable that keeps track of the previously calculated value.


Time comparison with a for loop:

data = np.random.random(size=(1000, 4))
idx = pd.date_range('20150131', end='20171026')
df = pd.DataFrame(data=data, columns=list('ABCD'), index=idx)
df.C = np.nan

df.loc['2015-01-31', 'C'] = df.loc['2015-01-31', 'D']

%%timeit
for i in df.loc['2015-02-01':].index.date:
    df.loc[i, 'C'] = df.loc[(i - pd.DateOffset(days=1)).date(), 'C'] * df.loc[i, 'A'] + df.loc[i, 'B']

3.2 s ± 114 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

data = np.random.random(size=(1000, 4))
idx = pd.date_range('20150131', end='20171026')
df = pd.DataFrame(data=data, columns=list('ABCD'), index=idx)
df.C = np.nan

def calculate(mul, add):
    global value
    value = value * mul + add
    return value

value = df.loc['2015-01-31', 'D']
df.loc['2015-01-31', 'C'] = value

%%timeit
df.loc['2015-02-01':, 'C'] = df.loc['2015-02-01':].apply(lambda row: calculate(*row[['A', 'B']]), axis=1)

1.82 s ± 64.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

So the `apply` version takes about 0.57 times as long as the loop on average, i.e. it is roughly 1.8x faster.


It's an old question but the solution below (without a for loop) might be helpful:

def new_fun(df):
    prev_value = df.iloc[0]["C"]
    def func2(row):
        # nonlocal ==> uses prev_value from the enclosing new_fun scope
        nonlocal prev_value
        new_value = prev_value * row['A'] + row['B']
        prev_value = new_value
        return new_value
    # This line might throw a SettingWithCopyWarning warning
    df.iloc[1:]["C"] = df.iloc[1:].apply(func2, axis=1)
    return df

df = new_fun(df)
  • This makes some assumptions about `.apply` that may not be true: If `.apply` is parallelized or called in anything other than the order you expect the results will not be as expected. – feetwet Feb 14 '21 at 01:11
  • I agree with your concerns. The assumptions in this answer are based on the question of this thread. Also, apply isn't parallelized by default ... – Wazaa Feb 15 '21 at 09:04

In general, the key to avoiding an explicit loop is to join (merge) two instances of the dataframe, matching each row to its predecessor (rowindex-1 == rowindex).

You would then have a wide dataframe containing both row r and row r-1, on which you could use df.apply().

However, the overhead of creating the large dataset may offset the benefits of parallel processing...
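A minimal sketch of this self-join idea (hypothetical data). Note it only exposes the previous row's *original* values, so on its own it cannot produce the recursive C column from the question, where each value depends on the previously computed one:

```python
import pandas as pd

df = pd.DataFrame({'A': [10, 2, 10, 10], 'B': [10, 3, 60, 100]})

# Pair every row with its predecessor by joining a shifted copy.
merged = df.join(df.shift(1), rsuffix='_prev')
# merged now holds A, B, A_prev, B_prev on each row, so a single
# vectorized expression (or df.apply) can see both rows at once.
```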
