4

Let's suppose I have a series of integer values arranged in a NumPy array like this:

import numpy as np

nan = np.nan
arr = np.array([3, nan, nan, nan, 5, nan, nan, nan, nan, nan])

The NaN values should be filled by counting down from the preceding non-null value to zero:

[3, 2, 1, 0, 5, 4, 3, 2, 1, 0]
cs95
Marco Fumagalli

3 Answers

8

IMO, the simplest pandas way of doing this is using groupby and cumcount with ascending=False:

import numpy as np
import pandas as pd

# Label the run that starts at each non-NaN value, then count down within it
s = pd.Series(np.cumsum(~np.isnan(arr)))
s.groupby(s).cumcount(ascending=False)

0    3
1    2
2    1
3    0
4    5
5    4
6    3
7    2
8    1
9    0
dtype: int64
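
To see why this works: np.cumsum(~np.isnan(arr)) labels each run that starts at a non-NaN value, and cumcount(ascending=False) then counts positions down to zero within each run. Note that this matches the desired values only because each non-NaN value equals its run length minus one (3 followed by three NaNs, 5 followed by five NaNs), as in the question's data. A quick look at the intermediate labels:

import numpy as np
import pandas as pd

nan = np.nan
arr = np.array([3, nan, nan, nan, 5, nan, nan, nan, nan, nan])

# Run labels: 1 for the run starting at 3, 2 for the run starting at 5
s = pd.Series(np.cumsum(~np.isnan(arr)))
print(s.tolist())
# [1, 1, 1, 1, 2, 2, 2, 2, 2, 2]

# Count down positions within each run
print(s.groupby(s).cumcount(ascending=False).tolist())
# [3, 2, 1, 0, 5, 4, 3, 2, 1, 0]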
cs95
3

Here's a vectorized one with NumPy -

import numpy as np

def backward_count(a):
    # Positions of the valid (non-NaN) values
    m = ~np.isnan(a)
    idx = np.flatnonzero(m)

    # Seed every position with -1 so the cumulative sum counts down by one per NaN
    p = np.full(len(a), -1, dtype=a.dtype)
    p[idx[0]] = a[idx[0]] + idx[0]   # offset so the cumsum lands on the first value

    # At each later valid position, a correction jumps the running sum
    # from the countdown onto the new start value
    d = np.diff(idx)
    p[idx[1:]] = np.diff(a[m]) + d - 1

    out = p.cumsum()
    out[:idx[0]] = np.nan            # positions before the first valid value stay NaN
    return out

Sample run with a more generic case -

In [238]: a
Out[238]: array([nan,  3., nan,  5., nan, 10., nan, nan,  4., nan, nan])

In [239]: backward_count(a)
Out[239]: array([nan,  3.,  2.,  5.,  4., 10.,  9.,  8.,  4.,  3.,  2.])
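
The idea is that every position is seeded with -1 (one step down per NaN), and each non-NaN index carries a correction so the cumulative sum jumps onto the new start value. A minimal sketch of that intermediate state for the generic sample above (variable names mirror the function's locals):

import numpy as np

a = np.array([np.nan, 3, np.nan, 5, np.nan, 10, np.nan, np.nan, 4, np.nan, np.nan])

m = ~np.isnan(a)
idx = np.flatnonzero(m)                         # positions of the non-NaN values

p = np.full(len(a), -1, dtype=a.dtype)          # default step: count down by 1
p[idx[0]] = a[idx[0]] + idx[0]                  # so the cumsum hits a[idx[0]] at idx[0]
p[idx[1:]] = np.diff(a[m]) + np.diff(idx) - 1   # jump corrections at later values

print(p)           # [-1.  4. -1.  3. -1.  6. -1. -1. -4. -1. -1.]
print(p.cumsum())  # [-1.  3.  2.  5.  4. 10.  9.  8.  4.  3.  2.]

The final out[:idx[0]] = np.nan step then masks the positions before the first valid value.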

Benchmarking

Setup with scaling up the given sample by 10,000x -

In [240]: arr = np.array([3, nan, nan, nan, 5, nan, nan, nan, nan, nan])

In [241]: arr = np.tile(arr,10000)

# Pandas based one by @cs95
In [243]: %%timeit
     ...: s = pd.Series(np.cumsum(~np.isnan(arr)))
     ...: s.groupby(s).cumcount(ascending=False)
35.9 ms ± 258 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)

In [245]: %timeit backward_count(arr)
3.04 ms ± 4.35 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Divakar
1
import pandas as pd
import numpy as np
import math

arr = pd.Series([3,np.nan,np.nan,np.nan,5,np.nan,np.nan,np.nan,np.nan,np.nan])

for i in range(len(arr)):
    # Check if each element is "NaN"
    if math.isnan(arr[i]):
        # If NaN then take the previous element and subtract 1
        arr[i] = arr[i-1]-1

# print the final array
print(arr)

Result:

0    3.0
1    2.0
2    1.0
3    0.0
4    5.0
5    4.0
6    3.0
7    2.0
8    1.0
9    0.0
dtype: float64
trker
  • 1) you have not explained your code (yes, it is obvious to me but not to everyone who reads it), and 2) using loops with numpy is a no-no. – cs95 May 27 '19 at 16:04
  • I'm new to numpy myself, why are loops bad? – trker May 27 '19 at 16:09
  • NumPy operations are implemented to be faster and more scalable than loops. The process is called ["vectorisation"](https://stackoverflow.com/questions/1422149/what-is-vectorization) and is a few steps above loops in terms of performance. – cs95 May 27 '19 at 16:15
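
As a small, self-contained illustration of the vectorisation point in the last comment (a toy example, not tied to this question's data): the same arithmetic done element by element in a Python loop versus as a single NumPy expression.

import numpy as np

a = np.arange(1_000_000, dtype=float)

# Python-level loop: one interpreted iteration per element
out_loop = np.empty_like(a)
for i in range(len(a)):
    out_loop[i] = a[i] * 2 + 1

# Vectorised: one NumPy expression evaluated in compiled code over the whole array
out_vec = a * 2 + 1

assert np.array_equal(out_loop, out_vec)   # same result, far fewer Python-level steps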