
For every pair of src and dest airport cities, I want to return the percentile at which a given value of column b falls within column a.

I can do this manually as follows.

Example df with only 2 pairs of src/dest (I have thousands in my actual df):

dt  src dest    a   b
0   2016-01-01  YYZ SFO 548.12  279.28
1   2016-01-01  DFW PDX 111.35  -65.50
2   2016-02-01  YYZ SFO 64.84   342.35
3   2016-02-01  DFW PDX 63.81   61.64
4   2016-03-01  YYZ SFO 614.29  262.83

Here is the full df, with the to_dict() float-repr noise cleaned up and rewritten as an equivalent constructor:

df = pd.DataFrame({
    'dt': pd.date_range('2016-01-01', '2017-01-01', freq='MS').repeat(2),
    'src': ['YYZ', 'DFW'] * 13,
    'dest': ['SFO', 'PDX'] * 13,
    'a': [548.12, 111.35, 64.84, 63.81, 614.29, -207.49, 151.32, -56.43,
          611.37, -296.62, 6417.57, -376.26, 465.12, -821.73, 1270.67,
          -1410.09, 1312.66, -326.26, 1683.37, -24.44, 583.60, -5.24,
          1122.74, 195.21, 97.04, 133.94],
    'b': [279.28, -65.50, 342.35, 61.64, 262.83, 115.89, 268.64, 2.35,
          91.85, 62.12, 778.33, -142.78, 1675.53, -214.36, 983.81,
          -207.62, 632.14, -132.53, 422.36, 13.47, 642.73, -144.60,
          213.15, -50.20, 338.28, -129.69],
})

I want the percentile per group of src/dest pairs, so there should be exactly one percentile value for each pair: take the pair's b value where dt == 2017-01-01 and compute its percentile over that pair's entire column a. Make sense?

I can do this manually, for example for the specific pair src=YYZ and dest=SFO:

from scipy import stats
import datetime as dt
import pandas as pd

p0 = dt.datetime(2017, 1, 1)

# slice df for src=YYZ and dest=SFO and take the b value at p0
x = df[(df.src == 'YYZ') &
       (df.dest == 'SFO') &
       (df.dt == p0)].b.values[0]

# given b, what percentile does it fall in for the entire column a for YYZ, SFO?
stats.percentileofscore(df['a'], x)
61.53846153846154

In the above case, I did this manually for the single pair YYZ/SFO. However, I have thousands of pairs in my df.

How do I vectorize this using pandas features rather than looping through every pair?

There must be a way to use groupby and apply a function over each group? (A rough sketch of what I mean follows the desired output below.)

My desired df should look something like:

    src dest  percentile
0   YYZ SFO   61.54
1   DFW PDX   23.07
2   XXX YYY   blahblah1
3   AAA BBB   blahblah2
...
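Something like this untested sketch, perhaps (using scipy's percentileofscore per group; df and p0 as above):

from scipy import stats

def pct_of_group(g, p0):
    # the group's b value at the target date
    b0 = g.loc[g['dt'] == p0, 'b'].iloc[0]
    # percentile of that b over the group's entire column a
    return stats.percentileofscore(g['a'], b0)

result = (df.groupby(['src', 'dest'])
            .apply(lambda g: pct_of_group(g, p0))
            .reset_index(name='percentile'))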

UPDATE:

I implemented the following:

def b_percentile_a(df, x, y, b):
    # all a values for this src/dest pair
    z = df[(df['src'] == x) & (df['dest'] == y)].a
    r = stats.percentileofscore(z, b)
    return r

# .copy() avoids SettingWithCopyWarning when assigning the new column below
b_vector_df = df[df.dt == p0].copy()

b_vector_df['p0_a_percentile_b'] = \
    b_vector_df.apply(lambda x: b_percentile_a(df, x.src, x.dest, x.b), axis=1)

It takes 5.16 seconds for 100 pairs. I have 55,000 pairs, so one run will take about 5.16 s × 550 ≈ 47 minutes. I need to run this 36 times, so it's going to take over a day of run time.

There must be a faster approach?

  • Anyone have any ideas on how to implement the above for each src/dest group using `pandas` features in as few lines of code as possible that don't require manual looping? – codingknob Feb 07 '17 at 05:14
  • In your example calculation for a specific pair, you compute the percentile of your target value relative to *all* values of `df.a`. Is that what you want? Your description seems to indicate you want to calculate the percentile of the target value relative to only the rows of `df` with a particular `src` and `dest`, but that's not what your code actually does. Because of that, it gives a different result than the `b_percentile_a` function you give at the end. – BrenBarn Feb 20 '17 at 06:09
  • Also, if performance is your concern, it would be good to have a realistic sample data set to benchmark on. The performance of different solutions may vary depending on the nature of the data (e.g., how large each src/dest block is). – BrenBarn Feb 21 '17 at 07:27
  • Just to understand better: by 55,000 pairs, do you mean the dataframe has 55,000 entries, or do you actually have 55,000 different combinations of src and dest (and therefore many more rows in the dataframe)? – JARS Feb 22 '17 at 09:22
  • @BrenBarn - I want `to calculate the percentile of the target value relative to only the rows of df with a particular src and dest`. All df.a values for each src/dest pair. I believe my code at the bottom, i.e. `b_percentile_a()`, should do just that (but it's very slow). – codingknob Feb 22 '17 at 16:02
  • @JARS - I have 55,000 different combinations of src and dest. So yes, many more rows in the dataframe. Typically each src/dest pair has 12 rows so 55,000 x 12 is approx size. – codingknob Feb 22 '17 at 16:03
  • Does `b_vector_df = df[df.dt == p0]` need attention? You didn't mention it in your reply to @BrenBarn. If I understand it right, `a` and `b` typically have a size of 12 per pair, and the percentile has to be computed between these 12 values. – stovfl Feb 24 '17 at 14:02
  • @stovfl - you are right. I should have been more clear. It is the value of `b` where `df.dt == p0` for each `src/dest` pair whose percentile is to be computed over the entire 12 values of `a` for that pair. Make sense? – codingknob Feb 25 '17 at 18:58

5 Answers


Obtained an incredible saving of time!

Output:

Size of a_list: 49998 randomized unique values

percentile_1 (your given df, scipy)
computed percentile 104 times - 104 records in 0:00:07.777022

percentile_9 (class PercentileOfScore, rank_searchsorted_list, using given df)
computed percentile 104 times - 104 records in 0:00:00.000609

    dt         src dest        a        b  pct                scipy
0:  2016-01-01 YYZ SFO    548.12   279.28  74.81299251970079  74.8129925197
1:  2016-01-01 DFW PDX    111.35   -65.5   24.66698667946718  24.6669866795
2:  2016-02-01 YYZ SFO     64.84   342.35  76.4810592423697   76.4810592424
3:  2016-02-01 DFW PDX     63.81    61.64  63.84655386215449  63.8465538622
...
24: 2017-01-01 YYZ SFO     97.04   338.28  76.3570542821712   76.3570542822
25: 2017-01-01 DFW PDX    133.94  -129.69  21.4668586743469   21.4668586743

Looking at the implementation of scipy.stats.percentileofscore, I found that the whole list a is copied, inserted into, sorted, and searched on every call of percentileofscore.

I implemented my own class PercentileOfScore, which sorts a once up front:

import numpy as np
class PercentileOfScore(object):

    def __init__(self, aList):
        self.a = np.array( aList )
        self.a.sort()
        self.n = float(len(self.a))
        self.pct = self.__rank_searchsorted_list
    # end def __init__

    def __rank_searchsorted_list(self, score_list):
        adx = np.searchsorted(self.a, score_list, side='right')
        pct = []
        for idx in adx:
            # Python 2.x needs explicit type casting float(int)
            pct.append( (float(idx) / self.n) * 100.0 )

        return pct
    # end def _rank_searchsorted_list
# end class PercentileOfScore
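
For example, a minimal usage check (values hypothetical):

pos = PercentileOfScore([1.0, 2.0, 3.0, 4.0])
print(pos.pct([2.5, 4.0]))   # -> [50.0, 100.0]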

I don't think that def percentile_7 will fit your needs: dt is not considered.

PctOS = None
def percentile_7(df_flat):
    global PctOS
    result = {}
    for k in df_flat.pair_dict.keys():
        # df_flat.pair_dict = { 'src.dst': [b,b,...bn] }
        result[k] = PctOS.pct( df_flat.pair_dict[k] )

    return result
# end def percentile_7

In your manual sample you use the whole df.a; in this sample it is df_flat.a_list, but I'm not sure if this is what you want.

from PercentileData import DF_flat  # my own helper module (not shown)
def main():
    # DF_flat.data = {'dt.src.dest':[a,b]}
    df_flat = DF_flat()

    # Instantiate Global PctOS
    global PctOS
    # df_flat.a_list = [a,a,...an]
    PctOS = PercentileOfScore(df_flat.a_list)

    result = percentile_7(df_flat)
    # result = dict{'src.dst':[pct,pct...pctn]}

Tested with Python 3.4.2 and 2.7.9, numpy 1.8.2.
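
Since the PercentileData module is not shown, here is a self-contained sketch (my naming, not the author's) of how the same sort-once idea could be wired to the question's df, per (src, dest) group:

import pandas as pd

def group_percentiles(df, p0):
    rows = []
    for (src, dest), g in df.groupby(['src', 'dest']):
        pos = PercentileOfScore(g['a'].tolist())   # sort once per pair
        b0 = g.loc[g['dt'] == p0, 'b'].iloc[0]     # the pair's b at the target date
        rows.append((src, dest, pos.pct([b0])[0]))
    return pd.DataFrame(rows, columns=['src', 'dest', 'percentile'])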

stovfl
  • 14,998
  • 7
  • 24
  • 51
  • that seems amazing. Can you please share the code for `percentile_7` and your `class PercentileOfScore`? – codingknob Feb 17 '17 at 19:33
  • Would this work for python 2 as well? Aside from the basic syntax differences like print and print()? – AsheKetchum Feb 22 '17 at 13:49
  • @AsheKetchum Tested with Python 2.7.9, have to add explicit type casting float(int). Edited the Answer. Python 2.7.9 can handle print() as well. – stovfl Feb 22 '17 at 20:02
  • Why do we need the `for idx in adx:` loop? Instead, we could do `pct = adx / self.n * 100.0` to vectorize the division, right? – panc Jun 20 '21 at 04:33
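
As that last comment suggests, the loop inside the method can indeed be replaced by a vectorized division. A drop-in sketch of the method body (Python 3, untested against the original benchmark):

    def __rank_searchsorted_list(self, score_list):
        # searchsorted returns an integer index array;
        # numpy divides it elementwise, no explicit loop needed
        adx = np.searchsorted(self.a, score_list, side='right')
        return (adx / self.n * 100.0).tolist()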

Assuming you have a list of pairs, say pairs = [['YYZ', 'SFO'], ['DFW', 'PDX'], ...] and df is defined:

results = {}
for pair in pairs:
    # get the corresponding rows for each pair
    bvalues = df.loc[(df['src'] == pair[0]) & (df['dest'] == pair[1]), ['a', 'b']]
    # apply the percentileofscore map to every b value of the pair
    results[tuple(pair)] = bvalues.b.apply(
        lambda x: stats.percentileofscore(bvalues.a, x))

I am not entirely sure what the goal is. My understanding is you read a b value for each src, dest pair and look for the corresponding a value and then calculate the percentile of that a value. Let me know if this helps :)

EDIT: assuming you are only working with the five columns date, src, dest, a, and b, you can consider working with a copy of the data frame that only contains those 5 columns. It reduces the amount of work required by each extraction step. I feel like it is more efficient to work with only the amount of data you need. Selecting rows from a Dataframe based on values in multiple columns in pandas is a discussion that may be relevant for you.
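
For instance, a one-line sketch of that column subset:

# keep only the five columns needed for this computation
df_small = df[['dt', 'src', 'dest', 'a', 'b']].copy()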

AsheKetchum

You can groupby multiple columns at once.

# takes the b value at a specified point
# and returns its percentile of the full a array
def b_pct(df, p0):
    bval = df.b[df.dt == p0]
    assert bval.size == 1, 'can have only one entry per timestamp'
    bval = bval.values[0]
    # compute the percentile (share of a values below bval, times 100)
    return (df.a < bval).sum() / len(df.a) * 100

# splits the full dataframe up into groups by (src, dest) trajectory and
# returns a dataframe of the form src, dest, percentile
def trajectory_b_percentile(df, p0):
    percentile_df = pd.DataFrame([pd.Series([s, d, b_pct(g, p0)],
                                            index=['src', 'dest', 'percentile'])
                                  for ((s, d), g) in df.groupby(['src', 'dest'])])
    return percentile_df

For comparison, your code above spits out

           dt  src dest       a       b  p0_a_percentile_b
24 2017-01-01  YYZ  SFO   97.04  338.28          23.076923
25 2017-01-01  DFW  PDX  133.94 -129.69          46.153846

whereas `trajectory_b_percentile` returns

   src dest  percentile
0  DFW  PDX   46.1538
1  YYZ  SFO   23.0769

I didn't see any speedup with 25 entries, but it should be noticeable with more.
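
For reference, the table above can be produced with a call like this (assuming p0 from the question):

p0 = pd.Timestamp('2017-01-01')
print(trajectory_b_percentile(df, p0))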

Elliot

It seems another considerable speedup is obtained by converting everything to numpy arrays and constructing the percentiles also as a numpy array:

# Get airport strings as indices
_, ir = np.unique(df['src'].values,  return_inverse=True)
_, ic = np.unique(df['dest'].values, return_inverse=True)

# Get a and b columns
a = df['a'].values
b = df['b'].values

# Compute percentile scores in a numpy array
prc = np.zeros(a.shape)
for i in range(0, a.shape[0]):
    prc[i] = stats.percentileofscore(a[np.logical_and(ir==ir[i], ic==ic[i])], b[i])

On a dataframe with 24000 entries (see construction below), running %%timeit gives

1 loop, best of 3: 2.17 s per loop

However, the original version

df['p0_a_percentile_b'] = \
df.apply(lambda x: b_percentile_a(df,x.src,x.dest,x.b), axis=1)

yields

1 loop, best of 3: 1min 2s per loop

which is much slower. I also checked that both snippets produce the same output by running np.all(prc == df.p0_a_percentile_b.values), yielding True.
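
A further possible refinement (my sketch, not benchmarked) is to factor the (src, dest) labels into a single pair id, so each group's rows are located once per pair instead of once per row:

# combine the two label encodings into one pair id
pair_id = ir * (ic.max() + 1) + ic

prc = np.zeros(a.shape)
for pid in np.unique(pair_id):
    rows = np.where(pair_id == pid)[0]
    prc[rows] = [stats.percentileofscore(a[rows], bv) for bv in b[rows]]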

Appendix:

I constructed a dataframe to test this and here I share the process for reproducibility. I took 2000 pairs of airports using 100 unique airport names, then generated 12 dataframe rows per pair, and then generated random a and b columns.

import pandas as pd
import numpy as np
import scipy.stats as stats
import numpy.matlib as mat

# Construct dataframe

T=12
N_airports = 100
N_entries = 2000
airports = np.arange(0, N_airports).astype(str)  # .astype('string') on Python 2

src  = mat.repmat(airports[np.random.randint(N_airports, size=(N_entries, ))], 1, T)
dest = mat.repmat(airports[np.random.randint(N_airports, size=(N_entries, ))], 1, T)
a    = np.random.uniform(size=N_entries*T)
b    = np.random.uniform(size=N_entries*T)

df = pd.DataFrame(np.vstack((src, dest, a, b)).T, columns=['src', 'dest', 'a', 'b'])
# np.vstack casts everything to strings, so restore numeric dtypes for a and b
df[['a', 'b']] = df[['a', 'b']].astype(float)
JARS

Please verify and comment if this represents your Data Model!

  1. 6 ^ 6 Pairs [AAA-ZZZ] = 46,656 are used. Typically each PAIR has 12 RECORDS
  2. This is RECORD( 0 ) of PAIR( DFW PDX )
       dt        src dest       a           b
    0: 2016-01-01 DFW PDX   111.35       -65.5
    
  3. This is SET( DFW PDX ) = 13 RECORDS of PAIR( DFW PDX )
       dt        src dest       a           b
    0: 2016-01-01 DFW PDX   111.35       -65.5
    1: 2016-02-01 DFW PDX    63.81       61.64
    2: 2016-03-01 DFW PDX  -207.49      115.89
    3: 2016-04-01 DFW PDX   -56.43        2.35
    4: 2016-05-01 DFW PDX  -296.62       62.12
    5: 2016-06-01 DFW PDX  -376.26     -142.78
    6: 2016-07-01 DFW PDX  -821.73     -214.36
    7: 2016-08-01 DFW PDX -1410.09     -207.62
    8: 2016-09-01 DFW PDX  -326.26     -132.53
    9: 2016-10-01 DFW PDX   -24.44       13.47
    10:2016-11-01 DFW PDX    -5.24      -144.6
    11:2016-12-01 DFW PDX   195.21       -50.2
    12:2017-01-01 DFW PDX   133.94     -129.69
    
  4. Example: Calculate Percentile of RECORD( 0 )
       dt        src dest       a           b
    0: 2016-01-01 DFW PDX   111.35       -65.5
    

    Pseudocode: stats.percentileofscore( SET( DFW PDX )[a0...a12], -65.5) = 46.15

  5. Example: Calculate Percentile of SET( DFW PDX )

    Pseudocode
    for record in SET( DFW PDX ):
       stats.percentileofscore( SET( DFW PDX )[a0...a12], record.b)
    Output: pct0...pct12

    Using rank_searchsorted_list does not require the 'for record in' loop:
    rank_searchsorted_list( SET( DFW PDX )[a0...a12], SET( DFW PDX )[b0...b12] )
    Output: [pct0...pct12]

  6. This is SET( DFW PDX ) vectorized

    OBJECT = {'DFW PDX': [
        ['2016-01-01', '2016-02-01', '2016-03-01', '2016-04-01', '2016-05-01', '2016-06-01', '2016-07-01', '2016-08-01', '2016-09-01', '2016-10-01', '2016-11-01', '2016-12-01', '2017-01-01'],
        [111.35, 63.81, -207.49, -56.43, -296.62, -376.26, -821.73, -1410.09, -326.26, -24.44, -5.24, 195.21, 133.94],
        [-65.5, 61.64, 115.89, 2.35, 62.12, -142.78, -214.36, -207.62, -132.53, 13.47, -144.6, -50.2, -129.69],
        [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
    ]}
    
  7. Example: Calculate Percentile of OBJECT( DFW PDX )
    Using stats.percentileofscore:

    a = 1; b = 2
    for b_value in OBJECT['DFW PDX'][b]:
        stats.percentileofscore( OBJECT['DFW PDX'][a], b_value)
    Output: pct0...pct12  
    

    Using rank_searchsorted_list does not require the 'for b_value in' loop:

    a = 1; b = 2; pct = 3
    vector = OBJECT['DFW PDX']
    vector[pct] = rank_searchsorted_list( vector[a], vector[b] )  
    

    Output:

       dt         src dest      a           b       pct    scipy
    0: 2016-01-01 DFW PDX   111.35       -65.5     46.15   46.15
    1: 2016-02-01 DFW PDX    63.81       61.64     69.23   69.23
    2: 2016-03-01 DFW PDX  -207.49      115.89     84.61   84.61
    3: 2016-04-01 DFW PDX   -56.43        2.35     69.23   69.23
    4: 2016-05-01 DFW PDX  -296.62       62.12     69.23   69.23
    5: 2016-06-01 DFW PDX  -376.26     -142.78     46.15   46.15
    6: 2016-07-01 DFW PDX  -821.73     -214.36     38.46   38.46
    7: 2016-08-01 DFW PDX -1410.09     -207.62     38.46   38.46
    8: 2016-09-01 DFW PDX  -326.26     -132.53     46.15   46.15
    9: 2016-10-01 DFW PDX   -24.44       13.47     69.23   69.23
    10:2016-11-01 DFW PDX    -5.24      -144.6     46.15   46.15
    11:2016-12-01 DFW PDX   195.21       -50.2     53.84   53.84
    12:2017-01-01 DFW PDX   133.94     -129.69     46.15   46.15
    

Please verify and confirm the calculated percentile!
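
As a quick sanity check of the DFW/PDX numbers above, scipy agrees (a minimal snippet using the SET's a values):

from scipy import stats

a_set = [111.35, 63.81, -207.49, -56.43, -296.62, -376.26, -821.73,
         -1410.09, -326.26, -24.44, -5.24, 195.21, 133.94]
print(round(stats.percentileofscore(a_set, -65.5), 2))   # -> 46.15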

stovfl