
My code works when none of the dates are missing, but as soon as there is a single missing value, I get an error. What is the best way to handle this so that the missing rows are retained in the final result?

fake_hol = '{"holiday_dt":{"0":"2000-04-23","1":"2001-04-15","2":"2002-03-31","3":"2000-01-01","4":"2000-01-17","5":"2000-05-29","6":"2000-07-04","7":"2000-09-04","8":"2000-10-09","9":"2000-11-11","10":"2000-11-23","11":"2000-11-24","12":"2000-12-25","13":"2000-12-23","14":"2001-01-01","15":"2001-01-15","16":"2001-05-28","17":"2001-07-04","18":"2001-09-03","19":"2001-10-08","20":"2001-11-11","21":"2001-11-22","22":"2001-11-23","23":"2001-12-25","24":"2001-12-22","25":"2002-01-01","26":"2002-01-21","27":"2002-05-27","28":"2002-07-04","29":"2002-09-02","30":"2002-10-14","31":"2002-11-11","32":"2002-11-28","33":"2002-11-29","34":"2002-12-25","35":"2002-12-21"},"holiday":{"0":"Easter","1":"Easter","2":"Easter","3":"New Year\'s Day","4":"Martin Luther King, Jr. Day","5":"Memorial Day","6":"Independence Day","7":"Labor Day","8":"Columbus Day","9":"Veterans Day","10":"Thanksgiving","11":"Black Friday","12":"Christmas Day","13":"Sat Before X-max","14":"New Year\'s Day","15":"Martin Luther King, Jr. Day","16":"Memorial Day","17":"Independence Day","18":"Labor Day","19":"Columbus Day","20":"Veterans Day","21":"Thanksgiving","22":"Black Friday","23":"Christmas Day","24":"Sat Before X-max","25":"New Year\'s Day","26":"Martin Luther King, Jr. Day","27":"Memorial Day","28":"Independence Day","29":"Labor Day","30":"Columbus Day","31":"Veterans Day","32":"Thanksgiving","33":"Black Friday","34":"Christmas Day","35":"Sat Before X-max"}}'

dfA_nomissing = pd.DataFrame({'x1': [3,4,2,4,5,6], 'x2': ['A','Z','G','I','D','H'], 'dt': ['2001-01-23','2001-08-14','2001-04-23','2001-08-08','2001-09-17','2001-11-11'], 'y': [1,1,1,0,1,0]})
dfA_1missing = pd.DataFrame({'x1': [3,4,2,4,5,6], 'x2': ['A','Z','G','I','D','H'], 'dt': ['2001-01-23','','2001-04-23','2001-08-08','2001-09-17','2001-11-11'], 'y': [1,1,1,0,1,0]})
from io import StringIO  # newer pandas versions require a file-like object rather than a raw JSON string
dfB = pd.read_json(StringIO(fake_hol))

dfA_nomissing
+----+----+------------+---+
| x1 | x2 |     dt     | y |
+----+----+------------+---+
|  3 | A  | 2001-01-23 | 1 |
|  4 | Z  | 2001-08-14 | 1 |
|  2 | G  | 2001-04-23 | 1 |
|  4 | I  | 2001-08-08 | 0 |
|  5 | D  | 2001-09-17 | 1 |
|  6 | H  | 2001-11-11 | 0 |
+----+----+------------+---+

dfB
+------------+-----------------------------+
| holiday_dt |           holiday           |
+------------+-----------------------------+
| 2000-04-23 | Easter                      |
| 2001-04-15 | Easter                      |
| 2002-03-31 | Easter                      |
| 2000-01-01 | New Year's Day              |
| 2000-01-17 | Martin Luther King, Jr. Day |
| ...        | ...                         |
| 2002-11-11 | Veterans Day                |
| 2002-11-28 | Thanksgiving                |
| 2002-11-29 | Black Friday                |
| 2002-12-25 | Christmas Day               |
| 2002-12-21 | Sat Before X-max            |
+------------+-----------------------------+

Here is the code, which compares the 'dt' column in dfA against the event dates in dfB and adds some time-relative features.

def add_calendar_cols(dfMain, dfEvents, date_col_list, eventname_col='Name', eventdate_col='Date'):
    
    # don't modify the originals for testing purposes
    df = dfMain.copy(deep=True)
    dfEvents = dfEvents.copy(deep=True)
    
    # convert date cols to datetime
    for c in date_col_list:
        df[c] = pd.to_datetime(df[c])
    dfEvents[eventdate_col] = pd.to_datetime(dfEvents[eventdate_col])
    
    # function that calculates days until the next event
    def calc_days(df, dfCal, direction, mainjoinkey, eventjoinkey):
        left = df.sort_values(mainjoinkey)
        s = pd.merge_asof(left, dfCal.sort_values(eventjoinkey), left_on=mainjoinkey, right_on=eventjoinkey, direction=direction)
        s.index = left.index  # realign with the caller's row order
        return (s[eventjoinkey] - s[mainjoinkey]).dt.days.abs()
    
    # unique list of events
    unique_events = dfEvents[eventname_col].unique().tolist()
    
    # loop in case there are multiple date columns
    for dtcol in date_col_list:
        
        # dataframe of unique dates
        dfDates = pd.DataFrame(df[dtcol].unique(), columns=[dtcol])
        
        # calc days until the next event (prefixed with the column name so multiple date columns don't collide)
        dfDates[dtcol + '_until_next'] = calc_days(dfDates, dfEvents, 'forward', dtcol, eventdate_col)

        # do the same for each specific event
        for e in unique_events:  
            dfDates[dtcol + '_days_until_' + e] = calc_days(dfDates, dfEvents[dfEvents[eventname_col].eq(e)], 'forward', dtcol, eventdate_col)

        # merge everything back to the original dataframe    
        df = df.merge(dfDates, how='left', on=dtcol)
        
    # return outside the loop, so every date column is processed
    return df

This works:

result = add_calendar_cols(dfA_nomissing, dfB, ['dt'], eventname_col='holiday', eventdate_col='holiday_dt')

This raises ValueError: Merge keys contain null values on left side:

result = add_calendar_cols(dfA_1missing, dfB, ['dt'], eventname_col='holiday', eventdate_col='holiday_dt')



<ipython-input-93-67afeca366e9> in calc_days(df, dfCal, direction, mainjoinkey, eventjoinkey)
     12     # requires pandas >= 1.1.0
     13     def calc_days(df, dfCal, direction, mainjoinkey, eventjoinkey):
---> 14         s = pd.merge_asof(df.sort_values(mainjoinkey), dfCal.sort_values(eventjoinkey), left_on=mainjoinkey, right_on=eventjoinkey, direction=direction)
     15         s = (s[eventjoinkey] - s[mainjoinkey]).dt.days.abs()
     16         return s

ValueError: Merge keys contain null values on left side
  • Does this answer your question? [Pandas Merging 101](https://stackoverflow.com/questions/53645882/pandas-merging-101) – Trenton McKinney Aug 31 '20 at 22:40
  • Nah, merge_asof doesn't do outer joins, and I actually want an inner join (like it's doing); the problem is the left side having a blank. I tried to mask the missing items, but couldn't get that to work – Josh Sep 01 '20 at 00:06

1 Answer


I found a way to solve my issue, and I'm posting it here in case someone else runs into this with merge_asof.

I don't believe merge_asof can handle nulls itself, so what I did was mask the missing values with a value it could operate on, and then set those rows back to NaN afterwards. From the above, here were the modifications:

    def calc_days(df, dfCal, direction, mainjoinkey, eventjoinkey):
        
        df = df.copy(deep=True)
        # temporarily mask missing values with the minimum date, just to avoid the error
        df['__temp'] = df[mainjoinkey].mask(df[mainjoinkey].isnull(), df[mainjoinkey].min())
        
        # calculate days until or since, based on direction (which is passed in)
        left = df.sort_values('__temp')
        s = pd.merge_asof(left, dfCal.sort_values(eventjoinkey), left_on='__temp', right_on=eventjoinkey, direction=direction)
        s.index = left.index  # realign with the caller's row order before masking
        s = (s[eventjoinkey] - s['__temp']).dt.days.abs()

        # put NaN back where the original date was missing
        return s.mask(df[mainjoinkey].isnull())
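Here is a minimal, self-contained version of that masking idea on hypothetical toy data, so you can see the missing row survive end to end (the helper name and the tiny frames are made up for illustration):

```python
import pandas as pd

def masked_days_until(df, events, date_col, event_col):
    df = df.copy(deep=True)
    # fill the NaT with the column minimum so merge_asof accepts the keys
    df['__temp'] = df[date_col].mask(df[date_col].isnull(), df[date_col].min())
    left = df.sort_values('__temp')
    s = pd.merge_asof(left, events.sort_values(event_col),
                      left_on='__temp', right_on=event_col, direction='forward')
    s.index = left.index  # keep the result aligned with the caller's rows
    days = (s[event_col] - s['__temp']).dt.days.abs()
    return days.mask(df[date_col].isnull())  # restore NaN for missing dates

dates = pd.DataFrame({'dt': pd.to_datetime(['2001-01-23', None, '2001-04-23'])})
events = pd.DataFrame({'holiday_dt': pd.to_datetime(['2001-04-15', '2001-05-28'])})
print(masked_days_until(dates, events, 'dt', 'holiday_dt'))
# row 0 -> 82 days, row 1 -> NaN (date was missing), row 2 -> 35 days
```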

This doesn't feel very elegant and I'm not proud of it, but it seems to work for now. If anyone ever finds a better solution please let me know, but I couldn't find one.
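One possibly cleaner alternative, sketched on the same kind of hypothetical toy data (the helper name is made up, and this is untested against the full example above): run merge_asof on the non-null rows only, then reindex so the rows with missing dates come back as NaN:

```python
import pandas as pd

def days_until_next(dates, events, date_col, event_col):
    # drop the null-key rows before merging, keeping their original index
    valid = dates[dates[date_col].notna()].sort_values(date_col)
    merged = pd.merge_asof(valid, events.sort_values(event_col),
                           left_on=date_col, right_on=event_col, direction='forward')
    merged.index = valid.index  # carry the original row labels through the merge
    days = (merged[event_col] - merged[date_col]).dt.days
    # reindex to the full frame: rows with missing dates come back as NaN
    return days.reindex(dates.index)

dates = pd.DataFrame({'dt': pd.to_datetime(['2001-01-23', None, '2001-04-23'])})
events = pd.DataFrame({'holiday_dt': pd.to_datetime(['2001-04-15', '2001-05-28'])})
print(days_until_next(dates, events, 'dt', 'holiday_dt'))
# row 0 -> 82 days, row 1 -> NaN, row 2 -> 35 days
```

This avoids the temporary fill value entirely, at the cost of one extra filtering pass.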
