4

I have code that runs 16 test cases against a CSV, checking for anomalies from poor data entry. A new column, 'Test Case Failed', is created. When a row fails a test, the number of the test it failed is appended to this column. The failed rows are separated from the passed rows and sent back to be corrected before they are uploaded into a database.

There are duplicates in my data, and I would like to add code that checks for duplicates and then decides which field to use based on the date, selecting the most recently updated fields.

Here is my data with two rows sharing a duplicate ID: the first row has the most recent Address, while the second row has the most recent name.

ID   MnLast  MnFist  MnDead?  MnInactive?  SpLast  SpFirst  SPInactive?  SpDead  Addee         Sal       Address    NameChanged  AddrChange
123  Doe     John    No       No           Doe     Jane     No           No      Mr. John Doe  Mr. John  123 place  05/01/2022   11/22/2022
123  Doe     Dan     No       No           Doe     Jane     No           No      Mr. John Doe  Mr. John  789 road   11/01/2022   05/06/2022

Here is a snippet of my code showing the 5th test case, which checks for the following: the record has name information, the spouse has name information, no one is marked deceased, but the Addressee or Salutation doesn't contain "&" or "AND". The Addressee or Salutation needs to be corrected, since this record is married.

import pandas as pd 
import numpy as np

data = pd.read_csv("C:/Users/file.csv", encoding='latin-1' )

# Create a column to record which test case(s) each row failed
data['Test Case Failed'] = ''
data = data.replace(np.nan, '', regex=True)
data.insert(0, 'ID', range(0, len(data)))

# There are several test cases, but they function primarily the same
# Testcase 1
# Testcase 2
# Testcase 3
# Testcase 4

# Testcase 5 - comparing strings in columns: both spouses have name
# information, no one is marked deceased, but the Addressee or
# Salutation is missing "&"/"AND"
df = data[(data['FirstName'] != '') & (data['LastName'] != '') &
          (data['SRFirstName'] != '') & (data['SRLastName'] != '') &
          (data['SRDeceased'].str.contains('Yes') == False) &
          (data['Deceased'].str.contains('Yes') == False)]
df1 = df[df['PrimAddText'].str.contains("AND|&") == False]
data_5 = df1[df1['PrimSalText'].str.contains("AND|&") == False]
ids = data_5.index.tolist()

# Append test number 5 for each failed row
for i in ids:
    data.at[i, 'Test Case Failed'] += ', 5'


# Failed if 'Test Case Failed' is not empty, passed if empty
# (.copy() avoids pandas' SettingWithCopyWarning on the edits below)
failed = data[data['Test Case Failed'] != ''].copy()
passed = data[data['Test Case Failed'] == ''].copy()

# Strip the leading ", " left by the append above
failed['Test Case Failed'] = failed['Test Case Failed'].str.lstrip(', ')
failed = failed[failed['Test Case Failed'] != '']

# Clean up
del failed["ID"]
del passed["ID"]

print(failed['Test Case Failed'].value_counts())

# Print to console
print("There were a total of", data.shape[0], "rows:",
      data.shape[0] - failed.shape[0], "rows passed and",
      failed.shape[0], "rows failed at least one test case.")

# Output two files
failed.to_csv("C:/Users/Failed.csv", index=False)
passed.to_csv("C:/Users/Passed.csv", index=False)

What is the best approach to check for duplicates, choose the most updated fields, drop the outdated fields/row, and perform my test?

feelsgood
  • how many columns have dates to indicate the most recent? can you create one column for each row that is THE reference date for the most recent update for the entire row? in that case, it should be straightforward to sort the dataframe in ascending (or descending) order by the reference date field, then use pandas.drop_duplicates(keep='first') (or the keep='last' option) – bici.sancta Apr 29 '22 at 19:50
  • @bici.sancta there are 6 date columns, each corresponding to a data column. I'm familiar with dropping rows, but I haven't ever had to change and drop cells based on a condition. – feelsgood May 01 '22 at 03:21
  • So, if I understand correctly: for entries with the same ID, you may use some data from one row and then use some data from another row. You will not (exclusively) use the entire row corresponding to the one that has the most recent data entry of any field? – bici.sancta May 02 '22 at 17:31
  • That is correct. – feelsgood May 02 '22 at 18:54
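
A minimal sketch of the whole-row approach bici.sancta suggests in the comments above, assuming the date columns parse as datetimes (the `date_cols` list here is hypothetical; fill in the real six names). As the comments establish, the asker ultimately needs field-level merging rather than whole-row deduplication, but this shows the reference-date mechanics:

import pandas as pd

# Hypothetical list of the six date columns; adjust to the real names.
date_cols = ['NameChanged', 'AddrChange']  # ...plus the other four

df = pd.read_csv("C:/Users/file.csv", encoding='latin-1')
for col in date_cols:
    df[col] = pd.to_datetime(df[col], errors='coerce')

# One reference date per row: the most recent update across all date columns.
df['RefDate'] = df[date_cols].max(axis=1)

# Keep only the newest full row per ID.
deduped = df.sort_values('RefDate').drop_duplicates(subset='ID', keep='last')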

3 Answers

4

First, set up a mapping that associates each update-date column with its corresponding value column(s).

date2val = {"AddrChange": ["Address"], "NameChanged": ["MnFist", "MnLast"]}  # ...add the remaining date/value pairs

Then, transform date columns into datetime format to be able to compare them (using argmax later).

for key in date2val.keys():
    failed[key] = pd.to_datetime(failed[key])

Then, group the duplicates by ID (since ID is the value that decides whether a row is a duplicate). For each date column, take the maximum value in the group (the most recent update) and retrieve the columns to update from the initial mapping. Here the last row of each group is updated and kept as the final result (by putting it in the corrected list).

corrected = list()
for _, grp in failed.groupby("ID"):
    # Start from the last row of the group, then pull in the freshest fields.
    # (Assigning via grp.iloc[-1][col] would write into a temporary copy and be lost.)
    last = grp.iloc[-1].copy()
    for key in date2val.keys():
        recent = grp[key].argmax()  # positional index of the most recent date
        for col in date2val[key]:
            last[col] = grp[col].iloc[recent]
    corrected.append(last)
corrected = pd.DataFrame(corrected)
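
Note that the groupby visits every ID, not just the duplicated ones, so `corrected` already contains exactly one row per ID. As a quick sanity check (my addition, not part of the original answer), the merged row for the sample ID 123 should combine the newest value of each field, assuming both duplicate rows landed in `failed`:

# Expected for ID 123: the newest AddrChange (11/22/2022) keeps '123 place',
# the newest NameChanged (11/01/2022) keeps 'Dan'.
row = corrected.loc[corrected['ID'] == 123].iloc[0]  # use '123' if ID was read as a string
assert row['Address'] == '123 place'
assert row['MnFist'] == 'Dan'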
meh
  • Is this similar to a Hashtable? I haven't seen `for _, grp` specifically the `_,` part. Can you point me where I can read up on that, or tell me what that is? – feelsgood May 19 '22 at 16:23
  • Yes, I am using a dictionary `date2val`, which is essentially a hash table. The underscore is simply a conventional way to tell Python that I don't need a variable and am throwing it away. The GroupBy object (in the loop) yields tuples `id, failed[failed["ID"]==id]`; I am throwing away the `id` because I won't use it. Check out the answers to this [post](https://stackoverflow.com/questions/5893163/what-is-the-purpose-of-the-single-underscore-variable-in-python/5893946#5893946) for more details about the underscore. – meh May 19 '22 at 16:50
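
A tiny illustration of the iteration the comment describes (my sketch, not from the thread): groupby yields (key, sub-DataFrame) pairs, and `_` simply discards the key.

import pandas as pd

demo = pd.DataFrame({'ID': [123, 123, 8888], 'val': [1, 2, 3]})
for key, grp in demo.groupby('ID'):
    print(key, len(grp))  # key is the ID; grp is that ID's sub-DataFrame
# `for _, grp in demo.groupby('ID')` does the same, discarding the key.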
2

Preparing data:

import pandas as pd
c = 'ID MnLast  MnFist  MnDead? MnInactive? SpLast  SpFirst SPInactive? SpDead  Addee   Sal Address NameChanged AddrChange'.split()
data1 = '123    Doe John    No  No  Doe Jane    No  No  Mr.JohnDoe  Mr.John 123place    05/01/2022  11/22/2022'.split()
data2 = '123    Doe Dan No  No  Doe Jane    No  No  Mr.JohnDoe  Mr.John 789road 11/01/2022  05/06/2022'.split()
data3 = '8888 Brown Peter No No Brwon Peter No No Mr.PeterBrown M.Peter 666Avenue 01/01/2011 01/01/2011'.split()
df = pd.DataFrame(columns = c, data = [data1, data2, data3])
# Parse the date strings and assign the result back (astype alone returned a
# new Series that was discarded)
df.AddrChange = pd.to_datetime(df.AddrChange)
df.NameChanged = pd.to_datetime(df.NameChanged)
df

The resulting DataFrame matches the sample data above (shown as a screenshot in the original answer).

Then you take a slice of the dataframe to avoid changing the original. After sorting, rows with the same ID are adjacent and the first one has the most recent name:

df1 = df[['ID', 'MnFist', 'NameChanged']].sort_values(by=['ID', 'NameChanged'], ascending = False)
df1

Then you build a dictionary with df.ID as the key and the appropriate name as its value, with the goal of rebuilding the whole MnFist column:

d = {}
for id in set(df.ID.values):
    df_mask = df1.ID == id  # filter only rows with this id
    filtered_df = df1[df_mask]
    if len(filtered_df) <= 1:
        d[id] = filtered_df.iat[0, 1]  # id has only one row, so no changes
        continue
    for name in filtered_df.MnFist:
        if name in ['unknown', '', ' '] or name is None:  # discard unserviceable names
            continue
        d[id] = name  # found a serviceable name; rows are sorted most recent first
        break  # without this break the loop would keep the oldest name
    if id not in d.keys():
        d[id] = filtered_df.iat[0, 1]  # no serviceable name, so pick the first
print(d)

The resulting dictionary is: {'8888': 'Peter', '123': 'Dan'}

Then you build all the column:

df.MnFist = [d[id] for id in df.ID]
df

The partial output (the DataFrame with MnFist rebuilt) is shown as a screenshot in the original answer.

Then apply the same procedure to the other column:

df1 = df[['ID', 'Address', 'AddrChange']].sort_values(by=['ID', 'AddrChange'], ascending = False)
df1
d = { id: df1.loc[df1.ID == id, 'Address'].values[0] for id in set(df.ID.values) }
d
df.Address = [d[id] for id in df.ID]
df

The final output is shown as a screenshot in the original answer.

Edited after the author commented on the possibility of unknown or unserviceable data.
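
Since the question mentions six date/value column pairs, the per-column procedure above can be wrapped in a loop over a mapping. A sketch (the `pairs` dict is hypothetical; list the real pairs, and note this skips the unserviceable-name filtering shown above):

# Hypothetical mapping of each date column to the value column it governs.
pairs = {'NameChanged': 'MnFist', 'AddrChange': 'Address'}  # ...plus the rest

for date_col, val_col in pairs.items():
    # Most recent first within each ID (assumes the date columns are datetimes).
    sub = df[['ID', val_col, date_col]].sort_values(by=['ID', date_col], ascending=False)
    newest = sub.groupby('ID')[val_col].first()  # newest value per ID
    df[val_col] = df['ID'].map(newest)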

  • What if the column that has the change is unknown? How would I find the duplicate ID, compare them, parse out the differences and then choose which difference to use based on the date? My dataset is 300k records. – feelsgood May 25 '22 at 11:30
1

Let me restate what I understood from the question:

  1. You have a dataset on which you are doing several sanity checks. (Looks like you already have everything in place for this step)
  2. In the next step you are finding duplicate rows with different columns updated at different dates. (I assume that you already have this.)
  3. Now, you are looking for a new dataset that has non-duplicated rows with updated fields using the latest date entries.

First, define the different date columns and their related value columns in the form of a dictionary:

date_to_cols = {"AddrChange": "Address", "NameChanged": ["MnLast", "MnFirst"]}

Next, group by "ID" and get the index of the maximum value of each date column. Once we have those indices, we can pull the related fields for each date from the data.

data[list(date_to_cols.keys())] = data[list(date_to_cols.keys())].apply(pd.to_datetime)
latest_data = data.groupby('ID')[list(date_to_cols.keys())].idxmax().reset_index()

for date_field, cols_to_update in date_to_cols.items():
    latest_data[cols_to_update] = latest_data[date_field].apply(lambda x: data.iloc[x][cols_to_update])
    latest_data[date_field] = latest_data[date_field].apply(lambda x: data.iloc[x][date_field])

Next, you can merge this latest_data with the original data (after removing the old columns):

cols_to_drop = list(latest_data.columns)
cols_to_drop.remove("ID")
data.drop(columns= cols_to_drop, inplace=True)
latest_data_all_fields = data.merge(latest_data, on="ID", how="left")
latest_data_all_fields.drop_duplicates(inplace=True) 
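
A self-contained check of this approach on the two sample rows (my sketch, not part of the original answer). One caveat: `idxmax` returns index labels, so `data.iloc[x]` only lines up when `data` keeps its default RangeIndex; with a custom index, use `data.loc[x]` instead.

import pandas as pd

date_to_cols = {"AddrChange": "Address", "NameChanged": ["MnLast", "MnFirst"]}

data = pd.DataFrame(
    [[123, "Doe", "John", "123 place", "05/01/2022", "11/22/2022"],
     [123, "Doe", "Dan",  "789 road",  "11/01/2022", "05/06/2022"]],
    columns=["ID", "MnLast", "MnFirst", "Address", "NameChanged", "AddrChange"])

data[list(date_to_cols)] = data[list(date_to_cols)].apply(pd.to_datetime)
latest_data = data.groupby("ID")[list(date_to_cols)].idxmax().reset_index()

for date_field, cols_to_update in date_to_cols.items():
    latest_data[cols_to_update] = latest_data[date_field].apply(lambda x: data.iloc[x][cols_to_update])
    latest_data[date_field] = latest_data[date_field].apply(lambda x: data.iloc[x][date_field])

print(latest_data)  # one row for ID 123: Address '123 place', MnFirst 'Dan'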
Archana