
I am trying to tidy up a CSV I was given, where the column names are not very developer-friendly. I would like to use regular expressions to find and replace multiple patterns in the column names. For example, given df1 with leading/trailing spaces, white space throughout the headers, parentheses (), and <, I would like to remove the leading/trailing spaces and parentheses, replace the white space with _, and replace the < with LESS_THAN.

For example, turning df1 into df2:

df1 = pd.DataFrame({' APPLES AND LEMONS': [1,2], ' ORANGES (POUNDS) ': [2,1], ' BANANAS < 5 ': [8,9]})

   APPLES AND LEMONS   ORANGES (POUNDS)   BANANAS < 5
0                  1                  2             8
1                  2                  1             9

df2 = pd.DataFrame({'APPLES_AND_LEMONS': [1,2], 'ORANGES_POUNDS': [2,1], 'BANANAS_LESS_THAN_5': [8,9]})

   APPLES_AND_LEMONS  ORANGES_POUNDS  BANANAS_LESS_THAN_5
0                  1               2                     8
1                  2               1                     9

My current implementation just chains a bunch of str.replace calls. Is there a better way to do this? I was thinking that regular expressions could be especially useful, because there are hundreds of columns and I'm sure there will be a few more headaches that I have yet to find.

df1.columns = df1.columns.str.strip()
df1.columns = df1.columns.str.replace(' ', '_').str.replace('<', 'LESS_THAN').str.replace('(', '', regex=False).str.replace(')', '', regex=False)
RocketSocks22
    The `columns` object is typically small, so just specify a dict (or OrderedDict if order matters) of `{pat: repl}` and iterate over the `.items()`. For instance see https://stackoverflow.com/questions/6116978/how-to-replace-multiple-substrings-of-a-string. It's basically the same, though slightly more readable. – ALollz Apr 28 '19 at 21:49
  • no, your approach is entirely reasonable – anon01 Apr 28 '19 at 21:50
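A minimal sketch of the `{pat: repl}` dict approach ALollz suggests, applied to a single header string (the pairs mirror the substitutions from the question):

```python
replace_dict = {' ': '_', '<': 'LESS_THAN', '(': '', ')': ''}

name = ' BANANAS < 5 '.strip()
# apply each literal pattern/replacement pair in turn
for pat, repl in replace_dict.items():
    name = name.replace(pat, repl)

print(name)  # BANANAS_LESS_THAN_5
```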

2 Answers


Thanks to the link ALollz gave me, I was able to get a solution that is much easier to maintain than continuously chaining str.replace.

def clean_column_names(df):
    df.columns = df.columns.str.strip()
    replace_dict = {' ': '_', '<': 'LESS_THAN', '(': '', ')': ''}
    for pat, repl in replace_dict.items():
        df.columns = [column.replace(pat, repl) for column in df.columns]
    return df

clean_column_names(df1)

   APPLES_AND_LEMONS  ORANGES_POUNDS  BANANAS_LESS_THAN_5
0                  1                 2                    8
1                  2                 1                    9
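Since the goal was to lean on regular expressions, the same dict can also be applied in a single pass by building one pattern that alternates over the escaped keys (a sketch of the technique from the linked question; the dict mirrors the one above):

```python
import re

replace_dict = {' ': '_', '<': 'LESS_THAN', '(': '', ')': ''}
# one pattern alternating over the escaped literal keys
pattern = re.compile('|'.join(re.escape(key) for key in replace_dict))

def clean_column(name):
    # look each match up in the dict and substitute in a single pass
    return pattern.sub(lambda m: replace_dict[m.group()], name.strip())

print(clean_column(' BANANAS < 5 '))  # BANANAS_LESS_THAN_5
```

Applying it to the frame is then `df1.columns = [clean_column(c) for c in df1.columns]`.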
RocketSocks22

Not sure if this is better for you.

import re

old_cols = list(df1.columns.values)

remove = re.compile(r'^\s+|\s+$|[\(\)\[\]]')
wspace = re.compile(r'\s+')
less = re.compile(r'<')
great = re.compile(r'>')

new_cols = []

for i in old_cols:
    i = re.sub(remove, "", i)
    i = re.sub(wspace, "_", i)
    i = re.sub(less, "LESS_THAN", i)
    i = re.sub(great, "GREATER_THAN", i)
    new_cols.append(i)

df1.columns = new_cols
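As a quick sanity check, the same compiled patterns can be wrapped in a small helper and tried on a single header string (hypothetical input; the substitution order matches the loop above):

```python
import re

remove = re.compile(r'^\s+|\s+$|[\(\)\[\]]')
wspace = re.compile(r'\s+')
less = re.compile(r'<')
great = re.compile(r'>')

def clean(name):
    # strip edge whitespace and brackets, then normalize separators
    name = remove.sub('', name)
    name = wspace.sub('_', name)
    name = less.sub('LESS_THAN', name)
    name = great.sub('GREATER_THAN', name)
    return name

print(clean(' BANANAS < 5 '))  # BANANAS_LESS_THAN_5
```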
tzujan