
I am trying to figure out the best way to align my dataset, which contains company names. It is about 300k rows and 3 columns. I have tried several methods so far, including FuzzyWuzzy:

choices = ["Atlanta Falcons", "New York Jets", "New York Giants", "Dallas Cowboys"]
>>> process.extract("new york jets", choices, limit=2)
    [('New York Jets', 100), ('New York Giants', 78)]

This approach needs two lists of names. When I split df['Name'] in two and match with the method above, the best hit is always the string itself at 100%, since the same names appear in both lists.
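For illustration, a minimal sketch of what I mean (the first result is always the trivial self-match, so only the second result is interesting):

from fuzzywuzzy import process

choices = ["Google", "google.inc", "ddood"]
matches = process.extract("Google", choices, limit=2)
# matches[0] is ('Google', 100) -- the trivial self-match
# matches[1] is the best non-identical candidate
print(matches[1])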

My exact code is:

import pandas as pd
from fuzzywuzzy import fuzz
from fuzzywuzzy import process

df = pd.DataFrame({"Name": ["Google", "google.inc", "ddood"]})
df2 = pd.DataFrame({"Name": ["google", "google"]})

get_match = []
for row in df.index:
    name1 = [df.at[row, "Name"]]        # .get_value() is deprecated; .at does the same here
    for col in df2.index:
        name2 = [df2.at[col, "Name"]]
        # score the name in name1 against the single candidate in name2
        matched_token = [process.extract(x, name2, limit=2)[0][1] for x in name1]
        get_match.append([matched_token, name1[0], name2[0]])

df_maneet = pd.DataFrame({'name1': [i[1] for i in get_match],
                          'name2': [i[2] for i in get_match],
                          'Ratio': [i[0][0] for i in get_match]})

new_df = df_maneet[df_maneet.Ratio > 95]

I doubt the above is the best way to approach my problem. My end result should be that all similar company names end up in the same group.
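To illustrate the kind of grouping I am after (hypothetical output, just to show the shape I want):

# input:  ["Google", "google.inc", "ddood"]
# wanted: {"Google": ["Google", "google.inc"], "ddood": ["ddood"]}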

The answer at finding-similar-contact-names-within-table did not help either.

Maneet Giri
  • Possible duplicate of [Fuzzy logic for excel data -Pandas](https://stackoverflow.com/questions/49507193/fuzzy-logic-for-excel-data-pandas) – Abbas Nov 06 '18 at 07:13

2 Answers


You can use np.meshgrid to create every pair of values, compute the ratio for each pair with fuzz.ratio, and then select the rows above your threshold ratio.

import pandas as pd
import numpy as np
from fuzzywuzzy import fuzz

df = pd.DataFrame({"Name": ["Google", "google.inc", "ddood"]})
df2 = pd.DataFrame({"Name": ["google", "Grrgle"]})

# build every (Name1, Name2) pair as an (n*m, 2) array
x = np.array(np.meshgrid(df.Name.values, df2.Name.values)).T.reshape(-1, 2)
df3 = pd.DataFrame(x, columns=['Name1', 'Name2'])

# score each pair
df3['Ratio'] = [fuzz.ratio(*i) for i in map(tuple, x)]

print(df3[df3.Ratio > 75])

    Name1   Name2  Ratio
0  Google  google     83
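For 300k rows the full cross product will not fit in memory. A rough workaround (a sketch, untested at that scale; fuzzy_pairs is my own helper, not a library function) is to score the pairs one at a time and keep only those above the threshold, so the full n x m array is never materialized:

import pandas as pd
from fuzzywuzzy import fuzz

def fuzzy_pairs(names1, names2, threshold=75):
    # score lazily; only matches above the threshold are kept in memory
    rows = []
    for a in names1:
        for b in names2:
            score = fuzz.ratio(a, b)
            if score > threshold:
                rows.append((a, b, score))
    return pd.DataFrame(rows, columns=['Name1', 'Name2', 'Ratio'])

This avoids the memory blow-up but still makes n*m comparisons, so it will be slow in pure Python.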

Edit: Use difflib.get_close_matches to find the closest matches for your values.

from difflib import get_close_matches

df = pd.DataFrame({'company_name': ['Alarm.com', 'Analytics inc.', 'Adaptiv',
                                    'AllState Insurance', 'Alarm co', 'Analytics',
                                    'Adaptive', 'AllState Insurance Group']})

# for each name, get up to 2 close matches from the whole column (the best
# match is the name itself); dropna keeps only names that also matched
# something else above the cutoff
df1 = df['company_name'].map(
    lambda x: get_close_matches(x, df.company_name, n=2, cutoff=0.8)
).apply(pd.Series).dropna()

print (df1)
                          0                         1
0                 Alarm.com                  Alarm co
2                   Adaptiv                  Adaptive
3        AllState Insurance  AllState Insurance Group
4                  Alarm co                 Alarm.com
6                  Adaptive                   Adaptiv
7  AllState Insurance Group        AllState Insurance
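Since the end goal is to group similar names, one rough way to turn this into groups (a sketch using difflib only; group_names is my own helper, not a library function) is to map each name to the first already-seen name it closely matches:

from difflib import get_close_matches

def group_names(names, cutoff=0.8):
    # map each name to a canonical representative: the first
    # previously-seen name it closely matches, else itself
    canonical, mapping = [], {}
    for name in names:
        match = get_close_matches(name, canonical, n=1, cutoff=cutoff)
        if match:
            mapping[name] = match[0]
        else:
            canonical.append(name)
            mapping[name] = name
    return mapping

df['group'] = df['company_name'].map(group_names(df['company_name']))

This is still quadratic in the worst case, but each name is only compared against the group representatives seen so far, not against every row.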
Abhi
  • But my problem remains the same: the dataset is too big (300,000 rows), so it will take forever to get the possible matches and then filter out those above a 60% match. – Maneet Giri Nov 06 '18 at 08:13
  • @ManeetGiri You can try the numpy approach. It's faster. – Abhi Nov 06 '18 at 11:07
  • I get a MemoryError at the line `x = np.array(np.meshgrid(df['Global Customer Name'].values, df2['Global Customer Name'].values)).T.reshape(-1,2)` – Maneet Giri Nov 06 '18 at 11:14
  • And that memory error is with only the first 10k rows; the full dataset is 300k. – Maneet Giri Nov 06 '18 at 11:15
  • Thanks, I will look into it. But do you think my problem is that unique? I just have a single column of company names and want to cluster the similar ones together, like Washington DC, Washingtondc, Washington D.C. – Maneet Giri Nov 06 '18 at 11:21
  • @ManeetGiri Maybe we can also try `.str` methods, but using loops is very inefficient. If possible, provide some sample data; it is hard to say without looking at it. – Abhi Nov 06 '18 at 11:30
  • Let us [continue this discussion in chat](https://chat.stackoverflow.com/rooms/183187/discussion-between-maneet-giri-and-abhi). – Maneet Giri Nov 06 '18 at 12:25
  • Hi Abhi, you helped me with the above problem. Would you mind sharing a way to contact you? – Maneet Giri Apr 29 '19 at 20:24

You can also try exploring libraries like difflib and fuzzyset.

You can use difflib like this with your dataframes df and df2:

In [1070]: from difflib import SequenceMatcher as SM
In [1076]: SM(None, df['Name'].iloc[0], df2['Name'].iloc[0]).ratio()
Out[1076]: 0.8333333333333334
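For fuzzyset, a minimal sketch (assuming the fuzzyset package from PyPI):

import fuzzyset  # pip install fuzzyset

fs = fuzzyset.FuzzySet()
for name in df['Name']:
    fs.add(name)

# .get returns a list of (score, matched_string) pairs, best first,
# or None if nothing clears the cutoff
print(fs.get('google inc'))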

Please explore fuzzy-string-comparison for more info.

Let me know if this helps.

Mayank Porwal