I'm trying to count the number of duplicate rows, based on a set of columns, in a DataFrame.
Example:
print(df)
     Month  LSOA code  Longitude   Latitude             Crime type
0  2015-01  E01000916  -0.106453  51.518207          Bicycle theft
1  2015-01  E01000914  -0.111497  51.518226               Burglary
2  2015-01  E01000914  -0.111497  51.518226               Burglary
3  2015-01  E01000914  -0.111497  51.518226            Other theft
4  2015-01  E01000914  -0.113767  51.517372  Theft from the person
My workaround:
counts = dict()
for i, row in df.iterrows():
    key = (
        row['Longitude'],
        row['Latitude'],
        row['Crime type']
    )
    if key in counts:
        counts[key] += 1
    else:
        counts[key] = 1
And I get the counts:
{(-0.11376700000000001, 51.517371999999995, 'Theft from the person'): 1,
(-0.111497, 51.518226, 'Burglary'): 2,
(-0.111497, 51.518226, 'Other theft'): 1,
(-0.10645299999999999, 51.518207000000004, 'Bicycle theft'): 1}
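As an aside, I realise the loop itself could probably be collapsed with collections.Counter; an untested sketch that should build the same dict without the explicit if/else:

from collections import Counter

# Count each (Longitude, Latitude, Crime type) tuple in a single pass
counts = Counter(
    (row['Longitude'], row['Latitude'], row['Crime type'])
    for _, row in df.iterrows()
)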
But plain-Python cleanups aside (feel free to comment on those too), what would be the idiomatic way to do this in pandas?
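From skimming the docs I suspect groupby is the right direction; a rough, untested sketch of what I have in mind, using the column names from the example above:

# Group on the key columns; size() should give the row count per group
counts = df.groupby(['Longitude', 'Latitude', 'Crime type']).size()

But I'm not sure whether this is the recommended approach, or how to get the result into the dict-like form above.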
For those interested: I'm working with a dataset from https://data.police.uk/