
I have two dataframes, df1 and df2, each containing latitude and longitude data. For each observation in df1, I would like to use the haversine function to calculate its distance to each point in df2. I have tried two approaches, but performance becomes an issue with larger datasets.

In [1]: import pandas as pd
        import numpy as np
        from haversine import haversine

In [2]: df1 = pd.DataFrame({'lat_long': [(25.99550273, 179.18526021), (76.24387873, -34.21956936), (-51.43773064, -113.93795667)]})
        df2 = pd.DataFrame({'lat_long': [(22.89956242, 107.04009984), (-80.25375578, -92.05425401), (-0.81621289, -147.26962084), (0,0)]})

In [3]: # method 1: iterate through rows
        for i in df1['lat_long']:
            for j in df2['lat_long']:
                print(haversine(i,j))

7215.01729234
12830.1178484
4673.37638582
17123.1981646
8678.49300206
17721.004245
10690.0998826
8746.62635254
15294.1258757
3303.30690512
6434.34272913
11636.6462421

In [4]: # method 2: create one dataframe and then perform calculation
        df1_dup = df1.append([df1]*(len(df2)-1), ignore_index=True)
        df2_dup = df2.append([df2]*(len(df1)-1), ignore_index=True)
        df = pd.DataFrame({'lat_long_df1': df1_dup.sort_values('lat_long')['lat_long'],'lat_long_df2': df2_dup['lat_long']})
        print(df.apply(lambda x: haversine(x['lat_long_df1'], x['lat_long_df2']), axis=1))

0      7215.017292
1     17721.004245
2      6434.342729
3     17123.198165
4      8678.493002
5      3303.306905
6      4673.376386
7      8746.626353
8     15294.125876
9     12830.117848
10    10690.099883
11    11636.646242
dtype: float64

Any ideas for alternative approaches that would work better with much larger dataframes?

Jon S
  • I'm not clear on this: do you want to calculate distance between each point in `df1` and each point in `df2`? a.k.a. calculating distances pairwise between elements of `df1` and elements of `df2`? Do the distances need to be exact, or can you get away with a simpler formula than the haversine for at least some elements? Do you really need to process every pair? Basically, although you can probably do this with cleaner syntax, I wouldn't expect anything to improve _performance_ much unless you can somehow skip part of your calculation. – David Z Oct 02 '17 at 20:55
  • 2
    Here's a related question: https://stackoverflow.com/questions/29545704/fast-haversine-approximation-python-pandas – David Z Oct 02 '17 at 20:57
  • @DavidZ Yes, I need pairwise distances. I could perhaps sacrifice precision in terms of distance if it results in a marked performance boost. – Jon S Oct 02 '17 at 21:00
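The simpler formula David Z alludes to could look like the equirectangular approximation, which trades a little accuracy for much less trigonometry per pair. This is a sketch, not code from the thread, and `equirect_approx` is an illustrative name:

```python
import numpy as np

def equirect_approx(lat1, lon1, lat2, lon2, r=6371.0):
    """Equirectangular approximation of great-circle distance in km.

    Cheaper than haversine; accurate for points that are close together,
    increasingly rough for long distances or near the poles.
    """
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    x = (lon2 - lon1) * np.cos((lat1 + lat2) / 2)  # scale longitude by mean latitude
    y = lat2 - lat1
    return r * np.sqrt(x * x + y * y)

# Short hop of ~0.1 degree in each direction: very close to the haversine result
d = equirect_approx(0.0, 0.0, 0.1, 0.1)
```

Because it accepts NumPy arrays, it vectorizes over whole columns the same way a NumPy haversine would.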

1 Answer


If you're looking for a more performant merge, you can do a cross join on a surrogate column:

temp = df1.assign(A=1).merge(df2.assign(A=1), on='A').drop('A', axis=1)
temp
                       lat_long_x                    lat_long_y
0     (25.99550273, 179.18526021)   (22.89956242, 107.04009984)
1     (25.99550273, 179.18526021)  (-80.25375578, -92.05425401)
2     (25.99550273, 179.18526021)  (-0.81621289, -147.26962084)
3     (25.99550273, 179.18526021)                        (0, 0)
4     (76.24387873, -34.21956936)   (22.89956242, 107.04009984)
5     (76.24387873, -34.21956936)  (-80.25375578, -92.05425401)
6     (76.24387873, -34.21956936)  (-0.81621289, -147.26962084)
7     (76.24387873, -34.21956936)                        (0, 0)
8   (-51.43773064, -113.93795667)   (22.89956242, 107.04009984)
9   (-51.43773064, -113.93795667)  (-80.25375578, -92.05425401)
10  (-51.43773064, -113.93795667)  (-0.81621289, -147.26962084)
11  (-51.43773064, -113.93795667)                        (0, 0)

temp.apply(lambda x: haversine(x['lat_long_x'], x['lat_long_y']), axis=1)
0      7215.017292
1     12830.117848
2      4673.376386
3     17123.198165
4      8678.493002
5     17721.004245
6     10690.099883
7      8746.626353
8     15294.125876
9      3303.306905
10     6434.342729
11    11636.646242
dtype: float64
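As an aside not in the original answer: on pandas 1.2 or newer, `merge` supports `how='cross'` natively, which produces the same pairing without the surrogate column. A sketch under that version assumption:

```python
import pandas as pd

df1 = pd.DataFrame({'lat_long': [(25.99550273, 179.18526021),
                                 (76.24387873, -34.21956936),
                                 (-51.43773064, -113.93795667)]})
df2 = pd.DataFrame({'lat_long': [(22.89956242, 107.04009984),
                                 (-80.25375578, -92.05425401),
                                 (-0.81621289, -147.26962084),
                                 (0, 0)]})

# Native cross join (pandas >= 1.2): every row of df1 paired with every row of df2.
# Default suffixes give the same lat_long_x / lat_long_y columns as above.
temp = df1.merge(df2, how='cross')
```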

You could combine the performant merge with the vectorised haversine from the question linked in the comments for a nice speed boost. You should also consider keeping your latitude and longitude data in separate numeric columns rather than tuples.
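Putting both suggestions together might look like the sketch below: lat/long stored as separate float columns, a NumPy haversine (the standard formula, restated here rather than taken from the linked answer), and broadcasting instead of a row-wise `apply`. The name `haversine_np` is illustrative:

```python
import numpy as np
import pandas as pd

def haversine_np(lat1, lon1, lat2, lon2, r=6371.0):
    """Vectorized great-circle distance in km, assuming Earth radius ~6371 km."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * r * np.arcsin(np.sqrt(a))

# Same points as the question, but kept in separate numeric columns
df1 = pd.DataFrame({'lat': [25.99550273, 76.24387873, -51.43773064],
                    'lon': [179.18526021, -34.21956936, -113.93795667]})
df2 = pd.DataFrame({'lat': [22.89956242, -80.25375578, -0.81621289, 0.0],
                    'lon': [107.04009984, -92.05425401, -147.26962084, 0.0]})

# Broadcast each row of df1 against every row of df2: result has shape (3, 4),
# dists[i, j] = distance from df1 row i to df2 row j.
dists = haversine_np(df1['lat'].to_numpy()[:, None], df1['lon'].to_numpy()[:, None],
                     df2['lat'].to_numpy()[None, :], df2['lon'].to_numpy()[None, :])
```

This avoids any cross join entirely for the distance computation itself; the pairwise results can be flattened with `dists.ravel()` if a single column is needed.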

cs95