
I have two dataframes of errors along three axes (x, y, z):

import pandas as pd

df1 = pd.DataFrame([[0, 1, 2], [-1, 0, 1], [-2, 0, 3]], columns=['x', 'y', 'z'])
df2 = pd.DataFrame([[1, 1, 3], [1, 0, 2], [1, 0, 3]], columns=['x', 'y', 'z'])

I'm looking for a fast way to compute, for every pair of rows across the two dataframes (the Cartesian product), the element-wise sum of the squared values.

EDIT: My current solution:

import itertools
import numpy as np

cartesian_sum = [np.sum(list(tup), axis=0).tolist()
                 for tup in itertools.product((df1**2).to_numpy().tolist(),
                                              (df2**2).to_numpy().tolist())]

cartesian_sum
>>> 
[[1, 2, 13],
 [1, 1, 8],
 [1, 1, 13],
 [2, 1, 10],
 [2, 0, 5],
 [2, 0, 10],
 [5, 1, 18],
 [5, 0, 13],
 [5, 0, 18]]

is still too slow (~2.4 ms, even though it beats the purely Pandas-based solutions, which run at ~8-10 ms).

This is similar to the related question (link here), but the itertools approach is still too slow. Is there a faster way of doing this in Python?

batlike

1 Answer


I think you need a cross join first, then drop the helper column a, square the values, convert the columns to a MultiIndex and sum per first level:

df = df1.assign(a=1).merge(df2.assign(a=1), on='a').drop('a', axis=1) ** 2
df.columns = df.columns.str.split('_', expand=True)
df = df.sum(level=0, axis=1)
print (df)
   x  y   z
0  1  2  13
1  1  1   8
2  1  1  13
3  2  1  10
4  2  0   5
5  2  0  10
6  5  1  18
7  5  0  13
8  5  0  18

Details:

print (df1.assign(a=1).merge(df2.assign(a=1), on='a'))
   x_x  y_x  z_x  a  x_y  y_y  z_y
0    0    1    2  1    1    1    3
1    0    1    2  1    1    0    2
2    0    1    2  1    1    0    3
3   -1    0    1  1    1    1    3
4   -1    0    1  1    1    0    2
5   -1    0    1  1    1    0    3
6   -2    0    3  1    1    1    3
7   -2    0    3  1    1    0    2
8   -2    0    3  1    1    0    3
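As a side note, in newer pandas (1.2+, an assumption about your environment) the dummy-key `assign(a=1)` trick can be replaced by `merge(how='cross')`, which avoids creating and dropping the helper column entirely. A minimal sketch of the same computation:

```python
import pandas as pd

df1 = pd.DataFrame([[0, 1, 2], [-1, 0, 1], [-2, 0, 3]], columns=['x', 'y', 'z'])
df2 = pd.DataFrame([[1, 1, 3], [1, 0, 2], [1, 0, 3]], columns=['x', 'y', 'z'])

# cross join without a helper key, then square every cell
df = df1.merge(df2, how='cross', suffixes=('_x', '_y')) ** 2

# split 'x_x', 'x_y', ... into a two-level MultiIndex and sum per first level
df.columns = df.columns.str.split('_', expand=True)
out = df.T.groupby(level=0).sum().T  # sum over the suffix level

print(out)
```

The `df.T.groupby(level=0).sum().T` step replaces `df.sum(level=0, axis=1)`, which was removed in pandas 2.0.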

One idea to improve performance:

#https://stackoverflow.com/a/53699013/2901002
def cartesian_product_simplified_changed(left, right):
    la, lb = len(left), len(right)
    ia2, ib2 = np.broadcast_arrays(*np.ogrid[:la, :lb])

    a = np.column_stack([left.values[ia2.ravel()] ** 2, right.values[ib2.ravel()] ** 2])
    # split at the number of columns of left (here equal to len(left), since
    # both DataFrames are 3x3, but shape[1] is the general-purpose choice)
    ncols = left.shape[1]
    a = a[:, :ncols] + a[:, ncols:]
    return a


a = cartesian_product_simplified_changed(df1, df2)
print (a)
[[ 1  2 13]
 [ 1  1  8]
 [ 1  1 13]
 [ 2  1 10]
 [ 2  0  5]
 [ 2  0 10]
 [ 5  1 18]
 [ 5  0 13]
 [ 5  0 18]]
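The same result can also be obtained without any fancy indexing at all, by squaring first and letting NumPy broadcasting add every row of one array to every row of the other. This is a sketch of that idea, not a benchmarked claim about which variant is fastest on your data:

```python
import numpy as np
import pandas as pd

df1 = pd.DataFrame([[0, 1, 2], [-1, 0, 1], [-2, 0, 3]], columns=['x', 'y', 'z'])
df2 = pd.DataFrame([[1, 1, 3], [1, 0, 2], [1, 0, 3]], columns=['x', 'y', 'z'])

# square once up front
s1 = df1.to_numpy() ** 2   # shape (n1, k)
s2 = df2.to_numpy() ** 2   # shape (n2, k)

# (n1, 1, k) + (1, n2, k) broadcasts to (n1, n2, k); flatten to (n1*n2, k)
a = (s1[:, None, :] + s2[None, :, :]).reshape(-1, s1.shape[1])
print(a)
```

The row order matches `itertools.product(df1_rows, df2_rows)`, i.e. the cross-join order above.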
jezrael
  • beat me to it, i wish cross joins / cartesian products were as simple in Python as in SQL – Umar.H Oct 14 '20 at 07:50
  • Shouldn't it remove `a` before sum? `(df1.assign(a=1).merge(df2.assign(a=1), on='a') ** 2).drop(columns='a').sum(axis=1)` – nocibambi Oct 14 '20 at 07:54
  • Wow, nifty. Is working only in pandas (cross joining) faster than itertools? Is there also a way to element-wise sum (as my solution posted above?) – batlike Oct 14 '20 at 07:54
  • I verified this solution, it is indeed what I want - but it is 4 times slower than my solution above. – batlike Oct 14 '20 at 07:57
    @batlike - Hard question, What about use faster `cross join` ? from [this](https://stackoverflow.com/a/53699013/2901002) – jezrael Oct 14 '20 at 07:57
  • Great suggestion @jezrael - although I will leave this question open. The average run time on my system for both the performant ```cross join``` and your solution is about the same, ~8 ms. My current solution averages 2 ms, but even then I am unhappy with the performance. Maybe I need to move away from Python.. – batlike Oct 14 '20 at 08:03
    ```ia2, ib2 = np.broadcast_arrays(*np.ogrid[:la,:lb])``` is critical. Thank you. – batlike Oct 14 '20 at 08:09
  • @jezrael do you think the solution should be changed from ```a = a[:, :la] + a[:, la:]``` to ```a = a[:, :left.shape[1]] + a[:, right.shape[1]:]``` – batlike Oct 14 '20 at 08:18
  • @batlike - I use `la, lb = len(left), len(right)`, because `left.shape[1]` is same like `len(left)`, but your solution is nice too. Also I use both `la` for simplify solution - all length of columns is same in both DataFrames. – jezrael Oct 14 '20 at 08:21