
Suppose I represent a feature vector using a dictionary (why? because I know the features are sparse, but more on that later).

How should I implement the inner product of two such dictionaries (denote them A and B)?

I tried the naive approach:

the_sum = 0
for k in A:
    if k in B:
        the_sum += A[k] * B[k]

but it turns out to be slow.

Some more details:

  • I'm using a dictionary to represent features because

    1. The feature keys are strings
    2. There are ~20K possible keys
    3. Each vector is sparse (say, about 1000 non-zero elements).
  • I'm really interested in computing the pairwise inner products of N=2000 different dictionaries (that is, their linear kernel); a rough sketch of that outer loop follows this list.
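
(For concreteness, a rough sketch of the pairwise computation, added for clarity; `dict_dot` is a placeholder for whichever inner-product implementation ends up being used:)

def linear_kernel(vectors, dict_dot):
    # vectors: a list of N feature dictionaries
    # dict_dot: placeholder for the dictionary inner product discussed here
    n = len(vectors)
    K = [[0.0] * n for _ in xrange(n)]
    for i in xrange(n):
        for j in xrange(i, n):
            K[i][j] = K[j][i] = dict_dot(vectors[i], vectors[j])  # kernel is symmetric
    return K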

Tomer Levinboim
  • You could try `for k, v in A.iteritems(): sum += v * B.get(k,0)` – val Aug 16 '13 at 08:08
  • This might be obvious, but I figured I'd add it anyway: Are you making sure that "A" has the smallest `len()` of the two dictionaries? Depending on the circumstances, that could have a huge impact on speed. (And since the operation is commutative it wouldn't have any impact on the answer.) – Noah Feb 05 '15 at 21:50

5 Answers


Not sure about faster, but here's another approach:

keys = A.viewkeys() & B.viewkeys()
the_sum = sum(A[k] * B[k] for k in keys)
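
(Note added for clarity, not part of the original answer: in Python 3 there is no `viewkeys()`; `dict.keys()` already returns a set-like view, so the same idea would be written as:)

# Python 3 equivalent: dict.keys() already returns a set-like view
keys = A.keys() & B.keys()
the_sum = sum(A[k] * B[k] for k in keys)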
Eric

Hmm, it seems your approach is actually the best for dense vectors:

>>> # Eric's answer
>>> timeit.timeit('sum([A[k]*B[k] for k in set(A.keys()) & set(B.keys())])', setup='A=dict((i,i) for i in xrange(100));B=dict((i,i) for i in xrange(100))', number=10000)
0.4360210521285808

>>> # My comment
>>> timeit.timeit('for k,v in A.iteritems(): sum += v*B.get(k,0)', setup='A=dict((i,i) for i in xrange(100));B=dict((i,i) for i in xrange(100));sum=0', number=10000)
0.4082838999682963

# My comment, more compact
>>> timeit.timeit('sum(v*B.get(k,0) for k,v in A.iteritems())', setup='A=dict((i,i) for i in xrange(100));B=dict((i,i) for i in xrange(100))', number=10000)
0.38053266868496394

>>> #Your approach
>>> timeit.timeit('for k in A: sum += A[k]*B[k] if k in B else 0.', setup='A=dict((i,i) for i in xrange(100));B=dict((i,i) for i in xrange(100));sum=0', number=10000)
0.35574231962510794

>>> # Your approach, more compact
>>> timeit.timeit('sum(A[k]*B[k] for k in A if k in B)', setup='A=dict((i,i) for i in xrange(100));B=dict((i,i) for i in xrange(100))', number=10000)
0.3400850549682559

For sparser ones, Eric's answer performs better but yours is still the fastest:

# Mine
>>> timeit.timeit('sum(v*B.get(k,0) for k,v in A.iteritems())', setup='import random;A=dict((i,i) for i in xrange(100) if random.random() < 0.3);B=dict((i,i) for i in xrange(100) if random.random() < 0.2)', number=10000)
0.1390782696843189

# Eric's
>>> timeit.timeit('sum([A[k]*B[k] for k in set(A.keys()) & set(B.keys())])', setup='import random;A=dict((i,i) for i in xrange(100) if random.random() < 0.3);B=dict((i,i) for i in xrange(100) if random.random() < 0.2)', number=10000)
0.11702822992151596

# Yours
>>> timeit.timeit('sum(A[k]*B[k] for k in A if k in B)', setup='import random;A=dict((i,i) for i in xrange(100) if random.random() < 0.3);B=dict((i,i) for i in xrange(100) if random.random() < 0.2)', number=10000)
0.07878250570843193

EDIT

After messing around for a bit, it seems `sum([x for x in ...])` is significantly faster than `sum(x for x in ...)` (a minimal way to isolate that difference is sketched after the timings below). Rebenchmarking with this, and with Janne's remark about the keys applied to Eric's answer, yours is still on top (with Joowani's variant giving a slight improvement):

>>> timeit.timeit('sum([v*B.get(k,0) for k,v in A.items()])', setup='import random;A=dict((i,i) for i in xrange(100) if random.random() < 0.3);B=dict((i,i) for i in xrange(100) if random.random() < 0.2)', number=100000)
1.1604375791416714
>>> timeit.timeit('sum([A[k]*B[k] for k in A.viewkeys() & B.viewkeys()])', setup='import random;A=dict((i,i) for i in xrange(100) if random.random() < 0.3);B=dict((i,i) for i in xrange(100) if random.random() < 0.2)', number=100000)
0.9234189571552633
>>> timeit.timeit('sum([A[k]*B[k] for k in A if k in B])', setup='import random;A=dict((i,i) for i in xrange(100) if random.random() < 0.3);B=dict((i,i) for i in xrange(100) if random.random() < 0.2)', number=100000)
0.5411289579401455
>>> timeit.timeit('sum([A[k]*B[k] for k in A if k in B]) if len(A)<len(B) else sum([A[k]*B[k] for k in B if k in A])', setup='import random;A=dict((i,i) for i in xrange(100) if random.random() < 0.3);B=dict((i,i) for i in xrange(100) if random.random() < 0.2)', number=100000)
0.5198972138696263
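
(Aside, added for illustration: to isolate the list-comprehension vs. generator-expression difference independently of the dictionaries, one could time something like the snippet below; the absolute numbers of course depend on machine and interpreter.)

import timeit

# Same arithmetic either way; only the argument passed to sum() differs.
setup = 'xs = range(1000)'
print timeit.timeit('sum([x * x for x in xs])', setup=setup, number=10000)  # list comprehension
print timeit.timeit('sum(x * x for x in xs)', setup=setup, number=10000)    # generator expression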

Scaling to very large sizes, you see exactly the same pattern:

>>> #Mine
>>> timeit.timeit('sum([v*B.get(k,0) for k,v in A.iteritems()])', setup='import random;A=dict((i,i) for i in xrange(10000) if random.random() < 0.1);B=dict((i,i) for i in xrange(10000) if random.random() < 0.2)', number=100000)
45.328807250833506

>>> #Eric's
>>> timeit.timeit('sum([A[k]*B[k] for k in A.viewkeys() & B.viewkeys()])', setup='import random;A=dict((i,i) for i in xrange(10000) if random.random() < 0.1);B=dict((i,i) for i in xrange(10000) if random.random() < 0.2)', number=100000)
28.042937058640973

>>> #Yours
>>> timeit.timeit('sum([A[k]*B[k] for k in A if k in B])', setup='import random;A=dict((i,i) for i in xrange(10000) if random.random() < 0.1);B=dict((i,i) for i in xrange(10000) if random.random() < 0.2)', number=100000)
16.55080344861699

>>> #Joowani's
>>> timeit.timeit('sum([A[k]*B[k] for k in A if k in B]) if len(A)<len(B) else sum([A[k]*B[k] for k in B if k in A])', setup='import random;A=dict((i,i) for i in xrange(10000) if random.random() < 0.1);B=dict((i,i) for i in xrange(10000) if random.random() < 0.2)', number=100000)
15.485236119691308

I think Joowani's trick does not improve things significantly here because the vectors are roughly the same size, but depending on your problem (if some vectors are much smaller than others) this might matter more; a quick way to check is sketched right below.
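
(Illustrative addition, not from the original answer: a quick way to see the size-asymmetry effect is to time both iteration orders on deliberately lopsided dictionaries.)

import timeit

# Deliberately lopsided sizes: iterate over the small dict vs. the big one.
setup = ('small = dict((i, i) for i in xrange(10));'
         'big = dict((i, i) for i in xrange(10000))')
print timeit.timeit('sum(small[k]*big[k] for k in small if k in big)', setup=setup, number=100000)
print timeit.timeit('sum(big[k]*small[k] for k in big if k in small)', setup=setup, number=100000)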

EDIT AGAIN

Oops, seems like I should have taken another coffee before posting... As Eric pointed out (although I completely missed it...), defining the dictionaries in setup keeps them the same for all trials, which is not really the best way to benchmark. With properly random vectors generated for each timed call, the results are not significantly different, but for the sake of completeness:

>>> timeit.timeit('mine(dict((i,i) for i in xrange(100) if random.random() < 0.3),dict((i,i) for i in xrange(100) if random.random() < 0.2))', setup='import random;joowanis=lambda A,B:sum([A[k]*B[k] for k in A if k in B]) if len(A)<len(B) else sum([A[k]*B[k] for k in B if k in A]);mine=lambda A,B:sum([v*B.get(k,0) for k,v in A.iteritems()]);erics=lambda A,B:sum([A[k]*B[k] for k in A.viewkeys() & B.viewkeys()]);yours=lambda A,B:sum([A[k]*B[k] for k in A if k in B])', number=100000)
6.294158102577967
>>> timeit.timeit('erics(dict((i,i) for i in xrange(100) if random.random() < 0.3),dict((i,i) for i in xrange(100) if random.random() < 0.2))', setup='import random;joowanis=lambda A,B:sum([A[k]*B[k] for k in A if k in B]) if len(A)<len(B) else sum([A[k]*B[k] for k in B if k in A]);mine=lambda A,B:sum([v*B.get(k,0) for k,v in A.iteritems()]);erics=lambda A,B:sum([A[k]*B[k] for k in A.viewkeys() & B.viewkeys()]);yours=lambda A,B:sum([A[k]*B[k] for k in A if k in B])', number=100000)
6.068052507449011
>>> timeit.timeit('yours(dict((i,i) for i in xrange(100) if random.random() < 0.3),dict((i,i) for i in xrange(100) if random.random() < 0.2))', setup='import random;joowanis=lambda A,B:sum([A[k]*B[k] for k in A if k in B]) if len(A)<len(B) else sum([A[k]*B[k] for k in B if k in A]);mine=lambda A,B:sum([v*B.get(k,0) for k,v in A.iteritems()]);erics=lambda A,B:sum([A[k]*B[k] for k in A.viewkeys() & B.viewkeys()]);yours=lambda A,B:sum([A[k]*B[k] for k in A if k in B])', number=100000)
5.745110704570834
>>> timeit.timeit('joowanis(dict((i,i) for i in xrange(100) if random.random() < 0.3),dict((i,i) for i in xrange(100) if random.random() < 0.2))', setup='import random;joowanis=lambda A,B:sum([A[k]*B[k] for k in A if k in B]) if len(A)<len(B) else sum([A[k]*B[k] for k in B if k in A]);mine=lambda A,B:sum([v*B.get(k,0) for k,v in A.iteritems()]);erics=lambda A,B:sum([A[k]*B[k] for k in A.viewkeys() & B.viewkeys()]);yours=lambda A,B:sum([A[k]*B[k] for k in A if k in B])', number=100000)
5.737499445367575

To scale:

>>> timeit.timeit('mine(dict((i,i) for i in xrange(10000) if random.random() < 0.1),dict((i,i) for i in xrange(10000) if random.random() < 0.2))', setup='import random;joowanis=lambda A,B:sum([A[k]*B[k] for k in A if k in B]) if len(A)<len(B) else sum([A[k]*B[k] for k in B if k in A]);mine=lambda A,B:sum([v*B.get(k,0) for k,v in A.iteritems()]);erics=lambda A,B:sum([A[k]*B[k] for k in A.viewkeys() & B.viewkeys()]);yours=lambda A,B:sum([A[k]*B[k] for k in A if k in B])', number=1000)
5.0510995368395015
>>> timeit.timeit('erics(dict((i,i) for i in xrange(10000) if random.random() < 0.1),dict((i,i) for i in xrange(10000) if random.random() < 0.2))', setup='import random;joowanis=lambda A,B:sum([A[k]*B[k] for k in A if k in B]) if len(A)<len(B) else sum([A[k]*B[k] for k in B if k in A]);mine=lambda A,B:sum([v*B.get(k,0) for k,v in A.iteritems()]);erics=lambda A,B:sum([A[k]*B[k] for k in A.viewkeys() & B.viewkeys()]);yours=lambda A,B:sum([A[k]*B[k] for k in A if k in B])', number=1000)
4.350612399185138
>>> timeit.timeit('yours(dict((i,i) for i in xrange(10000) if random.random() < 0.1),dict((i,i) for i in xrange(10000) if random.random() < 0.2))', setup='import random;joowanis=lambda A,B:sum([A[k]*B[k] for k in A if k in B]) if len(A)<len(B) else sum([A[k]*B[k] for k in B if k in A]);mine=lambda A,B:sum([v*B.get(k,0) for k,v in A.iteritems()]);erics=lambda A,B:sum([A[k]*B[k] for k in A.viewkeys() & B.viewkeys()]);yours=lambda A,B:sum([A[k]*B[k] for k in A if k in B])', number=1000)
4.15619379016789
>>> timeit.timeit('joowanis(dict((i,i) for i in xrange(10000) if random.random() < 0.1),dict((i,i) for i in xrange(10000) if random.random() < 0.2))', setup='import random;joowanis=lambda A,B:sum([A[k]*B[k] for k in A if k in B]) if len(A)<len(B) else sum([A[k]*B[k] for k in B if k in A]);mine=lambda A,B:sum([v*B.get(k,0) for k,v in A.iteritems()]);erics=lambda A,B:sum([A[k]*B[k] for k in A.viewkeys() & B.viewkeys()]);yours=lambda A,B:sum([A[k]*B[k] for k in A if k in B])', number=1000)
4.185129374341159

I think the bottom line is that you cannot expect a significant speedup by cleverly reordering your expressions for this kind of thing... Maybe you could try doing the numerical part in C/Cython, or using SciPy's sparse package?

val
  • I just updated mine from `set(d.keys())` to `d.viewkeys()`, which may give a speed boost – Eric Aug 16 '13 at 08:29
  • Also, I'm not sure this test is meaningful if the initial dictionary is different for each code snippet – Eric Aug 16 '13 at 08:30
  • True, but I think this can still give us a rough idea of the speed of each one on average. – val Aug 16 '13 at 08:34
  • It seems like my way does speed it up quite a bit on my side. Can I ask you for a confirmation? – Joohwan Aug 16 '13 at 08:41
  • Congrats! You come out on top :) – val Aug 16 '13 at 08:47
  • You are right, it won't speed up vectors of the same size. The improvement seemed to be much better on my side, but I guess that was purely by chance (I used your code for timeit). I doubt the tiny improvement (if any) comes close to what Tomer is looking for anyway :( – Joohwan Aug 16 '13 at 08:56
  • The `timeit()` calls suggested above run about twice as fast when using PyPy (on my Mac, from 4.1 sec to 1.8 sec). Regardless, I think the most efficient implementation (for my needs) is to wrap a scipy.sparse matrix, as also suggested by Valentin (I'll post an answer soon) – Tomer Levinboim Aug 17 '13 at 05:56

In the case where A is much longer than B, this might help:

if len(A) > len(B):
    A, B = B, A

the_sum = 0
for k in A:
    if k in B:
        the_sum += A[k] * B[k]
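
(A possible packaging of this idea, added for clarity; `sparse_dot` is just a name chosen here, not from the original answer.)

def sparse_dot(A, B):
    # Iterate over the smaller dict so that the per-key work
    # (membership test plus lookup) happens against the larger one.
    if len(A) > len(B):
        A, B = B, A
    return sum(A[k] * B[k] for k in A if k in B)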
Joohwan

You should try using namedtuples instead of dicts.

from collections import namedtuple

# A and B here are your existing feature dicts; namedtuple field names must be
# valid Python identifiers, so this only works if the keys are suitable strings.
_A = namedtuple('_A', A.keys())
_B = namedtuple('_B', B.keys())
DictA = _A(**A)
DictB = _B(**B)

and then use DictA and DictB in place of the dicts, accessing values by attribute. More info on namedtuples here: What are "named tuples" in Python?

6160
  • How would this speed up the calculation? – Janne Karila Aug 16 '13 at 09:16
  • Actually namedtuples are faster and more efficient than dicts (http://stackoverflow.com/questions/1336791/dictionary-vs-object-which-is-more-efficient-and-why); using them instead of dicts can improve the speed of the calculation – 6160 Aug 16 '13 at 09:24
  • If you look up the fields by name, it is still a dictionary lookup under the hood. – Janne Karila Aug 16 '13 at 10:12

Here's my answer (following up on a suggestion by @valentin-clement):

First I wrap a scipy.sparse dok_matrix. The idea is to assign an index to each of the possible feature keys.

import scipy.sparse as sps
import numpy as np

class MSK:
    # DD is a dict of dict, whose values are of type float.
    # features - the set of possible features keys
    def __init__(self, DD, features):
        self.features = {k: j for (j, k) in enumerate(features)}
        self.strings = DD.keys()
        n = len(self.strings)
        d = len(self.features)
        self.M = sps.dok_matrix((n, d), dtype=np.float64)
        for (i, s) in enumerate(self.strings):
            v = DD[s]
            for k in v:
                j = self.features[k]
                self.M[i, j] = v[k]

And we test using the following code, where the number of vectors is 800, the dimensionality is also 800, and each vector has exactly 200 non-zero elements.

np.random.seed(1)
N = 800
DD = dict()
R = range(N)
for i in xrange(N):
    DD[i] = dict()
    S = np.random.permutation(R)
    S = S[:N/4]
    for j in S:
        DD[i][j] = np.random.randn(1)[0]

K = MSK(DD, R)
import cProfile
cProfile.runctx("A = K.M * K.M.T", globals(), locals())
print A.todense()

The output is:

2080520 function calls (2080519 primitive calls) in 2.884 seconds

Let's say 3 seconds. My naive implementation (that uses @Joowani's if-statement) takes about 19 seconds.

(MSK stands for MatrixSparseKeys)
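
(A possible follow-up, added as a suggestion and not benchmarked here: dok_matrix is convenient for incremental construction, but scipy.sparse matrix products are generally best done in CSR format, so converting before multiplying may help.)

# Convert the DOK matrix to CSR before taking the product; CSR is the
# format scipy.sparse handles most efficiently for matrix multiplication.
M = K.M.tocsr()
A = M * M.T          # all pairwise inner products (the linear kernel)
print A.todense()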

Tomer Levinboim