6

I've got the following tiny Python method that is by far the performance hotspot (according to my profiler, >95% of execution time is spent here) in a much larger program:

def topScore(self, seq):
    ret = -1e9999
    logProbs = self.logProbs  # save indirection
    l = len(logProbs)
    for i in xrange(len(seq) - l + 1):
        score = 0.0
        for j in xrange(l):
            score += logProbs[j][seq[j + i]]
        ret = max(ret, score)

    return ret

The code is being run in the Jython implementation of Python, not CPython, if that matters. seq is a DNA sequence string, on the order of 1,000 elements. logProbs is a list of dictionaries, one for each position. The goal is to find the maximum score of any length l (on the order of 10-20 elements) subsequence of seq.
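For concreteness, the shapes involved look roughly like this (the values below are made up for illustration; the real `logProbs` would come from whatever model produces the per-position log-probabilities):

```python
import math

# Hypothetical miniature example of the data shapes involved.
# seq: a DNA string, ~1,000 characters in practice.
seq = "ACGTACGGTA"

# logProbs: one dict per motif position, mapping base -> log-probability.
# Here l = 3; in practice l is on the order of 10-20.
logProbs = [
    {'A': math.log(0.7), 'C': math.log(0.1), 'G': math.log(0.1), 'T': math.log(0.1)},
    {'A': math.log(0.1), 'C': math.log(0.7), 'G': math.log(0.1), 'T': math.log(0.1)},
    {'A': math.log(0.1), 'C': math.log(0.1), 'G': math.log(0.7), 'T': math.log(0.1)},
]

# topScore slides the length-l window over seq and keeps the best total.
```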

I realize all this looping is inefficient due to interpretation overhead and would be a heck of a lot faster in a statically compiled/JIT'd language. However, I'm not willing to switch languages. First, I need a JVM language for the libraries I'm using, which somewhat constrains my choices. Second, I don't want to translate this code wholesale into a lower-level JVM language. That said, I'm willing to rewrite this hotspot in something else if necessary, though I have no clue how to interface it or what the overhead would be.

In addition to the single-threaded slowness of this method, I also can't get the program to scale much past 4 CPUs in terms of parallelization. Given that it spends almost all its time in the 10-line hotspot I've posted, I can't figure out what the bottleneck could be here.

dsimcha
  • I can't quite get my head around the data structure you are using. Could you post a shortened sample of `seq` and `logProbs`? – Katriel Nov 17 '10 at 22:32
  • My first thought was numpy, so maybe something on this page might be of use: http://stackoverflow.com/questions/316410/is-there-a-good-numpy-clone-for-jython – Russell Borogove Nov 17 '10 at 22:41
  • My second thought is everting the iteration such that you go over seq only one time, but that probably means that logProbs and score become more complex, and may not actually reduce work done. – Russell Borogove Nov 17 '10 at 22:43
  • @Russell: No numpy in Jython, though I think you should be able to access Java's numerics. – Katriel Nov 17 '10 at 22:53
  • It might be worthwhile to make `logProbs` a list instead of a dictionary because indexing should be faster than hashing. – martineau Nov 17 '10 at 23:30
  • @martineau: "`logProbs` is a list of dictionaries ..." – Fred Larson Nov 17 '10 at 23:33
  • 1
    @Fred Larson: Sorry, I meant make each item in the `logProbs` list a list instead of a dictionary. There's only a small number of possible values that `seq[index]` can have, I believe. This would entail logically mapping each possible value to an index, but that's probably faster than hashing each value from the sequence to lookup its value if it's a dictionary. – martineau Nov 18 '10 at 01:21
  • Can you update your question to give us a full example of `seq` and `logProbs` so we can have a go at optimising it? It is very difficult to help without sample data. – fmark Nov 18 '10 at 06:55
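martineau's suggestion above might be sketched like this (the `encode` helper and list-of-lists layout are my own naming, assuming the only bases are A/C/G/T; written with `range` for portability, where Jython 2 would use `xrange`):

```python
# Sketch of the list-instead-of-dictionary idea: map bases A/C/G/T to
# indices 0-3 once, then index lists instead of hashing dict keys in
# the inner loop.
BASE_INDEX = {'A': 0, 'C': 1, 'G': 2, 'T': 3}

def encode(seq):
    # One-time translation of the DNA string to small integers.
    return [BASE_INDEX[base] for base in seq]

def top_score_lists(log_prob_lists, encoded_seq):
    # log_prob_lists[j] is a 4-element list: scores for A, C, G, T at
    # motif position j.
    l = len(log_prob_lists)
    best = float('-inf')
    for i in range(len(encoded_seq) - l + 1):
        score = 0.0
        for j in range(l):
            score += log_prob_lists[j][encoded_seq[i + j]]
        best = max(best, score)
    return best
```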

8 Answers

2

The reason it is slow is that it is O(n·l): roughly len(seq) window positions, each summing l dictionary lookups.

The maximum subsequence algorithm may help you improve on this.

John La Rooy
  • This problem is a little bit different than maximum subsequence, just enough that the proposed solution doesn't quite work. – dsimcha Nov 17 '10 at 23:18
2

If topScore is called repeatedly for the same seq, you could memoize its value.

E.g. http://code.activestate.com/recipes/52201/
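The recipe boils down to something like this (a minimal sketch; for a method, both `self` and `seq` would need to be hashable for this keying to work, and the cache is only valid while `logProbs` is unchanged):

```python
def memoize(fn):
    # Cache results keyed by the call's arguments; repeat calls with
    # the same arguments return the stored value instead of recomputing.
    cache = {}
    def wrapper(*args):
        if args not in cache:
            cache[args] = fn(*args)
        return cache[args]
    return wrapper

calls = []

@memoize
def expensive(x):
    # Stand-in for topScore; records each real (non-cached) invocation.
    calls.append(x)
    return x * x
```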

Rohan Monga
  • While I was hoping to get something a little more insightful out of this post, I'll accept this because it's what I ended up doing. – dsimcha Dec 08 '10 at 20:37
  • That recipe basically shows how to write a decorator that captures return values and maps them to inputs. I thought I'd give you code to do that, because I always like to use other people's code instead of writing my own. – Rohan Monga Dec 09 '10 at 05:22
1

I don't have any idea whether this will help, but maybe it can speed up your algorithm: invert the two loops so each per-position dictionary is fetched only once:

import collections

ret = -1e9999
logProbs = self.logProbs  # save indirection
l = len(logProbs)

# Accumulate a partial score for every window start i,
# touching each per-position dict only once.
scores = collections.defaultdict(float)

for j in xrange(l):
    prob = logProbs[j]
    for i in xrange(len(seq) - l + 1):
        scores[i] += prob[seq[j + i]]

ret = max(ret, max(scores.values()))
mouad
1

What about precomputing xrange(l) outside the for i loop?

cwallenpoole
0

Nothing jumps out as being slow. I might rewrite the inner loop like this:

score = sum(logProbs[j][seq[j+i]] for j in xrange(l))

or even:

seqmatch = zip(seq[i:i+l], logProbs)
score = sum(posscores[base] for base, posscores in seqmatch)

but I don't know that either would save much time.

It might be marginally quicker to store DNA bases as integers 0-3, and look up the scores from a tuple instead of a dictionary. There'll be a performance hit on translating letters to numbers, but that only has to be done once.
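That conversion might look like this (a sketch with hypothetical helper names, assuming the dictionaries are keyed by the four letters 'ACGT'):

```python
BASES = 'ACGT'

def to_tuples(log_prob_dicts):
    # Convert each per-position dict into a 4-tuple ordered A, C, G, T,
    # so the inner loop does integer indexing instead of hashing.
    return [tuple(d[b] for b in BASES) for d in log_prob_dicts]

def encode_bases(seq):
    # One-time letter -> 0-3 translation, paid once per sequence.
    return [BASES.index(base) for base in seq]
```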

Thomas K
0

Definitely use numpy and store logProbs as a 2D array instead of a list of dictionaries. Also store seq as a 1D array of (short) integers as suggested above. This helps if you don't have to do these conversions every time you call the function (doing them inside the function won't save you much). You can then eliminate the second loop:

import numpy as np
...
print np.shape(self.logProbs) # (20, 4)
print np.shape(seq) # (1000,)
...
def topScore(self, seq):
    ret = -1e9999
    logProbs = self.logProbs  # save indirection
    l = len(logProbs)
    positions = np.arange(l)  # hoisted: row index for each motif position
    for i in xrange(len(seq) - l + 1):
        # paired fancy indexing picks logProbs[j, seq[i+j]] for every j
        score = np.sum(logProbs[positions, seq[i:i+l]])
        ret = max(ret, score)
    return ret

What you do after that depends on which of these 2 data elements changes the most often:

If logProbs generally stays the same and you want to run many DNA sequences through it, then consider stacking your DNA sequences as a 2D array. numpy loops through a 2D array very quickly, so if you have 200 DNA sequences to process, it will only take a little longer than a single one.
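That stacking idea might be sketched as follows (my own hypothetical `top_scores_batch`; it assumes the sequences are already encoded as integers 0-3 and padded to equal length):

```python
import numpy as np

def top_scores_batch(log_probs, seqs):
    # log_probs: (l, 4) array of per-position, per-base log-probabilities.
    # seqs: (n_seqs, seq_len) integer array of base codes 0-3.
    # Returns the best window score for each sequence, with no Python
    # loop over windows or positions.
    l = log_probs.shape[0]
    n_seqs, seq_len = seqs.shape
    n_windows = seq_len - l + 1
    # Index grid: row w holds the positions of window w.
    idx = np.arange(n_windows)[:, None] + np.arange(l)[None, :]   # (n_windows, l)
    window_bases = seqs[:, idx]                                   # (n_seqs, n_windows, l)
    # Gather logProbs[j, base] for each position j of each window, then sum.
    scores = log_probs[np.arange(l), window_bases].sum(axis=-1)   # (n_seqs, n_windows)
    return scores.max(axis=-1)
```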

Finally, if you really need a speed-up, use scipy.weave. It is a very easy way to write a few lines of fast C to accelerate your loops. However, I recommend scipy >= 0.8.

kiyo
0

You can try hoisting more than just self.logProbs outside the loops:

def topScore(self, seq):
    ret = -1e9999
    logProbs = self.logProbs  # save indirection
    l = len(logProbs)
    lrange = range(l)
    for i in xrange(len(seq) - l + 1):
        score = 0.0
        for j in lrange:
            score += logProbs[j][seq[j + i]]
        if score > ret: ret = score # avoid lookup and function call

    return ret
John Machin
0

I doubt it will make a significant difference, but you could try changing:

for j in xrange(l):
    score += logProbs[j][seq[j + i]]

to

for j, lP in enumerate(logProbs):
    score += lP[seq[j + i]]

or even hoisting that enumeration outside the seq loop.
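The hoisted variant might look like this (a minor variation; whether it gains anything under Jython would need measuring):

```python
def top_score_hoisted(log_probs, seq):
    # Build the (index, dict) pairs once instead of on every window.
    l = len(log_probs)
    pairs = list(enumerate(log_probs))
    best = float('-inf')
    for i in range(len(seq) - l + 1):
        score = 0.0
        for j, lp in pairs:
            score += lp[seq[i + j]]
        best = max(best, score)
    return best
```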

Russell Borogove