I've been working on the following code, which (roughly speaking) maximizes the number of p by q blocks whose ratios are unique in lowest terms, subject to some constraints. It works correctly for small inputs: e.g. input 50000, output 1898.
I need to run it for inputs greater than 10^18, and while I have a different solution that gets the job done, this version becomes unusably slow at that scale (it made my desktop reboot at one point). That slowdown is what my question is about.
I'm trying to figure out which parts of the following code cause the slowdown, and roughly what order of magnitude each one contributes.
The candidates for slowness:
1) The (-1)**(i+1) term? Does Python compute this efficiently, or is it literally multiplying -1 by itself i+1 times? [EDIT: still looking into how operator.__pow__ works, but having tested setting j = -j instead: that version is faster. See the micro-benchmark after this list.]
2) Set instantiation/size? Is the set simply getting too large? Obviously that would also hurt the membership checks, if the set can't even be built in memory.
3) The set membership check? The documented average case is O(1), although I suppose the constant keeps changing as the set grows.
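To put numbers on candidates 1-3 I sketched a micro-benchmark (my own test harness, not part of the code below; the names pow_loop, flip_loop, and big are just illustrative, and exact timings are machine-dependent). It times the (-1)**(i+1) term against the j = -j flip, then checks how big a million-entry float set is and how long hit/miss membership tests take on it:

import sys
import timeit

# Candidate 1: computing (-1)**(i+1) each iteration vs. flipping a sign variable.
pow_loop = """
s = 0
for i in xrange(1, 100000):
    s += (-1)**(i+1)
"""
flip_loop = """
s = 0
j = 1
for i in xrange(1, 100000):
    s += j
    j = -j
"""
print 'pow :', timeit.timeit(pow_loop, number=100)
print 'flip:', timeit.timeit(flip_loop, number=100)

# Candidates 2 and 3: footprint of a large float set and the cost of
# membership tests against it (one hit, one miss).
big = set(i / 7.0 for i in xrange(10**6))
print 'set size (bytes):', sys.getsizeof(big)
# Assumes this is run as a script, so big lives in __main__.
print 'hit :', timeit.timeit('3.0/7.0 in big', setup='from __main__ import big', number=10**6)
print 'miss:', timeit.timeit('-1.0 in big', setup='from __main__ import big', number=10**6)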
Thanks in advance for insight into these processes.
import math
import time

a = 10**18          # upper bound on both running sums x and y
ti = time.time()
setfrac = set([1])  # fractions mo/(k-mo) seen so far, stored as floats
x = 1
y = 1
k = 2
while True:
    k += 1
    t = 0  # how many new fractions were accepted at this k
    for i in xrange(1, k):
        # mo walks outward from k/2: ceil(k/2), then alternately +/- floor(i/2)
        mo = math.ceil(k / 2.0) + ((-1)**(i + 1)) * math.floor(i / 2.0)
        if (mo / (k - mo) not in setfrac) and (x + (k - mo) <= a and y + mo <= a):
            setfrac.add(mo / (k - mo))
            x += k - mo
            y += mo
            t += 1
    if t == 0:  # no new fraction was accepted at this k: stop
        break
print len(setfrac) + 1
print x
print y
to = time.time() - ti
print to
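And since the real question is where the time goes in the full loop, here's a profiling sketch I intend to use (the run wrapper is my own name, added only so cProfile can attribute time; it's the same loop as above with the bound passed in as a parameter):

import cProfile
import math

def run(a):
    # Same search as above, wrapped in a function for the profiler.
    setfrac = set([1])
    x = y = 1
    k = 2
    while True:
        k += 1
        t = 0
        for i in xrange(1, k):
            mo = math.ceil(k / 2.0) + ((-1)**(i + 1)) * math.floor(i / 2.0)
            if (mo / (k - mo) not in setfrac) and (x + (k - mo) <= a and y + mo <= a):
                setfrac.add(mo / (k - mo))
                x += k - mo
                y += mo
                t += 1
        if t == 0:
            break
    return len(setfrac) + 1

# Profile the small case from the question; the per-call counts for
# math.ceil, math.floor, and the set operations show the hot spots.
cProfile.run('run(50000)')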