I have a for loop that iterates over 5000 days' worth of data points. For each day, there are 500 class instances that each need to process that day's event. For example:
class SimpleClass:
    def __init__(self, name):
        self.name = name
        self.final_value = 0

    def process(self, x):
        self.final_value = x + 1  # this is an absurd simplification

# Create N class instances
ind = []
for i in xrange(500):
    ind.append(SimpleClass(str(i)))

# Main processing loop
for j in xrange(5000):
    # Is there a way of speeding this up?
    for k in ind:
        k.process(j)
The above is a really simple example, but it highlights what I am trying to do. The inner for loop is obviously slow, and if I could parallelize it, or split up the consumption of those j values in some way, it would speed things up. Any ideas? I don't have much experience with the multiprocessing library.
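For reference, here is a rough sketch of the kind of thing I imagine (Python 3 syntax, and the helper `process_instance` is my own invention, so I may be misusing the API). Instead of parallelizing the inner loop per day, it hands each worker one instance plus the full list of j values, since I gather that each worker gets a pickled copy of the instance and mutations don't propagate back to the parent, so the processed copies have to be returned:

```python
from multiprocessing import Pool

class SimpleClass:
    def __init__(self, name):
        self.name = name
        self.final_value = 0

    def process(self, x):
        self.final_value = x + 1  # absurd simplification, as above

def process_instance(args):
    # Runs in a worker process: receives a copy of one instance and
    # all j values, processes them all, and returns the finished copy.
    instance, js = args
    for j in js:
        instance.process(j)
    return instance

if __name__ == "__main__":
    ind = [SimpleClass(str(i)) for i in range(500)]
    js = list(range(5000))
    with Pool() as pool:
        # Each task is (instance, all j values); the parent's list is
        # replaced wholesale with the returned, processed copies.
        ind = pool.map(process_instance, [(inst, js) for inst in ind])
```

Splitting work per instance rather than per day keeps the number of inter-process round trips at 500 instead of 5000 x 500, but I am not sure whether the pickling overhead still swamps the gain for a `process` method this trivial.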