
I recently found out that the bottleneck of my code is the following block. N is of order 10,000, and L of order (10,000)^2. RQ_func is a function that takes a pair of indices (a tuple) and returns a float V and a dictionary sp_dist of {index : probability} format.

Is there a way I can parallelize this code? I have access to cluster computing, on which I can use up to 20 cores at a time, and I would like to take advantage of that.

    import numpy as np
    import scipy.sparse

    R = np.empty((L,))
    Q = scipy.sparse.lil_matrix((L, N))

    # Populate R and Q by traversing all (state, action) pairs
    traverser = 0
    for s_index in state_indices:
        for a_index in action_indices:
            V, sp_dist = RQ_func(s_index, a_index)
            R[traverser] = V
            for sp_index, prob in sp_dist.items():
                Q[traverser, sp_index] = prob
            traverser += 1
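One way to parallelize the loop above is to flatten the nested iteration into a single list of (s, a) pairs and farm the RQ_func calls out to a process pool, filling R and Q in the parent process as results stream back in order. This is a sketch only: it assumes RQ_func is picklable (defined at module top level), and the RQ_func below is a toy stand-in for the real one.

```python
import numpy as np
import scipy.sparse
from itertools import product
from multiprocessing import Pool


def RQ_func(s_index, a_index):
    # Toy stand-in for the real RQ_func: returns a float and a
    # {column_index: probability} dict, matching the original contract.
    return float(s_index + a_index), {s_index: 1.0}


def _evaluate(pair):
    # Top-level wrapper so multiprocessing can pickle it.
    return RQ_func(*pair)


def build_R_Q(state_indices, action_indices, N, n_workers=20):
    # Flatten the nested loops into one list of (s, a) pairs. Pool.imap
    # yields results in input order, so row i here corresponds to the
    # same (s, a) pair as `traverser == i` in the original serial loop.
    pairs = list(product(state_indices, action_indices))
    L = len(pairs)
    R = np.empty((L,))
    Q = scipy.sparse.lil_matrix((L, N))
    with Pool(n_workers) as pool:
        for i, (V, sp_dist) in enumerate(
            pool.imap(_evaluate, pairs, chunksize=64)
        ):
            R[i] = V
            for sp_index, prob in sp_dist.items():
                Q[i, sp_index] = prob
    return R, Q
```

Note that only the RQ_func calls run in worker processes; the sparse matrix is assembled in the parent, which avoids sharing Q across processes. If RQ_func is cheap per call, increase `chunksize` to reduce inter-process overhead.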
  • @DanielM I tried using [joblib](https://pythonhosted.org/joblib/parallel.html) but it doesn't seem like it handles nested loops, so I kind of gave up there. But then again, I'm very new to the whole idea and very confused in general. – very_doge Jul 25 '16 at 17:21
  • @DanielM. I also looked into autojit used with prange but read somewhere that it causes stability issues. Is that true? – very_doge Jul 25 '16 at 17:24
  • I'm not an expert on python, but I recommend reading [How to ask a good question](http://stackoverflow.com/help/how-to-ask). – Daniel M. Jul 25 '16 at 17:27
  • However, a google search pointed me to [this](http://www.tutorialspoint.com/python/python_multithreading.htm) and [this](http://stackoverflow.com/questions/2846653/how-to-use-threading-in-python) – Daniel M. Jul 25 '16 at 17:31
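Regarding the joblib comment above: joblib doesn't need to understand the nested loop. Flattening the two loops into a single iterable of (s, a) pairs with itertools.product gives it one loop to parallelize, and Parallel returns the results in input order. A minimal sketch, again with a toy RQ_func standing in for the real one:

```python
from itertools import product
from joblib import Parallel, delayed


def RQ_func(s_index, a_index):
    # Toy stand-in for the real RQ_func.
    return float(s_index + a_index), {s_index: 1.0}


# Flatten the nested (state, action) loops into one iterable so joblib
# sees a single parallel loop; results come back in pair order, so they
# can be written into R and Q exactly as the serial traverser would.
pairs = product(range(3), range(2))
results = Parallel(n_jobs=2)(delayed(RQ_func)(s, a) for s, a in pairs)
```

The returned list can then be enumerated to fill R and Q just as in the serial version.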
