I have some CPU-intensive Python code that I want to split across cores so that it completes faster, e.g.

for i in range(1000):
    for j in range(1000):
        for k in range(1000):
            for l in range(1000):
                print i, j, k, l

On a single core, the innermost statement will execute a trillion times. Which module in Python would allow me to run this in parallel on multiple cores, and how? I have seen some other answers, but what would be the easiest way to split the above code among 8 cores?

Tom Kurushingal
  • what other answers have you seen? – shafeen Oct 09 '15 at 18:38
  • and do you want to "split" the above code to 8 cores or just have 8 cores do that same work? Because I don't see anything in your code that can be "split" up really. – shafeen Oct 09 '15 at 18:39
  • Your example of `print i, j, k, l` doesn't look quite parallelizable ;) – Cong Ma Oct 09 '15 at 18:39
  • @shafeen http://stackoverflow.com/questions/3842237/parallel-processing-in-python – Tom Kurushingal Oct 09 '15 at 18:46
  • @shafeen split the code among 8 cores. – Tom Kurushingal Oct 09 '15 at 18:47
  • Your example is not very useful. The loops are too arbitrary to give a proper answer. Please look at https://docs.python.org/2/library/multiprocessing.html#using-a-pool-of-workers for, e.g., parallel mapping. Also never forget https://en.wikipedia.org/wiki/Amdahl's_law. – Kijewski Oct 09 '15 at 18:53
  • Good news: 1000 / 8 = 125, so you can run a thread on each core with i in the ranges [0,125), [125,250), [250,375), ... , [875, 1000). Bad news: there's only one console to print to. – Jim Wood Oct 09 '15 at 20:50
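
Following the pool-of-workers link in Kijewski's comment and the 1000 / 8 = 125 split suggested by Jim Wood, a minimal sketch of how the outer loop could be farmed out with `multiprocessing.Pool` might look like this (Python 3 syntax; the chunk size of 125, the worker count of 8, and replacing the print with a counter are all assumptions, not part of the original question):

from multiprocessing import Pool

def work(i_start):
    # Each task handles one chunk of the outer i loop; the inner loops
    # are unchanged from the question.  The print is replaced by a
    # counter, since a trillion prints would dominate the runtime and
    # every worker would contend for the single console anyway.
    count = 0
    for i in range(i_start, i_start + 125):
        for j in range(1000):
            for k in range(1000):
                for l in range(1000):
                    count += 1
    return count

if __name__ == '__main__':
    starts = range(0, 1000, 125)         # 8 chunks of 125 values of i each
    with Pool(processes=8) as pool:      # one worker process per core
        totals = pool.map(work, starts)  # blocks until all chunks finish
    print(sum(totals))

This only demonstrates the Pool.map pattern: with the loops exactly as given, each chunk still performs 125 * 1000^3 pure-Python iterations, so in practice the body of work() would be the real computation, and, per Amdahl's law, the speedup is bounded by the fraction of the work that actually runs inside the workers.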

0 Answers