
I'm using a stochastic (Monte Carlo) method to approximate the volume of a d-dimensional sphere. I begin with a sample size of n = 10^6 in a single process. Then I run the same approximation with a sample size of n = 10^5 in each of 10 parallel processes.

Since the function is O(N), I would assume the execution time would be roughly 10 times shorter, but this does not seem to be the case. Any ideas why?

import n_sphere
from time import perf_counter as pc
import concurrent.futures as future
import math
from numpy import mean, round

# ====== # Parameters for assignment # ====== #
n = 10 ** 6
d = 11
r = 1
# ============================================ #
#
#
# ==== # Parameters for multiprocessing # ==== #
thread = 10                                        # number of worker tasks
p1 = [int(n / thread) for i1 in range(thread)]     # sample size per worker
p2 = [d for i2 in range(thread)]                   # dimension per worker
p3 = [r for i3 in range(thread)]                   # radius per worker
# ============================================ #
#
#
# =========== Time for non-mp ================ #
t1 = pc()
volume_non_mp = n_sphere.N_sphere(n, d, r)
t2 = pc()
# ============================================ #
#
#
# =========== Time for mp ==================== #
t3 = pc()
with future.ThreadPoolExecutor() as ex:
    volume_mp = mean(list(ex.map(n_sphere.N_sphere, p1, p2, p3)))
t4 = pc()
# ============================================ #
#
#
# =========== Displaying results ============= #
print(f'''
Volume w/o multiprocessing: {round(volume_non_mp, 4)}               time: {round(t2 - t1, 4)}s
Volume w/ multiprocessing:  {round(volume_mp, 4)}               time: {round(t4 - t3, 4)}s''')
# ============================================ #
#
#
# =========== Analytical volume ============== #
v_d = math.pi ** (d / 2) * (r ** d) / (math.gamma((d / 2 + 1)))
print(f'\nActual volume:  {v_d}')
# ============================================ #

The N_sphere function looks like this:

import random
import math
import functools


def N_sphere(nf, df, rf):
    # Draw 'nf' points of dimension 'df' uniformly from the cube [-rf, rf]^df
    cord_list = [[random.uniform(-rf, rf) for i in range(df)] for cords in range(nf)]

    # Chaining sqrt(x*x + y*y) over a point's coordinates yields its Euclidean norm
    squares = []
    for i in range(nf):
        squares.append(functools.reduce(lambda x, y: math.sqrt(x*x + y*y), cord_list[i]))

    # Volume of the cube times the fraction of points that fall inside the sphere
    n_in = list(filter(lambda x: x <= rf, squares))
    n_out = list(filter(lambda x: x > rf, squares))
    volume = ((2 * rf) ** df) * (len(n_in) / (len(n_out) + len(n_in)))

    return volume
1 Answer


The Python GIL does not allow multiple CPU cores to run Python threads in parallel; it can only run them concurrently. Although the threads appear to run at the same time, behind the scenes a single CPU core executes them in consecutive time slices. So it is expected that multithreading does not reduce the execution time in Python (it does for I/O-bound work, but not in your CPU-bound case).

You can read this to get more information.
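
As a minimal sketch of the effect: a purely CPU-bound helper (the hypothetical busy below) gains essentially nothing from a ThreadPoolExecutor under the GIL, so the serial and threaded timings come out roughly the same:

import concurrent.futures as future
from time import perf_counter as pc


def busy(k):
    # Pure CPU work with no I/O, so the GIL serializes it across threads
    return sum(i * i for i in range(k))


if __name__ == '__main__':
    t1 = pc()
    serial = [busy(10 ** 6) for _ in range(10)]        # one task after another
    t2 = pc()
    with future.ThreadPoolExecutor() as ex:            # the same 10 tasks in a thread pool
        threaded = list(ex.map(busy, [10 ** 6] * 10))
    t3 = pc()
    print(f'serial: {t2 - t1:.2f}s   threads: {t3 - t2:.2f}s')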

Apart from these technical issues, you are using multithreading, not multiprocessing, in your code. To multiprocess, use concurrent.futures.ProcessPoolExecutor instead of ThreadPoolExecutor, and it should reduce the time.
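
For example, a minimal sketch of that change, reusing the parameters and the n_sphere module from the question (note that a process pool should be created under the if __name__ == '__main__' guard, which is required when processes are spawned on Windows and macOS):

import concurrent.futures as future
from numpy import mean
import n_sphere                              # the module from the question

n, d, r, workers = 10 ** 6, 11, 1, 10
p1 = [n // workers] * workers                # sample size per process
p2 = [d] * workers                           # dimension per process
p3 = [r] * workers                           # radius per process

if __name__ == '__main__':                   # required when worker processes are spawned
    with future.ProcessPoolExecutor() as ex:
        volume_mp = mean(list(ex.map(n_sphere.N_sphere, p1, p2, p3)))
    print(volume_mp)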

  • The question is about multiprocessing, not multithreading. – tripleee Sep 03 '22 at 17:20
  • He is using multithreading in his code, not multiprocessing ... Answer edited. – milad heidari Sep 03 '22 at 18:08
  • So I don't think I understand this multithreading/multiprocessing thing: I switched "with future.ThreadPoolExecutor() as ex:" to "with future.ProcessPoolExecutor() as ex:". Obviously I have gotten the two terms mixed up, but this improved the runtime by a factor of 5 instead of what I expected, a factor of 10. – Olle Virding Sep 03 '22 at 18:13
  • @OlleVirding You should read up on the difference between threads and processes. A process is basically a new instance of the Python interpreter that executes some code. A process cannot be spawned for "free"; it costs time to initialize. While the total execution time of the algorithm is reduced by a factor of n, there is additional overhead from process initialization. – Plagon Sep 03 '22 at 18:36
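
One way to see that fixed cost is a minimal sketch that times a process pool doing no real work (the trivial noop task below is hypothetical); the measured time is roughly the spawn-and-teardown overhead you pay before any speedup:

import concurrent.futures as future
from time import perf_counter as pc


def noop(x):
    # Trivial task: the measured time is dominated by pool startup and teardown
    return x


if __name__ == '__main__':
    t1 = pc()
    with future.ProcessPoolExecutor(max_workers=10) as ex:
        list(ex.map(noop, range(10)))
    t2 = pc()
    print(f'process pool overhead: {t2 - t1:.2f}s')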