import random as r
from random import Random
from threading import Thread
# ap = number of random points
# load_split = how many threads the work is split across
def pi(ap=1000000,load_split=16):
    # circle hits
    c = 0
    # choose a random point and check whether it falls inside the circle
    def t(seed,ap=ap/load_split):
        nonlocal c
        r = Random()
        r.seed(seed)
        while ap>0:
            if ((r.random()-0.5)**2+(r.random()-0.5)**2)**0.5<=0.5: c+=1
            ap-=1
    th = []
    for i in range(load_split):
        thr = Thread(target=t,args=[r.random()*i])
        thr.start()
        th.append(thr)
    # the main thread runs the random tries lost to the per-thread split
    for i in range(ap%load_split): 
        if ((r.random()-0.5)**2+(r.random()-0.5)**2)**0.5<=0.5: c+=1
    # wait for the threads to complete
    for i in th: i.join()
    return 4 * c / ap
input(pi())

Why do the approximated pi values get smaller when I distribute the load over more threads?

At first I thought it might be because of using the same seed, so I now create a differently seeded local Random for each thread, with each seed being randomised as well instead of just being an incrementing integer. (Though I don't think that last part made the difference.)

But the problem still persists. Does anyone know the reason for this behaviour?
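To see the effect in isolation, here is a minimal sketch, independent of the pi code above (the names `hammer` and `counter` are mine): many threads doing an unsynchronised `c += 1` on a shared integer can lose updates, because the increment is a read-modify-write of several steps, not one atomic operation. A lock around the increment removes the losses.

```python
from threading import Thread, Lock

N_THREADS, ITERS = 8, 100_000
counter = 0        # shared, unsynchronised -- like `c` above
safe_counter = 0   # shared, but guarded by a lock
lock = Lock()

def hammer():
    global counter
    for _ in range(ITERS):
        counter += 1          # load, add, store: another thread can interleave

def hammer_locked():
    global safe_counter
    for _ in range(ITERS):
        with lock:            # serialise the read-modify-write
            safe_counter += 1

for target in (hammer, hammer_locked):
    threads = [Thread(target=target) for _ in range(N_THREADS)]
    for t in threads: t.start()
    for t in threads: t.join()

# counter may fall short of 800000 due to lost updates; safe_counter never does
print(counter, safe_counter)
```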

  • Why do you call `input(pi())`? – Barmar Apr 05 '19 at 18:33
  • 1
    What do you mean by "approximated pi values get smaller"? You mean they get closer to the correct value? – Barmar Apr 05 '19 at 18:35
  • 1
    Maybe it's because of `ap=ap/load_split`. So `ap` is smaller with more threads. – Barmar Apr 05 '19 at 18:36
  • 1
    You probably need a mutex around the increments of `c`. – Barmar Apr 05 '19 at 18:41
  • I run it from Explorer, so `input()` prevents the window from closing. – user10385242 Apr 05 '19 at 20:15
  • No, I mean they get smaller in value. – user10385242 Apr 05 '19 at 20:15
  • Yes, the `ap/load_split` is there to reduce the load on the main thread, so it's supposed to get smaller. – user10385242 Apr 05 '19 at 20:16
  • Mutex? Probably. I just assumed thread safety was already guaranteed in Python, given the convenient design I've experienced from the language. Looking into it now. – user10385242 Apr 05 '19 at 20:18
  • I tried it with 16, 32, and 64 threads. The results I got were 2.748732 2.395884 2.75288 – Barmar Apr 05 '19 at 20:25
  • A second test adding 200 threads produced: 2.979836 2.076516 2.679564 3.142104 – Barmar Apr 05 '19 at 20:26
  • With 100, 200, 300, 400 threads the results were 2.667636 3.14018 3.143712 3.143656 – Barmar Apr 05 '19 at 20:27
  • Maybe it hits a resource limit at the very high thread counts, so the accesses get spaced out into proper intervals again; that's my non-scientific guess. – user10385242 Apr 05 '19 at 20:29
  • But these locks: when one is held and another thread calls acquire, is that an infinite while loop that only breaks once the lock is free? Isn't that quite CPU-heavy, and doesn't it go against the purpose of spreading the load? – user10385242 Apr 05 '19 at 20:31
  • The benefit of multi-threading depends on how often the threads need to access shared resources. The more independent computations they can do, the more you gain. – Barmar Apr 05 '19 at 20:33
  • In your case, it depends on how the time spent waiting for the lock compares to calling `r.random()` twice. – Barmar Apr 05 '19 at 20:34
  • Well, now I let the threads use a list. It doesn't speed things up significantly for my purposes, though, since it still takes multiple seconds and I want it to complete almost instantaneously. As for lock.acquire vs `r.random()`, doesn't that not matter? Even if I put the lock in front of the `r.random()`, after the lock releases the `r.random()` still needs to be called, while the other way round it just doesn't need to increment, thus not needing the lock. – user10385242 Apr 05 '19 at 20:44
  • The lock should only be around `c += 1`. That's the only shared resource. – Barmar Apr 05 '19 at 20:47
  • See https://stackoverflow.com/questions/10021882/make-the-random-module-thread-safe-in-python – Barmar Apr 05 '19 at 20:49
  • Since you create a new `Random` instance in each thread, it's already thread-safe, so you don't need the mutex around `r.random()`. – Barmar Apr 05 '19 at 20:49
  • Yes, I already made the random thread-safe in a previous edit, as stated in the question. I tried the lock around the read/write of the assignment, but in the end I decided to just let each thread use its own slot in a list to make them independent. Still, thank you for pointing out the major point in my problem, which was assuming Python was thread-safe by default, or at least had its built-ins and primitives thread-safe, because I am used to the simplicity of the language. – user10385242 Apr 05 '19 at 21:25
  • You should post your final solution as an answer. – Barmar Apr 05 '19 at 22:19
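The mutex suggested in the comments can be sketched roughly like this; `pi_locked` is my name for a minimal variant of the function above, not the poster's code. It also batches the hits per thread so the lock is taken once per thread rather than once per sample, which addresses the contention concern raised above, and it compares squared distances to 0.25 to skip the square root (same test).

```python
from random import Random
from threading import Thread, Lock

def pi_locked(ap=1_000_000, load_split=16):
    c = 0
    lock = Lock()

    def t(seed, n=ap // load_split):
        nonlocal c
        rng = Random(seed)        # per-thread Random instance, thread-safe
        hits = 0
        for _ in range(n):
            if (rng.random() - 0.5) ** 2 + (rng.random() - 0.5) ** 2 <= 0.25:
                hits += 1
        with lock:                # only the shared update needs the lock
            c += hits             # one locked update per thread, not per sample

    th = [Thread(target=t, args=[i]) for i in range(load_split)]
    for thr in th: thr.start()
    for thr in th: thr.join()
    # divide by the number of samples actually drawn
    return 4 * c / (ap // load_split * load_split)

print(pi_locked())   # something close to 3.14159
```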

1 Answer

import random as r
from random import Random
from threading import Thread
def pi(ap=8000000,load_split=4):
    c = [0] * load_split
    # now each thread writes to its own circle-hit slot
    def t(seed,ap=ap//load_split):
        r = Random()
        r.seed(seed)
        while ap>0:
            if ((r.random()-0.5)**2+(r.random()-0.5)**2)**0.5<=0.5:
                c[seed]+=1
            ap-=1
    th = []
    for i in range(load_split):
        thr = Thread(target=t,args=[i])
        thr.start()
        th.append(thr)
    # the main thread runs the leftover samples; credit them to slot 0
    for i in range(ap % load_split):
        if ((r.random()-0.5)**2+(r.random()-0.5)**2)**0.5 <= 0.5: c[0] += 1
    for i in th: i.join()
    return 4 * sum(c) / ap
input(pi())
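For reference, the same per-slot idea can be written a little more defensively; this is my sketch (`pi_slots` is my name), with integer division for the per-thread share, the leftover samples counted in the main thread, and the divisor matching the total sample count. Note that CPython's GIL means these threads don't actually run the arithmetic in parallel, which is why no real speedup was observed in the comments; `multiprocessing` would be the usual escape hatch for CPU-bound work.

```python
from random import Random
from threading import Thread

def pi_slots(ap=1_000_000, load_split=4):
    share, leftover = divmod(ap, load_split)
    c = [0] * load_split             # one hit-count slot per thread: no sharing, no lock
    def t(slot):
        rng = Random(slot)           # per-thread Random instance, as in the question
        for _ in range(share):
            if (rng.random() - 0.5) ** 2 + (rng.random() - 0.5) ** 2 <= 0.25:
                c[slot] += 1         # each thread touches only its own slot
    th = [Thread(target=t, args=[i]) for i in range(load_split)]
    for thr in th: thr.start()
    rng = Random()                   # main thread handles the leftover samples
    extra = sum(
        (rng.random() - 0.5) ** 2 + (rng.random() - 0.5) ** 2 <= 0.25
        for _ in range(leftover)
    )
    for thr in th: thr.join()
    return 4 * (sum(c) + extra) / ap
```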