7

What I am trying to do is have each process use a global variable, but the processes are not seeing the global value.

import multiprocessing

count = 0 

def smile_detection(thread_name):
    global count

    for x in range(10):
        count +=1
        print(thread_name, count)

    return count    

x = multiprocessing.Process(target=smile_detection, args=("Thread1",))
y = multiprocessing.Process(target=smile_detection, args=("Thread2",))
x.start()
y.start()

I am getting output like

Thread1 1
Thread1 2
.
.
Thread1 9
Thread1 10
Thread2 1
Thread2 2
.
.
Thread2 9
Thread2 10

what I want is

Thread1 1
Thread1 2
.
.
Thread1 9
Thread1 10
Thread2 11
Thread2 12
.
.
Thread2 19
Thread2 20

What I have to do to achieve this?

Will
Abhishek Sachan
  • Have a look at [Globals variables and Python multiprocessing](http://stackoverflow.com/questions/11215554/globals-variables-and-python-multiprocessing) and [Python multiprocessing global variable updates not returned to parent](http://stackoverflow.com/questions/11055303/python-multiprocessing-global-variable-updates-not-returned-to-parent). Global variables are not shared between *processes*. They are completely different python instances. – Michael Hoff Jul 12 '16 at 07:38
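The comment above can be illustrated with a minimal sketch (the `bump` and `run` helper names are hypothetical, chosen just for this demo): the child process increments its own copy of the module-level global, and the parent never sees the change.

```python
import multiprocessing

count = 0  # module-level global

def bump():
    global count
    count += 1  # changes only this process's private copy

def run():
    p = multiprocessing.Process(target=bump)
    p.start()
    p.join()
    return count  # read the parent's copy

if __name__ == "__main__":
    print(run())  # 0: the child's increment never reaches the parent
```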

4 Answers

8

Unlike with threading, shared state is trickier to handle with multiprocessing, because a new process is forked (or spawned), especially on Windows. To have a shared object, use a multiprocessing.Array or multiprocessing.Value. In the case of the Array, each process can dereference its memory address into another structure, e.g. a numpy array. In your case, I would do something like this:

import multiprocessing, ctypes

count = multiprocessing.Value(ctypes.c_int, 0)  # (type, init value)

def smile_detection(thread_name, count):
    for x in range(10):
        count.value += 1                 # update the shared integer
        print(thread_name, count.value)

if __name__ == '__main__':
    x = multiprocessing.Process(target=smile_detection, args=("Thread1", count))
    y = multiprocessing.Process(target=smile_detection, args=("Thread2", count))
    x.start()
    y.start()
alexpeits
3

Try doing it like this:

import multiprocessing

def smile_detection(thread_name, counter, lock):
    for x in range(10):
        with lock:
            counter.value +=1
            print(thread_name, counter.value)


count = multiprocessing.Value('i',  0)
lock = multiprocessing.Lock()
x = multiprocessing.Process(target=smile_detection, args=("Thread1", count, lock))
y = multiprocessing.Process(target=smile_detection, args=("Thread2", count, lock))
x.start()
y.start()
x.join()
y.join()

The first problem is that global variables are not shared between processes. You need a mechanism with some form of process-safe locking or synchronization. multiprocessing.Value('i', 0) creates a synchronized integer value in shared memory, and multiprocessing.Lock() ensures that only one process can update the counter at a time.
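After joining the workers, the parent can also read the final total back from the shared Value. A minimal sketch of that round trip, under the same Value/Lock approach (the `worker` and `run` helpers are illustrative):

```python
import multiprocessing

def worker(counter, lock):
    for _ in range(10):
        with lock:                # one process in the critical section at a time
            counter.value += 1

def run():
    count = multiprocessing.Value('i', 0)
    lock = multiprocessing.Lock()
    procs = [multiprocessing.Process(target=worker, args=(count, lock))
             for _ in range(2)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return count.value            # 2 workers x 10 increments

if __name__ == "__main__":
    print(run())  # 20
```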

If you really want to use a global variable, you can use multiprocessing.Manager(), whose proxy objects can live in a global variable:

import multiprocessing

manager = multiprocessing.Manager()
count = manager.Value('i', 0)
lock = manager.Lock()

def smile_detection(thread_name):
    global count, lock

    for x in range(10):
        with lock:
            count.value += 1
            print(thread_name, count.value)

x = multiprocessing.Process(target=smile_detection, args=("Thread1",))
y = multiprocessing.Process(target=smile_detection, args=("Thread2",))
x.start()
y.start()
x.join()
y.join()

But, personally, I like the first method better, as a Manager() overcomplicates this.

Here's the output now:

$ python test.py
Thread1 1
Thread1 2
Thread1 3
Thread1 4
Thread1 5
Thread1 6
Thread1 7
Thread1 8
Thread1 9
...
Thread2 15
Thread2 16
Thread2 17
Thread2 18
Thread2 19
Thread2 20
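As a side note, a plain multiprocessing.Value already carries its own lock, accessible via get_lock(), so the separate Lock object can be dropped. A sketch of that variant (the `run` helper is illustrative):

```python
import multiprocessing

def smile_detection(thread_name, counter):
    for _ in range(10):
        with counter.get_lock():          # lock bundled with the Value itself
            counter.value += 1
            print(thread_name, counter.value)

def run():
    count = multiprocessing.Value('i', 0)
    procs = [multiprocessing.Process(target=smile_detection, args=(name, count))
             for name in ("Thread1", "Thread2")]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    return count.value

if __name__ == "__main__":
    print(run())  # 20, though lines from the two names may interleave
```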
Will
2

To share data between processes you need to let multiprocessing.Manager manage the shared data:

import multiprocessing

manager = multiprocessing.Manager()
count = manager.Value('i', 0)  # shared variable
lock = manager.Lock()          # acquire the lock on `count` before updating it

def smile_detection(thread_name):
    global count

    for x in range(10):
        lock.acquire()
        count.value += 1
        lock.release()
        print(thread_name, count.value)

    return count.value
Samuel
2

You can use a multiprocessing.Value:

Return a ctypes object allocated from shared memory. By default the return value is actually a synchronized wrapper for the object.

The code would be like this:

import multiprocessing

count = multiprocessing.Value('i', 0)

def smile_detection(thread_name, count):
    for x in range(10):
        count.value += 1
        print(thread_name, count.value)

x = multiprocessing.Process(target=smile_detection, args=("Thread1",count))
y = multiprocessing.Process(target=smile_detection, args=("Thread2",count))

x.start()
y.start()
x.join()
y.join()

Be aware that the output will likely not be the one that you expect. In your expected output, in fact, all the iterations of Thread 1 come before those of Thread 2. That is not the case in a concurrent application. If you want that to happen, well, you do not want it to run concurrently!
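To make that point concrete: the only way to get the strictly sequential output from the question is to not run the processes concurrently, e.g. by joining each one before starting the next. A sketch of that (the `run` helper is illustrative; no lock is needed because only one process runs at a time):

```python
import multiprocessing

def smile_detection(thread_name, counter):
    for _ in range(10):
        counter.value += 1            # safe here: only one process is running
        print(thread_name, counter.value)

def run():
    count = multiprocessing.Value('i', 0)
    for name in ("Thread1", "Thread2"):
        p = multiprocessing.Process(target=smile_detection, args=(name, count))
        p.start()
        p.join()                      # wait before launching the next one
    return count.value

if __name__ == "__main__":
    print(run())  # 20, with all of Thread1's lines before Thread2's
```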

enrico.bacis