If I need to share a multiprocessing.Queue or a multiprocessing.Manager (or any of the other synchronization primitives), is there any difference between defining them at the global (module) level and passing them as an argument to the function executed in a different process?
For example, here are three possible ways I can imagine a queue could be shared:
# works fine on both Windows and Linux
from multiprocessing import Process, Queue

def f(q):
    q.put([42, None, 'hello'])

def main():
    q = Queue()
    p = Process(target=f, args=(q,))
    p.start()
    print(q.get())  # prints "[42, None, 'hello']"
    p.join()

if __name__ == '__main__':
    main()
vs.
# works fine on Linux, hangs on Windows
from multiprocessing import Process, Queue

q = Queue()

def f():
    q.put([42, None, 'hello'])

def main():
    p = Process(target=f)
    p.start()
    print(q.get())  # prints "[42, None, 'hello']" on Linux; hangs on Windows
    p.join()

if __name__ == '__main__':
    main()
vs.
# works fine on Linux, NameError on Windows
from multiprocessing import Process, Queue

def f():
    q.put([42, None, 'hello'])

def main():
    p = Process(target=f)
    p.start()
    print(q.get())  # prints "[42, None, 'hello']" on Linux
    p.join()

if __name__ == '__main__':
    q = Queue()
    main()
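(For what it's worth, I assume the platform difference comes down to the default start method, fork on Linux vs. spawn on Windows. If that's right, then a sketch like the following, which forces a 'spawn' context via get_context, should reproduce the second example's hang on Linux as well; the timeout is only there so the demo doesn't block forever:)

# My assumption: the Linux/Windows difference is the default start
# method (fork vs. spawn), so forcing 'spawn' on Linux should
# reproduce the hang from the second example.
import multiprocessing as mp
import queue

q = mp.Queue()

def f():
    q.put([42, None, 'hello'])

def main():
    ctx = mp.get_context('spawn')
    p = ctx.Process(target=f)
    p.start()
    try:
        # With spawn, the child re-imports this module and builds its
        # own fresh Queue, so the parent's q should stay empty.
        print(q.get(timeout=5))
    except queue.Empty:
        print('timed out -- the child put into its own copy of q')
    p.join()

if __name__ == '__main__':
    main()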
Which is the correct approach? I'm guessing from my experimentation that it's only the first one, but I wanted to confirm that this is officially the case (and not only for Queue, but for Manager and other similar objects).
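For concreteness, here's how I'd write the Manager version of the first pattern, on the assumption that the same pass-it-as-an-argument rule applies; the dict proxy and the 'answer' key are just placeholders I made up for the example:

# Assumed Manager analog of the first example: create the manager in
# main() and pass the proxy object to the child as an argument.
from multiprocessing import Process, Manager

def f(d):
    d['answer'] = 42  # mutate the shared dict through the proxy

def main():
    with Manager() as manager:
        d = manager.dict()
        p = Process(target=f, args=(d,))
        p.start()
        p.join()
        print(d['answer'])  # prints "42"

if __name__ == '__main__':
    main()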