
TL;DR: Is this expected behavior?

This is not a duplicate of Python multiprocessing with start method 'spawn' doesn't work, which is the closest existing question I could find.

To be precise, here is the MWE (minimal working example) I am using to test:

import multiprocessing as mp

def fun_computation(x, output):
    acc = 0
    for i in range(x):
        acc += i * i
        output.value = acc

def main():
    shared = mp.Value("i", -1)
    proc = mp.Process(target=fun_computation, args=(100, shared))
    proc.start()
    proc.join()
    assert(shared.value >= 0)
    print(shared.value)

if __name__ == "__main__":
    mp.set_start_method('spawn')
    main()

And this is the output I get:

❯ python mptest.py
328350
❯ env -i python mptest.py
/usr/lib/python3.8/multiprocessing/resource_tracker.py:96: UserWarning: resource_tracker: process died unexpectedly, relaunching.  Some resources might leak.
  warnings.warn('resource_tracker: process died unexpectedly, '
Traceback (most recent call last):
  File "mptest.py", line 19, in <module>
    main()
  File "mptest.py", line 14, in main
    assert(shared.value >= 0)
AssertionError
❯ python -V
Python 3.8.3

This was tested on an up-to-date Arch Linux installation (at the time of writing). I have yet to test it on Windows and do not have access to macOS. Normal Python scripts (e.g., calling fun_computation without the multiprocessing) work fine.
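
For instance, this in-process sanity check should run fine even under `env -i` (a sketch, assuming the MWE above is saved as mptest.py):

from types import SimpleNamespace

from mptest import fun_computation  # the MWE above

# Plain attribute holder standing in for mp.Value("i", -1);
# no process boundary is crossed here, so no shared memory is needed.
box = SimpleNamespace(value=-1)
fun_computation(100, box)
print(box.value)  # 328350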

Sorry if this example seems a little convoluted; the real error is a byproduct of many more systems interfacing together, but this demonstrates the issue I am having.

TL;DR: Is this expected behavior?

Blue
  • Works at [repl.it](https://repl.it/repls/InternationalWeakSoftwaresuite#main.py) Python 3.8.2 – stovfl May 29 '20 at 19:39
  • @stovfl yes, that is expected. My question is why doesn’t this work when the environment variables have been cleared? Thank you for testing, though. – Blue May 29 '20 at 23:52
  • ***environment variables have been cleared?***: [Edit] your question and explain this in more detail. – stovfl May 30 '20 at 07:18

1 Answer


Since Python 3.4, you can use multiprocessing.get_context() to obtain a context object that lets you use multiple start methods within one program:

Please try these solutions:

def fun_computation_q(x, output_q):
    # Queue-based variant of the question's fun_computation:
    # the worker sends its result back instead of writing to a Value.
    acc = 0
    for i in range(x):
        acc += i * i
    output_q.put(acc)

if __name__ == '__main__':
    ctx = mp.get_context('spawn')
    q = ctx.Queue()
    p = ctx.Process(target=fun_computation_q, args=(100, q))
    p.start()
    print(q.get())  # read the result before joining
    p.join()

Or:

if __name__ == '__main__':
    shared = mp.Value("i", -1)
    pool = mp.Pool(processes=7)
    pool.apply_async(fun_computation, args=(100, shared))
    pool.close()
    pool.join()
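
As the comments below note, this second version raises `RuntimeError: Synchronized objects should only be shared between processes through inheritance`, because a synchronized mp.Value cannot be pickled into a pool task. A minimal sketch of the usual workaround, handing the Value to each worker through a Pool initializer so it is inherited at process creation time (my adaptation, not part of the original answer):

import multiprocessing as mp

shared = None  # set in each worker by the initializer below

def init_worker(value):
    global shared
    shared = value

def pool_computation(x):
    acc = 0
    for i in range(x):
        acc += i * i
    shared.value = acc  # write to the inherited Value

if __name__ == '__main__':
    value = mp.Value("i", -1)
    with mp.Pool(processes=1, initializer=init_worker, initargs=(value,)) as pool:
        pool.apply_async(pool_computation, args=(100,)).get()
    print(value.value)  # 328350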
Mahsa Hassankashi
  • The first solution does not work for me, and I get the same error. The second solution seems to not allow passing `mp.Value`s between processes, which is unacceptable for my use case. – Blue May 29 '20 at 16:26
  • I changed the second one; please try it again. Based on https://docs.python.org/2/library/multiprocessing.html or https://pymotw.com/2/multiprocessing/communication.html – Mahsa Hassankashi May 29 '20 at 16:39
  • Your updated example still does not work; I get the same error as before (`RuntimeError: Synchronized objects should only be shared between processes through inheritance`). I am also using Python 3, so I would not entirely trust Python 2 docs. The fundamental premise of your answer is incorrect; I will be downvoting. – Blue May 29 '20 at 18:27
  • @Blue For your recent error: you should use the first solution, which uses a multiprocessing Queue that the workers can use to send status data. Your main process will have to read the status entries from the queue and update the status accordingly. I think you have a Python version conflict; I recommend you try your code at https://colab.research.google.com/ and, if it all works, change your environment. The downvote is up to you! – Mahsa Hassankashi May 29 '20 at 18:33