I wrote a multiprocessing program in Python. I use multiprocessing.Manager().list() to share a list between subprocesses. First, I add some tasks in the main process, then start some subprocesses to do the tasks in the shared list; the subprocesses also add tasks to the shared list. But I get the following exception:

    Traceback (most recent call last):
      File "/usr/lib64/python2.6/multiprocessing/process.py", line 232, in _bootstrap
        self.run()
      File "/usr/lib64/python2.6/multiprocessing/process.py", line 88, in run
        self._target(*self._args, **self._kwargs)
      File "gen_friendship.py", line 255, in worker
        if tmpu in nodes:
      File "<string>", line 2, in __contains__
      File "/usr/lib64/python2.6/multiprocessing/managers.py", line 722, in _callmethod
        self._connect()
      File "/usr/lib64/python2.6/multiprocessing/managers.py", line 709, in _connect
        conn = self._Client(self._token.address, authkey=self._authkey)
      File "/usr/lib64/python2.6/multiprocessing/connection.py", line 143, in Client
        c = SocketClient(address)
      File "/usr/lib64/python2.6/multiprocessing/connection.py", line 263, in SocketClient
        s.connect(address)
      File "<string>", line 1, in connect
    error: [Errno 2] No such file or directory

I found something about how to use a shared list in Python multiprocessing like this, but I still get an exception. I have no idea what the exception means. Also, what's the difference between a common list and manager.list()?

The code is as follows:

    import multiprocessing

    nodes = multiprocessing.Manager().list()
    lock = multiprocessing.Lock()
    AMOUNT_OF_PROCESS = 10

    def worker():
        lock.acquire()
        nodes.append({"name": "username", "group": 1})
        lock.release()

    if __name__ == "__main__":

        for i in range(AMOUNT_OF_PROCESS):
            nodes.append({"name": "username", "group": 1})

        processes = [None for i in range(AMOUNT_OF_PROCESS)]

        for i in range(AMOUNT_OF_PROCESS):
            processes[i] = multiprocessing.Process(target=worker, args=())
            processes[i].start()
stamaimer
    You'll need to share enough code to reproduce the issue for anyone to tell you what went wrong here. It looks like maybe the manager shut down before you tried to use it, but it's hard to say without seeing any code. – dano Apr 17 '15 at 14:39
  • Looks like the code is using UNIX sockets and is not able to bind to the socket file. – Igor Apr 17 '15 at 14:39
  • @dano the code is in [here](https://github.com/stamaimer/MrUirf/blob/master/twitter/gen_friendship.py) – stamaimer Apr 17 '15 at 14:41
  • @Igor the manager list returns a proxy of the shared list – stamaimer Apr 17 '15 at 14:42
  • Please provide a [*minimal*, complete, and verifiable example](http://stackoverflow.com/help/mcve) in the post itself (not in an external link). – Kevin Apr 17 '15 at 14:55
  • @dano I added processes[i].join() after processes[i].start(). The program runs, but only one process runs at a time. – stamaimer Apr 17 '15 at 15:05

1 Answer

The problem is that your main process is exiting immediately after you start all your worker processes, which shuts down your Manager. When your Manager shuts down, none of the children can use the shared list you passed into them. You can fix it by using join to wait for all the children to finish. Just make sure you actually start all your processes prior to calling join:

    for i in range(AMOUNT_OF_PROCESS):
        processes[i] = multiprocessing.Process(target=worker, args=())
        processes[i].start()
    for process in processes:
        process.join()
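To see the difference the question asks about between a common list and a manager list: a plain list is copied into each child, so the children's appends never reach the parent, while a Manager().list() is a proxy to one list living in the manager process. A minimal, self-contained sketch (all names here are illustrative, not from the question's code):

    import multiprocessing

    def append_item(shared, plain):
        shared.append(1)   # goes through the manager process; visible to the parent
        plain.append(1)    # mutates this child's private copy only

    if __name__ == "__main__":
        manager = multiprocessing.Manager()
        shared = manager.list()
        plain = []
        procs = [multiprocessing.Process(target=append_item, args=(shared, plain))
                 for _ in range(4)]
        for p in procs:
            p.start()
        for p in procs:    # join keeps the main process, and thus the Manager, alive
            p.join()
        print(len(shared))   # 4: every child's append was applied to the shared list
        print(len(plain))    # 0: the parent's plain list was never touched

Note that this only works because the joins keep the manager alive until the children finish; remove them and you get exactly the Errno 2 connection error from the question.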
dano
  • Yes, you are right. Thank you very much. And I have figured out why only one process runs; I hadn't understood `join` before. – stamaimer Apr 17 '15 at 15:14
  • This should have a million upvotes. It solved an issue I was troubleshooting for a day now. The processes were being started while the queue was emptied out and other processes were being shut down. – mudda Jul 31 '15 at 13:45
  • @dano This is exactly what I'm doing as I mentioned [here](https://stackoverflow.com/a/25456494/2838606)! I'm really surprised that I still get that error. – Amir Aug 15 '19 at 17:08