Scenario
I have an RPC server that needs to spawn important processes (multiprocessing.Process
) that last for several days. For security/safety reasons, I don't want these processes' survival to depend on the RPC server. Therefore, I want the server to be able to die and reboot while the processes keep running.
Orphaning processes
This problem is solvable as follows (don't paste this where you don't want to lose previous work; it will close your Python session):
import os
import multiprocessing
import time

def _job(data):
    for _ in range(3):
        print multiprocessing.current_process(), "is working"
        time.sleep(2)
    print multiprocessing.current_process(), "is done"

# My real worker gets a Connection object as part of a
# multiprocessing.Pipe among other arguments
worker = multiprocessing.Process(target=_job, args=(None,))
worker.daemon = True
worker.start()
os._exit(0)  # hard exit: skips all cleanup, leaving the worker orphaned
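(As far as I can tell, this works because multiprocessing registers an exit handler that terminates daemonic children on a normal interpreter shutdown; os._exit(0) bypasses that cleanup, which is what lets the worker live on.)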
Problem: closing the RPC server's socket while a worker is alive
Exiting the main process does not seem to affect the socket-closing issue, so to illustrate the problem, the server reboot is simulated by starting a second server with identical parameters after the first one has been closed.
The following works perfectly:
import SimpleXMLRPCServer
HOST = "127.0.0.1"
PORT = 45212
s = SimpleXMLRPCServer.SimpleXMLRPCServer((HOST, PORT))
s.server_close()
s = SimpleXMLRPCServer.SimpleXMLRPCServer((HOST, PORT))
s.server_close()
However, if a worker has been started, creating the second server raises a socket.error
saying the address is already in use:
s = SimpleXMLRPCServer.SimpleXMLRPCServer((HOST, PORT))
worker = multiprocessing.Process(target=_job, args=(None,))
worker.start()
s.server_close()
s = SimpleXMLRPCServer.SimpleXMLRPCServer((HOST, PORT)) #raises socket.error
worker.join()
s.server_close()
Manually shutting down the server's socket does work, though:
import socket
s = SimpleXMLRPCServer.SimpleXMLRPCServer((HOST, PORT))
worker = multiprocessing.Process(target=_job, args=(None,))
worker.start()
s.socket.shutdown(socket.SHUT_RDWR)
s = SimpleXMLRPCServer.SimpleXMLRPCServer((HOST, PORT))
worker.join()
s.server_close()
But this behavior really worries me. I don't pass the socket to the worker in any way, yet it appears to get hold of it anyhow.
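To convince myself of this, here is a minimal sketch (assuming Linux, where multiprocessing forks and /proc/self/fd is available; the plain socket and the names are just for illustration) showing that a child inherits a copy of every open file descriptor, including a listening socket it was never handed:

import os
import socket
import multiprocessing

def _report(tag):
    # /proc/self/fd lists every descriptor this process currently holds
    # (the listing itself briefly adds one extra fd)
    print tag, sorted(int(fd) for fd in os.listdir('/proc/self/fd'))

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))  # port 0: any free port will do
listener.listen(1)
print "listening socket uses fd", listener.fileno()

_report("parent:")
worker = multiprocessing.Process(target=_report, args=("child: ",))
worker.start()  # the fork gives the child a copy of every fd, listener included
worker.join()
listener.close()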
There are similar questions posted previously, but they tend to pass the socket through to the worker, which is not intended here. If I do pass the socket through, though, I can close it in the worker and avoid the shutdown
hack:
def _job2(notMySocket):
    notMySocket.close()  # close the inherited copy of the server's socket
    for _ in range(3):
        print multiprocessing.current_process(), "is working"
        time.sleep(2)
    print multiprocessing.current_process(), "is done"
s = SimpleXMLRPCServer.SimpleXMLRPCServer((HOST, PORT))
worker = multiprocessing.Process(target=_job2, args=(s.socket,))
worker.start()
time.sleep(0.1) #Just to be sure worker gets to close socket in time
s.server_close()
s = SimpleXMLRPCServer.SimpleXMLRPCServer((HOST, PORT))
worker.join()
s.server_close()
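The time.sleep(0.1) is a race, by the way. If I go this route at all, a variant where the worker acknowledges the close over a Pipe (sketched below with a hypothetical _job3; the names are mine) avoids guessing how long to wait:

def _job3(notMySocket, ack):
    notMySocket.close()  # release the inherited listening socket first
    ack.send(None)       # tell the parent it is now safe to rebind
    ack.close()
    for _ in range(3):
        print multiprocessing.current_process(), "is working"
        time.sleep(2)
    print multiprocessing.current_process(), "is done"

childAck, parentAck = multiprocessing.Pipe()
s = SimpleXMLRPCServer.SimpleXMLRPCServer((HOST, PORT))
worker = multiprocessing.Process(target=_job3, args=(s.socket, childAck))
worker.start()
parentAck.recv()  # blocks until the worker has closed its copy of the socket
s.server_close()
s = SimpleXMLRPCServer.SimpleXMLRPCServer((HOST, PORT))
worker.join()
s.server_close()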
But the server's socket has absolutely no reason to visit the worker. I don't like this solution one bit, even if it's the best one so far.
Question
Is there a way of limiting what gets forked when using multiprocessing.Process
, so that only what I want to pass to the target gets copied, and not all open sockets and other state?
In my case, to get this code working:
s = SimpleXMLRPCServer.SimpleXMLRPCServer((HOST, PORT))
childPipe, parentPipe = multiprocessing.Pipe()
worker = multiprocessing.Process(target=_job, args=(childPipe,))
worker.start()
s.server_close()
s = SimpleXMLRPCServer.SimpleXMLRPCServer((HOST, PORT)) #raises socket.error
worker.join()
s.server_close()
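One direction I can think of, sketched under the assumption that the worker only needs its end of the Pipe (the fd range of 3-255 is my guess, not anything multiprocessing guarantees, and this is aggressive enough that it might close descriptors multiprocessing itself still needs): have the child close every inherited descriptor except the ones it actually uses as its very first action, so the forked copy of the server's socket is released without the socket object ever being passed in.

import os

def _job(childPipe):
    keep = childPipe.fileno()      # the Pipe end the worker really needs
    for fd in range(3, 256):       # leave stdin/stdout/stderr alone
        if fd != keep:
            try:
                os.close(fd)       # drops the inherited server socket too
            except OSError:
                pass               # fd wasn't open in the first place
    # ... the real work continues here, talking over childPipe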