In Brief
How can I monkey-patch module A from module B, given that module A's functions need to remain importable, so that I can still run module A's functions with the `multiprocessing` standard library package?
Background
A client requested a hotfix that will not be applicable to any of our other clients, so I created a new branch and wrote a separate module just for them to make merging in changes from the master branch easy. To maintain the client's backward compatibility with pre-hotfix behavior, I implemented the hotfix as a configurable setting in the app. Thus I didn't want to replace my old code -- just patch it when the setting was turned on. I did this by monkey patching.
Code Structure
The `__main__` module reads in the configuration file. If the configuration turns on the hotfix switch, `__main__` patches my `engine` module by replacing a couple of functions with code defined in the `hotfix` module -- in essence, the function being replaced serves as the key function of a maximization routine. The `engine` module later loads up a pool of `multiprocessing` workers.
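A stripped-down sketch of the layout (the names `key_function`, `maximize`, and `run_pool` are placeholders for my real code):

```
# engine.py -- simplified sketch
import multiprocessing

def key_function(candidate):
    """Default scoring; this is the function the hotfix replaces."""
    return sum(candidate)

def maximize(candidates):
    """Pick the best candidate according to key_function (global lookup)."""
    return max(candidates, key=key_function)

def run_pool(batches):
    """Run maximize over each batch in a pool of workers."""
    with multiprocessing.Pool() as pool:
        return pool.map(maximize, batches)
```

```
# hotfix.py -- client-specific replacement
def key_function(candidate):
    return sum(candidate) * 2   # stand-in for the real hotfix logic
```

```
# __main__ -- reads the config, applies the patch, then runs the engine
import engine
import hotfix

if __name__ == "__main__":
    hotfix_enabled = True       # stand-in for reading the config file
    if hotfix_enabled:
        engine.key_function = hotfix.key_function   # the monkey patch
    print(engine.run_pool([[(1, 2), (3, 4)], [(5, 6), (0, 0)]]))
```

This looks fine at a glance, but the workers don't behave the way the parent process does, which brings me to the problem.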
The Problem
Once a `multiprocessing` worker gets started, the first thing `multiprocessing` does is re-import* the `engine` module and look up the key function that `__main__` had tried to replace (only then does `multiprocessing` hand control over to my code, and the maximization algorithm begins). Since `engine` is re-imported by a brand-new process, and the new process does not re-run `__main__` (where the configuration file gets read) because that would cause an infinite loop, it never knows to re-apply the monkey patch to `engine`.
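A minimal way to see this, assuming the sketch above (the `report` helper exists only for demonstration): the parent sees the patched function, while the spawned worker sees the original.

```
# check_patch.py -- shows the patch vanishing in a spawned worker
import multiprocessing
import engine
import hotfix

def report(_):
    # Runs inside the worker: engine was re-imported there from scratch,
    # so the patch applied in the parent's __main__ block is gone.
    return engine.key_function.__module__

if __name__ == "__main__":
    multiprocessing.set_start_method("spawn")   # Windows behavior, everywhere
    engine.key_function = hotfix.key_function
    print(engine.key_function.__module__)       # -> 'hotfix' (parent, patched)
    with multiprocessing.Pool(1) as pool:
        print(pool.map(report, [None]))         # -> ['engine'] (worker, unpatched)
```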
The Question
How can I maintain modularity in my code (i.e., keeping the hotfix code in a separate module) and still take advantage of Python's `multiprocessing` package?
* Note: my code has to work on Windows (for my client) and Unix (for my sanity...)
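For completeness, the one workaround I know of is the `initializer` argument to `multiprocessing.Pool`, which runs a function inside each worker after it starts and so could re-apply the patch there -- but it means threading hotfix awareness through `engine`'s pool setup, which is exactly the coupling I'm trying to avoid. A sketch, using the same placeholder names as above:

```
# __main__ -- re-apply the patch in every worker via Pool's initializer
import multiprocessing
import engine
import hotfix

def apply_hotfix():
    # Runs once inside each worker process right after it starts, so the
    # freshly re-imported engine gets patched in the worker too.
    engine.key_function = hotfix.key_function

if __name__ == "__main__":
    hotfix_enabled = True                        # stand-in for the config read
    if hotfix_enabled:
        engine.key_function = hotfix.key_function
    # engine.run_pool would have to accept and forward the initializer,
    # which leaks hotfix concerns into engine's API:
    with multiprocessing.Pool(initializer=apply_hotfix) as pool:
        print(pool.map(engine.maximize, [[(1, 2), (3, 4)]]))
```

Is there a cleaner, more modular way to do this?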