I am trying to create a heavily modular, data-driven program in Python, and I'd like calls between modules to go through a central proxy singleton instead of every object holding references to the objects it communicates with. The main reason for this is that I don't want objects to have any knowledge of how other modules work, in case those modules are swapped around at runtime.
Currently, this proxy service is just a singleton that holds a list of tuples pairing function names with a reference to the object that owns them. Any time a module comes across a command that isn't one of its own methods, the default behavior is to send it up to this proxy service, which checks whether anyone online is capable of executing it. If it finds someone, the proxy forwards the function call to the right object and then sends the return value back to the caller once it receives it.
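Roughly, the registry I'm describing looks like this (a minimal sketch only; the names `ProxyService`, `register`, and `call` are made up for illustration, and I've used a dict instead of the list of tuples for brevity):

```python
class ProxyService:
    """Singleton that maps command names to whoever currently handles them."""
    _instance = None

    def __new__(cls):
        # classic singleton: every construction returns the same instance
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance.handlers = {}  # command name -> bound method
        return cls._instance

    def register(self, name, func):
        self.handlers[name] = func

    def call(self, name, *args, **kwargs):
        # look up whichever object is currently responsible and forward
        return self.handlers[name](*args, **kwargs)


class NameStore:
    """Example module that registers one of its methods with the proxy."""
    def __init__(self):
        self.names = {"alice"}
        ProxyService().register("deleteName", self.delete_name)

    def delete_name(self, name):
        self.names.discard(name)
        return name


store = NameStore()
# a caller that knows nothing about NameStore:
print(ProxyService().call("deleteName", "alice"))  # -> alice
```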
The problem I have with this is that every time I need to do any inter-module communication, I cause a context switch into an extra object, which ends up bloating my call stack unnecessarily.
My question then is thus:
How are context changes such as this handled in Python implementations? And what optimizations can I make to lower the impact of such context changes?
P.S.: The name proxy might be a bit misleading, but this is not a networked application. The whole thing is running in a single process in a single machine.
Edit: As asked, here is a more concrete example.
class forwardToProxy:
    dispatch = {}  # this dictionary would contain the names of commands such as "deleteName"
                   # and a first-class function bound to the part of the program currently
                   # responsible for executing that command

    def forward_command(command, arg):
        return forwardToProxy.dispatch[command](arg)
This is one example of what I meant by a proxy object. The only thing this class does is forward commands to the parts of the program currently assigned to execute them. My questions pertain to the consequences of using this kind of structure.
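Concretely, usage would look something like this (the class is repeated so the snippet is self-contained, and `NameModule` and its method are hypothetical stand-ins for a real module):

```python
class forwardToProxy:
    dispatch = {}  # command name -> first-class function currently responsible

    def forward_command(command, arg):
        return forwardToProxy.dispatch[command](arg)


class NameModule:
    """Hypothetical module currently responsible for "deleteName"."""
    def __init__(self):
        self.names = {"alice", "bob"}

    def delete_name(self, name):
        self.names.discard(name)
        return name


module = NameModule()
# the module (or some registration code) installs its handler:
forwardToProxy.dispatch["deleteName"] = module.delete_name

# a caller that knows nothing about NameModule:
forwardToProxy.forward_command("deleteName", "bob")  # -> "bob"
```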
Does every call a module makes to it create a new instance of this class in memory until it resolves?
If a forwarded command caused the receiving module to forward another command, which in turn caused yet another module to forward one more, how many instances of this object would be waiting in memory for the eventual returns?
What exactly happens in memory every time we make a function call in different Python implementations? (Please explain for both same-module and external-module calls.)
What optimizations can be used to reduce the impact of such context changes (would @staticmethod help, for example)?
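To make the nesting scenario in my second question concrete, here is a sketch where one forwarded command forwards another through the same dispatch table (the handler names are made up):

```python
dispatch = {}

def handle_outer(arg):
    # this handler itself goes back through the dispatch table, so its
    # stack frame stays alive until the inner call returns
    return dispatch["inner"](arg) + 1

def handle_inner(arg):
    return arg * 2

dispatch["outer"] = handle_outer
dispatch["inner"] = handle_inner

# outer forwards to inner: two handler frames on the stack at the deepest point
print(dispatch["outer"](10))  # -> 21
```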