I'm trying to create a central logging system for some of my Python modules. I want a number of modules to send log messages, and a central logger to receive them and do the processing.
For simplicity, I want my module A to look something like this:
import time

bus = connect_to_a_bus_that_is_always_there()  # pseudocode: some persistent bus connection

while True:
    # Publish a message to the message bus (pseudocode)
    bus.publish(topic="logs.a", message="example")
    time.sleep(1)
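To make the pseudocode concrete, here is roughly what module A could look like if the bus were, say, a Redis server doing pub/sub (Redis is only an example of a broker that is "always there", not a requirement):

# Module A as a publisher, assuming a Redis server on localhost as the bus.
# Redis is used purely to illustrate the kind of broker I mean.
import time
import redis

bus = redis.Redis(host="localhost", port=6379)

while True:
    bus.publish("logs.a", "example")  # send a log message on the "logs.a" channel
    time.sleep(1)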
and the logger (the only subscriber):
def actOnNewMessage(msg):
    # Dispatch on the subtopic, e.g. "logs.a" -> handler for module A
    if msg.topic.subtopic == "a":
        doSomethingForA(msg.data)

bus = connect_to_a_bus_that_is_always_there()  # pseudocode: same persistent bus
bus.subscribe("logs", handler=actOnNewMessage)

while True:
    pass  # wait for messages
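And the corresponding long-running subscriber, under the same illustrative Redis assumption (doSomethingForA stands in for the real log processing):

# Logger as a blocking subscriber, same illustrative Redis assumption as above.
import redis

def actOnNewMessage(msg):
    topic = msg["channel"].decode()   # channels arrive as bytes, e.g. b"logs.a"
    if topic == "logs.a":
        doSomethingForA(msg["data"])  # hypothetical handler for module A's logs

bus = redis.Redis(host="localhost", port=6379)
sub = bus.pubsub(ignore_subscribe_messages=True)
sub.psubscribe("logs.*")              # pattern-subscribe to every logs.* channel

for msg in sub.listen():              # blocks here, waiting for new messages
    actOnNewMessage(msg)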
Right now the Logger module acts like a library, so it isn't run persistently; maybe I can introduce something between the Logger and the message bus that constantly watches for new messages.
I have looked at PyPubSub, but from the documentation it doesn't seem to offer persistent communication between separately running Python processes. If anyone has tried this, it would work for me as long as it can be used between different running modules.
Another catch is that I might end up with modules not written in Python, so I don't really want direct communication between modules A, B and the Logger.
In the end my architecture might look like this:
I hope the information above is not confusing.
tl;dr: publish-subscribe with a persistent message bus in Python, and a subscriber that is constantly waiting for new messages. Is there any ready-to-use solution?
EDIT: I'm considering running a WebSocket server that knows about the Logger module, while the other modules A and B know the address of the WebSocket server. Are there any drawbacks to this design?
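To make the idea concrete, here is a rough sketch of the server side, assuming a recent version of the third-party websockets package and asyncio (the port, the JSON message shape and the handler name are just placeholders):

# Sketch of a WebSocket endpoint that the Logger would own.
# Modules A and B connect as clients and push JSON-encoded log messages.
import asyncio
import json
import websockets

async def handle_client(websocket):
    async for raw in websocket:
        msg = json.loads(raw)              # e.g. {"topic": "logs.a", "message": "example"}
        if msg["topic"].startswith("logs."):
            print("log received:", msg)    # stand-in for the real Logger processing

async def main():
    async with websockets.serve(handle_client, "localhost", 8765):
        await asyncio.Future()             # run forever

asyncio.run(main())

Modules A and B (or modules not written in Python) would then open a connection to ws://localhost:8765 and send their messages as JSON, so the only thing they need is a WebSocket client.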