
I'm struggling to design an efficient way to exchange information between processes in my LAN. Until now, I've been working with a single RPi running a bunch of Python scripts as services. The services communicated over sockets (`multiprocessing.connection` `Client` and `Listener`), and it worked reasonably well.
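Roughly, the pattern I mean is this (the address, port and authkey below are just placeholders for this example):

```python
# service side -- accepts connections from other scripts
from multiprocessing.connection import Listener

with Listener(('0.0.0.0', 6000), authkey=b'secret') as listener:
    while True:
        with listener.accept() as conn:
            request = conn.recv()            # any picklable Python object
            conn.send({'echo': request})     # reply to the caller
```

```python
# client side -- another script sending a request
from multiprocessing.connection import Client

with Client(('192.168.1.10', 6000), authkey=b'secret') as conn:
    conn.send({'cmd': 'status'})
    print(conn.recv())
```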

I recently installed another RPi with some additional services, and I realized that as the number of services grows, this approach scales pretty badly. In general, I don't need every service to talk to every other one, but I'm looking for an elegant solution that lets me scale quickly if I need to add more services. So essentially I thought I first need a map of where each service lives, like the list below (a possible config sketch follows it):

  • Service 1 -> RPi 1
  • Service 2 -> RPi 2
  • ...
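Concretely, I imagine the map living in a small shared module; everything in this sketch (names, IPs, ports) is invented:

```python
# service_map.py -- shared by every host
SERVICE_MAP = {
    'temperature_logger': ('192.168.1.10', 6000),   # runs on RPi 1
    'camera_monitor':     ('192.168.1.11', 6001),   # runs on RPi 2
    'notifier':           ('192.168.1.11', 6002),   # runs on RPi 2
}
```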

The first approach I came up with was the following: add a "gateway" service on each device, so that any application running on RPi x sends its data/requests to the local gateway, and the gateway forwards them either to the proper local service or to the gateway running on the other device.
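Something along these lines (a rough sketch of what I have in mind, not code I already run; the message format and gateway port are invented):

```python
# gateway.py -- one instance per RPi; delivers messages to local services
# or hands them to the gateway on the other RPi (sketch only)
from multiprocessing.connection import Client, Listener

from service_map import SERVICE_MAP   # the invented map sketched above

GATEWAY_PORT = 5999                    # invented
LOCAL_HOST = '192.168.1.10'            # this RPi's own LAN address

def forward(msg):
    """Route msg = {'to': service_name, 'data': ...} to its destination."""
    host, port = SERVICE_MAP[msg['to']]
    if host != LOCAL_HOST:
        # the target runs on another RPi: pass the whole message to that
        # RPi's gateway and let it do the local delivery
        port = GATEWAY_PORT
    with Client((host, port), authkey=b'secret') as conn:
        conn.send(msg)
        return conn.recv()

with Listener(('0.0.0.0', GATEWAY_PORT), authkey=b'secret') as listener:
    while True:
        with listener.accept() as conn:
            message = conn.recv()          # e.g. {'to': 'notifier', 'data': ...}
            conn.send(forward(message))
```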

Later I also realized that I could instead just give the map to every service and let each service manage its own connections. That would mean opening many listeners on external addresses, though, and I'm not sure it's the best option.
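In that case every service would just look up its peers in the map directly, something like this (again only a sketch; `handle()` stands for the service's real logic):

```python
# any_service.py -- each service listens for requests addressed to it
# and talks to other services directly, wherever they run (sketch only)
from multiprocessing.connection import Client, Listener
from threading import Thread

from service_map import SERVICE_MAP   # same invented map as above

def handle(request):
    # placeholder for the service's real logic
    return {'ok': True, 'echo': request}

def serve(my_name):
    """Accept requests addressed to this service."""
    _, port = SERVICE_MAP[my_name]
    with Listener(('0.0.0.0', port), authkey=b'secret') as listener:
        while True:
            with listener.accept() as conn:
                conn.send(handle(conn.recv()))

def ask(service_name, payload):
    """Send a request to another service, on this RPi or the other one."""
    with Client(SERVICE_MAP[service_name], authkey=b'secret') as conn:
        conn.send(payload)
        return conn.recv()

# run the listener in the background so the service can also make requests
Thread(target=serve, args=('temperature_logger',), daemon=True).start()
```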

Do you have any suggestions? I'm also interested in exploring different options for implementing the actual connection, in case the `Client` / `Listener` approach is not efficient.

Thank you for your help. I'm learning so much with this project!

Yaxit
  • Sounds vaguely like you are reinventing SNMP; but the typical primary use case of SNMP is to collect information from devices on a network. Keeping track of which service runs where is obviously a necessary prerequisite. – tripleee Feb 19 '20 at 10:14
  • You could use a Redis `pub/sub` type of thing https://stackoverflow.com/a/59914945/2836621, or a Redis `central repository` type of thing https://stackoverflow.com/a/58521903/2836621 – Mark Setchell Feb 19 '20 at 11:06
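(A minimal sketch of the Redis pub/sub idea from the comment above, assuming the `redis` Python package and a broker reachable on the LAN; host and channel names are invented. In practice the publisher and the subscriber would run in different services.)

```python
import redis

r = redis.Redis(host='192.168.1.10', port=6379)

# publisher side: any service can push a message onto a named channel
r.publish('sensors.temperature', '21.5')

# subscriber side: a service interested in that channel
p = r.pubsub()
p.subscribe('sensors.temperature')
for message in p.listen():
    if message['type'] == 'message':
        print('got', message['data'])   # b'21.5'
```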

0 Answers