
I have a C++ process running in the background that will be generating 'events' infrequently that a Python process running on the same box will need to pick up.

  • The code on the C side needs to be as lightweight as possible.
  • The Python side is read-only.
  • The implementation must be cross-platform.
  • The data being sent is very simple.

What are my options?

Thanks

Lee Treveil

7 Answers


ZeroMQ -- and nothing else. Encode the messages as strings.

However, if you want serialization from a library, use protobuf; it will generate classes for both Python and C++. You use the SerializeToString() and ParseFromString() functions on either end, then pipe the strings via ZeroMQ.

Problem solved, as I doubt any other solution is faster, nor will any other solution be as easy to wire up and as simple to understand.
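As a rough illustration of the "encode the messages as strings" idea, here is a minimal sketch of the Python side. The event layout and the use of Python's `struct` module (standing in for protobuf) are assumptions for illustration; in practice the raw bytes would arrive over a ZeroMQ SUB socket rather than be produced locally.

```python
import struct

# Hypothetical event layout: a 4-byte event code and an 8-byte timestamp.
# The C++ producer would pack the same layout and publish the resulting
# byte string on a ZeroMQ PUB socket; the Python consumer would receive
# it from a SUB socket and unpack it as below.
EVENT_FORMAT = "<id"  # little-endian: int32 code, float64 timestamp

def encode_event(code, timestamp):
    """Pack an event into a compact byte string (what the C++ side sends)."""
    return struct.pack(EVENT_FORMAT, code, timestamp)

def decode_event(payload):
    """Unpack the byte string back into (code, timestamp) on the Python side."""
    return struct.unpack(EVENT_FORMAT, payload)

msg = encode_event(7, 1312300000.0)
code, ts = decode_event(msg)
```

With protobuf, `encode_event`/`decode_event` would simply become `SerializeToString()` and `ParseFromString()` on a generated message class.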

If you want to use specific system primitives for RPC, such as named pipes on Windows and Unix domain sockets on Unix, then you should look at Boost::ASIO. However, unless you have (a) a networking background and (b) a very good understanding of C++, this will be very time-consuming.

Hassan Syed
    +1 for multiple options. And pointing out that `protobuf` is only a solution for the serialization aspect. – André Caron Aug 02 '11 at 17:05
  • 3
    I chose zeromq because the server-side implementation is 12 lines of code!! I don't like taking on dependencies if I don't have to but zeromq is the exception. :) – Lee Treveil Aug 03 '11 at 08:18
  • Yes, zeromq is designed exactly for your use case. It is very primitive and very easy to understand. Its primitiveness is robust, though, as you can implement more complex messaging constructs on top of it. In my work I chose to implement my own RPC system on top of Boost::ASIO since I needed the system primitives I mentioned above. – Hassan Syed Aug 03 '11 at 11:10
  • Zeromq is the worst. I have done exactly this with ZeroMQ and am now switching to anything else. ZeroMQ has no concept of failure at all. If you try to send a message and your process went down it would be impossible to tell. It would just continue trying to send forever. There are many other issues where failure is completely opaque, and thus retry is impossible also. – ghostbust555 Sep 08 '18 at 00:10
  • @ghostbust555 It's been a long time since I have worked with zeromq. "No concept of failure at all" in other words "fire and forget", there is nothing wrong with "fire and forget" messaging. Also you can build failure mechanics on top of zeromq if you need it. Having said that these days I might lean towards GRPC, but it does have quite a heavy python dependency footprint if I remember correctly. – Hassan Syed Sep 25 '18 at 11:40
  • @HassanSyed You are right that you can build failure detection on top, but it gets very messy with having to do things like abort threads to kill never ending wait queues. I have never found a reason to not need to be notified of message delivery failure. If you are doing IPC and a message fails to send then you almost always want to notify someone or respawn the listener. I think not having this ability vastly decreases the usefulness of any communication library. – ghostbust555 Sep 25 '18 at 21:11

Google's protobuf is a great library for RPC between programs. It generates bindings for Python and C++.

If you need a distributed messaging system, you could also use something like RabbitMQ, zeromq, or ActiveMQ. See this question for a discussion on the message queue libraries.

jterrace
  • RabbitMq is a bazooka compared to ZeroMq which is a fly-swatter ;) – Hassan Syed Aug 02 '11 at 16:58
  • 2
    The OP didn't specify if a "bazooka" was needed, so I presented the one that I think is the most popular. I've edited my answer to include zeromq and ActiveMQ as well, and pointed to another SO question on that topic. – jterrace Aug 02 '11 at 17:33
  • 2
    I think `protobuf` is just a serialization library for portable transportation of the message itself. It does not seem to provide any mechanism for RPC calls and IPC. – Stefan Nov 01 '16 at 20:55

Use zeromq, it's about as simple as you can get.

Zach Kelling

Another option is to just call your C code from your Python code using the ctypes module rather than running the two programs separately.
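A minimal sketch of the ctypes approach, using the system math library as a stand-in for your own compiled C code (the library lookup and the choice of `sqrt` are assumptions for illustration; you would load the shared library built from your C++ code instead):

```python
import ctypes
import ctypes.util

# Locate and load a shared library. Your own C code would be compiled
# into a .so/.dylib/.dll and loaded the same way.
libm_path = ctypes.util.find_library("m")
libm = ctypes.CDLL(libm_path) if libm_path else ctypes.CDLL(None)

# Declare the C signature before calling: double sqrt(double).
# Without argtypes/restype, ctypes assumes int and silently misbehaves.
libm.sqrt.argtypes = [ctypes.c_double]
libm.sqrt.restype = ctypes.c_double

result = libm.sqrt(9.0)
```

Note this changes the architecture: instead of two processes, Python drives the C code in-process, so it only fits if the C side does not need to run continuously on its own.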

mgalgs

You can use Google gRPC for this.

KindDragon

How complex is your data? If it is simple, I would serialize it as a string. If it is moderately complex, I would use JSON. TCP is a good cross-platform IPC transport. Since you say these events are infrequent, performance isn't very important, and TCP+JSON will be fine.
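A minimal sketch of the TCP+JSON approach using only the standard library. The port, event shape, and newline-delimited framing are assumptions for illustration; the C++ producer would write the same newline-terminated JSON to the socket.

```python
import json
import socket
import threading

def serve_one_event(server_sock, event):
    """Accept one connection and send one JSON event, newline-terminated.
    Stands in for the C++ producer process."""
    conn, _ = server_sock.accept()
    with conn:
        conn.sendall((json.dumps(event) + "\n").encode("utf-8"))

# Producer side: listen on an ephemeral localhost port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

event = {"type": "heartbeat", "value": 42}
producer = threading.Thread(target=serve_one_event, args=(server, event))
producer.start()

# Consumer side (the read-only Python process): connect, read one line,
# decode the JSON payload.
with socket.create_connection(("127.0.0.1", port)) as client:
    line = client.makefile("r", encoding="utf-8").readline()
received = json.loads(line)

producer.join()
server.close()
```

Newline framing keeps the C++ side trivial (one `write()` of a JSON string plus `"\n"`), and the Python side can just read lines in a loop.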

Spike Gronim

I would say you should create a DLL that manages the communication between the two. Python loads the DLL and calls a method like getData(), and the DLL in turn communicates with the process and fetches the data. That should not be hard. Alternatively, you can use an XML file, an SQLite database, or any database to exchange the data: the daemon updates the DB and Python keeps querying it. There could be a field indicating whether the data in the DB has already been updated by the daemon, and Python would only then read it. Of course, it depends on your performance and accuracy requirements!
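A minimal sketch of the SQLite polling variant, with a flag column marking rows the daemon has finished writing. The table and column names are assumptions for illustration, and an in-memory database stands in for the shared file both processes would open.

```python
import sqlite3

# In-memory DB for illustration; the C++ daemon and Python would instead
# both open the same database file on disk.
db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT, ready INTEGER)"
)

# Daemon side: insert an event with ready=0, then flip the flag once the
# row is fully written, so a polling reader never sees a half-written event.
db.execute("INSERT INTO events (payload, ready) VALUES (?, 0)", ("disk_full",))
db.execute("UPDATE events SET ready = 1 WHERE payload = ?", ("disk_full",))
db.commit()

# Python side: poll periodically, picking up only rows marked ready.
rows = db.execute("SELECT id, payload FROM events WHERE ready = 1").fetchall()
```

Polling trades latency for simplicity: the Python side wakes up on a timer instead of blocking on a socket, which may be acceptable given the events are infrequent.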

Stefano Mtangoo