
There are two processes running on a Linux server: PHPApp and C++App. PHPApp is written in PHP and C++App is written in C++.

They need to communicate with each other to perform the following task: PHPApp sends a request to C++App; when C++App receives the request, it reads data from shared memory, does some calculation, and finally returns the result to PHPApp.

There are two ways to do this:

  1. PHPApp communicates with C++App over sockets, with C++App running as a daemon process.
  2. PHPApp communicates with C++App by calling exec(...) (PHP has such a function). No C++App process exists until PHPApp makes a request, so each request spawns its own C++App instance (a rough sketch of this one-shot style follows the list).
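For concreteness, this is roughly what the one-shot (exec) variant of C++App might look like. Everything specific in it is an assumption for illustration: the shared-memory name `/appdata`, the fixed segment size, and the placeholder calculation.

```cpp
// one_shot.cpp -- rough sketch of option 2: a short-lived C++App spawned per request
// via PHP's exec(). The segment name "/appdata", its size, and the calculation are
// placeholders; the real layout of the shared memory is up to your application.
// Build (Linux): g++ -std=c++17 one_shot.cpp -o one_shot   (older glibc may need -lrt)
#include <cstddef>
#include <cstdio>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int main(int argc, char* argv[]) {
    if (argc < 2) {
        std::fprintf(stderr, "usage: %s <request>\n", argv[0]);
        return 1;
    }

    // Attach the existing POSIX shared-memory segment (created by some other process).
    int fd = shm_open("/appdata", O_RDONLY, 0);
    if (fd == -1) { std::perror("shm_open"); return 1; }

    const std::size_t size = 4096;                     // assumed segment size
    void* mem = mmap(nullptr, size, PROT_READ, MAP_SHARED, fd, 0);
    if (mem == MAP_FAILED) { std::perror("mmap"); return 1; }

    // ... do the calculation on the mapped data, driven by argv[1] ...
    double result = 42.0;                              // placeholder result

    std::printf("%f\n", result);                       // PHP's exec() collects stdout
    munmap(mem, size);
    close(fd);
    return 0;
}
```

On the PHP side this would be invoked with something like `exec('./one_shot ' . escapeshellarg($request), $output)`; the cost to watch is the fork/exec plus shm_open/mmap repeated on every single request.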

I wonder which way is more efficient?

UPDATE
PHPApp is part of a server application based on Apache, so there may be hundreds of PHPApp processes sending requests to C++App, and they make these requests in parallel.

Wallace
  • It depends on how frequently C++App is invoked. Low: exec. High: daemon. –  Apr 16 '14 at 11:31
  • Loading and executing a process is an expensive operation. Sending the message is more efficient in most cases. – masoud Apr 16 '14 at 11:59

1 Answer


This depends completely on what you are trying to do. If C++App works like a function, i.e. input -> C++App -> output, and is not called very often, then it makes sense to just exec it for each request.

On the other hand, if C++App has to serve a lot of requests per minute, and in parallel, then it makes more sense to build it as a daemon that can handle all requests asynchronously (boost::asio can help you here).

Why? Because a) communicating over sockets is less expensive than spawning a new process every time, and b) if you have, let's say, 10,000 simultaneous requests, the exec approach would spawn 10,000 C++App instances, which could eventually eat up all your memory. With the daemon approach you would just have 10,000 socket connections, which boost::asio can handle without any problems.
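To make the daemon option concrete, here is a minimal sketch of such a server using boost::asio. The TCP port 9000, the newline-delimited request protocol, and the `calculate()` stub are assumptions for illustration only; the real daemon would do its work against the shared memory and might listen on a Unix domain socket instead.

```cpp
// daemon.cpp -- sketch of option 1: C++App as an asynchronous daemon (boost::asio).
// Assumptions for illustration: TCP port 9000, one newline-terminated request per
// message, and a stub calculate(); adapt the protocol and the work to your needs.
#include <boost/asio.hpp>
#include <iostream>
#include <memory>
#include <string>
#include <utility>

using boost::asio::ip::tcp;

// Placeholder for the real "read shared memory and calculate" step.
std::string calculate(const std::string& request) {
    return "result for: " + request;
}

class Session : public std::enable_shared_from_this<Session> {
public:
    explicit Session(tcp::socket socket) : socket_(std::move(socket)) {}
    void start() { read(); }

private:
    void read() {
        auto self = shared_from_this();
        boost::asio::async_read_until(socket_, buffer_, '\n',
            [this, self](boost::system::error_code ec, std::size_t) {
                if (ec) return;                      // client closed or error
                std::istream is(&buffer_);
                std::string request;
                std::getline(is, request);
                write(calculate(request) + "\n");
            });
    }

    void write(std::string reply) {
        auto self = shared_from_this();
        auto msg = std::make_shared<std::string>(std::move(reply));
        boost::asio::async_write(socket_, boost::asio::buffer(*msg),
            [this, self, msg](boost::system::error_code ec, std::size_t) {
                if (!ec) read();                     // keep the connection open
            });
    }

    tcp::socket socket_;
    boost::asio::streambuf buffer_;
};

void accept_loop(tcp::acceptor& acceptor) {
    acceptor.async_accept(
        [&acceptor](boost::system::error_code ec, tcp::socket socket) {
            if (!ec)
                std::make_shared<Session>(std::move(socket))->start();
            accept_loop(acceptor);                   // go back to accepting
        });
}

int main() {
    boost::asio::io_context io;
    tcp::acceptor acceptor(io, tcp::endpoint(tcp::v4(), 9000));
    accept_loop(acceptor);
    io.run();                                        // single-threaded event loop
}
```

The PHP side can talk to this with fsockopen() or the stream socket functions; hundreds of Apache workers then map to hundreds of open connections multiplexed by one event loop, rather than hundreds of spawned processes.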

But be careful: the asynchronous approach definitely needs good engineering. You need to write it so that no request blocks another request, and that can turn out to be quite difficult. So take that into account as well.

markus_p
  • + Agreed. The async approach can be the most efficient. What I've had to do is a) [*random pausing*](http://stackoverflow.com/a/378024/23771) each process to get rid of needless nonsense, and then b) run time-stamped traces of messages showing when they were sent and when they were acted upon. This showed needless blocking due to things like DB update, which could be fixed with things like process priorities. – Mike Dunlavey Apr 16 '14 at 14:14