I am working on a PHP script that:
- receives a client request;
- processes the request via a CPU- and time-intensive binary computation;
- stores the result of the computation in a MySQL database;
- then responds to the client with a Status 200 OK.
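
For context, the current handler looks roughly like this (the binary path, database DSN and credentials are placeholders, not my real ones):

```php
<?php
// Rough shape of the current synchronous handler.

$payload = file_get_contents('php://input');

// Heavy synchronous step: the client waits while the binary runs.
$result = shell_exec('/usr/local/bin/heavy-compute ' . escapeshellarg($payload));

// Store the result in MySQL.
$pdo  = new PDO('mysql:host=127.0.0.1;dbname=app', 'user', 'pass');
$stmt = $pdo->prepare('INSERT INTO results (payload, result) VALUES (?, ?)');
$stmt->execute([$payload, $result]);

// Only now does the client receive its response.
http_response_code(200);
echo 'OK';
```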
Problem: when tens of thousands of requests come in per second during peak hours, clients have to wait a long time before they receive the Status 200 OK.
Flexibilities: The script does not need to respond to the client with the result of the computation. It does not even need to base the Status 200 OK on the success or failure of the computation - the computation may eventually fail, and that is completely okay. So the actual computation could really happen in parallel behind the scenes.
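
Something like the following is what I am picturing for the request handler: just enqueue the job and respond right away. This sketch assumes the phpredis extension and uses a plain Redis list as the queue; the queue name "compute_jobs" is an arbitrary placeholder.

```php
<?php
// Request handler: enqueue the job, respond immediately.

$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$job = json_encode([
    'payload'     => file_get_contents('php://input'),
    'received_at' => time(),
]);

// Push the job onto a Redis list used as a queue.
$redis->lPush('compute_jobs', $job);

// Respond immediately; the computation happens later in a separate worker.
http_response_code(200);
echo 'OK';
```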
What tools / packages / libraries / strategies should be used to achieve this kind of intensive request handling design in PHP? Is this even something to solve on the PHP side, or is it solvable on the Apache side?
Notes:
- Running Apache, MySQL, PHP, Redis on Ubuntu [AMPRU]
- Clients will just send a request and receive a Status 200 OK right away.
- Clients will not wait for the computation of the request to complete.
- There is no auto-scaling or load-balancing concept in place: it's a single AMPRU server.
- Better if multiple computations can happen in parallel behind the scenes (roughly what I sketch below).
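
And roughly what I imagine a background worker could look like, pulling jobs off the same Redis list. The binary path, queue name, DSN and credentials are the same placeholders as above.

```php
<?php
// Long-running worker started outside Apache (e.g. via supervisord or systemd).

$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$pdo = new PDO('mysql:host=127.0.0.1;dbname=app', 'user', 'pass');

while (true) {
    // Block for up to 5 seconds waiting for a job; brPop returns [queueName, payload].
    $item = $redis->brPop(['compute_jobs'], 5);
    if (!$item) {
        continue; // no job yet, keep waiting
    }
    $job = json_decode($item[1], true);

    try {
        // The same CPU- and time-intensive binary computation, now off the request path.
        $result = shell_exec('/usr/local/bin/heavy-compute ' . escapeshellarg($job['payload']));

        $stmt = $pdo->prepare('INSERT INTO results (payload, result) VALUES (?, ?)');
        $stmt->execute([$job['payload'], $result]);
    } catch (Throwable $e) {
        // A failed computation is acceptable per the flexibilities above; log and move on.
        error_log('computation failed: ' . $e->getMessage());
    }
}
```

Running several copies of this worker would, I assume, give the parallel computations mentioned above - but I don't know if this hand-rolled approach is the right one, hence the question.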