
I'm currently working on a project that involves nested API calls. I'm facing a problem with a deadlock and I'm somewhat at a loss as to how to debug it.

Setup

In my setup I have two repositories:

  1. client-facing Laravel
  2. internal Shopware 6

The Laravel project is called via a REST API. For some API calls, Laravel internally calls Shopware to fetch customer or storefront information. These calls are made via Guzzle. This works fine in most test cases but fails if multiple Laravel requests are made at the same time.

Problem

If multiple shop-related API calls are made at the same time, Shopware no longer receives the internal calls from the Laravel project. The internal call to Shopware is postponed until the Laravel requests have failed (due to a timeout of the store request). Once the Laravel request is done, the - seemingly queued - internal request finally reaches Shopware, but at that point it's obviously too late, as the response will no longer be processed.

So the timeline looks like this:

  1. Multiple requests reach Laravel
  2. Laravel starts processing requests
  3. Laravel sends internal request to Shopware
  4. Laravel stops waiting for the response due to a timeout
  5. Laravel returns an error response
  6. Shopware starts processing the queued requests
  7. Shopware finishes the requests, but Laravel is no longer listening

Assumption

The initial batch of Laravel requests seems to take up all the available concurrent request slots. Laravel therefore seems unable to send internal requests to Shopware until the initial requests (to Laravel) have finished. This creates a deadlock: the initial requests wait for the follow-up requests, which in turn wait for the initial requests to finish and free up a slot.
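This kind of deadlock can be reproduced outside of PHP. The sketch below (Python, purely illustrative and not part of the actual stack) models the server as a fixed-size worker pool in which every outer request submits a nested request to the same pool and then waits for it with a timeout:

```python
import concurrent.futures as cf
import threading
import time

def run(pool_size, outer_requests, timeout=2.0):
    """Simulate a server with `pool_size` workers where every outer
    request submits a nested request to the same pool and waits for it."""
    pool = cf.ThreadPoolExecutor(max_workers=pool_size)
    # The barrier makes sure all outer requests are in flight before any
    # of them submits its nested request, mimicking a burst of traffic.
    barrier = threading.Barrier(outer_requests)

    def nested():
        time.sleep(0.2)  # simulated work in the nested (Shopware) request
        return "ok"

    def outer():
        barrier.wait()
        # The outer request keeps holding its worker while it waits for
        # the nested request, which needs a free worker of its own.
        try:
            return pool.submit(nested).result(timeout=timeout)
        except cf.TimeoutError:
            return "timeout"

    futures = [pool.submit(outer) for _ in range(outer_requests)]
    results = [f.result() for f in futures]
    pool.shutdown(wait=False, cancel_futures=True)
    return results

# 4 workers, 4 simultaneous outer requests: every worker is stuck
# waiting, so no nested request can start and all outer requests time out.
print(run(pool_size=4, outer_requests=4, timeout=0.5))

# More workers than simultaneous outer requests: the nested requests
# get a free slot and everything succeeds.
print(run(pool_size=10, outer_requests=4))
```

The exact numbers are arbitrary; the point is that as soon as the number of simultaneous outer requests reaches the pool size, the nested requests can only ever be queued, never executed in time.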

Thoughts

For a while I was wondering whether there might be a lock somewhere - as mentioned here - which would prevent the Shopware request from executing. But I think I can rule that scenario out, because the issue only occurs when multiple nested Shopware calls happen at the same time; with only a few requests, the request is sent as expected. Additionally, I can see that Shopware is not called at all until the Laravel requests have finished. Since not even the first line of index.php is reached, I think I can rule out locking mechanisms further down the road.

My guess is that my system reaches its limit of simultaneous requests. Because of that, the internal request is queued until an initial request finishes and a request slot is freed up. Only once a slot becomes available is the queued request processed - by then too late.

Unfortunately I'm not sure how to explicitly debug or prove this guess. I've also made a few attempts to solve the problem, but so far without success. I would therefore appreciate any tips that could point me towards a solution.
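One way to test the guess is to measure how many requests the stack actually serves in parallel: fire N slow requests simultaneously and time the batch. If the server can only process K requests at a time, the batch takes roughly ceil(N/K) times the per-request duration. The sketch below (Python, illustrative; against the real setup the URL would point at a deliberately slow Laravel endpoint) demonstrates the idea with a built-in toy server whose concurrency is capped at 2 via a semaphore, standing in for the server's worker limit:

```python
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

DELAY = 0.3           # how long each request "processes"
CONCURRENCY_CAP = 2   # stand-in for the server's worker/child limit
cap = threading.Semaphore(CONCURRENCY_CAP)

class SlowHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        with cap:              # only CONCURRENCY_CAP requests run at once
            time.sleep(DELAY)  # simulated slow page
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # keep the demo output quiet
        pass

def probe(url, n):
    """Fire n requests at once and return the wall-clock batch time."""
    threads = [threading.Thread(
        target=lambda: urllib.request.urlopen(url).read()) for _ in range(n)]
    start = time.monotonic()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.monotonic() - start

server = ThreadingHTTPServer(("127.0.0.1", 0), SlowHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

elapsed = probe(f"http://127.0.0.1:{server.server_address[1]}/", 6)
estimated = round(6 * DELAY / elapsed)
print(f"6 requests took {elapsed:.2f}s -> about {estimated} served in parallel")
server.shutdown()
```

With 6 requests, a cap of 2 and a 0.3s delay, the batch takes about 0.9s, so the estimated parallelism comes out as 2. Run against the actual stack with a known per-request duration, the same arithmetic would reveal the real concurrency limit.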

Attempts

I've tried increasing the MaxRequestWorkers count, as well as some other parameters:

<IfModule mpm_prefork_module>
  StartServers    50
  MinSpareServers 50
  MaxSpareServers 150
  MaxRequestWorkers       150
  MaxConnectionsPerChild  5000
</IfModule>

Result: Nothing changed


I've tried storing the databases in two separate MySQL setups - the Laravel database in MySQL 5 and the Shopware database in MySQL 8 - to rule out possible database locks.

Result: Nothing changed

Environment

  • macOS Ventura
  • MAMP Pro 6.6
  • Apache 2.4.46
  • MySQL 5.7.34

Related questions

  1. long poll hanging all other server requests
  2. Simultaneous Requests to PHP Script
  3. Allow clients to run multiple simultaneous PHP requests

Cheers, zZeepo


1 Answer


I've found the solution!

The issue was the FastCGI PHP_FCGI_CHILDREN configuration.

This setting controls how many child processes the FastCGI PHP process spawns. When FastCGI starts, it creates a number of child processes, each of which handles one page request at a time. A value of 0 means that PHP will not start additional processes; the main process handles FastCGI requests by itself. Note that this process may die (because of PHP_FCGI_MAX_REQUESTS) and will not be respawned automatically. A value of 1 or above makes PHP start additional processes to handle requests, and the main process will restart children if they die.

So by default, only 1 concurrent PHP page request can be handled; further requests are queued. Increasing this number allows for better concurrency, especially for pages that take significant time to generate or that deliver a lot of data (e.g. downloading huge files via PHP). On the other hand, more processes use more RAM, and letting too many PHP pages be generated concurrently means each individual request will be slow.

This parameter was set to 4. So as soon as 4 simultaneous requests were sent to Laravel, no further child process could be spawned, resulting in the previously described deadlock.

The value was defined in /Applications/MAMP/fcgi-bin/php8.1.0.fcgi

#!/bin/sh
export PHP_FCGI_CHILDREN=4
export PHP_FCGI_MAX_REQUESTS=200
exec /Applications/MAMP/bin/php/php8.1.0/bin/php-cgi -c "/Library/Application Support/appsolute/MAMP PRO/conf/php8.1.0.ini"

I have increased the value to PHP_FCGI_CHILDREN=10, which should be enough for my development environment, and this resolved my issue.
