
I am using a Laravel job to schedule web service calls, and a single request works just fine: there is no problem connecting to the host, nor any other issue, and the web service communication is OK.

I initialize the SoapClient in WSDL mode, with something like

$soapClient = new \SoapClient($wsdl, [
    'trace' => 1,
    'features' => SOAP_SINGLE_ELEMENT_ARRAYS,
    'keep_alive' => true,
    'compression' => SOAP_COMPRESSION_ACCEPT | SOAP_COMPRESSION_GZIP,
    'cache_wsdl' => WSDL_CACHE_MEMORY
]);
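For context, the client is created and used inside a queued job, roughly like this. This is a minimal sketch: the class name, WSDL URL, payload shape, and SOAP operation name are placeholders, not the actual code from the question.

```php
<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;

class CallSoapService implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    /** @var array Data for the SOAP call (shape is illustrative). */
    private $payload;

    public function __construct(array $payload)
    {
        $this->payload = $payload;
    }

    public function handle()
    {
        // Placeholder WSDL URL, not the real endpoint from the question.
        $soapClient = new \SoapClient('https://example.com/service?wsdl', [
            'trace'       => 1,
            'features'    => SOAP_SINGLE_ELEMENT_ARRAYS,
            'keep_alive'  => true,
            'compression' => SOAP_COMPRESSION_ACCEPT | SOAP_COMPRESSION_GZIP,
            'cache_wsdl'  => WSDL_CACHE_MEMORY,
        ]);

        // 'SomeOperation' stands in for the real WSDL operation.
        $soapClient->__soapCall('SomeOperation', [$this->payload]);
    }
}
```

Jobs are then dispatched onto the worker's queue with `CallSoapService::dispatch($data)->onQueue('soapQueue');`.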

Once the queue started to get serious traffic, I enabled queue processing with

php artisan queue:work --queue=soapQueue

As expected, requests are spooled and processed at a pretty decent rate, roughly 6 per second. Not bad, considering that each SOAP call takes about 150 ms: out of every second, 900 ms go to the web service and only 100 ms to queue processing, an average of about 16 ms of overhead per request.

After a while (less than a minute) something changes: every web service call fails with this exception:

[2019-03-29 09:50:29] local.ERROR: Could not connect to host  
[2019-03-29 09:50:29] local.ERROR: #0 [internal function]: SoapClient->__doRequest('<?xml version="...', 'https://ws.host...', 'PAR_ServiceCall...', 1, 0)

At first I suspected I was hitting the host too hard and had been temporarily banned, but that wasn't the case: if I quit the process and restart it, it immediately starts processing messages again.

Moreover, if I use queue:listen instead of queue:work (meaning the Laravel environment is reloaded for each job, as explained in What is the difference between queue:work and queue:listen), this does not happen, evidently because of the environment reload.

Using queue:listen, however, performance degrades significantly, from 6 messages per second to 3: with the same 150 ms average per call (450 ms in total), the queuing process takes the remaining 550 ms, roughly 183 ms per call, about 11 times the previous overhead.
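The overhead arithmetic above can be checked with a few lines; this is a throwaway sketch, not part of the application, and the function name is made up for illustration:

```php
<?php

// Per-request queue overhead: out of each second, subtract the time spent
// in SOAP calls, then divide the remainder by the number of requests.
function queueOverheadPerRequestMs(int $requestsPerSecond, int $soapCallMs): float
{
    $soapTotalMs = $requestsPerSecond * $soapCallMs; // time spent in SOAP calls
    $overheadMs  = 1000 - $soapTotalMs;              // remainder of each second
    return $overheadMs / $requestsPerSecond;         // overhead per request
}

printf("queue:work   : %.1f ms/request\n", queueOverheadPerRequestMs(6, 150)); // ~16.7
printf("queue:listen : %.1f ms/request\n", queueOverheadPerRequestMs(3, 150)); // ~183.3
```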

That makes perfect sense, but I was wondering whether there is a way to prevent this error with SoapClient.

Ing. Luca Stucchi
    You can try to check some additional debugging info on your Soap requests ([for example, as suggested here](https://stackoverflow.com/a/50847927/9348748)). But I do feel the problem is not with SoapClient but rather with Laravel. – d3jn Mar 29 '19 at 10:45
  • I would also think that is a client issue but the restart queue issue is very weird, there must be something cached that clears or changes when restarting the process but honestly I have no idea what could it be. – namelivia Mar 29 '19 at 10:48
  • Thanks @d3jn , I am logging the request and the response, and when the error is thrown, there is just no response (it really can't connect to host) while the request is 100% the same as before. WSDL cache works but doesn't help, meaning that if I disable it, it loads the WSDL each time. I am running the process in a docker container, I'd like to debug it but it seems a little overkill, for the moment I will not do it. Thanks for the link ! – Ing. Luca Stucchi Mar 29 '19 at 11:00
  • Thanks @namelivia, it's obviously some setting that is kept by the Laravel environment, the only problem is finding out which one. Restarting the queue (killing the process and re-launching a queue:work) does the trick, since it's doing manually what queue:listen does automatically, so I am not surprised about that. – Ing. Luca Stucchi Mar 29 '19 at 11:04

1 Answer


After A LOT of attempts, I stumbled upon a solution that prevents this behavior.

The problem was the SoapClient keep_alive option.

By creating the SoapClient with the keep_alive flag set to false

$soapClient = new \SoapClient($wsdl,[
    'trace' => 1,
    'features' => SOAP_SINGLE_ELEMENT_ARRAYS,
    'keep_alive' => false,
    'compression' => SOAP_COMPRESSION_ACCEPT | SOAP_COMPRESSION_GZIP,
    'cache_wsdl' => WSDL_CACHE_MEMORY
]);

you prevent it from establishing a keep-alive connection, so each call will open a brand-new connection to the web service.

This may not be optimal, but in my long-running script context it prevents the weird error I was seeing, and after testing it for hours the error never came back.

Ing. Luca Stucchi