
I am using Symfony to talk to the Office365 mail server by calling a Symfony command from a cronned process (every minute).

Apparently, something in that communication occasionally gets stuck and my PHP script keeps running - even after the next scheduled call starts - so I end up with 2, then 3, then 4 scripts running in parallel.

The question is: how can I reliably limit the total duration of this Symfony command to a maximum of 1 minute?

I'm saying "total duration" because I did try setting `max_execution_time`, i.e. `set_time_limit(60)`, but apparently this setting doesn't count time spent in external calls - so any wait for the MX server to respond isn't included.

I also thought of trying `max_input_time`, but that didn't work in my case, for two reasons: 1. apparently the Symfony console overwrites my regular php.ini value with -1, and 2. I cannot change this setting from inside the Symfony script - whatever I set with `ini_set("max_input_time", XX)`, it stays at the "infinite" value of -1. Thank you!
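
To illustrate, this is roughly what I tried inside the command (simplified; the real command then goes on to do the IMAP work):

```php
// At the top of the command's execute() method (simplified):
set_time_limit(60);                   // counts only the script's own execution time, not network waits

ini_set('max_input_time', '60');      // attempt to change the input-time limit - has no effect here
var_dump(ini_get('max_input_time'));  // in my case this still prints "-1"
```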

userfuser
  • How are you doing the call to the mail server? Depending on that, you could configure a timeout in your client service (the caller) – yceruto Jan 18 '23 at 16:10
  • can you use curl? if so, you can use `curl_setopt` `CURLOPT_TIMEOUT`. very reliable – bluepinto Jan 18 '23 at 18:03
  • *"...occasionally something in that communication gets stuck"*, That is the root cause, you should try and find out what is happening, then you can have your command handle it. You could try adding the [Logger](https://symfony.com/doc/current/logging.html) to your command to narrow down the issue. – Bossman Jan 18 '23 at 22:12
  • @Bossman probably offers the best advice for now - I need to find out first what exactly happens, although it might take a week or two to catch it. I will add some logging and wait. BTW, the call to Office365 is done through `weblex/php-imap` library, which talks to it over IMAP protocol. That library does have a default connection_timeout of 30s, but I also have to figure out what is that timeout actually counting (just opening the connection or whole communication in total). I'll report the findings – userfuser Jan 20 '23 at 01:41
  • Just a short update, for now. I have managed to stop the indefinite runs from recurring by running the Symfony command through Linux's `timeout` command, like this: `timeout 55 -k 4 php bin/console somesymfony:command`. That effectively stops the script, no matter what it is doing at that point. The `55` in my example tries to stop the script after 55s, while the `-k 4` actually kills the script after an additional 4s, if the first try doesn't work. In my case the simple `timeout 55` did not work; I had to use the `-k` param as well. Now I can look for the actual issue in peace. – userfuser Feb 08 '23 at 16:30
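
For reference, a crontab sketch of the workaround described in the last comment. The PHP binary path, project path, log file and command name are placeholders, and note that GNU `timeout` documents its options before the duration, i.e. `timeout -k 4 55 ...`:

```
# Every minute: ask the command to stop after 55s (SIGTERM),
# then kill it 4s later (SIGKILL) if it is still alive.
* * * * * timeout -k 4 55 /usr/bin/php /path/to/project/bin/console somesymfony:command >> /var/log/somesymfony.log 2>&1
```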

1 Answer


> but apparently this setting doesn't count time spent in external calls

Indeed, as per https://www.php.net/manual/en/function.set-time-limit.php:

> The set_time_limit() function and the configuration directive max_execution_time only affect the execution time of the script itself. Any time spent on activity that happens outside the execution of the script such as system calls using system(), stream operations, database queries, etc. is not included when determining the maximum time that the script has been running. This is not true on Windows where the measured time is real.

How about setting a timeout on the call to the API instead? If you use Guzzle, see https://docs.guzzlephp.org/en/stable/request-options.html#timeout:

```php
// Timeout if a server does not return a response in 3.14 seconds.
$client->request('GET', '/delay/5', ['timeout' => 3.14]);
```

Then make sure that the limit you set with set_time_limit() plus the client timeout stays below the time interval cron reserves for running the script.
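
For illustration only, a minimal sketch of such a command. The class name, command name, endpoint and the concrete numbers are placeholders, and a plain Guzzle HTTP call stands in for the IMAP call the asker is actually making:

```php
<?php
// Illustrative sketch only; names, endpoint and numbers are placeholders.

namespace App\Command;

use GuzzleHttp\Client;
use Symfony\Component\Console\Attribute\AsCommand;
use Symfony\Component\Console\Command\Command;
use Symfony\Component\Console\Input\InputInterface;
use Symfony\Component\Console\Output\OutputInterface;

#[AsCommand(name: 'app:fetch-mail')]
class FetchMailCommand extends Command
{
    protected function execute(InputInterface $input, OutputInterface $output): int
    {
        // Caps only the script's own execution time; network waits are not counted (on Linux).
        set_time_limit(20);

        // Network waits are capped by the HTTP client instead.
        $client = new Client([
            'base_uri'        => 'https://mail-api.example.test',
            'connect_timeout' => 5,   // seconds allowed to establish the connection
            'timeout'         => 30,  // seconds allowed for the whole request
        ]);

        $response = $client->request('GET', '/messages');
        $output->writeln('Status: ' . $response->getStatusCode());

        // 20s of script time + a 30s cap per request stays below the
        // 60-second cron interval, as recommended above.
        return Command::SUCCESS;
    }
}
```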

iloo