13

Scenario:

  1. Shared hosting, so no ability to install new extensions, and no cron.
  2. A submitted request needs to perform some heavy processing.
  3. I want the response to reach the client as fast as possible, with the heavy lifting continuing immediately afterwards, without blocking the client.
  4. It can run on a new thread (if that is possible); starting a new process is also fine.

What is the best way to do this?

DaveRandom
Itay Moav -Malimovka

5 Answers

12

On *nix:

exec('/path/to/executable > /dev/null 2>&1 &');

On Windows:

$WshShell = new COM('WScript.Shell'); 
$oExec = $WshShell->Run('C:\path\to\executable.exe', 0, false);

Both of these will spawn a new process that runs asynchronously, completely disconnected from the parent, as long as your host allows you to do it.
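Both approaches can be folded into one helper. A minimal sketch, assuming `exec` (and, on Windows, the `com_dotnet` extension) is available on the host; the function name and the executable path are illustrative:

```php
<?php
// Sketch: launch a command in the background, detached from this process.
function run_in_background(string $command): void
{
    if (strtoupper(substr(PHP_OS, 0, 3)) === 'WIN') {
        // Windows: 0 hides the window, false means "do not wait for exit".
        $shell = new COM('WScript.Shell');
        $shell->Run($command, 0, false);
    } else {
        // *nix: discard output and append `&` so exec() returns immediately.
        exec($command . ' > /dev/null 2>&1 &');
    }
}

run_in_background('/path/to/executable');
```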

DaveRandom
  • can I change the /dev/null to a log file of my choosing? – Itay Moav -Malimovka Dec 17 '11 at 22:46
  • I think so, as long as you redirect STDOUT *and* STDERR to somewhere that isn't the parent process, and suffix with `&`, it should work. TBH, I have always just `/dev/null`ed it, so really you'll just have to suck it and see... – DaveRandom Dec 17 '11 at 22:48
  • Start an arbitrary external process on shared hosting? Good luck with that. – JB Nizet Dec 17 '11 at 22:59
  • @JBNizet I can start external PHP scripts on mine doing exactly as above, and they will actually run I think forever - I had one running for over a week as an experiment once. – DaveRandom Dec 17 '11 at 23:04
  • Wow. You're much luckier than me then :-) – JB Nizet Dec 17 '11 at 23:06
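Following up on the log-file question in the comments above: yes, both streams can be appended to a file of your choosing instead of `/dev/null` (the paths here are illustrative):

```php
<?php
// Same fire-and-forget pattern, but keep the child's output for later inspection.
exec('/path/to/executable >> /var/log/executable.log 2>&1 &');
```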
3

You can search for: php continue processing after closing connection.

The following techniques relate to your problem.

You can use the calls below to keep executing even if the user aborts:

ignore_user_abort(true);
set_time_limit(0);

You can use fastcgi_finish_request to flush the response and close the connection to the client; your script will then continue to be executed.

An example:

// redirecting...
ignore_user_abort(true);
set_time_limit(0);
header("Location: ".$redirectUrl, true);
header("Connection: close", true);
header("Content-Length: 0", true);
ob_end_flush();
flush();
fastcgi_finish_request(); // important when using php-fpm!

sleep(5); // The user won't feel this sleep because they will already be away

// do some work after user has been redirected
Nguyễn Văn Vinh
  • Welcome to SO. You should review the formatting of your answer. As of now, it seems that you put everything in a code block, while there is part of your answer that is not code. – Laf Dec 20 '12 at 13:37
3

To complement @DaveRandom's answer: you don't actually need to redirect STDERR to STDOUT (with 2>&1).

You do need to redirect STDOUT, though, if you want to prevent the parent process from hanging while it waits for the child to finish. This is required because exec will return the last line of the child process's output, so it needs to wait for the child's STDOUT to be closed.

That doesn't mean you need to redirect it to /dev/null. You can redirect it to some other file, or even to another file descriptor (like STDERR: 1>&2).

  • exec('/path/to/executable'): will start a new process and wait for it to finish (i.e. blocking the parent process).
  • exec('/path/to/executable &'): basically the same as the above.
  • $pid = exec('/path/to/executable > /dev/null & echo $!'): will start a process in the background, with child and parent processes running in parallel. The output of /path/to/executable will be discarded, and $pid will receive the PID of the child process.

Using 2>&1 is actually not necessary, because exec ignores the STDERR of the child process. It is also probably undesirable, because it makes some errors harder to find: they are silently thrown away. If you omit 2>&1, you can redirect the STDERR of the parent process to a log file that can be checked later when something goes wrong:

php /path/to/script.php 2>> /var/log/script_error.log

By using the above to start the script which triggers the child processes, everything that script.php and any child process write to STDERR will be written to the log file.
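Putting the above together, a sketch that starts a worker in the background, logs its STDERR, and captures its PID (`worker.php` and the log path are illustrative):

```php
<?php
// Start the worker: discard STDOUT, append STDERR to a log, and echo the
// child's PID (`$!`) so exec() returns it immediately.
$cmd = 'php /path/to/worker.php';
$pid = (int) exec($cmd . ' > /dev/null 2>> /var/log/script_error.log & echo $!');

// On Linux, the PID can be stored and used later to check on the worker:
$stillRunning = $pid > 0 && file_exists("/proc/$pid");
```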

Thiago Barcala
1

There are no threads in PHP. You could cheat by sending back an HTML page that triggers an Ajax call to start the heavy process in a new request. But if it's shared hosting, my guess is that you'll quickly hit the limits on memory, time or CPU usage imposed by your hosting provider.
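The Ajax workaround can be sketched like this, assuming a hypothetical heavy_worker.php that does the actual work (the heavy request is still bound by the host's per-request memory and time limits, as noted above):

```php
<?php
// Answer the user immediately; the page then fires a separate,
// fire-and-forget request that triggers the heavy processing.
$html = <<<HTML
<p>Your request was received.</p>
<script>
  // the user never waits on heavy_worker.php
  var xhr = new XMLHttpRequest();
  xhr.open("GET", "heavy_worker.php");
  xhr.send();
</script>
HTML;
echo $html;
```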

JB Nizet
  • [Evidently, there are threads in PHP now](http://docs.php.net/manual/en/class.thread.php) but you may have to [install them](http://docs.php.net/manual/en/pthreads.installation.php). However, I only just learned this tonight, so I haven't tried it yet. – Volomike Mar 18 '15 at 22:58
-1
$WshShell = new COM('WScript.Shell');
$oExec    = $WshShell->Run('C:\xampp\php\php.exe C:\xampp\htdocs\test.php -a asdf', 0, true);

This cannot pass argv to test.php:

var_dump($argv);
j0k
mike