35

Is there a way in PHP to close the connection (essentially telling the browser that there's no more data to come) but continue processing? The specific circumstance I'm thinking of is serving cached data: if the cache has expired, I would still serve the stale cached data for a fast response, close the connection, and then continue processing to regenerate and cache fresh data. Essentially the only purpose is to make the site appear more responsive, since users wouldn't hit the occasional delay while content is regenerated.

UPDATE:

PLuS has the closest answer to what I was looking for. To clarify for a couple of people, I'm looking for something that enables the following steps (rough sketch after the list):

  1. User requests page
  2. Connection opens to server
  3. PHP checks whether the cache has expired; if it's still fresh, serve the cache and close the connection (END HERE). If expired, continue to 4.
  4. Serve expired cache
  5. Close the connection so the browser knows it's not waiting for more data.
  6. PHP regenerates fresh data and caches it.
  7. PHP shuts down.
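
Roughly, in pseudo-PHP (readCache(), cacheIsFresh(), regenerateCache() and closeConnection() are hypothetical placeholders, the last one being the part I'm asking about):

<?php
echo readCache($page);        // steps 3/4: send the cached (possibly stale) content
if (cacheIsFresh($page)) {
    exit;                     // step 3: cache was still fresh, we're done
}
closeConnection();            // step 5: tell the browser there's no more data
regenerateCache($page);       // step 6: rebuild and store fresh content
// step 7: script ends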

UPDATE:

This is important, it must be a purely PHP solution. Installing other software is not an option.

Endophage
  • Closing connections means HTTP or MySQL? – powtac Jan 26 '11 at 16:04
  • @powtac HTTP. I can always re-open a MySQL connection. I would like a script to continue running (for a short and finite time) after the HTTP connection is closed. – Endophage Jan 26 '11 at 16:39

8 Answers

38

If running under FastCGI you can use the very nifty:

fastcgi_finish_request();

http://php.net/manual/en/function.fastcgi-finish-request.php

More detailed information is available in a duplicate answer.
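
A minimal sketch of how it fits the flow in the question (readCache(), buildPage() and writeCache() are hypothetical cache helpers, not PHP functions):

<?php
echo readCache('home');            // serve whatever is cached right now
fastcgi_finish_request();          // flush the response and close the connection (PHP-FPM)

// The browser already has the full response; this still runs in the same worker.
writeCache('home', buildPage('home'));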

Adam Jimenez
22

I finally found a solution (thanks to Google; I just had to keep trying different combinations of search terms). Credit goes to the comment from arr1 on this page (it's about two thirds of the way down).

<?php
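// Idea: discard anything already buffered, promise the browser an exact
// Content-Length plus "Connection: close", flush it all out, and keep
// running after the client has stopped listening (ignore_user_abort).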
ob_end_clean();
header("Connection: close");
ignore_user_abort(true);
ob_start();
echo 'Text the user will see';
$size = ob_get_length();
header("Content-Length: $size");
ob_end_flush(); // All output buffers must be flushed here
flush();        // Force output to client
// Do processing here 
sleep(30);
echo('Text user will never see');

I have yet to actually test this, but, in short, you send two headers: one that tells the browser exactly how much data to expect, and one that tells it to close the connection (which it will only do after receiving the expected amount of content).

Endophage
  • Doesn't work for me. The user is still forced to wait during the sleep. – Adam Jimenez Sep 06 '11 at 10:02
  • @Adam We've found a number of things can screw the system up. If your content length is wrong (including being too large), it seems to break. I've only had it work with CSS and JS files. It is possible your hosting/server may be adding extra content for tracking and removing some of the headers required for this to work. I'm assuming the mime type is checked so CSS and JS files aren't modified, hence why this system works for them. – Endophage Sep 06 '11 at 17:42
  • I'm running under php-fpm and discovered I can do: fastcgi_finish_request(); – Adam Jimenez Sep 13 '11 at 13:05
  • @Adam awesome, great to know! – Endophage Sep 14 '11 at 21:30
  • Confirmed, under php-fpm the suggested example does not work. Instead, just use fastcgi_finish_request() – beetree Jul 02 '16 at 15:11
8

You can do that by setting the time limit to unlimited and ignoring user abort:

<?php
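// Keep running even if the client disconnects, and lift the execution time
// limit so the post-response work isn't killed by max_execution_time.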
ignore_user_abort(true);
set_time_limit(0);

see also: http://www.php.net/manual/en/features.connection-handling.php

PLuS
  • Most of the time set_time_limit(0) is not available, at least on shared hosting. Another side note: shared hosting providers don't like users holding up PHP processes for a long time. – Alfred Jan 26 '11 at 16:07
  • You are right, but I think this is the only way to do what Endophage is looking for – PLuS Jan 26 '11 at 16:08
  • +1, but it's also worth mentioning that usually PHP ignores user abort. – Mano Kovacs Jan 26 '11 at 16:23
  • @Alfred I'm not on shared hosting but there are still some limitations. I know I can set_time_limit(0), and the caching is reasonably quick, half a second or so to generate and cache; I'm just trying to avoid that half second happening before the user gets the page. – Endophage Jan 26 '11 at 16:37
  • @PLuS if ignore_user_abort works for me then that's exactly what I'm looking for. I'll have to test it. – Endophage Jan 26 '11 at 16:38
  • @Endophage when you have something like a VPS you should really read my [post](http://stackoverflow.com/questions/4806637/continue-processing-after-closing-connection/4807174#4807174) and compile redis. – Alfred Jan 26 '11 at 16:40
  • @PLuS Actually, I don't think that's quite what I want. I need the server to send a notice to the browser to say there is no more data, not just to continue running if the user stops/exits. – Endophage Jan 26 '11 at 16:55
  • @PLuS I was led to that same connection-handling page from another source and in the comments there is a solution to my question. I've posted the answer here, you might be interested. – Endophage Jan 31 '11 at 21:31
1

As far as I know, unless you're running FastCGI, you can't drop the connection and continue execution (unless you got Endophage's answer to work, which I didn't manage to). So you can:

  1. Use cron or anything like that to schedule this kind of tasks
  2. Use a child process to finish the job

But it gets worse. Even if you spawn a child process with proc_open(), PHP will wait for it to finish before closing the connection, even after calling exit(), die(), or some_undefined_function_causing_fatal_error(). The only workaround I found is to spawn a child process that itself spawns a child process, like this:

function doInBackground ($_variables, $_code)
{
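    // Run `php -r` with code that pcntl_fork()s immediately: the forked
    // parent exits right away, so proc_open()'s direct child terminates and
    // the request isn't held open, while the orphaned grandchild keeps working.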
    proc_open (
        'php -r ' .     
            escapeshellarg ("if (pcntl_fork() === 0) { extract (unserialize (\$argv [1])); $_code }") .
            ' ' . escapeshellarg (serialize ($_variables)),
        array(), $pipes 
    );
}

$message = 'Hello world!';
$filename = tempnam (sys_get_temp_dir(), 'php_test_workaround');
$delay = 10;

doInBackground (compact ('message', 'filename', 'delay'), <<< 'THE_NOWDOC_STRING'
    // Your actual code goes here:
    sleep ($delay);
    file_put_contents ($filename, $message);
THE_NOWDOC_STRING
);
Septagram
1

PHP doesn't have that kind of persistence (by default). The only way I can think of is to run cron jobs to pre-fill the cache.

R. van Twisk
1

If you can compile and run programs from the PHP CLI (not on shared hosting; i.e. a VPS or better)

Caching

For caching I would not do it that way. I would use redis as my LRU cache. It is going to be very fast (benchmarks), especially when you compile it with the PHP client library written in C.
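
As a sketch, with the phpredis extension, caching a rendered page could look something like this (renderPage() is a hypothetical page builder):

<?php
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$html = $redis->get('page:home');
if ($html === false) {
    $html = renderPage('home');              // hypothetical: build the page
    $redis->setex('page:home', 300, $html);  // cache it for 5 minutes
}
echo $html;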

Offline processing

If you install the beanstalkd message queue you can also do delayed puts. But I would use redis BRPOP/RPUSH for the message-queuing part, because redis is going to be faster, especially if you use the PHP client library written in C.
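
A minimal producer/worker sketch of that queuing idea with phpredis (regeneratePage() is hypothetical; the worker would be a separate long-running CLI script):

<?php
// Web request: enqueue the job and return immediately.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);
$redis->rPush('jobs', json_encode(array('page' => 'home')));

// CLI worker: block until a job arrives, then process it.
while (true) {
    $job = $redis->brPop(array('jobs'), 0);          // returns array(queue, payload)
    if ($job) {
        regeneratePage(json_decode($job[1], true));  // hypothetical
    }
}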

If you can NOT compile or run programs from the PHP CLI (shared hosting)

set_time_limit

Most of the time set_time_limit is not available to set to 0 (because of safe mode or the max_execution_time directive), at least on shared hosting. Also, shared hosting providers really don't like users holding up PHP processes for a long time. Most of the time the default limit is set to 30 seconds.

Cron

Use cron to write data to disk using Cache_Lite. There are Stack Overflow topics that already explain this.

Also rather easy, but still hacky. I think you should upgrade (to at least a VPS) when you have to do such hacking.
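
A sketch of such a cron script, from memory of the PEAR Cache_Lite API (pagesToWarm() and renderPage() are hypothetical helpers):

<?php
// warm-cache.php, run from cron, e.g.: */5 * * * * php /path/to/warm-cache.php
require_once 'Cache/Lite.php';

$cache = new Cache_Lite(array('cacheDir' => '/tmp/cache/', 'lifeTime' => 300));
foreach (pagesToWarm() as $id) {          // hypothetical list of page ids
    $cache->save(renderPage($id), $id);   // hypothetical renderer
}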

Asynchronous request

As a last resort you could do an asynchronous request, caching the data using Cache_Lite for example. Be aware that shared hosting providers do not like you holding up a lot of long-running PHP processes. I would use only one background process, which calls another one when it gets near the max-execution-time directive: note the time when the script starts, measure the time spent between a couple of cache calls, and when it gets near the limit fire another asynchronous request. I would use locking to make sure only one process is running (a rough sketch follows). This way I will not piss off the provider and it can be done. On the other hand, I don't think I would write any of this, because it is kind of hacky if you ask me. When I got to that scale I would upgrade to a VPS.
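
A rough sketch of that pattern (everything here is hypothetical glue code, not a library API):

<?php
// One background worker at a time, re-triggering itself via an asynchronous
// HTTP request shortly before max_execution_time would kill it.
$lock = fopen(sys_get_temp_dir() . '/cache-worker.lock', 'c');
if (!$lock || !flock($lock, LOCK_EX | LOCK_NB)) {
    exit;                                   // another worker is already running
}

ignore_user_abort(true);
$started = time();
$budget  = (int) ini_get('max_execution_time');
if ($budget <= 0) { $budget = 30; }

while ($id = nextStalePage()) {             // hypothetical job source
    refreshCache($id);                      // hypothetical regeneration
    if (time() - $started > $budget - 5) {  // stop ~5 seconds before the limit
        // Fire-and-forget request to this same script so a fresh process
        // picks up where this one left off.
        $fp = @fsockopen('127.0.0.1', 80, $errno, $errstr, 1);
        if ($fp) {
            fwrite($fp, "GET /cache-worker.php HTTP/1.0\r\nHost: example.com\r\n\r\n");
            fclose($fp);
        }
        break;
    }
}
flock($lock, LOCK_UN);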

Alfred
  • Installing new software is not an option; it has to be a purely PHP solution. Cron jobs are possible, but using cron jobs to regenerate the cache really isn't. There are tens of thousands of pages, and I want the cache to be dynamic in that it only caches what is actually being actively viewed, which is easily done by doing it as part of the request. – Endophage Jan 26 '11 at 16:45
  • Then you should read the "Asynchronous request" part, but I think you should upgrade if you have performance problems. – Alfred Jan 26 '11 at 16:48
0

If you are doing this to cache content, you may instead want to consider using an existing caching solution such as memcached.
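
For example, with the Memcached extension and a memcached server on localhost (renderPage() is a hypothetical page builder):

<?php
$mc = new Memcached();
$mc->addServer('127.0.0.1', 11211);

$html = $mc->get('page:home');
if ($html === false) {
    $html = renderPage('home');         // hypothetical: build the page
    $mc->set('page:home', $html, 300);  // cache it for 5 minutes
}
echo $html;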

Justin Ethier
-3

No. As far as the webserver is concerned, the request from the browser is handled by the PHP engine, and that's that. The request lasts as long as the PHP script does.

You might be able to fork() though.
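
A sketch of that idea; note that pcntl_fork() is normally only available in the CLI SAPI, so under mod_php or PHP-FPM this generally isn't an option:

<?php
$pid = pcntl_fork();
if ($pid === 0) {
    // Child: do the slow cache regeneration here, then exit.
    regenerateCache('home');  // hypothetical
    exit(0);
}
// Parent: finish and send the response as normal.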

Lightness Races in Orbit