<?php
for($i=0;$i<20;$i++)
{
    echo 'printing...<br />';
    ob_flush();
    flush();

    usleep(300000);
}

?>

URL that contains the code: http://domainsoutlook.com/sandbox/delayed.php

I have a dedicated server so I can make the changes. I am running Apache, with nginx as the proxy server.
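When debugging this, it helps to take the browser out of the equation: curl's own output buffering can be turned off with `-N`, so if the output still arrives in one block, the buffering is server-side. A sketch against the URL from the question (which may no longer be online, hence the `|| true`):

```shell
# -N / --no-buffer: print response data as soon as it arrives,
# making any server-side flushing (or lack of it) visible.
curl -N http://domainsoutlook.com/sandbox/delayed.php || true
```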

Charles
Speedy Wap

13 Answers

47

Here's what I found out:

Flush does not work under Apache's mod_gzip or nginx's gzip because, logically, the server is gzipping the content, and to do that it must buffer the content first. Any sort of web-server gzipping will affect this. In short, on the server side we need to disable gzip and decrease the FastCGI buffer size. So:

  • In php.ini:

    output_buffering = Off

    zlib.output_compression = Off

  • In nginx.conf:

    gzip off;

    proxy_buffering off;
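If disabling gzip and buffering for the whole server is too broad, the same directives can be scoped to a single location; nginx should also honor an `X-Accel-Buffering: no` response header for individual responses. A minimal sketch (the /sandbox/ path is just the one from the question):

```nginx
# Disable compression and proxy buffering only for the streaming endpoint
location /sandbox/ {
    gzip off;
    proxy_buffering off;
}
```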

Also keep these lines at hand, especially if you don't have access to php.ini:

@ini_set('zlib.output_compression', 0);

@ini_set('implicit_flush', 1);

@ob_end_clean();

set_time_limit(0);
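Since hosts can lock these settings down at the server level, it's worth verifying at runtime that the overrides actually took effect; a sketch:

```php
<?php
// Apply the runtime overrides from the list above, then verify them,
// since some hosts lock these settings at the server level.
@ini_set('zlib.output_compression', 0);
@ini_set('implicit_flush', 1);
set_time_limit(0);

var_dump(ini_get('zlib.output_compression')); // expect "0" or ""
var_dump(ini_get('implicit_flush'));          // expect "1"
```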

Last, if you have them, comment out the lines below:

ob_start('ob_gzhandler');

ob_flush();

PHP test code:

ob_implicit_flush(1);

for($i=0; $i<10; $i++){
    echo $i;

    // pad the output so the buffer reaches the minimum size needed to flush
    echo str_repeat(' ', 1024 * 64);

    sleep(1);
}
Roger
    The last thing "echoing 64k **senseless** whitespace" did the trick for me. Thank you Roger. – humanityANDpeace Mar 26 '13 at 21:01
  • . proxy_buffering off; was the problem in my config. Thanks! – Noam Apr 06 '15 at 15:50
  • I've done all these but not working on live centos7 apache with php 5.5. Working smooth locally even with output_buffering=4096 – Azghanvi Aug 31 '17 at 04:20
  • Using php7.2fpm the @ob_end_clean was what finally did it for me. – Jeff Beagley Oct 03 '18 at 17:19
  • I needed 5 hours to find the problem why the SSE flush was not working... `this is for the buffer achieve the minimum size in order to flush data`. Oh what the hack! ... And there is not yet a way around it, as it seems: https://stackoverflow.com/q/57207571/1066234 – Avatar Feb 18 '22 at 09:27
  • Note: `echo str_repeat(' ', 4096);` seems to be sufficient. https://www.php.net/manual/en/function.flush.php#51679 – Avatar Feb 18 '22 at 09:30
  • All methods seem to have a problem with Cloudflare - when you flush from server, it only flushes to Cloudflare. -- At least that's my theory. Any ideas? – Jomar Sevillejo Feb 19 '22 at 08:53
13

You're using ob_flush() without ob_start(), so there is nothing for it to flush.

It also depends on the web server, the proxy, and their settings.

You should disable buffering for nginx (add proxy_buffering off; to the config file and restart nginx).

Also, check if your php.ini contains output_buffering = Off and zlib.output_compression = Off.
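To illustrate the first point, a minimal sketch of the pairing (the ob_get_level() guard avoids opening a second buffer if one is already active):

```php
<?php
// ob_flush() only has something to flush when an output buffer is open,
// so open one first (or drop ob_flush() and call flush() alone).
if (ob_get_level() == 0) {
    ob_start();
}
echo 'printing...<br />';
ob_flush(); // move PHP's buffer contents to the SAPI layer
flush();    // ask the SAPI / web server to push them to the client
```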

Eje
schnaader
12

Main PHP file:

<?php
header('Content-Type: text/html; charset=utf-8');
header('Content-Encoding: none'); // disable server-side compression for this response
session_start();
ob_end_flush();
ob_start();
set_time_limit(0);
error_reporting(0);

// ... your setup code ...

foreach ($items as $i => $item) { // $items: whatever you iterate over
    // ... per-item work ...
    echo "<br>>>>" . $i . "<<<br>";
    ob_flush();
    flush(); // IE needs this
}
?>

It works.

Emin Kadıoğlu
  • The Content-Encoding was what worked for me on nginx - without having to turn it off for the whole server. Thanks Emin! – Redzarf Jun 27 '13 at 10:13
  • Setting `Content-Encoding` finally made flushing work on Firefox, but somehow Chrome was still expecting a compressed response, so it threw an `ERR_CONTENT_DECODING_FAILED` error. – Jelle De Loecker Dec 13 '17 at 10:35
5

You have to fill the buffer so that it can be flushed to the browser. Use this after each echo:

echo str_pad('',4096)."\n";

Complete code:

<?php
     if (ob_get_level() == 0) ob_start();

     for( $i=0 ; $i<20 ; $i++) {
        echo 'printing...<br />';
        echo str_pad('',4096)."\n";

        ob_flush();
        flush();

        usleep(300000);
     }
     ob_end_flush();
?>
Ankit Sharma
  • Awesome! Such a simple thing to solve my problem, but I haven't found this solution or requirement anywhere else. Thanks! – Cosworth66 Oct 17 '20 at 02:56
4

In php.ini:

output_buffering = Off
zlib.output_compression = Off

In nginx.conf:

fastcgi_keep_conn on; # < solution
proxy_buffering off;
gzip off;
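One caveat worth noting: proxy_buffering only applies to proxy_pass locations. When nginx talks to PHP-FPM via fastcgi_pass, the analogous directive (available since nginx 1.5.6) is fastcgi_buffering:

```nginx
fastcgi_buffering off;  # FastCGI counterpart of proxy_buffering off
```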
Eje
3

From what I've read, this seems to be a really hard problem to solve, and the only (dirty) way I found is to write something useless to the output to fill the various buffers.

  • Without SSL
    • Without output_buffering, flush is needed.
      nginx buffers can be lowered down to the size of the PHP headers
    • With output_buffering, ob_flush needs to be added to get the same behavior as above
  • With SSL
    • There is another buffer for SSL, and NGX_SSL_BUFSIZE is fixed at nginx compile time
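Since nginx 1.5.9 the SSL buffer no longer has to be fixed at compile time; it can be lowered per server block with the ssl_buffer_size directive (default 16k). A sketch:

```nginx
ssl_buffer_size 4k;  # smaller TLS records, so flushed data leaves sooner
```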

Here is my test.php file (call it with ?size=... to change how much whitespace is written in the loop):

<!DOCTYPE html>
<html>
<head></head>
<body>
<?php
$vars = array('output_buffering', 'zlib.output_compression');
print('<p>');
foreach ($vars as $var) {
  print("$var : ");
  var_dump(ini_get($var));
  print('<br />');
}
print("ob_get_level() : " .ob_get_level());
print('</p>');
if (ob_get_level()) {
  $bytes = ob_get_length();
  ob_flush();
}

$nb_iterations = !empty($_GET['nb']) ? max(2, (int) $_GET['nb']) : 5;
$size = !empty($_GET['size']) ? $_GET['size'] : 0;

for ($i = 1; $i < $nb_iterations; $i++) {
  sleep(1);
  print(str_repeat(' ', 1024 * $size ));
  print("<p>wait $i s</p>");
  if (ob_get_level()) {
    $bytes += ob_get_length();
    print($bytes + strlen($bytes));
    ob_flush(); // this is working, results aren't depending on output_buffering value
  }
  flush(); // this is needed  
}
?>
</body>
</html>

And the lowest config I could set is:

location ~ ^/test.php$ {
  gzip off;
  fastcgi_pass   unix:/var/run/php5-fpm/ssl.socket;
  fastcgi_param   QUERY_STRING            $query_string;  
  fastcgi_param   REQUEST_METHOD          $request_method;
  fastcgi_param   SCRIPT_FILENAME         $request_filename;   
  # if too low => upstream sent too big header while reading response header from upstream
  fastcgi_buffer_size 128; 
  fastcgi_buffers 2 128;  
  fastcgi_busy_buffers_size 128;
}
showdev
dcaillibaud
1

Another possible cause is mod_security. It looks like it has its own buffers, so if you are using it you will have to set:

SecResponseBodyAccess Off

Kind of a dirty workaround, but so far it's the only way I got it to work.
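For completeness, the directive can be wrapped so it only applies when mod_security is actually loaded. A sketch for Apache (the module name assumes ModSecurity 2.x):

```apache
<IfModule security2_module>
    # Stop mod_security from buffering the response body for inspection
    SecResponseBodyAccess Off
</IfModule>
```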

Chris Koston
1

Just wanted to add to Roger's answer.

If you are using the FastCGI php5-fpm module within Apache2, you must also make sure you add the

-flush

argument to your Apache2 configuration, e.g.

<IfModule mod_fastcgi.c>
...
        FastCgiExternalServer /usr/lib/cgi-bin/php5-fcgi -flush -socket /tmp/php5-fpm.sock -idle-timeout 480 -pass-header Authorization
</IfModule>
Lukasz
1

Check your server API with:

phpinfo();

If your server API shows

Server API :  CGI/FastCGI

on CentOS, then add this line to "/etc/httpd/conf.d/fcgid.conf":

OutputBufferSize 0

To test, restart your Apache server and try the code below:

ob_start();
for($i = 0; $i < 10; $i ++) {
    echo $i;
    echo '<br />';
    flush();
    ob_flush();
    sleep(1);
}
mafu
gsm
  • This is the one I'm looking for! Thanks @gsm ! I read all the answers/topic/blogs about this problem and all I needed is set the buffer size to zero in my Apache config file. Especially if you're running on Windows, and using `fcgid_module`, you should put `FcgidOutputBufferSize 0` to your httpd conf file where you set fcgid_module. – Zoltán Kurgya Jan 30 '20 at 12:53
0

I have noticed that browsers react differently. Chrome, for example, holds on to the output indefinitely and doesn't seem to care about displaying it any earlier. Unsurprisingly, Firefox will display the output earlier if the above tips (contributed by other Stack Overflow users) are applied, so try with Firefox.

Rolf
0

I was able to flush only this way, by adding session_write_close():

if (ob_get_level() == 0)
{
    if(!ob_start("ob_gzhandler"))ob_start();
}               
echo ('bla bla bla');

$ans=ob_get_contents();
ob_end_clean();

header('Connection: close');
header('Content-Length: '.strlen($ans));
header('Status: 200');

echo $ans;

session_write_close();
ob_flush();
flush();
It_Never_Works
0
if(!ob_get_level()) ob_start();
echo json_encode(array('valid'=>true,'msg'=>'Flush occured.'));
$size = ob_get_length();
header("Content-Type: application/json");
// Set the content length of the response.
header("Content-Length: {$size}");
//Close the connection if you want to.
header("Connection: close");
// Flush all output.
ob_end_flush();
ob_flush();
flush();
// Close current session (if it exists).
if(session_id()) session_write_close();
RahulD
0

I have tried this many times when using php-fpm with nginx. Many answers just instruct:

In php.ini:

output_buffering = Off

zlib.output_compression = Off

In nginx.conf:

gzip off;

proxy_buffering off;

BUT they forgot the very important setting in nginx.conf:

fastcgi_keep_conn on;
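Putting the pieces from this thread together, a minimal php-fpm location sketch (the socket path is an assumption; adjust to your setup):

```nginx
location ~ \.php$ {
    gzip off;
    proxy_buffering off;      # only matters if nginx also proxies upstream
    fastcgi_keep_conn on;     # the setting this answer highlights
    fastcgi_pass unix:/var/run/php-fpm.sock;
    include fastcgi_params;
}
```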
Envy