
I'm reading POST data from php://input with file_get_contents, but I've noticed that some of my clients don't send proper header information, i.e. the Content-Length header claims more bytes than the body actually contains. This causes file_get_contents to block, tying up precious Apache threads.

I was able to simulate the issue with the following code:

<?php

$ctx = stream_context_create(array(
  'http' => array(
    'timeout' => 1, 
  )  
) );

$input = file_get_contents('php://input', false, $ctx);

print_r( $input );

And calling the script with the following test command:

time curl -H 'Content-Type: application/json' -H 'Content-Length: 100' -X POST --verbose -k -d 'This is test data.' http://localhost/form.php

As you might notice, I'm setting the Content-Length to 100 while the actual test data length is only 18.

I have tried setting a timeout via a stream context on file_get_contents, but for some reason it is not taken into account.

How can I make file_get_contents timeout in a reasonable amount of time, say 1 or 2 seconds?
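An equivalent sketch using fopen() and a chunked read with a hard deadline (the 2-second deadline and 8 KB chunk size here are arbitrary) shows the same hang, since the deadline is only checked between reads while fread() blocks:

```php
<?php

// Sketch: chunked read of php://input with a wall-clock deadline.
// The deadline is only checked *between* chunks -- if fread() blocks
// waiting for bytes the client never sends, we never get back here
// to give up.
$deadline = microtime(true) + 2.0;   // arbitrary 2-second budget
$handle   = fopen('php://input', 'r');
$input    = '';

while (!feof($handle) && microtime(true) < $deadline) {
    $chunk = fread($handle, 8192);   // arbitrary 8 KB chunks
    if ($chunk === false || $chunk === '') {
        break;
    }
    $input .= $chunk;
}

fclose($handle);
print_r($input);
```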

Luke
  • Related: http://stackoverflow.com/questions/3689371/php-file-get-contents-ignoring-timeout – Dave Chen Nov 26 '13 at 23:59
  • The `http` option is indeed not used for a `php://` scheme. Does something more verbose like `$input = ''; $handle = fopen('php://input','r'); while(!feof($handle)) $input .= fgets($handle);` work better, or does it show the same problem? – Wrikken Nov 27 '13 at 00:08
  • BTW: another option is to disable `KeepAlive` on the webserver for those requests (for apache [see here](http://httpd.apache.org/docs/2.2/mod/core.html#keepalive), but it cannot be done on a directory level, it would have to be a whole virtual host at the very least). – Wrikken Nov 27 '13 at 00:10
  • @Wrikken; your fopen example demonstrates the same behavior. – Luke Nov 27 '13 at 00:13
  • Damn. And the keep alive is only for a _response_ with an improper length. Double damn. – Wrikken Nov 27 '13 at 00:19
  • Hm, `php-cgi` doesn't even get started (apache balks here with a 408 error), what webserver are you using, and what are you running php as? I'd like to make a test-setup for this for some more tries without having to bother you with non-working ideas ;) – Wrikken Nov 27 '13 at 00:27
  • Thanks. I'm running a basic set up on Ubuntu. Simply did a `sudo apt-get install apache2 php5` kinda thing. – Luke Nov 27 '13 at 00:29
  • Basic Ubuntu, basic Apache 2.4.6, PHP 5.5.3: the request still doesn't get to PHP. So, more of an Apache thing. 2 options: (1) scold the customers doing improper requests, threaten to just block their IPs if it continues, (2) you might want to toy around with this: http://httpd.apache.org/docs/2.4/mod/mod_reqtimeout.html – Wrikken Nov 27 '13 at 19:51
  • Thanks for that. Makes perfect sense. It turns out that we have `reqtimeout` on Apache in production, however, New Relic is showing us a stack trace of the timeout all the way to the `file_get_contents`, which is kinda puzzling if the request doesn't get through to PHP. On a side note, isn't not having `reqtimeout` enabled a potential security issue? I mean, anyone can write a cURL script with a misaligned content length header and use up all Apache threads, right? If you add your comment as an answer I will accept it as the right answer. Thanks! – Luke Nov 27 '13 at 22:33
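For reference, Wrikken's `mod_reqtimeout` pointer boils down to a single directive; a minimal sketch (the specific timeouts and minimum rates are illustrative, not recommendations):

```apache
# Give clients 20-40 seconds to send the request headers and an initial
# 10 seconds for the body, extending the body window as long as data
# keeps arriving at >= 500 bytes/second. Stalled requests get a 408.
RequestReadTimeout header=20-40,MinRate=500 body=10,MinRate=500
```

With something like this in place, a request whose Content-Length overstates the actual body is dropped by Apache itself instead of holding a worker thread indefinitely.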

0 Answers