
I currently have a form with a large number of fields that need to be validated, but I'm running into an issue I don't quite understand. I'm working with PHP 5.6, nginx 1.11.9, and Laravel 5.1.45.

The form has an indefinite number of fields to validate, because the user can dynamically add more fields to one section, and each added field is required to contain information. Whatever the fix is, it needs to take that into account.
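
For reference, here's a rough sketch of what the validation logic looks like. The field and class names are illustrative rather than my actual code, and the dynamically added inputs are assumed to arrive as an array input called extra_fields:

<?php

namespace App\Http\Controllers;

use Illuminate\Http\Request;

class ExampleFormController extends Controller
{
    public function store(Request $request)
    {
        // Fixed fields on the form.
        $rules = [
            'title' => 'required|max:255',
        ];

        // One "required" rule per dynamically added field; the validator
        // accepts dot-notation keys for array input.
        foreach ((array) $request->input('extra_fields', []) as $index => $value) {
            $rules["extra_fields.{$index}"] = 'required';
        }

        // On failure this redirects back and flashes the error messages
        // and the old input to the session.
        $this->validate($request, $rules);

        // ...persist and redirect on success
    }
}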

When validation fails, one of my forms runs into a 502 Bad Gateway error. I looked into the nginx logs and found the following error:

upstream sent too big header while reading response header from upstream

After a bit of searching, I found a thread on Stack Overflow whose accepted answer says I should add the following to my nginx.conf file:

fastcgi_buffers 16 16k; 
fastcgi_buffer_size 32k;

When I do that, instead of a 502 error when validation fails, it simply redirects me back to the form as if I had just opened it: none of the fields are populated and no error messages are shown.

I looked into how the buffering works some more and had a hunch that the flashed error messages were what was filling these buffers, so I removed all validation except for one field to see whether that was related, and it works: validation fails and the page tells the user what was wrong with that one field.

I then re-added validation field by field until I hit the point where it broke again. It breaks once 15 fields fail validation, one less than the number of buffers the fastcgi_buffers directive specifies.

I removed the fastcgi lines I had added (so fastcgi_buffers falls back to its default of 8 buffers) and went field by field again; this time it broke at 7 fields, once again one less than the fastcgi_buffers value. So I think the two are related.

I tried increasing these values further, but that always caused my site to crash, probably because the resulting nginx configuration was invalid.

Another important point is that this only happens on the live server, not on my local setup. I wondered whether the server simply didn't have enough memory to deal with it, but it seems wrong that a single person's request could be too much for the server to handle. The nginx configurations on the live server and my local setup don't appear to differ at all.

Is there something else I'm missing? Can I increase the fastcgi_buffer values further to fix this? Should I go about this a different way? Any help is appreciated!

Arty

2 Answers


You can try to increase the post_max_size setting in your php.ini.
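
If you want to check which limit is actually in effect on the live server (and compare it with your local machine), a quick sanity check is something like:

<?php

// Prints the currently effective post_max_size, e.g. "8M", so you can
// compare the live server against the local setup.
var_dump(ini_get('post_max_size'));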

Ramy Talal
  • I tried changing the post_max_size in my php.ini file and it unfortunately didn't change the results with or without the fastcgi_buffer lines in my nginx.conf file. – Arty Mar 22 '17 at 20:02
  • To clarify, I changed the post_max_size from 8M to 32M and then also to 0 (no limit). – Arty Mar 22 '17 at 20:04
  • Have you compared your live and local php.ini? Is your Laravel app error free? (is laravel.log empty?) – Ramy Talal Mar 23 '17 at 08:00

It's most probably related to using the cookie session driver together with the size limits that browsers and servers impose on cookies. With the cookie driver, Laravel serializes and encrypts the entire session payload, including the flashed validation errors and old input, into the session cookie, so the more fields fail validation, the larger the Set-Cookie response header becomes; that is exactly what triggers nginx's "upstream sent too big header" error. The solution is to use a session driver other than cookie (file, redis, etc.).

See Cookie session driver won't save any validation error or flash data
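
For example, in Laravel 5.1 the driver is configured in config/session.php (usually through the SESSION_DRIVER variable in .env). A minimal sketch of the change, with the rest of the stock options left untouched:

// config/session.php (excerpt)
return [
    // 'cookie' stores the whole session (including flashed validation
    // errors and old input) in the encrypted session cookie that is sent
    // back in the response headers; 'file' (the Laravel default), 'redis'
    // or 'database' keep that data on the server instead.
    'driver' => env('SESSION_DRIVER', 'file'),

    // ... the remaining stock options stay unchanged
];

If SESSION_DRIVER is set in your .env file, change it there (e.g. SESSION_DRIVER=file) rather than hard-coding the value.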

Attila Fulop