81

I am getting a 400 Bad Request error ("Request Header Or Cookie Too Large") from nginx with my Rails app. Restarting the browser fixes the issue. I am only storing a string ID in my cookie, so it should be tiny.

Where can I find the nginx error logs? I looked at /opt/nginx/logs/error.log (opened with nano), but it doesn't contain anything related.
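
The error_log lines in my nginx.conf (shown below) are all commented out; I assume turning one on explicitly would look like this, using the path from my /opt/nginx install:

error_log  /opt/nginx/logs/error.log  info;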

I tried setting the following, with no luck:

location / {
    large_client_header_buffers  4 32k;
    proxy_buffer_size  32k;
}

nginx.conf

#user  nobody;
worker_processes  1;
#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;
#pid        logs/nginx.pid;
events {
  worker_connections  1024;
}
http {
passenger_root /home/app/.rvm/gems/ruby-1.9.3-p392/gems/passenger-3.0.19;
passenger_ruby /home/app/.rvm/wrappers/ruby-1.9.3-p392/ruby;
include       mime.types;
default_type  application/octet-stream;
sendfile        on;
keepalive_timeout  65;
client_max_body_size 20M;
server {
    listen       80;
    server_name  localhost;
    root /home/app/myapp/current/public;
    passenger_enabled on;
    #charset koi8-r;
    #access_log  logs/host.access.log  main;

    # location / {
    #     large_client_header_buffers  4 32k;
    #     proxy_buffer_size  32k;
    # }

    # location / {
    #     root   html;
    #     index  index.html index.htm;
    #     client_max_body_size 4M;
    #     client_body_buffer_size 128k;
    # }
    #error_page  404              /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   html;
    }

    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass   http://127.0.0.1;
    #}

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    #    root           html;
    #    fastcgi_pass   127.0.0.1:9000;
    #    fastcgi_index  index.php;
    #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
    #    include        fastcgi_params;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny  all;
    #}
}


# another virtual host using mix of IP-, name-, and port-based configuration
#
#server {
#    listen       8000;
#    listen       somename:8080;
#    server_name  somename  alias  another.alias;

#    location / {
#        root   html;
#        index  index.html index.htm;
#    }
#}


# HTTPS server
#
#server {
#    listen       443;
#    server_name  localhost;

#    ssl                  on;
#    ssl_certificate      cert.pem;
#    ssl_certificate_key  cert.key;

#    ssl_session_timeout  5m;

#    ssl_protocols  SSLv2 SSLv3 TLSv1;
#    ssl_ciphers  HIGH:!aNULL:!MD5;
#    ssl_prefer_server_ciphers   on;

#    location / {
#        root   html;
#        index  index.html index.htm;
#    }
#}

}

Here's my code that stores the value in the session, plus a screenshot of the cookies in Firebug. When I checked the stored session in Firebug, I found that New Relic and jQuery are storing cookies too; could this be why the cookie size is exceeded?

[Screenshot of the cookies stored in Firebug]

def current_company
  return if current_user.nil?
  session[:current_company_id] = current_user.companies.first.id if session[:current_company_id].blank?
  @current_company ||= Company.find(session[:current_company_id])
end
Brad Koch
user1883793
  • How much data do you store in session? – Mike Szyndel Jul 08 '13 at 11:13
  • Please show the part of the code which stores data in the cookie. – mr.The Jul 08 '13 at 15:28
  • This seems to be a highly ranked response to google queries for this error message. In addition to the obvious cause discussed here it can also be caused if you have a loop in a proxy config - this will manifest as "768 worker_connections are not enough while connecting to upstream" in your error log. – symcbean Nov 26 '19 at 12:06

5 Answers

166

It's just what the error says - Request Header Or Cookie Too Large. One of your headers is really big, and nginx is rejecting it.

You're on the right track with large_client_header_buffers. If you check the docs, you'll find it is only valid in the http or server contexts. Move it up into a server block and it will work.

server {
    # ...
    large_client_header_buffers 4 32k;
    # ...
}

By the way, the default is 4 buffers of 8k each, so your bad header must be the one that's over 8192 bytes. In your case, all those cookies (which combine into one Cookie header) are well over the limit. The mixpanel cookies in particular get quite large.
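
For reference, since the directive is valid in the http context as well, it can go at the http level to cover every server block; a minimal sketch using the 32k value tried in the question:

http {
    # Applies to all server blocks; a value set inside a server block overrides it.
    large_client_header_buffers 4 32k;

    server {
        # ...
    }
}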

WhiteHotLoveTiger
Brad Koch
  • Thanks for posting and with the context. Always better to know why and what you are changing versus just being told to change it! – Jim Sep 11 '14 at 17:19
  • I had the same issue. By following these comments, I solved it. – Ariful Islam Feb 22 '19 at 14:06
  • This solution is for an nginx server. Is there any solution for an Apache server? – Naresh Rupareliya Jul 15 '20 at 05:32
  • @NareshRupareliya This question is about nginx; you'll want to search/ask a new question or check the [apache docs](https://httpd.apache.org/docs/2.4/). – Brad Koch Jul 15 '20 at 12:30
  • Where can I check the max value? – junnyea Sep 11 '20 at 08:06
  • I use Spring Boot + nginx. I changed the nginx setting `large_client_header_buffers 4 32k;` and Spring Boot's application.yml as well, but those are not working. Is there any problem? When a user has a large cookie, they still get the 400 error. – horoyoi o Jan 17 '22 at 06:54
23

Fixed by adding

server {
  ...
  large_client_header_buffers 4 16k;
  ...
} 
user1883793
14

With respect to the answers above, client_header_buffer_size also needs to be mentioned:

http {
  ...
  client_body_buffer_size     32k;
  client_header_buffer_size   8k;
  large_client_header_buffers 8 64k;
  ...
}
dr.dimitru
  • For the sake of completeness, the docs for client_header_buffer_size state the following: "Sets buffer size for reading client request header. For most requests, a buffer of 1K bytes is enough. However, if a request includes long cookies, or comes from a WAP client, it may not fit into 1K. If a request line or a request header field does not fit into this buffer then larger buffers, configured by the large_client_header_buffers directive, are allocated." – Christof Dec 13 '16 at 08:18
  • I get "http directive is not allowed here" on `nginx -t`; where should http be placed? I placed it under the server block. – Shedrack Aug 12 '22 at 05:03
  • @Shedrack The `http {...}` block should be placed at the top level of the nginx configuration, for example in (or in a file imported into) the top level of `/etc/nginx/nginx.conf`. – dr.dimitru Sep 03 '22 at 05:53
  • I got it right, thank you @dr.dimitru – Shedrack Sep 03 '22 at 15:41
3

I was getting this error roughly once per 600 requests when web scraping. At first I assumed a proxy server or the remote nginx was imposing limits. I tried deleting all cookies and the other browser-side fixes generally suggested in related posts, but had no luck; the remote server was not under my control.

In my case, the mistake was adding a new header to the httpClient object over and over. After defining a global httpClient object and adding the header only once, the problem did not appear again. It was a small mistake, but unfortunately, instead of trying to understand the problem, I jumped straight to Stack Overflow :) Sometimes we should try to understand the problem on our own first.

Dharman
Erdogan
1

In my case (Cloud Foundry / nginx buildpack), the cause was the directive proxy_set_header Host ...; after removing this line, nginx became stable:

http {
  server {
    location /your-context/ {
       # remove it: # proxy_set_header Host myapp.mycfdomain.cloud;
    }
  }
}
kinjelom