
On the server side I'm using Sinatra with a stream block:

get '/stream', :provides => 'text/event-stream' do
  stream :keep_open do |out|
    connections << out
    out.callback { connections.delete(out) }
  end
end
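The snippet above registers each open stream in `connections` but doesn't show the write side. As a rough sketch (plain Ruby, the helper name is made up here, not part of the question), each message pushed into those streams has to follow the SSE wire format that the browser's EventSource parses into an `onmessage` event:

```ruby
# Hypothetical helper: builds one SSE message — one or more "data:" lines,
# optionally preceded by "event:"/"id:" fields, terminated by a blank line.
def sse_message(data, event: nil, id: nil)
  msg = +""
  msg << "event: #{event}\n" if event   # optional named event type
  msg << "id: #{id}\n"       if id      # optional id, used on reconnect (Last-Event-ID)
  data.to_s.each_line { |line| msg << "data: #{line.chomp}\n" }
  msg << "\n"                           # blank line terminates the message
end

# Broadcasting to all open streams would then be, e.g.:
#   connections.each { |out| out << sse_message("hello") }
```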

On the client side:

var es = new EventSource('/stream');
es.onmessage = function(e) { $('#chat').append(e.data + "\n") };

When I use the app directly, via http://localhost:9292/, everything works perfectly. The connection is persistent and all messages are passed to all clients.

However, when it goes through Nginx, at http://chat.dev, the connections are dropped and a reconnection fires every second or so.

Nginx setup looks ok to me:

upstream chat_dev_upstream {
  server 127.0.0.1:9292;
}

server {
  listen       80;
  server_name  chat.dev;

  location / {
    proxy_pass http://chat_dev_upstream;
    proxy_buffering off;
    proxy_cache off;
    proxy_set_header Host $host;
  }
}

I tried keepalive 1024 in the upstream section, as well as proxy_set_header Connection keep-alive; in the location block.

Nothing helps :(

No persistent connections and messages not passed to any clients.

Lukas Mayer

4 Answers


Your Nginx config is correct; you're just missing a few lines.

Here is the "magic trio" that makes EventSource work through Nginx:

proxy_set_header Connection '';
proxy_http_version 1.1;
chunked_transfer_encoding off;

Place them in the location section and it should work.

You may also need to add

proxy_buffering off;
proxy_cache off;
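Putting it together with the question's original config, the location block would look like this (same directives as above, just combined; not verified beyond that):

```nginx
location / {
  proxy_pass http://chat_dev_upstream;
  proxy_set_header Host $host;

  # the "magic trio"
  proxy_set_header Connection '';
  proxy_http_version 1.1;
  chunked_transfer_encoding off;

  # often also needed
  proxy_buffering off;
  proxy_cache off;
}
```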

That's not an official way of doing it; I ended up with this through trial and error (plus googling) :)

Inaimathi
  • Having the server respond with a "X-Accel-Buffering: no" header helps a lot! (see: http://wiki.nginx.org/X-accel#X-Accel-Buffering) – Did Jul 01 '13 at 16:24
  • Have you had any luck with this and websockets? The websocket example on the nginx site automatically closes the connection header if nothing is set... – toxaq Jul 15 '13 at 05:23
  • That didn't work for me until I also added the following: proxy_buffering off; proxy_cache off; – Malcolm Sparks Nov 07 '13 at 01:05
  • Your trial-and-error + my first google hit = I love Stack Overflow. Thanks! – Chris Ray Dec 19 '13 at 13:51
  • I needed the proxy_buffering off; and proxy_cache off; to get it working. Thanks to @MalcolmSparks – henry74 Aug 13 '14 at 16:31
  • I'm having a similar issue, but the provided solution didn't work for me. [Maybe take a look?](http://stackoverflow.com/questions/25660399/sse-eventsource-closes-after-first-chunk-of-data-rails-4-puma-nginx) – Sheharyar Sep 04 '14 at 08:13
  • You just did the OFFICIAL WAY, great job! http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive – Weihang Jian Nov 01 '15 at 05:40
  • It works for proxying to webpack-hot-middleware. Thanks! – minodisk Jul 11 '16 at 08:54
  • It works for me after adding proxy_buffer_size 0; Thanks!! – Ashique Ansari Feb 26 '20 at 09:03
  • Is "proxy_http_version 1.1;" absolutely required? By doing so, browser connections will be limited to only 6. – tsobe Sep 16 '20 at 16:57
  • I made it work just by adding "X-Accel-Buffering: no" to the response header. – KKS Feb 16 '22 at 21:27
  • In context of ngnix ingress inside k8s this answer provides the solution: https://stackoverflow.com/questions/58560048/sse-broken-after-deploying-with-kubernetes-and-connecting-via-ingress – David Shard Jan 13 '23 at 15:12
  • While using the official nginx Docker image, either `proxy_http_version 1.1;` or `proxy_buffering off;` were both sufficient on their own. – DanielM Feb 06 '23 at 11:54

Another option is to include an 'X-Accel-Buffering' header with the value 'no' in your response. Nginx treats it specially; see http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffering
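As a minimal sketch of what that looks like (a bare Rack-style app object, no gems required; in Sinatra you'd set the same header with `headers "X-Accel-Buffering" => "no"` inside the route):

```ruby
# A Rack-style app whose response carries "X-Accel-Buffering: no",
# telling Nginx to disable proxy buffering for this response only.
sse_app = lambda do |env|
  headers = {
    "Content-Type"      => "text/event-stream",
    "Cache-Control"     => "no-cache",   # SSE responses must not be cached
    "X-Accel-Buffering" => "no"          # per-response opt-out of Nginx buffering
  }
  [200, headers, ["data: hello\n\n"]]
end
```

The advantage of the header over nginx.conf directives is that the app decides per response which streams should bypass buffering.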

Svagis

Don't write this from scratch yourself. Nginx is a wonderful evented server and has modules that will handle SSE for you without any performance degradation of your upstream server.

Check out https://github.com/wandenberg/nginx-push-stream-module

The way it works is that the subscriber (the browser using SSE) connects to Nginx, and the connection stops there. The publisher (your server behind Nginx) sends a POST to Nginx on the corresponding route, and at that moment Nginx immediately forwards it to the waiting EventSource listener in the browser.
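As a rough sketch of that setup (directive names from the module's README; the routes and channel naming here are just examples, not verified against any particular deployment):

```nginx
# Publisher endpoint: the app behind Nginx POSTs messages here.
location /pub {
  push_stream_publisher admin;
  push_stream_channels_path $arg_id;   # channel selected via ?id=...
}

# Subscriber endpoint: browsers open EventSource('/sub/<channel>') here.
location ~ ^/sub/(.*) {
  push_stream_subscriber eventsource;
  push_stream_channels_path $1;
}
```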

This method is much more scalable than having your Ruby web server handle these long-polling SSE connections itself.

Martin Konecny
  • How can you make this highly available? like deploying 2 nginx instances and when you post a message to one of them, it is published to clients that are subscribed to both of them? – The Fool Jan 23 '22 at 15:28

Elevating this comment from Did to an answer: this was the only thing I needed to add when streaming from Django using StreamingHttpResponse through Nginx. None of the other switches above helped, but this header did.

Having the server respond with a "X-Accel-Buffering: no" header helps a lot! (see: wiki.nginx.org/X-accel#X-Accel-Buffering) – Did Jul 1, 2013 at 16:24

MarcFasel