
We're working on a Ruby on Rails app that needs to take advantage of HTML5 WebSockets. At the moment, we have two separate "servers", so to speak: our main app running on nginx+Passenger, and a separate server using Pratik Naik's Cramp framework (running on Thin) to handle the WebSocket connections.

Ideally, when it comes time for deployment, we'd have the rails app running on nginx+passenger, and the websocket server would be proxied behind nginx, so we wouldn't need to have the websocket server running on a different port.

Problem is, in this setup nginx seems to be closing the connections to Thin too early. The connection is successfully established to the Thin server, then immediately closed with a 200 response code. Our guess is that nginx doesn't realize the client is trying to establish a long-running connection for WebSocket traffic.
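For context, the WebSocket handshake is an ordinary HTTP/1.1 GET with Upgrade headers, which is exactly what the proxy needs to pass through untouched. A rough Ruby sketch of what the client sends (shown in the later RFC 6455 form; the draft handshake in use when this was written differed slightly):

```ruby
require 'securerandom'
require 'base64'

# Build the client side of a WebSocket opening handshake. The key point is
# that it is plain HTTP/1.1 with Upgrade/Connection headers, which a reverse
# proxy must forward rather than swallow.
def websocket_handshake(host, path)
  key = Base64.strict_encode64(SecureRandom.random_bytes(16))
  [
    "GET #{path} HTTP/1.1",
    "Host: #{host}",
    "Upgrade: websocket",
    "Connection: Upgrade",
    "Sec-WebSocket-Key: #{key}",
    "Sec-WebSocket-Version: 13",
    "",
    ""
  ].join("\r\n")
end

puts websocket_handshake("example.com", "/socket")
```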

Admittedly, I'm not all that savvy with nginx config, so, is it even possible to configure nginx to act as a reverse proxy for a websocket server? Or do I have to wait for nginx to offer support for the new websocket handshake stuff? Assuming that having both the app server and the websocket server listening on port 80 is a requirement, might that mean I have to have Thin running on a separate server without nginx in front for now?

Thanks in advance for any advice or suggestions. :)

-John

John Reilly

  • Anyone still reading this do not accept the current answer below. The TCP proxy module works well and an answer below includes a link on how to set it up: https://github.com/yaoweibin/nginx_tcp_proxy_module and http://www.letseehere.com/reverse-proxy-web-sockets – crockpotveggies Aug 24 '12 at 00:38

7 Answers


You can't use nginx for this currently (note: this is no longer true; newer nginx versions do support it), but I would suggest looking at HAProxy. I have used it for exactly this purpose.

The trick is to set long timeouts so that the socket connections are not closed. Something like:

timeout client  86400000 # In the frontend
timeout server  86400000 # In the backend

If you want to serve, say, a Rails and a Cramp application on the same port, you can use ACL rules to detect a WebSocket connection and route it to a different backend. So your HAProxy frontend config would look something like

frontend all 0.0.0.0:80
  timeout client    86400000
  default_backend   rails_backend
  acl websocket hdr(Upgrade)    -i WebSocket
  use_backend   cramp_backend   if websocket

For completeness, the backend would look like

backend cramp_backend
  timeout server  86400000
  server cramp1 localhost:8090 maxconn 200 check
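An aside for readers on newer HAProxy versions: since 1.5, a `timeout tunnel` directive applies once a connection has been upgraded, so the day-long client/server timeouts above can stay modest (the values below are illustrative only):

```haproxy
# Assumes HAProxy >= 1.5: `timeout tunnel` governs upgraded (WebSocket)
# connections, so client/server timeouts can stay short for plain HTTP.
defaults
  mode http
  timeout connect  5s
  timeout client  30s
  timeout server  30s
  timeout tunnel   1h
```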
mloughran

  • This is great, thank you! I haven't used HAProxy before, but I've always been meaning to learn. Looks like I've got a good reason to do so now. :) – John Reilly Apr 09 '10 at 18:28
  • This answer is no longer true (not surprising as it's 3 years old). Check out @mak's answer further down (at present) for how to configure this on nginx >= 1.3.13 – toxaq Jul 10 '13 at 09:19

How about using my nginx_tcp_proxy_module?

This module is designed for general TCP proxying with Nginx. I think it's also suitable for WebSocket connections, and I've just added tcp_ssl_module in the development branch.
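A sketch of what a configuration for this module might look like, based on its README (directive details are my assumption and may differ between versions; note the tcp{} block needs its own port, separate from the http{} block):

```nginx
# Hypothetical nginx.conf fragment using nginx_tcp_proxy_module.
tcp {
    upstream websockets {
        server 127.0.0.1:8090;   # the Thin/Cramp server
        check interval=3000 rise=2 fall=5 timeout=1000;
    }

    server {
        listen 8080;             # cannot share port 80 with the http{} block
        proxy_pass websockets;
    }
}
```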

yaoweibin

  • You **think**, but haven't tested it with WebSocket? – Jonas Sep 14 '10 at 11:11
  • @Jonas: I don't know whether he'd tested this at the time he made that comment, but I can confirm that his TCP proxy now does explicitly support websockets. – Eli Courtwright Nov 22 '11 at 20:53
  • This article explains how to set up, test, and use yaoweibin's module to host Websocket connections: http://www.letseehere.com/reverse-proxy-web-sockets – natevw Dec 08 '11 at 05:59
  • I tested the module and it works well. However, you have to know that if you plan to serve http content with node _and_ nginx on the standard port 80 then you can't use that module as one of the two will use port 80 and the other must use a different port. Go with the haproxy solution (as described by @mloughran) instead if this is your situation. – istvanp Dec 09 '11 at 22:20

nginx (>= 1.3.13) now supports reverse proxying websockets.

# the upstream server doesn't need a prefix! 
# no need for wss:// or http:// because nginx will upgrade to http1.1 in the config below
upstream app_server {
    server localhost:3000;
}

server {
    # ...

    location / {
        proxy_pass http://app_server;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;

        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        proxy_redirect off;
    }
}
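As a refinement, the official nginx documentation on WebSocket proxying suggests deriving the Connection header with a `map`, so plain HTTP requests through the same location keep normal keepalive semantics while upgrade requests still get `Connection: upgrade`:

```nginx
# In the http{} block: send "Connection: upgrade" only when the client
# actually asked for an upgrade; otherwise close the upstream connection.
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    location / {
        proxy_pass http://app_server;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}
```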
mak

  • @mak, While this works well for http, I have an issue with https. I somehow get a 301. I had successfully set up nginx with websocket over http, but over SSL I get a 301. https://github.com/websocket-rails/websocket-rails/issues/333 is the issue I created. Let me know if you can help. Thanks – Pramod Solanky May 16 '15 at 13:39

Out of the box (i.e. from official sources), Nginx can establish only HTTP 1.0 connections to an upstream (= backend), which means no keepalive is possible: Nginx will select an upstream server, open a connection to it, proxy, cache (if you want), and close the connection. That's it.

This is the fundamental reason frameworks requiring persistent connections to the backend will not work through Nginx (no HTTP/1.1 means no keepalive and no websockets, I guess). Despite this disadvantage, there is an evident benefit: Nginx can choose among several upstreams (load balancing) and fail over to a live one in case some of them fail.

Edit: Nginx has supported HTTP 1.1 to backends and keepalive since version 1.1.4. "fastcgi" and "proxy" upstreams are supported. Here are the docs.
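Per those docs, backend keepalive needs both the `keepalive` directive in the upstream block and a cleared Connection header; a minimal sketch (upstream name and port are placeholders):

```nginx
upstream backend {
    server 127.0.0.1:8080;
    keepalive 16;   # idle keepalive connections cached per worker process
}

server {
    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";   # clear "close" so keepalive works
    }
}
```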

Alexander Azarov

  • Got it, thanks. Essentially then, what I'm trying to do is currently impossible. Maybe someday nginx will support HTTP/1.1 keepalives to backends, but for now I'll have to come up with an alternate solution. Thanks for the response. – John Reilly Mar 10 '10 at 20:53

For anyone wondering about the same problem: nginx now officially supports HTTP 1.1 upstream. See the nginx documentation for "keepalive" and "proxy_http_version 1.1".

  • Yes but it won't support websockets until version 1.3 – toxaq Jun 12 '12 at 01:54
  • Indeed, and it should be noted that it hasn't made it in 1.3 yet either even though it's released. Their roadmap will give some info on the status of the Websocket implementation (currently planned for 1.3.x): http://trac.nginx.org/nginx/roadmap – Even André Fiskvik Oct 11 '12 at 10:50

How about Nginx with the new HTTP Push module: http://pushmodule.slact.net/. It takes care of the connection juggling (so to speak) that one might otherwise have to worry about with a reverse proxy. It is certainly a viable alternative to WebSockets, which are not fully in the mix yet. I know the developer of the HTTP Push module is still working on a fully stable version, but it is in active development, and there are versions of it being used in production codebases. To quote the author, "A useful tool with a boring name."

Eric Lubow

  • Thanks, that's a good suggestion. We actually were using that very module to achieve server push for a while, but now we're wanting to support bi-directional communication... And since we only need to support webkit browsers for our application, we're hoping to go with a pure websocket approach now. But I appreciate the response! :) – John Reilly Mar 10 '10 at 20:59

I use nginx to reverse proxy to a comet-style server with long-polling connections, and it works great. Make sure you configure proxy_send_timeout and proxy_read_timeout to appropriate values. Also make sure the back-end server that nginx is proxying to supports HTTP 1.0, because I don't think nginx's proxy module does HTTP 1.1 yet.
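A sketch of those timeout directives in a location block (the path, port, and values are illustrative only):

```nginx
location /comet {
    proxy_pass http://127.0.0.1:8080;
    # long-polling responses can legitimately take minutes to arrive
    proxy_read_timeout  3600s;
    proxy_send_timeout  3600s;
    proxy_buffering     off;   # deliver events as soon as the backend sends them
}
```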

Just to clear up some confusion in a few of the answers: keepalive allows a client to reuse a connection to send another HTTP request. It has nothing to do with long polling or holding connections open until an event occurs, which is what the original question was asking about. So it doesn't matter that nginx's proxy module only supports HTTP 1.0, which does not have keepalive.

Mark Maunder