
So far all the tutorials tell me that I need to enable SSL on my server to have HTTP/2 support.

In the given scenario, we have nginx in front of the backend Tomcat/Jetty server(s), and even though it would be worth enabling HTTP/2 on the backend performance-wise, the requirement to have HTTPS there as well seems like overkill.

HTTPS is not needed security-wise (only nginx is exposed), and it is a bit cumbersome from an operational perspective - we'd have to add our certificates to each of the Docker containers that run the backend servers.

Isn't there a way around this that provides HTTP/2 support all the way through (or at least similar performance), and is less involved to set up?

sfThomas
  • > So far all the tutorials tell me that I need to enable SSL on my server to have HTTP/2 support.

    Presumably, the reason for that is that browsers only support http/2 over ssl: http://caniuse.com/#feat=http2 (see the #2 note) – Frederik Deweerdt Aug 12 '16 at 05:13

2 Answers


The typical setup that we recommend is to put HAProxy in front of Jetty, and configure HAProxy to offload TLS and Jetty to speak clear-text HTTP/2.

With this setup, you get the benefits of efficient TLS offloading (done by HAProxy via OpenSSL), and you get the benefits of complete end-to-end HTTP/2 communication.

In particular, the latter allows for Jetty to push content via HTTP/2, something that won't be possible if the backend communication is HTTP/1.1.

Additional benefits include lower resource usage, fewer conversion steps (no need to convert from HTTP/2 to HTTP/1.1 and vice versa), and the ability to fully use HTTP/2 features such as stream resetting all the way to the application. None of these benefits is available if there is a translation to HTTP/1.1 anywhere in the chain.

If Nginx is only used as a reverse proxy to Jetty, it adds no benefit and actually slows down your system, since it must convert requests to HTTP/1.1 and responses back to HTTP/2.

HAProxy does not do any conversion so it's way more efficient, and allows a full HTTP/2 stack with all the benefits that it brings with respect to HTTP/1.1.
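As a sketch of this setup (file paths, ports, and backend names here are placeholders, not the exact configuration from the linked article), HAProxy terminates TLS with ALPN negotiation and forwards the decrypted bytes unchanged to Jetty in TCP mode:

```
# /etc/haproxy/haproxy.cfg - minimal sketch, placeholder paths/ports

frontend https
    # Offload TLS; negotiate h2 (with HTTP/1.1 fallback) via ALPN
    bind *:443 ssl crt /etc/haproxy/cert.pem alpn h2,http/1.1
    mode tcp
    default_backend jetty

backend jetty
    # TCP mode: forward decrypted bytes as-is, no HTTP parsing/conversion
    mode tcp
    # Jetty listens for clear-text HTTP/2 (h2c) on 8080
    server jetty1 127.0.0.1:8080
```

On the Jetty side, clear-text HTTP/2 is provided by the `http2c` module (enabled in Jetty 9.3 with e.g. `java -jar start.jar --add-to-startd=http2c`); see the linked webtide article for the full configuration.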

sbordet
  • Is it possible to use nginx for TLS offloading? – sfThomas Aug 03 '16 at 19:40
  • Interesting! However if HAProxy terminates SSL then it presumably sets up a new HTTP/2 connection to Jetty. Is it possible to use all features (e.g. Push, stream resetting.. etc.) across two different HTTP/2 connections? If so then your setup seems a very good one! – Barry Pollard Aug 03 '16 at 22:36
  • @BazzaDP, yes it is possible. This is the setup that we use to serve https://webtide.com and https://cometd.org. HAProxy just forwards the bytes that it decrypts to the backend; it has no knowledge that they are HTTP/2 bytes. Jetty on the backend serves clear-text HTTP/2 and leverages Jetty's advanced HTTP/2 push capabilities. I have detailed the HAProxy and Jetty configuration [here](https://webtide.com/http2-with-haproxy-and-jetty/). – sbordet Aug 04 '16 at 08:52
  • Very nice. Plus one! – Barry Pollard Aug 04 '16 at 08:58
  • And yes @sfThomas it's possible and common to TLS offload in Nginx if you want to keep that instead, but then there will be two connections. – Barry Pollard Aug 04 '16 at 09:03
  • There are no benefits in "complete end-to-end HTTP/2 communication" (and you're wrong about the stream resetting feature), and by passing HTTP/2 through to the application you lose the ability to load-balance the loading of multiple resources carried over one HTTP/2 connection. – VBart Aug 14 '16 at 09:18
  • There are obvious benefits to end-to-end HTTP/2, starting with avoiding the translation to HTTP/1.1 and back, and with the capability of server-side applications to perform HTTP/2 push. The stream resetting feature is being utilized by clients to reset long requests, especially when the server-side application is non-blocking, which is a common trend. @VBart, just read the StackOverflow questions of people who are having trouble with the HTTP/2-to-legacy-HTTP translation, for example: http://stackoverflow.com/questions/38878880/serving-python-flask-rest-api-over-http2 – sbordet Aug 14 '16 at 15:34

You don't need to speak HTTP/2 all the way through.

HTTP/2 primarily addresses latency issues which will affect your client->Nginx connections. Server to server connections (e.g. Nginx to Tomcat/Jetty) will presumably be lower latency and therefore have less to gain from HTTP/2.

So just enable HTTPS and HTTP/2 on Nginx and then have it continue to talk HTTP/1.1 to Tomcat/Jetty.

There's also a question of whether everything supports HTTP/2 all the way through (e.g. Nginx proxy_pass directive and Tomcat/Jetty), which again is less of an issue if only using HTTP/2 at the edge of your network.
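As a sketch of that edge-only approach (certificate paths and the backend address are placeholders), the Nginx server block enables HTTP/2 over TLS towards clients and proxies to the backend over plain HTTP/1.1:

```
# nginx.conf - minimal sketch, placeholder paths/ports

server {
    # HTTP/2 over TLS for client connections only
    listen 443 ssl http2;
    ssl_certificate     /etc/nginx/cert.pem;
    ssl_certificate_key /etc/nginx/key.pem;

    location / {
        # Backend (Tomcat/Jetty) speaks plain HTTP/1.1
        proxy_http_version 1.1;
        proxy_pass http://127.0.0.1:8080;
    }
}
```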

Barry Pollard