I'm trying to configure a load balancer to serve HTTPS with certificates provided by Let's Encrypt. I haven't managed to do it yet, but reading this article gives steps for how to configure:

  • Self-signed certs
  • Network Load-Balancer w/ TLS backend
  • HTTPS Load-Balancer w/ non-TLS backend
  • HTTPS Load-Balancer w/ TLS backend

As I'm interested only in HTTPS, I wonder what the difference is between these two:

  1. HTTPS Load-Balancer w/ non-TLS backend
  2. HTTPS Load-Balancer w/ TLS backend

I don't mean the obvious difference, that the first one is not encrypted from the load balancer to the backend. I mean in terms of performance and the HTTP/2 connection: for example, will I continue to get all the benefits of HTTP/2, like multiplexing and streaming? Or is the first option

HTTPS Load-Balancer w/ non-TLS backend

only an illusion, and I won't actually get HTTP/2?

John Balvin Arias
  • This is a good question. A lot of people get confused about where to terminate TLS, the benefits of load balancers and backend instances, and the various configuration options. – John Hanley Oct 06 '18 at 22:36

1 Answer

To talk HTTP/2, all web browsers require the use of HTTPS. And even without HTTP/2, it's still good to have HTTPS for various reasons.

So the point your web browser needs to talk to (often called the edge server), needs to be HTTPS enabled. This is often a load balancer, but could also be a CDN, or just a single web server (e.g. Apache) in front of an application server (e.g. Tomcat).
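
As a quick sanity check that your edge server really negotiates HTTP/2, you can ask it from a client. A minimal sketch in Python, assuming the httpx library installed with HTTP/2 support (pip install "httpx[http2]") and a placeholder URL standing in for your load balancer:

    import httpx

    # Ask the edge server which protocol it negotiates over TLS (ALPN).
    with httpx.Client(http2=True) as client:
        response = client.get("https://www.example.com/")  # hypothetical: your load balancer's hostname
        print(response.http_version)  # "HTTP/2" if negotiated, otherwise "HTTP/1.1"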

So then the question is: does the connection from that edge server to any downstream servers need to be HTTPS enabled? Ultimately the browser will not know either way, so it's not needed for the browser's sake. That leaves two reasons to encrypt this connection:

  1. Because the traffic is still travelling across an insecure channel (e.g. CDN to origin server, across the Internet).

    Many feel it's disingenuous to make the user think they are on a secure connection (shown with a green padlock) when in fact they are not for the full end-to-end connection.

    This to me is less of an issue if your load balancer is in a segregated network area (or perhaps even on the same machine!) as the server it is connecting to. For example, if the load balancer and the 2 (or more) web servers it is connecting to are all in a separate area of a DMZ-segregated network, or in their own VPC.

    Ultimately the traffic will be decrypted at some point and the question for server owners is where/when in your networking stack that happens and how comfortable you are with it.

  2. Because you want HTTPS for some other reason (e.g. HTTP/2 all the way through).

    On this one I think there is less of a good case to be made. HTTP/2 primarily helps high latency, low bandwidth connections (i.e. browser to edge node) and is less important for low latency, high bandwidth connections (as load balancer to web servers often are). My answer to this question discusses this more.

In both the above scenarios, if you are using HTTPS to your downstream servers, you can use self-signed certificates, or long-lived self-signed certificates. This means you are not bound by the 90-day validity limit of Let's Encrypt certificates, nor does it require you to purchase longer-lived certificates from another CA. As the browser never sees these certificates, you only need your load balancer to trust them, which is in your control to do for self-signed certificates. This is also useful if the downstream web servers cannot talk to Let's Encrypt to get certificates from there.
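
If you go that route, generating a long-lived self-signed certificate for a backend is straightforward. A minimal sketch using Python's cryptography package (the backend.internal hostname and the 10-year lifetime are illustrative assumptions, not requirements):

    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    # Hypothetical internal name for the downstream web server.
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "backend.internal")])

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)  # self-signed: issuer == subject
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.datetime.utcnow())
        .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=3650))  # ~10 years
        .add_extension(x509.SubjectAlternativeName([x509.DNSName("backend.internal")]), critical=False)
        .sign(key, hashes.SHA256())
    )

    with open("backend.key", "wb") as f:
        f.write(key.private_bytes(serialization.Encoding.PEM,
                                  serialization.PrivateFormat.TraditionalOpenSSL,
                                  serialization.NoEncryption()))
    with open("backend.crt", "wb") as f:
        f.write(cert.public_bytes(serialization.Encoding.PEM))

The load balancer (or whatever connects to the backend) then just needs to be configured to trust backend.crt.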

The third option, if it really is important to have HTTPS and/or HTTP/2 all the way through, is to use a TCP load balancer (which is option 2 in your question, so apologies for confusing the numbering here!). This just forwards TCP packets to the downstream servers. The packets may still be HTTPS encrypted, but the load balancer does not care - it's just forwarding them on, and if they are HTTPS encrypted then the downstream server is tasked with decrypting them. So you can still have HTTPS and HTTP/2 in this scenario; you just have the end-user certificates (i.e. the Let's Encrypt ones) on the downstream web servers. This can be difficult to manage (should the same certificates be used on both? Or should they have different ones? Do we need sticky sessions so HTTPS traffic always hits the same downstream server?). It also means the load balancer cannot see or understand any HTTP traffic - it's all just TCP packets as far as it is concerned. So no filtering on HTTP headers, or adding new HTTP headers (e.g. X-Forwarded-For with the original IP address).
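
To make that distinction concrete, here is a minimal sketch (in Python with asyncio, hypothetical addresses and ports) of what a TCP load balancer essentially does: it only copies bytes in both directions, so TLS passes through untouched, but it can never inspect or add HTTP headers:

    import asyncio

    BACKEND_HOST, BACKEND_PORT = "10.0.0.12", 443  # hypothetical downstream web server

    async def pipe(reader, writer):
        # Copy raw bytes one way until the other side closes; no HTTP parsing at all.
        try:
            while data := await reader.read(65536):
                writer.write(data)
                await writer.drain()
        finally:
            writer.close()

    async def handle_client(client_reader, client_writer):
        backend_reader, backend_writer = await asyncio.open_connection(BACKEND_HOST, BACKEND_PORT)
        await asyncio.gather(pipe(client_reader, backend_writer),
                             pipe(backend_reader, client_writer))

    async def main():
        # Listen on an unprivileged port for this sketch; a real deployment would use 443.
        server = await asyncio.start_server(handle_client, "0.0.0.0", 8443)
        async with server:
            await server.serve_forever()

    asyncio.run(main())

A real TCP load balancer adds health checks, balancing across multiple backends and so on, but the key point is the same: the encrypted stream is opaque to it.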

To be honest, it is absolutely fine, and even quite common, to have HTTPS on the load balancer and HTTP traffic to downstream servers - if there is a secure network between the two. It is usually the easiest to set up (one place to manage HTTPS certificates and renewals) and the easiest to support (e.g. some downstream servers may not easily support HTTPS or HTTP/2). Using HTTPS on this connection, whether with self-signed certificates or CA-issued certificates, is equally fine, though it requires a bit more effort, and the TCP load balancer option is probably the most effort.
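
For that common setup (HTTPS on the load balancer, plain HTTP downstream), the backend typically recovers the original client details from headers the load balancer injects. A minimal Flask sketch, assuming the conventional X-Forwarded-For and X-Forwarded-Proto header names (check what your particular load balancer actually sets):

    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/")
    def index():
        # The TCP connection comes from the load balancer, so the real client
        # details have to come from headers it adds.
        client_ip = request.headers.get("X-Forwarded-For", request.remote_addr)
        scheme = request.headers.get("X-Forwarded-Proto", "http")
        return f"Client {client_ip} originally connected over {scheme}\n"

    if __name__ == "__main__":
        app.run(port=8080)  # plain HTTP; TLS is terminated at the load balancer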

Barry Pollard