It looks like you are configuring (or plan to configure) Envoy using a static configuration, whereas Envoy really shines when you supply it with a dynamically generated configuration on the fly. The main difference between the two is that you run a service that Envoy regularly consults for updates; what that service sends back looks very similar to the static configuration.
That's what they term xDS, which encompasses the different services you can write to generate different parts of the configuration. One service (that you must supply and run) can effectively provide all the others (e.g. the Listener Discovery Service) via different endpoints that it exposes. You can configure Envoy to poll a REST-like API, subscribe to a streaming gRPC service, or even watch a file in a specific location (I suspect this one is the winner for you). You actually only need to implement the LDS in order to dynamically manage TLS certs; the rest of the config can remain static.
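To give a feel for it, here's a rough sketch of the relevant part of the bootstrap, assuming the v2 API and a management service reachable on localhost port 8000 behind a cluster I've named xds_cluster (the names, port and refresh interval are placeholders, not anything official):

    "dynamic_resources": {
      "lds_config": {
        "api_config_source": {
          "api_type": "REST",
          "cluster_names": ["xds_cluster"],
          "refresh_delay": "5s"
        }
      }
    },
    "static_resources": {
      "clusters": [{
        "name": "xds_cluster",
        "connect_timeout": "0.25s",
        "type": "STATIC",
        "hosts": [{ "socket_address": { "address": "127.0.0.1", "port_value": 8000 } }]
      }]
    }

If you'd rather go the file-watching route, lds_config instead takes a path field pointing at a file on disk that Envoy watches for changes.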
If you choose the route of writing a dynamic service that Envoy consults for config, it's not too complicated to set it up so that it just reads the contents of the certificate files on disk and hands Envoy whatever it finds there. For this you can supply an inline string data source in the Common TLS Context object. Unless you have thousands of certificates and listeners, the response body won't get anywhere near your bandwidth/memory limits.
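Concretely, the part of each filter chain in the LDS response could embed the PEM contents directly, something like this (v2 field names; the PEM bodies are placeholders):

    "tls_context": {
      "common_tls_context": {
        "tls_certificates": [{
          "certificate_chain": { "inline_string": "-----BEGIN CERTIFICATE-----\n..." },
          "private_key": { "inline_string": "-----BEGIN PRIVATE KEY-----\n..." }
        }]
      }
    }

A filename or inline_bytes data source works the same way if you'd rather point Envoy at the files directly.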
I'll confess that I exhausted the time I could afford for getting started with Envoy on trying to interpret their extensive, machine-oriented documentation, so I eventually settled on a polled HTTP service for our config. Even with polling every few seconds it's the only real traffic, so it's pretty easy to set up and keep going. I'll speak about this approach since it's the one I'm most familiar with.
You might have started with something like the static example, but all you need to do to make it dynamic is move to the dynamic configuration described a little further down. Just substitute REST for gRPC, as that's a bit easier to get going with, and implement the REST endpoints documented further down. It takes a little trial and error, but a good way to start is simply making the service return the JSON version of the config you're already using. One gotcha to look out for is that you'll need to add a "version_info" key on the top-level JSON object and an "@type" key on each resource that references the proto of the type of thing you're returning, i.e. the response to the LDS endpoint might look like this:
    {
      "version_info": "0",
      "resources": [{
        "@type": "type.googleapis.com/envoy.api.v2.Listener",
        "name": "http_listener",
        "address": { ... },
        "filter_chains": [{
          "filters": [
            { ... }
          ]
        }]
      }]
    }
This was not nearly as easy as I had hoped to get working in Python. They have a pretty good example in Go of the xDS server that uses gRPC, but that didn't help me nearly as much as looking through some of the other attempts at implementing the xDS server that I found on GitHub. This project was particularly helpful for me. Also, I've yet to run into anything that would actually need a hot restart if you're already configuring Envoy dynamically, other than stable things like the cluster identifiers of the Envoy instance itself.
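To make the shape of such a service concrete, here is a minimal sketch in Python of the LDS endpoint for the polled REST variant. This isn't anything official or production-ready; Flask, the file paths, the listener layout and the port are all my own assumptions, and error handling is omitted:

    # Minimal sketch of a polled REST LDS endpoint (Envoy v2 REST-JSON xDS).
    # Flask, the paths and the listener layout here are illustrative assumptions.
    from flask import Flask, jsonify

    app = Flask(__name__)

    # Hypothetical locations of the cert/key pair that something else rotates on disk.
    CERT_PATH = "/etc/certs/server.crt"
    KEY_PATH = "/etc/certs/server.key"


    def build_listener():
        # Read the current cert/key from disk and inline them into the listener.
        with open(CERT_PATH) as f:
            cert = f.read()
        with open(KEY_PATH) as f:
            key = f.read()
        return {
            "@type": "type.googleapis.com/envoy.api.v2.Listener",
            "name": "https_listener",
            "address": {"socket_address": {"address": "0.0.0.0", "port_value": 443}},
            "filter_chains": [{
                "tls_context": {
                    "common_tls_context": {
                        "tls_certificates": [{
                            "certificate_chain": {"inline_string": cert},
                            "private_key": {"inline_string": key},
                        }]
                    }
                },
                # The HTTP connection manager / network filters you already have
                # in your static config would go here unchanged.
                "filters": [],
            }],
        }


    # Envoy POSTs a DiscoveryRequest to this path when polling for listeners.
    @app.route("/v2/discovery:listeners", methods=["POST"])
    def listeners():
        return jsonify({
            # Bump the version whenever the config (e.g. the certs) changes.
            "version_info": "0",
            "resources": [build_listener()],
        })


    if __name__ == "__main__":
        app.run(port=8000)

Point the xds_cluster from the bootstrap sketch above at wherever this runs, and on each poll Envoy gets whatever is currently on disk.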