Every time I had to set up Nginx to proxy to Docker containers, it was not only for the security aspect, which, as you mentioned, is self-contained in the Docker container, but also to facilitate communication within the distributed system.
In a standard architecture, you have an API container, an IDP container, and a frontend container (let's assume this is a web app). Everything sits behind Nginx. The IDP, API, and frontend are exposed to external traffic... but here comes the fun part. Let's say you want an additional service running in a different container (a geolocation service, an ETL job, or whatever else). That container doesn't need to be exposed to the public; only the internal containers can talk to it.
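That topology can be sketched with Docker Compose. This is just an illustration, not any specific project's config: the image names, service names, and ports are all made up, and the key idea is the `internal` network that the extra service lives on alone.

```yaml
# Hypothetical docker-compose.yml sketch; all names/ports are illustrative.
services:
  nginx:
    image: nginx:alpine
    ports:
      - "443:443"           # only Nginx publishes a port to the outside
    networks:
      - edge

  frontend:
    image: myorg/frontend   # hypothetical image
    networks:
      - edge

  api:
    image: myorg/api        # hypothetical image
    networks:
      - edge
      - internal            # API can reach the internal-only service

  idp:
    image: myorg/idp        # hypothetical image
    networks:
      - edge

  geo:
    image: myorg/geo        # hypothetical internal-only service
    networks:
      - internal            # no published ports, not on the edge network

networks:
  edge:
  internal:
    internal: true          # Docker blocks external traffic on this network
```

With `internal: true`, Docker refuses to route external traffic to that network, so `geo` is reachable only by containers that share it (here, the API), regardless of what Nginx does.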
In the previous scenario, a request hits the frontend, the frontend sends its request(s) to the API, and the API verifies the token with the IDP (an internal call). If the user is not authorized, the API either redirects the frontend to the IDP (3-legged authentication) or just returns a 403 and has the user re-authenticate (2-legged auth) by sending credentials back to the API again. Then, if the user needs to call any additional service, all calls either go through the API first, OR they could be mapped in Nginx to hit the service directly; just make sure the user is authenticated/authorized to use the service.
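The "mapped in Nginx to hit the service directly" option could look something like the fragment below. It's a sketch, not a drop-in config: the upstream hostnames and ports are assumptions, and `/validate` is a hypothetical token-check endpoint on the IDP. The gating uses the stock `ngx_http_auth_request_module`: Nginx fires a subrequest to the IDP and only proxies onward if it gets a 2xx back.

```nginx
# Hypothetical nginx.conf fragment; hostnames, ports, and the IDP's
# /validate endpoint are assumptions for illustration.
server {
    listen 443 ssl;
    # ssl_certificate / ssl_certificate_key omitted for brevity

    location / {
        proxy_pass http://frontend:3000;    # public web app
    }

    location /api/ {
        proxy_pass http://api:8080;         # public API
    }

    location /auth/ {
        proxy_pass http://idp:8081;         # public IDP endpoints
    }

    # Direct mapping to the internal service, gated by auth_request:
    # Nginx subrequests /_validate and only proxies on a 2xx response.
    location /geo/ {
        auth_request /_validate;
        proxy_pass http://geo:9000;
    }

    location = /_validate {
        internal;                            # not reachable from clients
        proxy_pass http://idp:8081/validate; # assumed token-check endpoint
        proxy_pass_request_body off;         # the IDP only needs the headers
        proxy_set_header Content-Length "";
        proxy_set_header X-Original-URI $request_uri;
    }
}
```

The trade-off is the same one described above: routing through the API keeps authorization logic in one place, while direct Nginx mapping skips a hop but moves the auth check into the proxy.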
I hope that sheds some light on one particular usage of Nginx. Keep in mind, this is just 'one' use case; Nginx can be used for many other purposes.