We are deploying a Java backend and a React UI using docker-compose. Each machine runs three containers: the Java service, Caddy, and Postgres.
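For context, the compose file on each machine looks roughly like this (image names, versions, and paths here are illustrative, not our real ones):

```yaml
services:
  backend:
    image: registry.example.com/acme/backend:1.4.2   # Java jar baked into the image
    depends_on:
      - db
  caddy:
    image: caddy:2
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - ./ui-build:/srv          # compiled React app, served as static files by Caddy
  db:
    image: postgres:16           # credentials come from a per-user env file (more on that below)
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
```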
What's unusual about this architecture is that we are not running the application as a cluster. Each user gets their own server with their own subdomain. Everything is working nicely, but we need a strategy for managing/updating machines as the number of users grows.
We can accept some downtime in the middle of the night, so we don't need high availability.
We're just not sure of the best way to update the software on all of these machines. We're fairly new to Docker and have no experience with Kubernetes, Ansible, Chef, Puppet, etc., but we're quick to pick things up.
We expect to have hundreds to thousands of users. Each machine runs the same code but has environment variables unique to that user. Our original provisioning takes care of those, so we don't anticipate changing them during software updates, though a solution that could also manage them wouldn't be a bad thing.
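In case it affects the answer, the per-user configuration is already isolated from everything an update would touch. Roughly (the path and variable names below are made up):

```yaml
# docker-compose.yml is byte-identical on every machine; provisioning writes
# one env file per user, and software updates never modify it.
services:
  backend:
    env_file:
      - /opt/app/user.env      # e.g. USER_SUBDOMAIN=alice, DB_PASSWORD=...
  db:
    env_file:
      - /opt/app/user.env
```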
So, the question is: when we make code changes and want to deploy the updated Java jar or the rebuilt React app, what would be the best way to get those out to every machine in an automated fashion?
Some things we have considered:
- Docker Hub (concerns about rate limiting)
- Running our own Docker registry
- Kubernetes
- Ansible
- Watchtower (https://containrrr.dev/watchtower/); see the sketch after this list
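If we went the Watchtower route, our understanding is that it watches the registry and replaces running containers when a new image appears under the same tag, i.e. it automates the `docker compose pull && docker compose up -d` step on each machine. Something like this added to every compose file (flags taken from the Watchtower docs; the schedule is chosen to match our nightly downtime window):

```yaml
services:
  watchtower:
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock   # needed so it can replace containers
    command: --schedule "0 0 3 * * *" --cleanup     # check nightly at 03:00; prune old images
```

As far as we can tell, this only works if we publish to a moving tag like `:latest` and every machine has pull access to the registry, which circles back to the Docker Hub rate-limit question above.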
We will probably also need GitHub Actions to build and push the updated Docker images.
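For that build side, we're imagining a workflow along these lines (the registry URL and image name are placeholders; the actions are the standard docker/login-action and docker/build-push-action):

```yaml
name: build-and-push
on:
  push:
    branches: [main]
jobs:
  backend-image:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          registry: registry.example.com        # our own registry, or Docker Hub
          username: ${{ secrets.REGISTRY_USER }}
          password: ${{ secrets.REGISTRY_PASSWORD }}
      - uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: |
            registry.example.com/acme/backend:latest
            registry.example.com/acme/backend:${{ github.sha }}
```

A second job would presumably do the same for the React image (or the React build could be baked into the Caddy image).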
We are open to ideas that are not listed here, because there is a lot we don't know about managing many machines running docker-compose. So please feel free to offer suggestions. Many thanks!