I'm new to Docker Compose and looking for a way to scale two services together at the same time and set an environment variable for each replica.
Here is a docker-compose example of what I want to achieve:
version: '3'

networks:
  my-network:
    ipam:
      driver: default
      config:
        - subnet: 172.20.0.0/16

services:
  my-service:
    image: service:latest
    networks:
      - my-network

  my-client:
    image: client:latest
    depends_on:
      - my-service
    networks:
      - my-network
    environment:
      SERVICE_IP: "my-service:1234"
When I run this with docker-compose -f docker-compose.test.yml up, everything works fine.
Now I want to scale the two services together so that every my-client has exactly one dedicated my-service. I tried:
docker-compose -f docker-compose.test.yml up --scale my-service=10 --scale my-client=10
What happens is that some my-client replicas end up with the same my-service IP, because SERVICE_IP: "my-service:1234" resolves the service name to an arbitrary replica instead of assigning the IPs one-to-one. This leaves some my-service replicas without any client and others with many clients.
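Nothing in the compose file ties a particular client to a particular service, so the ten lookups behave like independent draws from the same pool, and duplicates are expected. A toy illustration, using a made-up set of DNS answers:

```shell
# 10 my-client replicas each resolve "my-service" independently.
# Hypothetical replica numbers returned by the lookups:
assignments="2 7 2 9 1 7 7 4 4 3"

# Count how many clients each my-service replica received:
echo "$assignments" | tr ' ' '\n' | sort -n | uniq -c
# Replicas 5, 6, 8, and 10 get no client, while replica 7 gets three.
```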
My current workaround
Since Docker Compose names scaled containers like <service_name>_<n>, I do a reverse DNS lookup from each my-client and rewrite the resulting name to match exactly one my-service:
version: '3'

networks:
  my-network:
    ipam:
      driver: default
      config:
        - subnet: 172.20.0.0/16

services:
  my-service:
    image: service:latest
    networks:
      - my-network

  my-client:
    image: client:latest
    depends_on:
      - my-service
    networks:
      - my-network
    command: >
      sh -c "export MY_IP=$$(ifconfig | grep '172.20' | awk '{print $$2}' | cut -d ':' -f2)
      && export MY_SERVICE=$$(host $$MY_IP | awk '{print $$5}' | cut -d '.' -f1 | sed 's/my-client/my-service/g')
      && export SERVICE_IP=$$MY_SERVICE:1234
      && my-client"
I removed SERVICE_IP: "my-service:1234" from the environment. Instead, each my-client finds its own domain name via reverse DNS, then replaces my-client with my-service in that name, which maps each client to exactly one service.
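Written out as a script, the workaround is easier to follow. Only the name rewriting is pure string processing; finding the container's own IP and reverse-resolving it depends on Docker's embedded DNS, so in this sketch a hypothetical lookup result stands in for the host command:

```shell
#!/bin/sh
# Map this client's Compose-generated name to its dedicated service.
# In the real container, "$own_name" would come from something like:
#   own_ip=$(hostname -i)
#   own_name=$(host "$own_ip" | awk '{print $5}')
# Here a hypothetical reverse-DNS answer stands in for that lookup:
own_name="myproject_my-client_3.my-network."

# Keep the project name and replica index, swap only the service part:
peer=$(echo "$own_name" | cut -d '.' -f1 | sed 's/my-client/my-service/')

export SERVICE_IP="$peer:1234"
echo "$SERVICE_IP"   # myproject_my-service_3:1234
```

Client replica 3 always lands on service replica 3, so the mapping is one-to-one as long as both services are scaled to the same count.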
Inspired by: https://stackoverflow.com/a/64790547/14053770
TL;DR
I want to assign environment variables derived from the service name, container name, etc., so that the two services scale together one-to-one, without this workaround.