
It seems to have stopped working recently. I use Docker Compose to run some microservices so that unit tests can use them. Some of the microservices talk to each other, so each takes a configuration value for the other's base URL. This is an example of my docker-compose.yml:

version: '3.8'

services:

  microsa:
    container_name: api.a
    image: *****
    restart: always
    ports:
      - "20001:80"

  microsb:
    container_name: api.b
    image: *****
    restart: always
    ports:
      - "20002:80"
    depends_on:
      microsa:
        condition: service_healthy
    environment:
      - ApiUrl=http://host.docker.internal:20001/api/v1/test
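(A note on the file above: `depends_on` with `condition: service_healthy` only takes effect if `microsa` actually defines a healthcheck. A minimal sketch of what that could look like — the probe URL here is an assumption, borrowed from the `ApiUrl` used elsewhere in this compose file:)

```yaml
  microsa:
    container_name: api.a
    image: *****
    restart: always
    ports:
      - "20001:80"
    # Hypothetical healthcheck; assumes curl is available in the image
    # and that the API answers on this path.
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost/api/v1/test"]
      interval: 5s
      timeout: 3s
      retries: 5
```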

This works perfectly with Docker Desktop on my Windows machine, but it will not work in Azure Pipelines on either the ubuntu-latest or windows-latest agents. The pipeline step:

- task: DockerCompose@0
  displayName: 'Run docker compose for unit tests'
  inputs:
    containerregistrytype: 'Azure Container Registry'
    azureSubscription: ${{ parameters.azureResourceManagerConnection }}
    azureContainerRegistry: ${{ parameters.acrUrl }}
    dockerComposeFile: 'docker-compose.yml'
    action: 'Run services'

When api.b attempts to call api.a, I get the following exception:

No such host is known. (host.docker.internal:20001)

Using http://microsa:20001/... gives the following error:

A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. (microsa:20001)

I've also tried http://localhost:20001/... with the same result.

I've also confirmed that microsa is accessible directly, so there are no errors within that container.

I've also tried running docker-compose up via AzureCLI@2 instead of DockerCompose@0, with the same results.

zXynK
    `host.docker.internal` is currently only supported by default on Docker Desktop; unless Azure had specific options to support this, I'm not sure how that worked before. That said; is there a reason to publish the ports, and not connect to the other service using the "internal" network by connecting to the other service by its name? Might need to change service-names to not have a `.`? – thaJeztah Mar 09 '21 at 11:29
  • Apologies if you saw the previous comment, I was looking at the wrong results (I was looking at the one that was working last month!). I removed the `.` and tried both `localhost` and `microsa` with the same results. – zXynK Mar 09 '21 at 12:38

2 Answers


I ran into the same issue, but couldn't use the service DNS name because I share a configuration file, containing the connection strings for the various services defined in the docker-compose file, between the dependencies and the test project. The test project (which does not run inside docker-compose) needs access to some of those services as well.

To solve it, all I had to do was add a bash step at the start of the pipeline that appends a record to the agent's hosts file, so that host.docker.internal resolves to the agent itself:

steps:
- bash: |
    echo '127.0.0.1 host.docker.internal' | sudo tee -a /etc/hosts
  displayName: 'Update Hosts File'
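Placement matters: the bash step has to run before the compose task so the hosts entry is already in place when the tests start making requests. A sketch of the ordering, reusing the (parameterized) task from the question:

```yaml
steps:
# Make host.docker.internal resolve to the agent itself (Linux agents).
- bash: |
    echo '127.0.0.1 host.docker.internal' | sudo tee -a /etc/hosts
  displayName: 'Update Hosts File'

# Then bring up the services as before.
- task: DockerCompose@0
  displayName: 'Run docker compose for unit tests'
  inputs:
    containerregistrytype: 'Azure Container Registry'
    azureSubscription: ${{ parameters.azureResourceManagerConnection }}
    azureContainerRegistry: ${{ parameters.acrUrl }}
    dockerComposeFile: 'docker-compose.yml'
    action: 'Run services'
```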
Rogier Pennink
  • This didn't work for me sadly but after a bit of googling and testing adding this to my container worked: `extra_hosts: - "host.docker.internal:host-gateway"` – David S Jun 13 '22 at 11:06
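For the in-container case that comment describes, the equivalent compose fragment would look like the sketch below, applied to the `microsb` service from the question. Note that the special `host-gateway` value requires Docker Engine 20.10 or later:

```yaml
  microsb:
    # Maps host.docker.internal to the host's gateway IP inside this
    # container, so the container can reach ports published on the host.
    extra_hosts:
      - "host.docker.internal:host-gateway"
```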

I have no idea why http://host.docker.internal:20001 is not working now, even though I'm certain it used to...

However, using http://microsa/... (without the port number) does work.
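This makes sense: inside the compose network, the service name resolves to the container's own IP, so requests have to target the container port (80 here, the default for http), not the port published on the host (20001). A sketch of the corresponding change, assuming the compose file from the question:

```yaml
  microsb:
    environment:
      # Service-name DNS plus the container port (80, the http default)
      # instead of host.docker.internal plus the published host port.
      - ApiUrl=http://microsa/api/v1/test
```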

zXynK