
I am building an app wrapped in Docker that consists of a PHP backend ("API") and a Node frontend, tied together by NGINX: the PHP app is served via php-fpm and the Node app is served through a reverse proxy. For dev purposes, NGINX exposes the phpMyAdmin app (phpmyadmin.test), the "API" (api.php.test), and the Node app (nodeapp.test).
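
To give a rough idea, the NGINX site configs (mounted from ./nginx/sites) boil down to three server blocks along these lines; this is a simplified sketch, and the docroot, certificate filenames and SSL details are placeholders:

# simplified sketch – the real configs live in ./nginx/sites
server {
    listen 443 ssl;
    server_name api.php.test;
    ssl_certificate     /certs/dev.crt;     # placeholder filenames (same in the other blocks)
    ssl_certificate_key /certs/dev.key;
    root /var/www/api/public;               # placeholder docroot

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass php-fpm:9000;          # php-fpm service from docker-compose
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}

server {
    listen 443 ssl;
    server_name nodeapp.test;

    location / {
        proxy_pass http://nodejs:3000;      # Node SSR server
        proxy_set_header Host $host;
    }
}

server {
    listen 443 ssl;
    server_name phpmyadmin.test;

    location / {
        proxy_pass http://phpmyadmin:80;    # phpMyAdmin container
    }
}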

The Node app's SSR ("Server-Side Rendering") needs to fetch some data from the API within the Docker network, and because domains such as api.php.test can't be resolved from inside Docker, I have to make the calls to the NGINX container, which serves the three domains mentioned above. That forces me to fake the 'Host' header to get the appropriate response from the API via NGINX, which leads to problems such as Refused to set unsafe header "Host", Error: unable to verify the first certificate in Node.js, etc.
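
To illustrate, the SSR API client ends up looking something like this (a simplified sketch; axios is used here just for illustration, and only the nginx service name and the faked Host header come from the setup above):

// Hypothetical SSR API client: calls go to the nginx container by its Docker
// service name, and the Host header is faked so nginx picks the api.php.test
// server block.
const axios = require('axios');

const api = axios.create({
  baseURL: 'https://nginx',
  // In browser/XHR contexts this is a forbidden header and fails with
  // "Refused to set unsafe header 'Host'".
  headers: { Host: 'api.php.test' },
});

// Over HTTPS with the self-signed certs from ./certs, Node instead fails the
// chain check with "Error: unable to verify the first certificate".
module.exports = api;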

Do I have to spin up an NGINX container for each endpoint to avoid these issues, or is there a better way around this? Here is my docker-compose.yml, to give you a better idea of what happens in my app.

version: "3.7"
services:
  workspace:
    build:
      context: workspace
      args:
        WORKSPACE_USER: ${WORKSPACE_USER}
    volumes:
      - api:/var/www/api
      - site:/var/www/site
    ports:
      - "2222:22"
    environment:
      S3_KEY: ${S3_KEY}
      S3_SECRET: ${S3_SECRET}
      S3_BUCKET: ${S3_BUCKET}
      DB_CONNECTION: ${DB_CONNECTION}
      MYSQL_HOST: ${MYSQL_HOST}
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: ${MYSQL_DATABASE}
      MYSQL_USER: ${MYSQL_USER}
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
      MEDIA_LIBRARY_ENDPOINT_TYPE: ${MEDIA_LIBRARY_ENDPOINT_TYPE}
      MEDIA_LIBRARY_IMAGE_SERVICE: ${MEDIA_LIBRARY_IMAGE_SERVICE}
    tty: true

  php-fpm:
    build:
      context: ./php-fpm
    depends_on:
      - nodejs
    volumes:
      - api:/var/www/api
      - ./certs:/certs
    environment:
      S3_KEY: ${S3_KEY}
      S3_SECRET: ${S3_SECRET}
      S3_BUCKET: ${S3_BUCKET}
      DB_CONNECTION: ${DB_CONNECTION}
      MYSQL_HOST: ${MYSQL_HOST}
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: ${MYSQL_DATABASE}
      MYSQL_USER: ${MYSQL_USER}
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
      MEDIA_LIBRARY_ENDPOINT_TYPE: ${MEDIA_LIBRARY_ENDPOINT_TYPE}
      MEDIA_LIBRARY_IMAGE_SERVICE: ${MEDIA_LIBRARY_IMAGE_SERVICE}

  nodejs:
    build:
      context: ./nodejs
      args:
        NODEJS_SITE_PATH: ${NODEJS_SITE_PATH}
        NODEJS_VER: ${NODEJS_VER}
    volumes:
      - site:${NODEJS_SITE_PATH}
      - ./certs:/certs
    environment:
      NODEJS_ENV: ${NODEJS_ENV}
    ports:
      - 3000:3000
      - 3001:3001

  nginx:
    build:
      context: nginx
    depends_on:
      - php-fpm
      - mariadb
    restart: always
    volumes:
      - api:/var/www/api
      - site:/var/www/site
      - ./nginx/global:/etc/nginx/global
      - ./nginx/sites:/etc/nginx/sites-available
      - ./nginx/logs:/var/log/nginx
      - ./certs:/certs
    ports:
      - 80:80
      - 443:443

  mariadb:
    image: mariadb
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: ${MYSQL_DATABASE}
      MYSQL_USER: ${MYSQL_USER}
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
    volumes:
      - db:/var/lib/mysql

  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    depends_on:
      - mariadb
    restart: always
    environment:
      PMA_HOST: ${MYSQL_HOST}
      PMA_USER: root
      PMA_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      UPLOAD_LIMIT: 2048M

volumes:
  phpmyadmin:
  db:
  site:
    external: true
  api:
    external: true
Andrius Solopovas
  • Does this answer your question? [How to update /etc/hosts file in Docker image during "docker build"](https://stackoverflow.com/questions/38302867/how-to-update-etc-hosts-file-in-docker-image-during-docker-build) – Tschallacka Feb 13 '20 at 12:46
  • @Tschallacka it does not. I know about extra_hosts, but it requires me to put in an IP address manually. If there were an alias, for example extra_hosts: - "api.php.test:{container_name}", that would be perfect, but I am not aware of such a feature. – Andrius Solopovas Feb 13 '20 at 12:53
  • Can't you just use the loopback 127.0.0.1 as the IP? – Tschallacka Feb 13 '20 at 12:57
  • How can I achieve that? I can do that on my local PC just by editing the hosts file, but each container has its own hosts file. Realistically I could run a script that updates the nginx container and adds all the necessary host loopbacks, but that's more of a hack than a solution. – Andrius Solopovas Feb 13 '20 at 13:03
  • Just use 127.0.0.1 as the IP in the hosts file; you can use it multiple times. On my dev PC I host 10 domains for testing purposes, all looping back via the hosts file. So the domains you have defined in nginx, you also define in your hosts file with the IP 127.0.0.1. – Tschallacka Feb 13 '20 at 13:06
  • In this case, it would create an issue when deploying the app. The calls to the API are made within the Docker network, and the domain has to be recognised by the nginx container inside Docker. To make it work in production I would need to find out the nginx container's IP, then create a loopback pointing to it inside the container the calls are made from. As I will not be exposing the API to the public when the app is deployed, these calls will be happening in the background. – Andrius Solopovas Feb 13 '20 at 13:10
  • https://docs.docker.com/v17.12/datacenter/ucp/2.2/guides/user/services/use-domain-names-to-access-services/ this looks like a solution, but it's available only in the enterprise version of Docker. I wonder whether something similar can be achieved with the standard version, or whether Docker is specifically designed to make such a feature premium. – Andrius Solopovas Feb 13 '20 at 13:20
  • I think I know what I will do: I will try to serve the API on a different port within nginx, so that only one service within nginx is available at that port. – Andrius Solopovas Feb 13 '20 at 13:22
  • How about you have your nginx listening on different ports and then route or proxy things? Then you can differentiate by port what you want to do. – Tschallacka Feb 13 '20 at 13:23
  • @Tschallacka Thanks for brainstorming with me, I have figured it out. Exactly as you suggested, I just set up the API on a different port, 8888, and now I call the container directly at http://nginx:8888, which serves as my API (sketched after this comment thread). – Andrius Solopovas Feb 13 '20 at 13:30
  • It fixes the problem for the backend, but it creates a new problem with an app such as Nuxt.js: the call is made in the background when the page is loaded initially, but if I switch from `route a` to `route b` the call is made from the frontend, which makes it necessary to call a publicly available endpoint. So the API has to be accessible both inside the Docker network and publicly; the problem isn't fixed completely, as I need to be able to access the domain from both ends, Docker and the public domain. – Andrius Solopovas Feb 13 '20 at 14:14
  • Let us [continue this discussion in chat](https://chat.stackoverflow.com/rooms/207768/discussion-between-tschallacka-and-andrius-solopovas). – Tschallacka Feb 13 '20 at 14:20
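
For reference, the port-based workaround mentioned in the comments would look roughly like this on the NGINX side (a sketch only; port 8888 comes from the comments, the docroot and fastcgi details are placeholders):

# sketch of a dedicated internal listener for the API – no Host header needed,
# reachable inside the Docker network as http://nginx:8888
server {
    listen 8888;
    root /var/www/api/public;               # placeholder docroot

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass php-fpm:9000;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}

The SSR code can then call http://nginx:8888 directly, while the public api.php.test server block stays in place for the browser-side calls discussed in the last comments.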
