
With the preset below:

version: "3.5"

services:

  FronServer:

    image: node:16-alpine
    container_name: Example-Production-FrontServer

    working_dir: /var/www/example.com

    volumes:
      - .:/var/www/example.com
      - FrontServerDependencies:/var/www/example.com/node_modules:nocopy

    command: sh -c "echo 'Installing dependencies ...' \
        && npm install --no-package-lock \
        && node FrontServerEntryPoint.js --environment production"

    ports: [ "8080:8080" ]
    environment:
      - DATABASE_HOST=Database

    depends_on: [ Database ]

  Database:

    image: postgres
    container_name: Example-Production-Database
    ports: [ "5432:5432" ]

    environment:
      - POSTGRES_PASSWORD=pass1234

    volumes:
      - DatabaseData:/data/example.com

volumes:
  FrontServerDependencies: { driver: "local" }
  DatabaseData: {}

launching docker-compose creates an empty node_modules directory on the local machine:

[screenshot of the empty node_modules directory on the host]

How can I prevent it from appearing on the host machine, without an additional Dockerfile?

In my case, I need the installed node_modules to be lost after the container has been stopped. Maybe tmpfs is what I need, but I have not found an example suited to my case.
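For reference, a minimal sketch of what a tmpfs-based service definition could look like (this assumes a Linux host; the service name and paths are taken from the compose file above):

```yaml
# Sketch only: a tmpfs mount keeps node_modules entirely in container
# memory, so its contents are discarded when the container stops.
services:
  FronServer:
    image: node:16-alpine
    working_dir: /var/www/example.com
    volumes:
      - .:/var/www/example.com
    tmpfs:
      - /var/www/example.com/node_modules
    command: sh -c "npm install --no-package-lock \
        && node FrontServerEntryPoint.js --environment production"
```

One caveat: because the parent directory is bind-mounted, Docker may still create an empty node_modules mountpoint directory on the host before mounting the tmpfs over it, so this does not necessarily avoid the empty directory itself.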

Please note that the package.json included in the production build directory is not the same as the package.json of the source code: there are many dependencies which are required only at the local development stage, and the front-end dependencies have been pre-bundled (with copyright comments, of course) - no need to install them anymore.

Takeshi Tokugawa YD
  • Why don't you want a `Dockerfile`? Using the above, you'll have to re-install all your dependencies every time you start – Phil Apr 24 '23 at 04:02
  • @Phil Maybe I'll use a Dockerfile in the future when the project becomes more complicated, but currently I want to keep the application simple, with a single "docker-compose.yaml". – Takeshi Tokugawa YD Apr 24 '23 at 05:25
  • Most Compose stacks include at least one Dockerfile, especially Node projects specifically because of the dependencies – Phil Apr 24 '23 at 06:17
  • Why is it important to delete and reinstall your library dependencies over and over? I'm not sure I understand the use case, or what benefit you're getting from using Docker here. – David Maze Apr 24 '23 at 10:18
  • @DavidMaze, Thank you for the comment. Talking only about "FrontServer", I just want each new version of the application not to be influenced by the previous version, including the node_modules. The "FrontServer" does not store any data, while the "Database" does. "what benefit you're getting from using Docker here." - The standard benefits of Docker. It's cool that Docker has the reusable volumes feature and I use this feature for the database, but sometimes I don't need the feature. – Takeshi Tokugawa YD Apr 25 '23 at 07:12

2 Answers


What you want is an anonymous volume mount. This will keep your host and container versions of node_modules separate, which is especially important if you have any natively-built dependencies and your architectures differ (e.g. Linux container, macOS or Windows host).

I would also strongly suggest you add a Dockerfile for your Node app to install dependencies during the build stage. Otherwise you'll be installing them every time you start the service.

# Dockerfile
FROM node:16-alpine

WORKDIR /var/www/example.com

# Copy package.json and package-lock.json
COPY package*.json ./

# Install dependencies
RUN npm install --no-package-lock

# COPY . .
# Not necessary when using a volume mount but if you wanted to run the 
# container stand-alone, copy over all the files

CMD ["node", "FrontServerEntryPoint.js", "--environment", "production"]

# .dockerignore
node_modules/

# compose.yaml

services:
  FronServer:
    build: . # use the Dockerfile
    container_name: Example-Production-FrontServer

    volumes:
      - .:/var/www/example.com
      - /var/www/example.com/node_modules # anonymous volume

    ports:
      - "8080:8080"

    environment:
      - DATABASE_HOST=Database

    depends_on:
      - Database

  # ...
Phil
  • Thank you for the answer. Unfortunately, with the anonymous `/var/www/example.com/node_modules` volume but without a separate Dockerfile, I still have the `node_modules` directory on my host machine. I understand that you are recommending a separate Dockerfile for the FrontServer, but this topic is not "Should I use a separate Dockerfile?" - it is "How to do ... without a separate Dockerfile". I'll upvote your answer as gratitude for your efforts, but I cannot accept your answer because it does not satisfy the conditions. – Takeshi Tokugawa YD Apr 28 '23 at 07:38
  • Would you please modify your answer according to "For your conditions, the solution will be: `...` . But I strongly suggest you add a "Dockerfile"... (`your current code`)"? – Takeshi Tokugawa YD Apr 28 '23 at 07:39

Docker images are a core part of Docker, and I'd embrace them here. It seems like an image captures your requirement of the node_modules directory not existing on the host and being rebuilt as necessary.

It's important to not use volumes: to inject code into the container. It's especially important to not use an anonymous volume or any other kind of volume for node_modules: while Docker copies content from an image into a named or anonymous volume on first use, it has no way to update that content, and you'll be stuck with a specific version of your module tree.

A Node Dockerfile is fairly boilerplate, and mirrors many lines you already have in the Compose file:

# image:
FROM node:16-alpine
# working_dir:
WORKDIR /var/www/example.com
COPY package.json package-lock.json ./
# was part of command:
RUN npm ci
COPY ./ ./
CMD node FrontServerEntryPoint.js --environment production

Also make sure you have a .dockerignore file that includes the line

node_modules

to keep the host's library tree out of the image.

Since all of these settings are in the image, you can remove them from the Compose file. That can be trimmed down to

version: "3.8"
services:
  FronServer:
    build: .
    # no image:, container_name:, working_dir:, volumes:, command:
    ports: [ "8080:8080" ]
    environment:
      - DATABASE_HOST=Database
    depends_on: [ Database ]

  Database:
    image: postgres
    ports: [ "5432:5432" ]
    environment:
      - POSTGRES_PASSWORD=pass1234
    volumes:
      - DatabaseData:/var/lib/postgresql/data  # fixed path

volumes:
  DatabaseData: {}

There are two ways to use this setup. If you run

docker-compose up -d --build

it will do exactly what you initially requested: it will build an isolated copy of the node_modules tree in the image, separate from any files that exist on the host. The specific ordering in the Dockerfile combined with Docker's layer-caching mechanism means that the expensive library installation won't happen if neither the package.json nor package-lock.json files have changed.

You can also run

docker-compose down
docker-compose up -d Database
npm install
npm run dev

to run Node on your host system, but with the database in Docker. This still captures many of the benefits of Docker (especially, it is very easy to completely reset your database, and your per-project database is isolated from other projects on your local system) but you also get the simplicity of a local development environment (no special IDE support required, live reloading will work reliably, tools exist locally when you need to run them).

David Maze
  • Thank you for the answer. I appreciate your explanations and recommendations, so I'll upvote your answer as gratitude. However, I can't accept your answer because it does not satisfy the conditions of this topic. This topic is neither "Should I use a separate Dockerfile?" nor "Why should I use a separate Dockerfile?"; it is highly specialized on a solution with only a single `docker-compose.yaml`. – Takeshi Tokugawa YD Apr 28 '23 at 07:52
  • Would you please modify your answer according to: "For your conditions, the solution will be: `(new code)`. However, by avoiding the separate Dockerfile, you are ignoring that Docker images are a core part of Docker *(... your current answer)*". – Takeshi Tokugawa YD Apr 28 '23 at 07:53