
I am learning Docker and would appreciate some wisdom.

My current assumption about Docker is that the Dockerfile, and the image built from it, only store the environment itself, not the application files/server, and that we use volumes to store and run application code in a container. In production, that volume would need to be mounted on the fly. So how does one get application code into the image itself (because it's not stored in the image... right?)?

When I use my Docker image in development, I start a container with my local working directory mounted as a volume. Here's the command I use:

docker run -v $(pwd):/app image_name

1. Do I pull from GitHub in the actual Dockerfile to add and use the application code in the container?

2. Is there a way to store the application code in the Docker image itself?

For example, this is my very simple Dockerfile:

FROM ruby:3.1.1-alpine3.15

RUN apk update -q
RUN apk add bash yarn git build-base nodejs mariadb-dev tzdata

ENV RAILS_ENV production

WORKDIR /app

COPY Gemfile /app/Gemfile
COPY Gemfile.lock /app/Gemfile.lock

COPY package.json /app/package.json
COPY yarn.lock /app/yarn.lock

RUN gem install bundler
RUN gem update bundler
RUN bundle install

COPY entrypoint.sh /usr/bin
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]

EXPOSE 3000

CMD ["rails", "server", "-b", "0.0.0.0"]

Thank you in advance, any advice would be greatly appreciated.

EDIT 05/16/22

Thank you to Joe and David Maze for clarifying how building Docker images works.

This is my revised Dockerfile, which bakes my application code into the Docker image when it is built, using COPY . .:

FROM ruby:3.1.1-alpine3.15

RUN apk update -q
RUN apk add bash yarn git build-base nodejs mariadb-dev tzdata

ENV RAILS_ENV production

WORKDIR /app

COPY Gemfile /app/Gemfile
COPY Gemfile.lock /app/Gemfile.lock

COPY package.json /app/package.json
COPY yarn.lock /app/yarn.lock

RUN gem install bundler
RUN gem update bundler
RUN bundle install

COPY . .

COPY entrypoint.sh /usr/bin
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]

EXPOSE 3000

CMD ["rails", "server", "-b", "0.0.0.0"]

I still use

COPY Gemfile /app/Gemfile
COPY Gemfile.lock /app/Gemfile.lock

COPY package.json /app/package.json
COPY yarn.lock /app/yarn.lock

per David Maze's answer on this Stack Overflow post. Bundling and installing dependencies takes a long time, so when I later change my application code but my dependencies haven't changed, Docker can detect that and reuse the previous image layers, skipping the RUN bundle install step and making image builds faster.
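
As a rough sketch of that caching behavior (the image_name tag is just a placeholder, and the comments describe the expected cache reuse rather than guaranteed output):

# First build: every layer is built.
docker build -t image_name .

# After editing only application code (Gemfile, Gemfile.lock, package.json and
# yarn.lock untouched), the dependency COPY steps and RUN bundle install are
# reused from cache; only the COPY . . layer and later steps are rebuilt.
docker build -t image_name .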

This revised Dockerfile lets me start a container from an image that already contains my application code, with this command:

docker run image_name

I can also start the Rails server in the development environment by overriding the Dockerfile's environment variable:

docker run -e RAILS_ENV=development image_name
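
For development I can also combine this override with the bind mount from earlier, so local code changes show up inside the container; the -p 3000:3000 mapping is my addition here, matching the EXPOSE 3000 line in the Dockerfile:

docker run -e RAILS_ENV=development -v $(pwd):/app -p 3000:3000 image_name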

Thanks again!

  • A Docker image normally contains a complete application. It's possible to use it as an inconvenient way to just get a language runtime, but the normal use is to `COPY` the entire application source code in. See for example the Docker [Sample application](https://docs.docker.com/get-started/02_our_app/) tutorial. In this use you would not bind-mount the application source code, since it is included in the image. – David Maze May 16 '22 at 10:09
  • @DavidMaze thank you for the excellent resource you provided. I had not yet stumbled upon that. – Austin Drysdale May 16 '22 at 10:55

1 Answer


A Dockerfile is used to create an automated build from a list of instructions. You can copy the entire codebase into your image; a simple Rails Dockerfile looks like this:

FROM ruby:2.6.6-alpine

RUN apk update && apk add bash build-base nodejs postgresql-dev tzdata git openssh
RUN mkdir /project

WORKDIR /project

ENV RAILS_ENV development

COPY Gemfile Gemfile.lock ./
RUN gem install bundler --no-document
RUN bundle install --no-binstubs --jobs $(nproc) --retry 3

COPY . .

CMD ["rails", "server", "-b", "0.0.0.0"]

Here COPY . . copies the contents of the current directory (your code) into the final image. You can use docker-compose to build and run multiple services, e.g. webapp + db:

docker-compose.yml

services:
  db:
    image: 'postgres:11-alpine'
    volumes:
      - 'postgres:/var/lib/postgresql/data'
    ports:
      - '5432:5432'
    environment:
      - POSTGRES_HOST_AUTH_METHOD=trust
    container_name: 'project-db'
    restart: on-failure

  web:
    depends_on:
      - 'db'
    build:
      context: .
      dockerfile: ./Dockerfile
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
    ports:
      - '3000:3000'
    environment:
      - DATABASE_HOST=db
    container_name: 'project-rails'
    restart: on-failure

volumes:
  db:
  postgres:

Here you define volumes, folders that persist when the containers go down. When you first build your images (e.g. docker-compose up --build), Docker downloads the base images if they are not already on your machine, then builds and runs them. When you update your codebase and launch a new build, Docker caches the unchanged image layers and rebuilds only what you modified.
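
A sketch of the resulting workflow (standard docker-compose commands; the caching behavior is as described above):

# First run: pulls postgres:11-alpine, builds the web image, starts both services.
docker-compose up --build

# After changing application code, rebuild: cached layers are reused and only
# the modified layers (COPY . . onward) are rebuilt.
docker-compose up --build

# Stop and remove the containers; data in the named postgres volume persists.
docker-compose down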

Vi.
  • First of all, thank you for the answer. For clarification on my part: 1. In the Dockerfile, the COPY statement copies the code from my working directory into the image's working directory at build time, not when a container is started, so it is permanently built into the image. Is that correct? 2. The docker-compose example uses the Dockerfile to boot up the web service, so what is the advantage of using Docker volumes? Thanks again! – Austin Drysdale May 16 '22 at 08:07
  • 1) The copy happens at build time; if you add a new file to your project, re-building the image will copy that file into the new image. 2) The advantage of volumes is to persist files like logs, files uploaded by users... – Vi. May 16 '22 at 08:22