
I thought I understood the docs, but maybe I didn't. I was under the impression that the -v /HOST/PATH:/CONTAINER/PATH flag is bidirectional: if we have files or directories in the container, they would be mirrored on the host, giving us a way to retain those directories and files even after removing a Docker container.

In the official MySQL Docker images, this works. /var/lib/mysql can be bound to the host and survive restarts and replacement of the container while keeping the data on the host.
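
For example, something along the lines of the run command from the MySQL image docs (the host path here is just a placeholder):

# Bind a host directory over the container's data directory.
# Data written by mysqld lands on the host and survives removing
# and recreating the container.
docker run --name some-mysql \
    -v /my/own/datadir:/var/lib/mysql \
    -e MYSQL_ROOT_PASSWORD=my-secret-pw \
    -d mysql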

I wrote a Dockerfile for sphinxsearch-2.2.9, just as practice and for the sake of learning and understanding. Here it is:

FROM debian

ENV SPHINX_VERSION=2.2.9-release

RUN apt-get update -qq && DEBIAN_FRONTEND=noninteractive apt-get install -yqq\
    build-essential\
    wget\
    curl\
    mysql-client\
    libmysql++-dev\
    libmysqlclient15-dev\
    checkinstall

RUN wget http://sphinxsearch.com/files/sphinx-${SPHINX_VERSION}.tar.gz && tar xzvf sphinx-${SPHINX_VERSION}.tar.gz && rm sphinx-${SPHINX_VERSION}.tar.gz

RUN cd sphinx-${SPHINX_VERSION} && ./configure --prefix=/usr/local/sphinx

EXPOSE 9306 9312

RUN cd sphinx-${SPHINX_VERSION} && make

RUN cd sphinx-${SPHINX_VERSION} && make install

RUN rm -rf sphinx-${SPHINX_VERSION}

VOLUME /usr/local/sphinx/etc
VOLUME /usr/local/sphinx/var

Very simple and easy to wrap your head around while learning. I am assigning the /etc & /var directories from the Sphinx build to the VOLUME instruction, thinking that it will allow me to do something like -v ~/dev/sphinx/etc:/usr/local/sphinx/etc -v ~/dev/sphinx/var:/usr/local/sphinx/var. But it doesn't; instead, the mounts overwrite the directories inside the container and leave them blank. When I remove the -v flags and create the container, the directories have the expected files and are not overwritten.
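
To illustrate, this is roughly how I can reproduce it (assuming the image is tagged sphinxsearch, as in the build command below):

# Without a bind mount, the files from `make install` are there:
docker run --rm sphinxsearch ls /usr/local/sphinx/etc

# With an empty host directory bound over the same path,
# the listing comes back empty:
mkdir -p ~/dev/sphinx/etc
docker run --rm -v ~/dev/sphinx/etc:/usr/local/sphinx/etc sphinxsearch ls /usr/local/sphinx/etc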

This is what I run to build the image, after navigating to the directory the Dockerfile is in: docker build -t sphinxsearch .

And once I have that created, I do the following to create a container based on that image: docker run -it --hostname some-sphinx --name some-sphinx --volume ~/dev/docker/some-sphinx/etc:/usr/local/sphinx/etc -d sphinxsearch

I would really appreciate any help and insight on how to get this to work. I looked at the MySQL images and don't see anything magical that they did to make the directory bindable; they used VOLUME.
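
For reference, the part of the MySQL Dockerfile that seems relevant looks roughly like this (paraphrased, not the exact upstream file):

# Declare the data directory as a volume, just like I'm trying to do
VOLUME /var/lib/mysql

# All setup of the data directory happens at run time in the entrypoint
COPY docker-entrypoint.sh /usr/local/bin/
ENTRYPOINT ["docker-entrypoint.sh"]

EXPOSE 3306
CMD ["mysqld"]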

Thank you in advance.

Hatem Jaber
  • You do realise that in the Dockerfile, `VOLUME /usr/local/sphinx/etc` is not the same as `docker run -v /usr/local/sphinx/etc:/usr/local/sphinx/etc`, don't you? The former bypasses the layered filesystem, the latter maps it to a path of your choosing on the host. Often both are used together. – hookenz Oct 20 '15 at 04:19
  • Yeah, I was aware of that. I wanted to access the directories from the container so that I can have the data on the host. That was the only way that I could think of how to do it. I posted on the docker forums and did not get any help or advice, this was the result of playing around with it and looking at countless examples. – Hatem Jaber Oct 20 '15 at 10:10

3 Answers


After countless hours of research, I decided to extend my image with the following Dockerfile:

FROM sphinxsearch

VOLUME /usr/local/sphinx/etc
VOLUME /usr/local/sphinx/var

RUN mkdir -p /sphinx && cd /sphinx && cp -avr /usr/local/sphinx/etc . && cp -avr /usr/local/sphinx/var .

ADD docker-entrypoint.sh /
RUN chmod +x /docker-entrypoint.sh

ENTRYPOINT ["/docker-entrypoint.sh"]

Extending it benefited me in that I didn't have to build the entire image from scratch as I was testing; I only rebuilt the parts that were relevant.

I created an ENTRYPOINT to execute a shell script that copies the files back to the required destination for Sphinx to run properly. Here is that code:

#!/bin/sh
set -e

target=/usr/local/sphinx/etc

# check if directory exists
if [ -d "$target" ]; then
    # check if we have files
    if find "$target" -mindepth 1 -print -quit | grep -q .; then
        # the directory already has files, so don't touch anything
        # we may use this branch for something else later
        echo not empty, don\'t do anything...
    else
        # we don't have any files, let's copy the
        # files from etc and var to the right locations
        cp -avr /sphinx/etc/* /usr/local/sphinx/etc && cp -avr /sphinx/var/* /usr/local/sphinx/var
    fi
else
    # directory doesn't exist, we will have to do something here
    echo need to create the directory...
fi

exec "$@"

Having access to the /etc & /var directories on the host allows me to adjust the files while keeping them preserved on the host in between restarts and so forth... I also have the data saved on the host which should survive the restarts.
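
For completeness, this is roughly how I start the container now; the tag sphinxsearch-persist and the host paths are just what I happen to use locally:

# run the extended image with both directories bound to the host
docker run -d --hostname some-sphinx --name some-sphinx \
    -v ~/dev/docker/some-sphinx/etc:/usr/local/sphinx/etc \
    -v ~/dev/docker/some-sphinx/var:/usr/local/sphinx/var \
    sphinxsearch-persist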

I know data containers vs. storing on the host is a debated topic; at this moment I am leaning towards storing on the host, but I will try the other method later. If anyone has any tips, advice, etc. to improve what I have, or a better way, please share.

Thank you @h3nrik for suggestions and for offering help!

Hatem Jaber

Mounting container directories to the host goes against Docker's concepts. It would break the process/resource encapsulation principle.

The other way around, mounting a host folder into a container, is possible. But I would rather suggest using volume containers instead.
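
A minimal sketch of the volume-container approach, with placeholder container names:

# Data-only container that just declares the volumes and exits
docker run --name sphinxdata \
    -v /usr/local/sphinx/etc \
    -v /usr/local/sphinx/var \
    debian /bin/true

# Service container reuses those volumes
docker run -d --name some-sphinx --volumes-from sphinxdata sphinxsearch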

Henrik Sachse
  • How does it work with MySQL and mounting the /var/lib/mysql directory to persist the data on the host? I would like to do the same thing. – Hatem Jaber Jul 30 '15 at 14:30
  • Maybe mounting is the wrong word, how about sharing a directory from the container to the host, for example a config directory? – Hatem Jaber Jul 30 '15 at 14:32
  • 1
    Simply create a volume container for `/var/lib/mysql` with: `docker run -it --name mysqldata -v /var/lib/mysql mysql /bin/true`. Then link to it from your mysql service container via: `docker run -d --name mysqlserver --volumes-from mysqldata mysql`. – Henrik Sachse Jul 30 '15 at 14:32
  • A config directory can be shared like this: `docker run -d -v /config/directory:/server/config/directory:ro ...`. Even single config files can be bound via: `docker run -d -v /config/file.cnf:/server/config/file.cnf:ro ...` – Henrik Sachse Jul 30 '15 at 14:34
  • Ok, can you point out in my code what is wrong with the -v ... that I wrote? From what I understand, it looks like what you did. I am not interested in the volume containers at this point, I am just interested in going from container to host. – Hatem Jaber Jul 30 '15 at 14:40
  • 2
    It behaves like with linux mounts. Assume you have a directory with existing files. Then you mount an empty disk drive into that directory. The result would be that the files that exist in that directory are "hidden" by the mount and the directory appears to be empty. Something similar happens with bound docker volumes. So do not mount a volume into `/etc` or the like. That would hide all the previously existing configuration files. Instead use the described single file mount where possible when you want to provide only a small subset of files into the `/etc` directory. – Henrik Sachse Jul 30 '15 at 14:44
  • 4
    If that's against concepts then how does one develop app inside the Docker? If I build the Docker image after every code change then it will be slow. If I run app outside Docker then it kills Docker's advantages. – Gherman Mar 05 '18 at 13:44

MySQL does its initialization after the volume is mounted, so before the mount there is no data in /var/lib/mysql.

So if the image already contains data at that path before you start the container, the -v mount will hide (override) that data.

See the MySQL image's entrypoint.sh.
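
The idea in that script, paraphrased rather than quoted exactly, is to initialize the data directory only when the (possibly bind-mounted) volume is still empty, so existing data is never clobbered:

#!/bin/sh
# rough paraphrase of the mysql docker-entrypoint.sh logic, not the real file
if [ ! -d /var/lib/mysql/mysql ]; then
    echo "Initializing empty data directory..."
    mysql_install_db --datadir=/var/lib/mysql
fi
exec "$@"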

wei zhang