I have separate docker-compose files: one with common configurations and one with application configurations. I want to connect the containers defined in both files through the common network, which is defined in docker-compose.yml along with the application's images. I need this so the application container can connect to the database container. How can I reference the same network in the docker-compose-producer file, or is that even possible?

My common docker-compose.yml looks like this:

    version: '3.3'
    services:
      kafka:
        image: spotify/kafka
        ports:
          - "9092:9092"
        networks:
          - docker-elk
        environment:
          - ADVERTISED_HOST=localhost
      neo4jdb:
        image: neo4j:latest
        container_name: neo4jdb
        ports:
          - "7474:7474"
          - "7473:7473"
          - "7687:7687"
        networks:
          - docker-elk
        volumes:
          - /var/lib/neo4j/import:/var/lib/neo4j/import
          - /var/lib/neo4j/data:/datax
          - /var/lib/neo4j/conf:/conf
        environment:
          - NEO4J_dbms_active__database=graphImport.db
      elasticsearch:
        image: elasticsearch:latest
        ports:
          - "9200:9200"
          - "9300:9300"
        networks:
          - docker-elk
        volumes:
          - esdata1:/usr/share/elasticsearch/data
      kibana:
        image: kibana:latest
        ports:
          - "5601:5601"
        networks:
          - docker-elk
    volumes:
      esdata1:
        driver: local
    networks:
      docker-elk:
        driver: bridge
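One detail worth noting here: Compose prefixes every network it creates with the project name, which by default is the name of the directory containing the compose file. That is why the producer file below refers to `common_docker-elk` rather than `docker-elk`. A minimal sketch of the naming rule, assuming the common stack lives in a directory called `common` (the directory name is an assumption, not stated in the question):

```shell
# Compose creates the network as "<project>_<network>", where the
# project name defaults to the compose file's directory name.
project=common        # assumed directory name of the common stack
network=docker-elk    # network defined in docker-compose.yml
echo "${project}_${network}"   # prints: common_docker-elk

# With Docker available, the actual name can be confirmed with:
#   docker network ls --filter name=docker-elk
```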

My docker-compose-producer file:

    version: '3.3'
    services:
      producer-demo:
        build:
          context: .
          dockerfile: Dockerfile
          args:
            - ARG_CLASS=producer
            - HOST=neo4jdb
        volumes:
          - ./:/workdir
        working_dir: /workdir
        networks:
          - common_docker-elk
    networks:
      common_docker-elk:
        external: true
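If the producer file should keep referring to the network by its short name, the Compose v3 format also accepts an explicit name under the `external` key. A sketch, assuming the common project's network really was created as `common_docker-elk`:

```yaml
networks:
  docker-elk:
    external:
      name: common_docker-elk
```

Services in the producer file would then list `- docker-elk` under their `networks:` key while still joining the shared network.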

Dockerfile:

    FROM java:8
    ARG ARG_CLASS
    ARG HOST
    ARG SPARK_CONFIG
    ARG NEO4J_CONFIG
    ENV MAIN_CLASS $ARG_CLASS
    ENV SCALA_VERSION 2.11.8
    ENV SBT_VERSION 1.1.1
    ENV SPARK_VERSION 2.2.0
    ENV SPARK_DIST spark-$SPARK_VERSION-bin-hadoop2.6
    ENV SPARK_ARCH $SPARK_DIST.tgz
    ENV SPARK_MASTER $SPARK_CONFIG
    ENV DB_CONFIG neo4j_local
    ENV KAFKA_STREAMS_NUMBER 5
    ENV KAFKA_EVENTS_NUMBER 10
    ENV MESSAGES_BATCH_SIZE 16777216
    ENV LINGER_MESSAGES_TIME 5
    ENV HOSTNAME bolt://$HOST:7687

    VOLUME /workdir

    WORKDIR /opt

    # Install Scala
    RUN \
      cd /root && \
      curl -o scala-$SCALA_VERSION.tgz http://downloads.typesafe.com/scala/$SCALA_VERSION/scala-$SCALA_VERSION.tgz && \
      tar -xf scala-$SCALA_VERSION.tgz && \
      rm scala-$SCALA_VERSION.tgz && \
      echo >> /root/.bashrc && \
      echo 'export PATH=~/scala-$SCALA_VERSION/bin:$PATH' >> /root/.bashrc

    # Install SBT
    RUN \
      curl -L -o sbt-$SBT_VERSION.deb https://dl.bintray.com/sbt/debian/sbt-$SBT_VERSION.deb && \
      dpkg -i sbt-$SBT_VERSION.deb && \
      rm sbt-$SBT_VERSION.deb

    # Install Spark
    RUN \
      cd /opt && \
      curl -o $SPARK_ARCH http://d3kbcqa49mib13.cloudfront.net/$SPARK_ARCH && \
      tar xvfz $SPARK_ARCH && \
      rm $SPARK_ARCH && \
      echo 'export PATH=$SPARK_DIST/bin:$PATH' >> /root/.bashrc

    EXPOSE 9851 9852 4040 9092 9200 9300 5601 7474 7687 7473

    CMD /workdir/runDemo.sh "$MAIN_CLASS" "$SPARK_MASTER" "$DB_CONFIG" "$KAFKA_STREAMS_NUMBER" "$KAFKA_EVENTS_NUMBER" "$MESSAGES_BATCH_SIZE" "$LINGER_MESSAGES_TIME"
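The producer's Bolt URL is assembled at build time from the `HOST` build argument (`ENV HOSTNAME bolt://$HOST:7687`), so with `HOST=neo4jdb` from the producer compose file the application ends up targeting `bolt://neo4jdb:7687`. The same expansion in plain shell, for reference:

```shell
# Mirrors the Dockerfile line: ENV HOSTNAME bolt://$HOST:7687
HOST=neo4jdb                     # value passed in via build args
BOLT_URL="bolt://${HOST}:7687"
echo "$BOLT_URL"                 # prints: bolt://neo4jdb:7687
```

For this URL to resolve, `neo4jdb` must be reachable by that name on a network both containers share.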

Bash script for loading the project:

    #!/usr/bin/env bash
    if [ "$1" = "consumer" ]
    then
        java -cp "jars/spark_consumer.jar" consumer.SparkConsumer $2 $3 $4
    elif [ "$1" = "producer" ]
    then
        java -cp "jars/kafka_producer.jar" producer.KafkaCheckinsProducer $5 $3 $6 $7
    else
        echo "Wrong parameter. It should be consumer or producer, but it is $1"
    fi
Cassie

1 Answer


It seems you want containers from multiple docker-compose projects to communicate with each other.

Please check this answer: Communication between multiple docker-compose projects
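The pattern from that answer, in short: create the network once outside Compose (e.g. with `docker network create docker-elk`) and declare it as external in both compose files, so neither project owns or renames it. A sketch under that assumption:

```yaml
# In both docker-compose.yml and docker-compose-producer.yml,
# after running: docker network create docker-elk
networks:
  docker-elk:
    external: true   # join the pre-created network instead of creating one
```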

Update:

I just noticed that no hostname is defined for the neo4jdb service in the docker-compose file.

Please add hostname: neo4jdb under the neo4jdb service in the docker-compose.yml file.
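Applied to the common file, the `neo4jdb` service would then look like this (other keys unchanged):

```yaml
  neo4jdb:
    image: neo4j:latest
    container_name: neo4jdb
    hostname: neo4jdb      # added so the container resolves as "neo4jdb"
    ports:
      - "7474:7474"
      - "7473:7473"
      - "7687:7687"
    networks:
      - docker-elk
```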

Rohit Jindal
  • Another file still can't use neo4jdb as a host, although I created a network between the containers based on the question's answer – Cassie Jun 29 '18 at 13:26
  • Can you please provide the updated docker-compose file and the Dockerfile used? – Rohit Jindal Jun 29 '18 at 13:55
  • Sure. I've updated the docker-compose files and added the Dockerfile – Cassie Jun 29 '18 at 14:02
  • Please change the network configuration in both compose files: in docker-compose.yml {networks: docker-elk: driver: bridge} and in docker-compose-producer.yml {networks: common_docker-elk: external: true}. I am trying to reproduce the issue on my system but am stuck because I don't have runDemo.sh. Please try again after making the suggested changes. – Rohit Jindal Jun 29 '18 at 15:26
  • I tried a workaround (instead of CMD runDemo.sh I used CMD sleep 10000, so the container stays up). Then I connected to the container, executed "ping neo4jdb", and it successfully resolved the server. – Rohit Jindal Jun 29 '18 at 16:11
  • It doesn't work that way for some reason and I get this error: `ServiceUnavailableException: Unable to connect to neo4jdb:7687` – Cassie Jun 30 '18 at 10:22
  • OK, then can you please provide runDemo.sh as well? And one more question: if I am not wrong, you first start docker-compose.yml and then docker-compose-producer.yml; please correct me if I am wrong. One more thing: whichever starts first creates the network (and supplies the network driver), and the second joins the same network as external. – Rohit Jindal Jun 30 '18 at 10:39
  • That's right, I start docker-compose first and then I run docker-compose-producer. I've added my bash script and updated the docker-compose files in the question – Cassie Jul 01 '18 at 12:49
  • I have updated the answer (hopefully it resolves the issue). I tested the solution with a GitHub neo4j project. Please let me know if it still doesn't work; I will then provide the whole setup I tested, with all the steps. In addition, just to confirm: you are running both compose files on the same host (not on different systems on the same network)? – Rohit Jindal Jul 01 '18 at 20:22
  • Yep, it still doesn't work. How can I check whether they are running on the same host? With docker inspect? – Cassie Jul 02 '18 at 10:45
  • I hope you are creating fresh images from docker-compose, first updating the neo4jdb password, and then starting the producer image. In some time I will provide what and how I tested. I used your files as they are, but concentrated only on neo4jdb. – Rohit Jindal Jul 02 '18 at 15:38
  • Please check https://github.com/rjindalrohit/neo4jdb_docker_shared_network.git; the steps are mentioned in the Readme file – Rohit Jindal Jul 02 '18 at 16:00