
I'm building an image for GitHub's Linkurious project, based on an image already on Docker Hub for the Neo4j database. The Neo4j image automatically runs its server on port 7474, and my image runs on port 8000.

When I run my image, I publish both ports (could I do this with EXPOSE instead?):

docker run -d --publish=7474:7474 --publish=8000:8000 linkurious

But only my server seems to run: if I hit http://[ip]:7474/ I get nothing. Is there something special I have to do to make sure they both run?

* Edit I *

here's my Dockerfile:

FROM neo4j/neo4j:latest
RUN apt-get -y update
RUN apt-get install -y git
RUN apt-get install -y npm
RUN apt-get install -y nodejs-legacy
RUN git clone git://github.com/Linkurious/linkurious.js.git
RUN cd linkurious.js && npm install && npm run build
CMD cd linkurious.js && npm start

* Edit II *

To help explain my quandary, I've asked a different question.

ekkis
  • From the wording of the question it sounds like you're expecting both services to be running in the same container. Are you using an init system or running one in the background? If you provide your `Dockerfile` I think we'll be able to get a better idea of what is happening. – dnephin Oct 23 '15 at 21:47
  • I do expect both services to run, but perhaps I misunderstand the function of having a base image. in my conception, if I declare a base image for my image, I get everything that's in the base, therefore I should be able to run neo and my own server (on different ports). now, perhaps the problem is that I need to run neo myself because the CMD in the base package doesn't run... in any case, I've edited to post my docker file – ekkis Oct 24 '15 at 18:27
  • @ekkis I have edited my question to address your edit and your Dockerfile. – VonC Oct 24 '15 at 18:42

1 Answer


EXPOSE is there to allow inter-container communication (between containers managed by the same Docker daemon), for instance with the docker run --link option.
Port mapping is there to map EXPOSEd ports onto the host, to allow client-to-container communication. So you need --publish.
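
As an illustration (the linkurious image name comes from the question): in the Dockerfile you would only declare the port, which makes it reachable from linked containers but not from the host:

EXPOSE 8000

The host-side mapping still has to happen at run time:

docker run -d --publish=8000:8000 linkurious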

See also "Difference between “expose” and “publish” in docker".

See also an example in "Advanced Usecase with Docker: Connecting Containers".



Make sure, though, that the IP is the right one ($(docker-machine ip default)).


If you are using a VM (that is, not running Docker directly on a Linux host, but inside a Linux VM under VirtualBox), make sure the mapped ports 7474 and 8000 are forwarded from the host to the VM:

VBoxManage controlvm boot2docker-vm natpf1 "neo4j,tcp,,7474,,7474"
VBoxManage controlvm boot2docker-vm natpf1 "node,tcp,,8000,,8000"

(Each natpf1 rule needs a unique name, hence "neo4j" and "node" rather than "name" twice.)

In the OP's case, this involves Neo4j: see "Neo4j with Docker", based on the neo4j/neo4j image and its Dockerfile:

ENTRYPOINT ["/docker-entrypoint.sh"] 
CMD ["neo4j"]

It is not meant to have another service (like Node.js) installed on top of it: the CMD cd linkurious.js && npm start would completely override the neo4j base image's CMD, meaning neo4j would never start.
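
If both processes really had to run in one container, the CMD/ENTRYPOINT would need to start both explicitly, for instance with a wrapper script. This is a sketch only (start-both.sh is a hypothetical name, and see the "PID 1 zombie reaping" caveat in the comments below):

# in the Dockerfile
COPY start-both.sh /start-both.sh
RUN chmod +x /start-both.sh
ENTRYPOINT ["/start-both.sh"]

# start-both.sh
#!/bin/sh
/docker-entrypoint.sh neo4j &        # start neo4j in the background
cd linkurious.js && exec npm start   # node service in the foreground

But that fights the design of the base image.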

It is meant to be run on its own:

# interactive with terminal
docker run -i -t --rm --name neo4j -v $HOME/neo4j-data:/data -p 8474:7474 neo4j/neo4j

# as daemon running in the background
docker run -d --name neo4j -v $HOME/neo4j-data:/data -p 8474:7474 neo4j/neo4j

And then used by another image, with a --link neo4j:neo4j directive.
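
For example (a sketch; the linkurious image name is the one from the question):

# start neo4j, without publishing any port on the host
docker run -d --name neo4j -v $HOME/neo4j-data:/data neo4j/neo4j

# start the node service, linked to neo4j, publishing only port 8000
docker run -d --name linkurious --link neo4j:neo4j -p 8000:8000 linkurious

Inside the linkurious container, the Neo4j server is then reachable as http://neo4j:7474/ (the link adds a neo4j entry to the container's /etc/hosts), while only port 8000 is visible to outside clients.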

VonC
  • Could you add an example of `EXPOSE` in Dockerfiles and clarify that the lower part uses VirtualBox? Will make it easier for newbies :) – michaelbahr Oct 23 '15 at 07:13
  • @michaelbahr That is true. I have edited the answer accordingly. – VonC Oct 23 '15 at 07:19
  • I think your response (re: port forwarding to the host) applies to an older version of docker. on OSX boot2docker is no longer used and when I attempt your commands I get the error: `VBoxManage: error: Could not find a registered machine named 'boot2docker-vm' VBoxManage: error: Details: code VBOX_E_OBJECT_NOT_FOUND (0x80bb0001), component VirtualBoxWrap, interface IVirtualBox, callee nsISupports VBoxManage: error: Context: "FindMachine(Bstr(a->argv[0]).raw(), machine.asOutParam())" at line 96 of file VBoxManageControlVM.cpp` – ekkis Oct 23 '15 at 16:40
  • something else that's curious is that, as I previously mentioned, a request for `http://[ip]:8000/` works but for `http://[ip]:7474/` does not, so I don't think it's a question of port forwarding. the difference is that one server is kicked off by the base image and the other by my image, ergo my question as to whether 2 servers can run in this way or if there's something else that needs to be done – ekkis Oct 23 '15 at 16:42
  • @ekkis port forwarding is still needed, as long as you are using a VM. Open VirtualBox (or do a docker-machine ls): you will see the name of your docker VM. In VirtualBox, section Network, you will be able to check what has been forwarded. Also make sure your neo container is running (`docker ps -a`). – VonC Oct 23 '15 at 17:53
  • @VonC, from reading the advanced usecase document you linked to, I think the answer is I don't want to include neo and/or node.js in my image. I need to just document that these two images are needed and let the user manually run them... there will need to be some connectivity between the containers but that link seems to describe it. I will get through the doc and have a better idea. thanks. – ekkis Oct 24 '15 at 19:08
  • @ekkis Yes, docker containers are meant to run one service at a time, if only because of the "PID 1 zombie reaping" issue I detail in http://stackoverflow.com/a/33119321/6309. – VonC Oct 24 '15 at 19:10
  • @VonC, so if the container running node.js is to have access to neo, neo must publish its port (i.e. make it available to the host). that means that if I wanted to give clients access to the node container but not the neo container I couldn't do it, because neo must publish its port (because it runs separately from the node container). is that correct? I can't --link unless the neo container publishes, right? – ekkis Oct 25 '15 at 21:09
  • @ekkis no, for inter-container communication, managed by the same docker daemon, a container has to expose a port (in its Dockerfile). Not map a port (on the host). You can --link to neo without neo having mapped any port at all on the host. That way, neo is "invisible" to users (no direct access). – VonC Oct 25 '15 at 21:11
  • yea!! ok. I think I get it. I will close this question when I have it all running, and thanks for the hand-holding. just out of curiosity (and the PID zombie issue notwithstanding), I could have a CMD in my Dockerfile that runs both neo and node right? – ekkis Oct 25 '15 at 21:13
  • @ekkis if you can manage the closing-process issue, yes. That is why a base image like https://github.com/phusion/baseimage-docker exists. But the idea behind containers is to facilitate isolation, because when one piece of the system is acting out, there are fewer side-effects if that piece is doing only one job (instead of two or three). – VonC Oct 25 '15 at 21:18