
I'm rather new to working with Docker, so it's very possible I just have no idea what I am doing. I have a React front-end that connects to a LoopBack server. When running them locally, I have the package.json file set up so that I just run "npm start" from the base project directory and both sides of the application run, one on port 3000 and the other on port 3001. This works well locally, and when I want to hit any API from the front-end, I just need to point at localhost:3001/.

I wanted to put them together in a container to roughly match how I have it set up locally, but obviously the path I'm using to communicate between them won't work anymore in that case, since "localhost" will point at the user's localhost rather than the instance in the container. Is there a way to keep these two services running in a single container and have them communicate with each other? I only need the port for one of them (the React front-end) to be exposed to users, since that's the only part of the application they would interact with directly. Is this type of functionality only possible by making each its own container and communicating that way? If possible, I'd like to avoid that.

I've tried searching this up, and I'm sure it's out there somewhere, but I don't think I know enough about docker to properly express what I'm searching for. I'd appreciate anyone with some experience helping me out if possible. Thanks!

Keanu

2 Answers


This architecture will be easier to handle if the same host/port serves both the front- and back-ends. In your front-end code you can make calls to relative URLs like /api/things and it will use the same host:port; this saves you from having to recompile the code per deployment and addresses some CORS-related frustrations.

There are essentially three ways to do this:

  1. Use a tool like Webpack to compile the front-end to static files. (If you're using Create React App, run npm run build or yarn build.) Have the back-end code serve these files directly. Don't run a "front-end server". In a Docker context, there would just be one container (the back-end) and the UI code might be built in a multi-stage Dockerfile.

  2. Configure the Webpack dev server to proxy a URL path to the back-end server. (Or in CRA, add a "proxy" setting to package.json.) In a Docker context, the browser would connect to this dev server, and the proxy address would get set to http://backend:3000 using the Docker-internal name.

  3. Set up a reverse proxy (frequently Nginx) to route /api to http://backend:3000 and route other paths to http://frontend:3000. In a Docker context, you would be running three containers: the browser would connect to this proxy, and the proxy would use the Docker-internal names for the other two services.
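For instance, option 1 might look something like this multi-stage Dockerfile. This is only a sketch: the client/ and server/ directory names, the public/ static-file directory, and the start command are all assumptions about your project layout.

```dockerfile
# Stage 1: build the React UI into static files
FROM node:14 AS ui-build
WORKDIR /app
COPY client/package*.json ./
RUN npm install
COPY client/ ./
RUN npm run build

# Stage 2: the LoopBack back-end, serving the built UI files itself
FROM node:14
WORKDIR /app
COPY server/package*.json ./
RUN npm install
COPY server/ ./
# Copy the compiled UI into whatever directory the back-end serves statically
COPY --from=ui-build /app/build ./public
EXPOSE 3000
CMD ["npm", "start"]
```

With this approach there is exactly one container and one published port; the browser fetches both the UI and the API from the same host:port.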

As you note in the question, the browser is an important player in this setup. The browser (and therefore fetch or XMLHttpRequest calls from your React application) can never directly call the Docker-internal host names. It must connect to the host name of the server running the containers (which can be localhost if the browser and containers are on the same system) and a published port number (docker run -p, Docker Compose ports:).

You should generally run one process per container. Trying to run a React dev server and a back-end server in the same container won't really simplify things, because the browser is what will be making calls to the back-end. If you use one of the proxying solutions described here, you can route everything into a single published container, and the other containers don't need to publish ports; they can exist only within the Docker system and be reachable only via the proxy.
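The reverse-proxy layout from option 3 can be sketched as a Compose file like the one below. The service names, image build paths, and the existence of an nginx.conf that routes /api to the back-end are all assumptions; only the proxy publishes a port.

```yaml
# docker-compose.yml sketch (names and paths are assumptions)
version: "3.8"
services:
  proxy:
    image: nginx:alpine
    volumes:
      # An nginx config that proxies /api to http://backend:3000
      # and everything else to http://frontend:3000
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
    ports:
      - "8080:80"   # the only published port; the browser connects here
  frontend:
    build: ./client   # no ports: - reachable only inside the Compose network
  backend:
    build: ./server   # no ports: - reachable only via the proxy
```

The browser talks only to localhost:8080; the proxy uses the Docker-internal names frontend and backend to reach the other two containers.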

David Maze
  • Appreciate the in depth response! Went with hosting the files statically on my loop back backend as that ended up being a lot easier than I expected. Thanks again! – Keanu Jun 26 '20 at 18:35

It's definitely possible. Let's take the plain Docker approach first. When you create containers in Docker, they are created on separate networks by default. You can use the --net flag to make the containers share the same network space, but that alone won't solve things for you, because they'll have Docker-internal IPs like 172.x.x.x. If you are running them on your local machine, you can use the --net=host flag so they run on your local machine's network; then you can reach them at localhost:3000 and localhost:3001, and the containers can reach each other the same way.
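A minimal sketch of both approaches (the image names my-frontend and my-backend, and the port numbers, are assumptions):

```
# Approach A: a user-defined bridge network; containers reach each
# other by name, and only the front-end port is published to the host.
docker network create app-net
docker run -d --net=app-net --name backend my-backend
docker run -d --net=app-net --name frontend -p 3000:3000 my-frontend
# Inside the network, the frontend container can reach http://backend:3001

# Approach B: share the host's network stack (Linux only).
docker run -d --net=host my-backend
docker run -d --net=host my-frontend
# Both are now reachable at localhost:3000 / localhost:3001
```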

If you ever want to run them on Kubernetes, you can put them in the same pod as sidecar containers. That also lets them communicate over "localhost".
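A sketch of that pod layout (image names and ports are assumptions):

```yaml
# Two containers in one pod share a network namespace,
# so they can reach each other via localhost.
apiVersion: v1
kind: Pod
metadata:
  name: web-app
spec:
  containers:
    - name: frontend
      image: my-frontend
      ports:
        - containerPort: 3000
    - name: backend
      image: my-backend
      ports:
        - containerPort: 3001
# The frontend container can call http://localhost:3001 directly.
```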

I'll just throw in my opinion on this: you should really make the IP/DNS address the apps use to talk to each other configurable via an environment variable, so you don't have to match IPs every time. Good luck.
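In Node, that configuration point could be as small as the helper below; the variable name API_BASE_URL and the fallback address are assumptions, not anything from your project.

```javascript
// Resolve the back-end base URL from an environment variable,
// falling back to the local-development default.
function apiBaseUrl() {
  return process.env.API_BASE_URL || "http://localhost:3001";
}

console.log(apiBaseUrl());
```

In a container you would then set API_BASE_URL (e.g. to http://backend:3001) at run time instead of rebuilding the code per environment.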

Akin Ozer