
I am maintaining an application developed in C, run by systemd, and structured as microservices. The services communicate with each other via Linux shared memory (IPCS), and use HTTP to communicate with the outside. My question is: is it a good idea to move all of these services into one Docker container? I am new to the container topic, and people have recommended that I learn it and use it.

The simple design of my application is below:

[diagram: simple application design]

Note: MS is Microservice
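
For example, the exchange between two services looks roughly like this (a simplified sketch; the real struct layout and key handling are different):

#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>

/* Simplified message layout; the real structs are more complex. */
struct msg {
    int  id;
    char command[64];
};

int main(void)
{
    /* System V shared memory segment; these are what `ipcs -m` lists. */
    key_t key = ftok("/tmp", 'A');                /* simplified key     */
    int shmid = shmget(key, sizeof(struct msg), IPC_CREAT | 0666);
    struct msg *m = shmat(shmid, NULL, 0);

    m->id = 1;
    strcpy(m->command, "open gateway=2");         /* one service writes */
    printf("shared: %s\n", m->command);           /* ...another reads   */

    shmdt(m);
    return 0;
}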

Alex San
  • #1 Is sending messages between your microservices using IPC like using sockets? #2 What kind of messages are exchanged between your services (ms1, ms2, ms3, ms4)? – JRichardsz Jul 30 '21 at 04:35
  • #1. Sending messages via IPC is like a queue via shared memory; it is different from sockets. #2. The messages exchanged are C structures stored in memory. – Alex San Jul 30 '21 at 07:45

1 Answer


The official Docker documentation says:

It is generally recommended that you separate areas of concern by using one service per container

When a Docker container starts, it is tied to a single live foreground process. If this process ends, the entire container ends. Also, Docker's default behavior regarding logs is to capture the stdout of that single process.

Several processes

If you have several processes and none of them is the "main" one, it is possible to start them as background processes, but you will need a while loop in bash to simulate a foreground process. Inside this loop you can check whether your services are still running, because it makes no sense to keep a container alive when its internal processes have exited or failed.

while sleep 60; do
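  # Assumption: my_first_process and my_second_process were already
  # started in the background before this loop (e.g. with "&").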
  ps aux |grep my_first_process |grep -q -v grep
  PROCESS_1_STATUS=$?
  ps aux |grep my_second_process |grep -q -v grep
  PROCESS_2_STATUS=$?
  # If the greps above find anything, they exit with 0 status
  # If they are not both 0, then something is wrong
  if [ $PROCESS_1_STATUS -ne 0 -o $PROCESS_2_STATUS -ne 0 ]; then
    echo "One of the processes has already exited."
    exit 1
  fi
done

One process

As Apache and other tools do, you could create one long-lived process and then start your other child processes from inside it. This is called spawning a process. Also, since you mentioned HTTP, this main process could expose HTTP endpoints to exchange information with the outside.

I'm not a C expert, but the system() function could be an option for launching another process:

#include <stdlib.h>

int main(void)
{
    /* The trailing '&' sends each service to the background;
       otherwise system() blocks until the command exits. */
    system("commands to launch service1 &");
    system("commands to launch service2 &");

    /* Note: when main() returns, the container's foreground process
       ends, so something must keep it alive (see the server below). */
    return 0;
}
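
If you need more control than system() gives you, an alternative is to fork() and exec() each service, so the parent keeps the child PIDs and can exit as soon as any child dies, which is the same condition the bash loop above checks for. This is only a minimal sketch, and the service paths are hypothetical:

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Hypothetical service binaries; adjust to your real paths. */
static const char *services[] = {
    "/opt/services/ms1",
    "/opt/services/ms2",
};

int main(void)
{
    for (size_t i = 0; i < sizeof(services) / sizeof(services[0]); i++) {
        pid_t pid = fork();
        if (pid == 0) {
            /* Child: replace this process image with the service. */
            execl(services[i], services[i], (char *)NULL);
            perror("execl");            /* reached only if exec fails */
            _exit(1);
        } else if (pid < 0) {
            perror("fork");
            return 1;
        }
    }

    /* Parent stays in the foreground (keeping the container alive)
       and exits as soon as any child terminates. */
    int status;
    pid_t died = wait(&status);
    fprintf(stderr, "service with pid %d exited\n", (int)died);
    return 1;
}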


Also, to create a basic HTTP server in C++ (this example uses the RESTinio library), you could check this:

#include <restinio/all.hpp>

int main()
{
    restinio::run(
        restinio::on_this_thread()
            .port(8080)
            .address("localhost")
            .request_handler([](auto req) {
                return req->create_response().set_body("Hello, World!").done();
            }));
    return 0;
}

This program will stay alive after it starts because it is a server, so it is a perfect fit for Docker.
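
If you would rather stay in plain C without an external library, a minimal sketch with raw POSIX sockets could look like this (the port and the canned response are arbitrary, and there is no request parsing; it only shows that the accept loop keeps the process in the foreground):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    int server = socket(AF_INET, SOCK_STREAM, 0);
    int opt = 1;
    setsockopt(server, SOL_SOCKET, SO_REUSEADDR, &opt, sizeof(opt));

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);
    bind(server, (struct sockaddr *)&addr, sizeof(addr));
    listen(server, 16);

    const char *reply =
        "HTTP/1.1 200 OK\r\n"
        "Content-Type: text/plain\r\n"
        "Content-Length: 13\r\n"
        "Connection: close\r\n"
        "\r\n"
        "Hello, World!";

    for (;;) {               /* serving forever keeps the container alive */
        int client = accept(server, NULL, NULL);
        if (client < 0)
            continue;
        char buf[4096];
        read(client, buf, sizeof(buf));        /* read and ignore request */
        write(client, reply, strlen(reply));
        close(client);
    }
}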

A REST API is the most common strategy for exchanging information over the internet between servers and/or devices.

If you achieve this, your C program will have these features:

  • start the other required processes (ms1, ms2, ms3, etc.)
  • expose REST HTTP endpoints to send and receive information between your services and the world. Sample:

method: GET
url: https://alexsan.com/domotic-services/ms1/message/1
description: REST endpoint which returns message 1 from the ms1 service queue
returns:
{
  "command": "close gateway=5"
}

method: POST
url: https://alexsan.com/domotic-services/ms2/message
description: REST endpoint which receives a message containing a command to be executed on service ms2
receives:
{
  "id": 100,
  "command": "open gateway=2"
}
returns:
{
  "command": "close gateway=5"
}

These HTTP endpoints could be invoked from web apps, mobile apps, etc.

Use high-level languages

You could use Python, Node.js, or Java to start a server, launch your services from inside it, and, if you want, expose some HTTP endpoints. Here is a basic example with Python:

FROM python:3

WORKDIR /usr/src/app

# create requirements
RUN echo "bottle==0.12.17" > requirements.txt

# app.py is created with echo just for demo purposes
# in a real scenario, app.py should be a separate file

RUN echo "from bottle import route, run" >> app.py
RUN echo "import os" >> app.py

# os.P_NOWAIT starts each service without waiting for it to exit
# (os.P_DETACH is only available on Windows)
RUN echo "os.spawnl(os.P_NOWAIT, '/opt/services/ms1.acme', 'ms1.acme')" >> app.py
RUN echo "os.spawnl(os.P_NOWAIT, '/opt/services/ms2.acme', 'ms2.acme')" >> app.py
RUN echo "os.spawnl(os.P_NOWAIT, '/opt/services/ms3.acme', 'ms3.acme')" >> app.py
RUN echo "os.spawnl(os.P_NOWAIT, '/opt/services/ms4.acme', 'ms4.acme')" >> app.py
RUN echo "@route('/domotic-services/ms2/message')" >> app.py
RUN echo "def index():" >> app.py
RUN echo "    return 'I will query the message'" >> app.py
RUN echo "run(host='0.0.0.0', port=80)" >> app.py

RUN pip install --no-cache-dir -r requirements.txt

CMD [ "python", "./app.py" ]

You can also do the same with Node.js.

JRichardsz
  • I concluded that my application is not container-friendly and does not need containers right now. Thank you for the explanation about Docker. – Alex San Aug 05 '21 at 09:07
  • Docker will help you distribute your solution in an easy and agnostic way. If you want, I can help you with a Dockerfile that builds the C++ code and starts it with Python or Node.js. – JRichardsz Aug 05 '21 at 16:46