
In a multithreaded Linux/C++ program, I want to use fork() together with a signal handler for SIGCHLD.

In the child process I open() the source and destination files (creating two new file descriptors), copy the data with sendfile(), close() both, and then the child exits.

I planned to use fork() to implement the following requirements:

The threads in the parent process shall be able to

  1. detect the normal termination of the child process, and in that case fork() another child doing the open()/sendfile()/close() for a range of files
  2. kill the sendfile() child process in case of a specific event, and detect that intentional termination in order to clean up

For requirement 1 I could just wait for the result of sendfile(). Requirement 2 is why I think I need to use fork() in the first place.
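The plan above can be sketched as follows. This is a minimal sketch, not the asker's actual code: function names and paths are hypothetical, error handling is omitted, and a real version would loop until all of `st.st_size` bytes are copied.

```cpp
#include <fcntl.h>
#include <sys/sendfile.h>
#include <sys/stat.h>
#include <sys/wait.h>
#include <csignal>
#include <unistd.h>

// Fork a child that copies src to dst with sendfile(), then exits.
pid_t start_copy(const char* src, const char* dst) {
    pid_t pid = fork();
    if (pid == 0) {                           // child
        int in = open(src, O_RDONLY);
        struct stat st;
        fstat(in, &st);
        int out = open(dst, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        off_t off = 0;
        sendfile(out, in, &off, st.st_size);  // real code: loop on short writes
        close(in);
        close(out);
        _exit(0);
    }
    return pid;                               // parent keeps the child's pid
}

// Requirement 1: wait for normal termination of the child.
bool copy_finished_ok(pid_t pid) {
    int status = 0;
    waitpid(pid, &status, 0);
    return WIFEXITED(status) && WEXITSTATUS(status) == 0;
}

// Requirement 2: abort the running transfer on a specific event.
void abort_copy(pid_t pid) {
    kill(pid, SIGTERM);                       // interrupts sendfile() mid-transfer
    waitpid(pid, nullptr, 0);                 // reap so no zombie remains
}
```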

After reading the following posts

I think that my solution might not be a good one.

My questions are:

  • Is there any other solution to implement requirement 2?
  • Or how can I make sure that the library calls open(), close() and sendfile() will be okay?

Update:

  • The program will run on a Busybox Linux / ARM
  • Based on several posts I've read on this topic, I assumed I should use sendfile() for the most efficient file transfer. A safe way to implement my requirement could be using fork() and exec*() with cp, with the disadvantage that the file transfer might be less efficient

Update 2:

  • it's sufficient to fork() once per event (instead of once per file), since I switched to exec*() with rsync in the child process. However, the program still needs to invoke that rsync on every occurrence of the event.
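One fork()/exec per event could be wrapped like this (a sketch; the actual rsync argument vector is an assumption you would fill in with your own options):

```cpp
#include <unistd.h>
#include <sys/wait.h>

// Fork, exec the given program (e.g. rsync with your options), and wait
// for it. Returns the program's exit status, or -1 on abnormal exit.
int run_sync(const char* prog, char* const argv[]) {
    pid_t pid = fork();
    if (pid == 0) {
        execvp(prog, argv);
        _exit(127);                 // exec failed
    }
    int status = 0;
    waitpid(pid, &status, 0);
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

A hypothetical call for the event handler: `char* const argv[] = {(char*)"rsync", (char*)"-a", (char*)"src/", (char*)"dst/", nullptr}; run_sync("rsync", argv);` — and because the parent holds the child's pid until waitpid() returns, the abort-on-event requirement can still be met by kill()ing the child from another thread.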
  • Or just use read/write on a background thread instead of fork. – Richard Critten Aug 11 '16 at 08:53
  • Forking from a multithreaded program has undefined behaviour in general. More specifically, the forked child is in an async-signal context in which the only permissible operations are (essentially) exit or exec. What you can very decidedly *not* do is something like memory allocation. – Kerrek SB Aug 11 '16 at 09:10
  • @RichardCritten: I don't know of any way to tell the bg thread to stop while it's waiting for the `sendfile()` to return, and I don't want to wait in case a huge file is processed – radix Aug 11 '16 at 09:43
  • @radix that's why read/write – Richard Critten Aug 11 '16 at 10:20
  • Can you elaborate on that? What do you mean with read/write? I don't get how I can abort a `sendfile()`, or a `system()` for `cp` / `rsync` which is started in the background thread. As I see it the background thread will invoke the `system( "rsync...")` and will have to wait until it returns. – radix Aug 16 '16 at 19:32

1 Answer


You can use threads, but forcefully terminating threads typically leads to memory leaks and other problems.

My Linux experience is somewhat limited, but I would probably try to fork the program early, before it becomes multithreaded. Now that you have two instances, the single-threaded instance can be safely used to manage the starting and stopping of additional instances.

  • That could probably be a solution if it had been considered at the start of development, but unfortunately it heavily affects the design of the SW in my case. – radix Aug 12 '16 at 09:18
  • I think you will have to explain more why that is the case. You are aware that you can use shared memory and similar for interprocess communication? The forked instances need to have the data required to accomplish their tasks. Do you require other resources than just data? – Sven Nilsson Aug 12 '16 at 09:42
  • One issue is that the "transfer thread" receives a pointer to a class instance for reporting errors. The log function is also used by other classes whose instances run in the main thread. This could basically be solved via IPC, like the rest of the configuration for the thread. I hoped to find a simpler solution. – radix Aug 16 '16 at 20:02