
I have a single monolithic daemon that performs several kinds of work: interfacing with northbound APIs, interfacing with southbound APIs, executing state machines, and building internal databases.

I have now run into scalability issues, and I want to redesign the daemon so that the different actions inside it run concurrently. But using threads would complicate the logic, since I would end up having to:

  • Add locks for synchronization.
  • Plan properly for future extensions.
  • Debug timing issues.

So my question is: please suggest a design approach that still makes the actions concurrent but removes the complexity of threads.

My application is currently written in C. A sample open-source (FOSS) project used as an example would also help me understand the design approach.

codingfreak
  • Coroutines might be a solution but there is no standard C implementation. Wikipedia has full details. – Jeff Hammond Feb 04 '16 at 03:53
  • 1
    @Jeff: Co-Routines are serial concurrency. OP wants the cake and eat it. This is not possible. – too honest for this site Feb 04 '16 at 12:24
  • 1
    You cannot have the cake and eat it! – too honest for this site Feb 04 '16 at 12:31
  • I'm aware of that. But the question said 'no threads'. Do you think it means concurrency from processes? An OS process is just a thread running in a different address space. – Jeff Hammond Feb 04 '16 at 13:04
  • @Jeff Also, multi-process means more expensive context switching. https://stackoverflow.com/questions/5440128/thread-context-switch-vs-process-context-switch – Cloud Feb 04 '16 at 16:29
  • @Olaf Pretty much. A point I'm trying to convey to OP is that if one wants concurrency for non-trivial applications, they need to learn to code, and handle sync, timing/race conditions, etc. If one is trying to avoid these concepts, then he/she needs to take more time to hone their skills. In a production environment, one would be laughed at in most coding environments when trying to avoid the basic constructs that allow for safe concurrency. – Cloud Feb 04 '16 at 16:31
  • @Dogbert Multiprocessing doesn't imply context-switching. Each process may have its own hardware thread. Unfortunately, the post you linked contains no data. I understand that the TLB situation is different, but application performance sensitivity to the TLB varies widely. – Jeff Hammond Feb 04 '16 at 16:38
  • @Jeff: No, but it does have multiple contexts. So either you have one CPU per process/thread, which implies hardware overhead when you transfer data, or you have overhead from context switches on a single CPU. For modern OSes and without additional effort, you very likely have both, as the OS may migrate even threads (not to mention processes) on the fly between CPUs, depending on overall system utilisation. – too honest for this site Feb 04 '16 at 17:12
  • Not sure why my question is ridiculous, since I can see 3 have already voted to close it and 2 have downvoted the question. Any idea? – codingfreak Feb 05 '16 at 03:26
  • @codingfreak The disdain for the question likely comes from asking for a means of concurrency without being willing to utilize the mechanisms that make it safe. It's like asking for help to safely use a band saw while refusing to wear goggles or earplugs, and insisting on having long luscious wavy hair dangling about rather than tied back in a knot: the two scenarios are mutually exclusive (ie: safe concurrency, or no concurrency at all, hence Olaf's comment on cake, I assume). There are alternatives to threads, but they will likely be more convoluted solutions. – Cloud Feb 05 '16 at 16:48
  • @Dogbert - I agree that I need to use threads for concurrency. As an example: even though chemotherapy can cure cancer, with its own side effects, is it wrong for someone to want to know if there is something else that can solve the same issue? Before I really jump into threads and solve the issue, I really wanted to know if there is an alternative. – codingfreak Feb 05 '16 at 22:44
  • @codingfreak I think Olaf's comment is accurate. In most production environments, you're using threads, forks, etc, for concurrency, unless you have specialized hardware. Why do you need to avoid them? Are you dealing with some non-deterministic hardware or database engines? – Cloud Feb 05 '16 at 22:51
  • @Dogbert - I am uncomfortable only with threads, not with other concurrency approaches, as it takes time to actually stabilize a multithreaded application (embedded space). That's the reason I am trying to avoid them and go with any other approach. I was thinking of a distributed-application approach. – codingfreak Feb 05 '16 at 22:58
  • @codingfreak Well, you could just spawn a single server app, and have a bunch of clients, but this is just an extension of the multi-process approach I described. I've used threaded approaches in embedded RTOS products (some mission critical), and haven't had any issues with them. So long as you thoroughly understand the underlying problem and how to sync the threads, their use is a non-issue. – Cloud Feb 05 '16 at 23:05

3 Answers


Your only remaining options are:

  • Multi-process approach (ie: spawn(), fork(), exec(), etc.). You still need to synchronize data, set up shared memory, etc., so threads would likely be easier; see the sketch after this list.
  • Bite the bullet and live with no concurrency.
  • Become proficient in "lock free / lockless programming" approaches, which will still likely require atomic operations at the least.

Software sync, protection, and future-proofing/scalability are common problems in any production code that does non-trivial operations. Trying to avoid them outright usually indicates you have bigger concerns than avoiding threaded models.
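As a rough illustration of the multi-process option, here is a minimal sketch of forking a worker process and passing it data over a pipe. This is my own example, not part of the original answer; the message contents and the "southbound worker" role are made-up placeholders.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fds[2];                      /* fds[0]: read end, fds[1]: write end */
    if (pipe(fds) == -1) {
        perror("pipe");
        return EXIT_FAILURE;
    }

    pid_t pid = fork();
    if (pid == -1) {
        perror("fork");
        return EXIT_FAILURE;
    }

    if (pid == 0) {                  /* child: stands in for a "southbound" worker */
        close(fds[0]);
        const char *msg = "state-machine event";
        write(fds[1], msg, strlen(msg) + 1);
        close(fds[1]);
        _exit(EXIT_SUCCESS);
    }

    /* parent: a real daemon would select()/poll() on fds[0] alongside its other fds */
    close(fds[1]);
    char buf[64];
    if (read(fds[0], buf, sizeof buf) > 0)
        printf("daemon received: %s\n", buf);
    close(fds[0]);
    waitpid(pid, NULL, 0);
    return EXIT_SUCCESS;
}
```

For anything beyond a one-shot exchange you would typically move to shared memory or a message queue, at which point the synchronization concerns reappear in a different guise.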

Cloud
  • Thanks for the reply @Dogbert. My main concern with threads is especially the timing issues and debugging. Since the daemon is running on an embedded box, debugging with a threaded model will be a little complicated. – codingfreak Feb 04 '16 at 06:10
  • Lock-free programming as such can add runtime penalties because it retries/loops. OP seems to want non-blocking, too. However, that will not work without threads. Processes are much worse due to the overhead of crossing/sharing memory. – too honest for this site Feb 04 '16 at 12:27

This sounds like a perfect case for Go, which provides a concurrency model based on Hoare's Communicating Sequential Processes (CSP)*. Fortunately, you don't have to use Go to get CSP. Martin Sustrik of ZeroMQ fame has given us libmill, which provides the Go concurrency primitives in C. Still, you might consider Go for its other features.

* Rather than try to describe CSP directly, I'd suggest you watch some of Rob Pike's excellent videos, like this one: Go Concurrency Patterns.
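To give a flavour of the channel style, here is a minimal sketch using libmill's documented primitives (coroutine, go(), chmake(), chs(), chr()). Treat it as an illustration rather than a definitive example, and check it against the libmill headers you actually ship, since the API has varied between releases; the "poller" role is a made-up placeholder.

```c
#include <stdio.h>
#include <libmill.h>

/* coroutine standing in for a "southbound" poller that feeds events to the main loop */
coroutine void poller(chan events) {
    for (int i = 0; i < 3; i++) {
        msleep(now() + 100);         /* pretend we waited on hardware or a socket */
        chs(events, int, i);         /* send an event number over the channel */
    }
    chdone(events, int, -1);         /* -1 signals "no more events" */
}

int main(void) {
    chan events = chmake(int, 0);    /* unbuffered channel of ints */
    go(poller(events));

    while (1) {
        int ev = chr(events, int);   /* receiving blocks this coroutine, not the process */
        if (ev < 0)
            break;
        printf("handling event %d\n", ev);
    }
    chclose(events);
    return 0;
}
```

The appeal for this kind of daemon is that the state machines and the north/southbound handlers each become a coroutine blocking on its own channel, so there are no locks to reason about, at the cost of everything sharing one OS thread.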

dan4thewin
  • OP asks about C and design patterns. Go is a programming language. And that would still involve coroutines (serial) or threads/processes. As apparently threads are already too complicated for OP, do you really think he will take the effort of learning a new PL and porting all his code? You do not get a redesign at no cost. – too honest for this site Feb 04 '16 at 12:28
  • 2
    Note, I specifically proposed libmilll, a **C** library. The channel-based approach is a comparatively simple and straightforward alternative to coordinating threads - which is what I think the OP wants. – dan4thewin Feb 04 '16 at 19:15
  • @dancancode - Thanks for the answer. I would like to know if you have any information with respect to the performance impact of using libmill vs pthreads? – codingfreak Sep 09 '17 at 00:21

One way to achieve asynchronous execution without running multiple threads is to use the command pattern together with a command queue. You can implement it in any programming language. Of course, things will not really execute in parallel, but this is the way to do asynchronous programming in environments where resources are very limited. Robert C. Martin describes this really well in his video.

Example scenario:

  • You add an initial command to the queue (for the sake of the example it's just a single simple command).
  • You start an infinite loop which does only one thing:
    1. Take the next command from the queue.
    2. Execute the taken command on the current thread.
  • Our command (let's call it CheckButtonPressed) can do some simple check (for example, whether a button was clicked or some web service responded with some value):
    • if the check is negative, the command adds itself back to the queue (the queue is never empty and we are checking all the time whether the button was pressed);
    • if the check is positive, we add to the queue a HandleButtonClick command that contains whatever code we want to run in response to this event.
  • When the HandleButtonClick command is processed, it executes whatever code is required and at the end adds CheckButtonPressed back to the queue, so the button can be pressed again and the queue is never empty.

As you can see, except for the initial commands (the ones added to the queue before the processing loop starts), all other commands are added to the queue by other commands. Commands can be stateful, but there is no need for thread synchronization because there is only one thread. A minimal C sketch of this loop follows below.
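To make the scenario concrete, here is a minimal single-threaded sketch of such a loop in C. The command struct, the fixed-size ring buffer, and the check_button_pressed() stub are my own illustrative assumptions, not part of the original answer.

```c
#include <stdbool.h>
#include <stdio.h>

#define QUEUE_SIZE 16

typedef struct {
    void (*execute)(void);           /* each command is just a function to run */
} command;

/* tiny ring buffer acting as the command queue; single thread, so no locking */
static command queue[QUEUE_SIZE];
static int head, tail, count;

static void enqueue(command c) {
    if (count == QUEUE_SIZE)
        return;                      /* illustrative only; a real daemon handles overflow */
    queue[tail] = c;
    tail = (tail + 1) % QUEUE_SIZE;
    count++;
}

static command dequeue(void) {
    command c = queue[head];
    head = (head + 1) % QUEUE_SIZE;
    count--;
    return c;
}

/* stand-in for a real condition check (button, socket readiness, ...) */
static bool check_button_pressed(void) { return false; }

static void check_button_command(void);
static void handle_button_click(void);

static void check_button_command(void) {
    if (check_button_pressed())
        enqueue((command){ handle_button_click });
    else
        enqueue((command){ check_button_command });  /* re-arm the check */
}

static void handle_button_click(void) {
    puts("button clicked");
    enqueue((command){ check_button_command });      /* keep the queue non-empty */
}

int main(void) {
    enqueue((command){ check_button_command });      /* the initial command */
    while (count > 0)                                /* the processing loop */
        dequeue().execute();
    return 0;
}
```

In practice each check should be non-blocking (e.g. poll() with a zero timeout) so that one slow command cannot starve everything else in the queue.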

Krzysztof Branicki
  • That still uses synchronisation techniques: queues. – too honest for this site Feb 04 '16 at 12:28
  • Synchronization of what? Certainly not threads. There is only one thread, and the only thing it does is take the next command from the queue and execute it. It executes the commands synchronously on the same thread. The command itself can add new commands to the command queue. The question was about a way to write asynchronously executing code without having multiple threads, and this is one of the available solutions; I don't see why it should be downvoted. – Krzysztof Branicki Feb 05 '16 at 08:22
  • Please look up the meaning of synchronisation! Queues are also a kind of synchronisation mechanism. What do you think happens if the consumer tries to read from an empty queue or the producer writes to a full queue? Also, the queue internally uses some kind of synchronisation technique to ensure its integrity. If you exchange data between threads, you cannot avoid some kind of synchronisation. Unless you accept the potential for faults and hiccups. – too honest for this site Feb 05 '16 at 12:54
  • You have written "If you exchange data between threads, you cannot avoid some kind of synchronisation." I totally agree with that, but what I'm saying is that in the case I'm trying to explain there is only one thread. I edited the answer and added an example scenario. – Krzysztof Branicki Feb 05 '16 at 13:38
  • And the queue is filled through a quantum tunnel or divine intervention? – too honest for this site Feb 05 '16 at 13:42