
How does one implement a multithreaded, single-process model on Linux (Fedora) in C, where a single scheduler thread running on a "main" core watches for I/O availability (e.g. TCP/IP, UDP), and one "execution thread" per core (started at init) parses the data and then writes a small amount of information to shared memory (it is my understanding that pthreads share data within a single process)?

I believe my options are:

Pthreads, or the Linux OS scheduler

I have a naive model in mind consisting of starting a certain number of these execution threads plus a single scheduler thread.

What is the best solution one could think of, given that I can use this sort of model?

BAR

4 Answers


Modifying the Linux scheduler is quite a tough job; I would just forget about it. Pthreads are usually preferred. If I understand correctly, you want one core dedicated to the control plane and a pool of other cores dedicated to data-plane processing? Then create a pool of threads from your master thread and set core affinity for these slave threads with pthread_setaffinity_np(...).
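
A minimal sketch of that approach, assuming a worker entry point worker_main, a worker count NUM_WORKERS, and the "core 0 for the master" numbering, none of which come from the answer itself:

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>

#define NUM_WORKERS 3   /* e.g. cores 1..3; core 0 stays with the master/control thread */

static void *worker_main(void *arg)
{
    (void)arg;
    /* ... parse incoming data and update shared state here ... */
    return NULL;
}

int main(void)
{
    pthread_t workers[NUM_WORKERS];

    for (int i = 0; i < NUM_WORKERS; i++) {
        pthread_create(&workers[i], NULL, worker_main, NULL);

        /* Pin worker i to core i+1, leaving core 0 for the control plane. */
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(i + 1, &set);
        int rc = pthread_setaffinity_np(workers[i], sizeof(set), &set);
        if (rc != 0)
            fprintf(stderr, "pthread_setaffinity_np: %s\n", strerror(rc));
    }

    for (int i = 0; i < NUM_WORKERS; i++)
        pthread_join(workers[i], NULL);
    return 0;
}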

Indeed, threads of a process share the same address space, and global variables are accessible by any thread of that process.
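
For instance, a small mutex-protected global is enough for the "small amount of info" the question mentions; the struct and field names below are invented for illustration:

#include <pthread.h>

struct shared_stats {
    unsigned long packets_parsed;
    unsigned long bytes_parsed;
};

static struct shared_stats g_stats;   /* visible to every thread in the process */
static pthread_mutex_t g_stats_lock = PTHREAD_MUTEX_INITIALIZER;

static void record_packet(unsigned long nbytes)
{
    pthread_mutex_lock(&g_stats_lock);
    g_stats.packets_parsed += 1;
    g_stats.bytes_parsed   += nbytes;
    pthread_mutex_unlock(&g_stats_lock);
}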

Benny

Completing Benoit's answer: in order to communicate between your master and your worker threads, you could use a condition variable. The workers do something like this:

for (;;)
{
    pthread_mutex_lock(&workQueueMutex);
    while (workQueue.empty())
        pthread_cond_wait(&workQueueCond, &workQueueMutex);
    /* if we get here then (a) we have work and (b) we hold workQueueMutex */
    work = pop(workQueue);
    pthread_mutex_unlock(&workQueueMutex);
    /* do work */
}

and the master:

/* I/O received */
pthread_mutex_lock(&workQueueMutex);
push(workQueue, work);
pthread_cond_signal(&workQueueCond);
pthread_mutex_unlock(&workQueueMutex);

This would wake up one idle worker to process the request immediately. If no worker is idle, the work item stays in the queue and will be dequeued and processed later.
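
For reference, here is one way the two snippets above could fit together into a self-contained program; the ring buffer, the work_item type and the constants are assumptions made for this sketch, not part of the original answer:

#include <pthread.h>
#include <stdio.h>

#define QUEUE_CAP   64
#define NUM_WORKERS 4

typedef struct { int payload; } work_item;

static work_item       queue[QUEUE_CAP];
static size_t          q_head, q_tail, q_len;
static pthread_mutex_t workQueueMutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  workQueueCond  = PTHREAD_COND_INITIALIZER;

static void push(work_item item)
{
    pthread_mutex_lock(&workQueueMutex);
    if (q_len < QUEUE_CAP) {                 /* drop on overflow in this sketch */
        queue[q_tail] = item;
        q_tail = (q_tail + 1) % QUEUE_CAP;
        q_len++;
        pthread_cond_signal(&workQueueCond); /* wake one idle worker */
    }
    pthread_mutex_unlock(&workQueueMutex);
}

static void *worker(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&workQueueMutex);
        while (q_len == 0)
            pthread_cond_wait(&workQueueCond, &workQueueMutex);
        work_item item = queue[q_head];
        q_head = (q_head + 1) % QUEUE_CAP;
        q_len--;
        pthread_mutex_unlock(&workQueueMutex);

        /* do the actual work outside the lock */
        printf("processed %d\n", item.payload);
    }
    return NULL;
}

int main(void)
{
    pthread_t workers[NUM_WORKERS];
    for (int i = 0; i < NUM_WORKERS; i++)
        pthread_create(&workers[i], NULL, worker, NULL);

    /* The master would normally push an item whenever I/O is available. */
    for (int i = 0; i < 10; i++)
        push((work_item){ .payload = i });

    pthread_join(workers[0], NULL);          /* workers never exit in this sketch */
    return 0;
}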

user1202136

It looks to me like you have a version of the producer-consumer problem with a single consumer aggregating the results of n producers. This is a pretty standard problem, so I definitely think that pthreads are more than enough for you. You don't need to go and mess around with the scheduler.

As one of the answers states, a thread-safe queue like the one described here works nicely for this sort of issue. Your original idea of spawning a bunch of threads is a good one. You seem to be worried that the threads' ability to share global state will cause you problems. I don't think this is an issue as long as you keep shared state to a minimum and use a sane locking discipline. Sharing state is fine as long as you do so responsibly.

Finally, unless you really know what you're doing, I would advise against manually messing with thread affinity. Just spawn the threads and let the scheduler handle when and on which core a thread runs. The thing to optimize is the number of threads you use: one per core may not actually be the fastest approach if other threads are running.
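
If it helps, one common way to size the pool is to ask the OS how many cores are online and derive the thread count from that; worker_main is an assumed entry point and the "reserve one core's worth of work for the I/O thread" policy is just one possible choice, not something the answer prescribes:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

extern void *worker_main(void *arg);   /* your worker entry point (assumed) */

int start_workers(pthread_t **out_tids, long *out_count)
{
    long ncores = sysconf(_SC_NPROCESSORS_ONLN);
    if (ncores < 1)
        ncores = 1;

    /* One possible policy: leave one core's worth of work for the I/O thread. */
    long nworkers = ncores > 1 ? ncores - 1 : 1;

    pthread_t *tids = malloc(sizeof(*tids) * (size_t)nworkers);
    if (!tids)
        return -1;

    for (long i = 0; i < nworkers; i++) {
        int rc = pthread_create(&tids[i], NULL, worker_main, NULL);
        if (rc != 0) {
            fprintf(stderr, "pthread_create: %s\n", strerror(rc));
            free(tids);
            return -1;
        }
    }

    *out_tids = tids;
    *out_count = nworkers;
    return 0;
}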

Abhay Buch

Generally speaking, this is more or less exactly what the POSIX select and the Linux-specific epoll functions are for.
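
A rough sketch of what the master's I/O loop might look like with epoll; listen_fd is assumed to be an already-created socket, and the hand-off of ready descriptors to the execution threads is left out:

#include <stdio.h>
#include <sys/epoll.h>
#include <unistd.h>

#define MAX_EVENTS 64

void io_loop(int listen_fd)
{
    int epfd = epoll_create1(0);
    if (epfd < 0) { perror("epoll_create1"); return; }

    /* Register the socket for readability notifications. */
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = listen_fd };
    if (epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev) < 0) {
        perror("epoll_ctl");
        close(epfd);
        return;
    }

    struct epoll_event events[MAX_EVENTS];
    for (;;) {
        int n = epoll_wait(epfd, events, MAX_EVENTS, -1);
        if (n < 0) { perror("epoll_wait"); break; }
        for (int i = 0; i < n; i++) {
            int fd = events[i].data.fd;
            /* Here the master would read the data (or just pass the fd) and
             * push a work item onto the queue for an execution thread to parse. */
            (void)fd;
        }
    }
    close(epfd);
}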

richo