
Almost every resource that I have looked up has talked about how to enforce mutual exclusion, or deal with the producer/consumer problem.

The problem is that I need to get certain threads to execute before other threads, but can't figure out how. I am trying to use semaphores, but don't really see how they can help in my case.

I have

  • a read thread,
  • N number of search threads, and
  • a write thread.

The read thread fills a buffer with data; the search threads then parse the data and output it to a different buffer, which the write thread writes to a file.

Any idea as to how I would accomplish this?

I can post the code I have so far if anyone thinks that would help.

Danzo
  • This is exactly what semaphores are generally used for; are you sure you have a good understanding of how they work and what they do? – Tim Mar 07 '14 at 20:41
  • Why do you split your code into separate threads if you have a linear program logic and do not want your threads to run simultaneously? – Tim Mar 07 '14 at 20:44
  • My idea was to initialize the semaphore to negative (number of search threads), then have each search thread signal when it was finished to increment the semaphore. The write thread would be waiting on the semaphore to be zero, so it would wake up when all the search threads finish. But apparently you can't initialize a semaphore to a negative value. – Danzo Mar 07 '14 at 20:45
  • And the search threads do run simultaneously; I just need them all to be finished before the write thread reads from the buffer. – Danzo Mar 07 '14 at 20:46
  • That's true. The semaphore's count is stored in an `unsigned int`, which cannot be lower than 0. – Tim Mar 07 '14 at 20:53

4 Answers


I think what you're looking for is a monitor.
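
In POSIX threads, a monitor boils down to a mutex paired with a condition variable. A minimal sketch of how that could gate the write thread, where the names `done_count`, `NUM_SEARCH_THREADS`, and the two helper functions are invented for illustration:

```c
#include <pthread.h>

#define NUM_SEARCH_THREADS 4          /* assumed value for illustration */

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  all_done = PTHREAD_COND_INITIALIZER;
static int done_count = 0;            /* how many search threads have finished */

/* Each search thread calls this when its work on the buffer is complete. */
void search_finished(void)
{
    pthread_mutex_lock(&lock);
    if (++done_count == NUM_SEARCH_THREADS)
        pthread_cond_signal(&all_done);
    pthread_mutex_unlock(&lock);
}

/* The write thread blocks here until every search thread has finished. */
void wait_for_searches(void)
{
    pthread_mutex_lock(&lock);
    while (done_count < NUM_SEARCH_THREADS)   /* loop guards against spurious wakeups */
        pthread_cond_wait(&all_done, &lock);
    pthread_mutex_unlock(&lock);
}
```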

  • Some additional information available [here](http://stackoverflow.com/questions/7335950/semaphore-vs-monitors-whats-the-difference) – Rich Hendricks Mar 07 '14 at 20:49
  • Might be helpful to post some of that information here, while the links are still up – Tim Mar 07 '14 at 20:56
  • Hmm, I don't really see a need to C&P wikipedia or an internal SO link. Back to the question, you could modify the monitor behavior to allow multiple consumers access to the data, but block all of them when the producer needs access. – Rich Hendricks Mar 07 '14 at 21:02

I would use a few condition variables.

You have read buffers. Probably two. If the search threads are slow, you want the read thread to wait rather than use all the memory on buffers.

So: A read_ready condition variable and a read_ready_mutex. Set to true when there is an open buffer.

Next: A search_ready condition variable and a search_ready_mutex. Set to true when there is a complete read buffer.

Next: A write_ready condition variable and a write_ready_mutex. Set to true when there is work to do for the write thread.

Instead of true/false, you could use integer counts of how many buffers are ready. As long as you verify the condition while the mutex is held and only modify the condition while the mutex is held, it will work.
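
A rough sketch of one of those pairs, assuming a counter of filled read buffers as the condition; the names are invented, and the search→write hand-off would follow the same pattern:

```c
#include <pthread.h>

static pthread_mutex_t search_ready_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  search_ready       = PTHREAD_COND_INITIALIZER;
static int full_read_buffers = 0;      /* buffers the read thread has filled */

/* Read thread: announce that another buffer is ready to be searched. */
void publish_read_buffer(void)
{
    pthread_mutex_lock(&search_ready_mutex);
    full_read_buffers++;
    pthread_cond_signal(&search_ready);
    pthread_mutex_unlock(&search_ready_mutex);
}

/* Search thread: wait until at least one filled buffer exists, then claim it. */
void claim_read_buffer(void)
{
    pthread_mutex_lock(&search_ready_mutex);
    while (full_read_buffers == 0)     /* condition is only trusted while the mutex is held */
        pthread_cond_wait(&search_ready, &search_ready_mutex);
    full_read_buffers--;
    pthread_mutex_unlock(&search_ready_mutex);
}
```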

Zan Lynx

[Too long for a comment]

Cutting this down to two assumptions:

  • Searching cannot be done before reading has finished.
  • Writing cannot be done before searching has finished.

I conclude:

  • Do not use threads for reading and writing; do those from the main thread.
  • Just do the searching in parallel using threads (see the sketch below).
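
A minimal sketch of that structure, assuming invented names such as `search_worker` and `NUM_SEARCH_THREADS`; joining the threads is what guarantees all searching has finished before the write happens:

```c
#include <pthread.h>
#include <stdio.h>

#define NUM_SEARCH_THREADS 4                  /* assumed value for illustration */

static void *search_worker(void *arg)
{
    long id = (long)arg;
    /* ... parse the shared read buffer, fill this thread's part of the output buffer ... */
    printf("search thread %ld done\n", id);
    return NULL;
}

int main(void)
{
    pthread_t tid[NUM_SEARCH_THREADS];

    /* 1. Read: fill the input buffer from the main thread. */

    /* 2. Search: run the parsers in parallel. */
    for (long i = 0; i < NUM_SEARCH_THREADS; i++)
        pthread_create(&tid[i], NULL, search_worker, (void *)i);

    /* 3. Join: every search thread is finished once these calls return. */
    for (int i = 0; i < NUM_SEARCH_THREADS; i++)
        pthread_join(tid[i], NULL);

    /* 4. Write: the output buffer is complete, write it to the file. */

    return 0;
}
```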
alk

Generally speaking, threads are used precisely when we don't care about the order of execution. If you want to execute some statements S1, S2, ... , SN in that order, then you catenate them into a program that is run by a single thread: { S1; S2; ...; SN }.

The specific problem you describe can be solved with a synchronization primitive known as a barrier (implemented as the POSIX function pthread_barrier_wait).

A barrier is initialized with a number, the barrier count N. Threads that call the barrier wait operation are suspended until N threads accumulate; then they are all released. One of the threads receives a return value which tells it that it is the "serial thread".

So for instance, suppose we have N threads doing this read, process-in-parallel, and write sequence. It goes like this (pseudocode):

i_am_serial = barrier.wait();  # at first, everyone waits at the barrier

if (i_am_serial)               # serial thread does the reading, preparing the data
  do_read_task(); 

barrier.wait();                # everyone rendezvous at the barrier again

do_parallel_processing();      # everyone performs the processing on the data

i_am_serial = barrier.wait();  # rendezvous after parallel processing

if (i_am_serial)
  do_write_report_task();      # serialized integration and reporting of results
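
A compilable POSIX version of that pseudocode could look roughly like this; `NTHREADS` and `worker` are invented names, and the task functions are left as comments. `pthread_barrier_wait` returns `PTHREAD_BARRIER_SERIAL_THREAD` in exactly one of the released threads and 0 in the others:

```c
#include <pthread.h>

#define NTHREADS 4                        /* assumed thread count */

static pthread_barrier_t barrier;

static void *worker(void *arg)
{
    (void)arg;

    /* First rendezvous: one thread is designated serial and does the reading. */
    if (pthread_barrier_wait(&barrier) == PTHREAD_BARRIER_SERIAL_THREAD) {
        /* do_read_task(); */
    }

    /* Second rendezvous: nobody proceeds until the read has completed. */
    pthread_barrier_wait(&barrier);

    /* do_parallel_processing();  every thread works on its share of the data */

    /* Third rendezvous: the serial thread writes once everyone has finished. */
    if (pthread_barrier_wait(&barrier) == PTHREAD_BARRIER_SERIAL_THREAD) {
        /* do_write_report_task(); */
    }

    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];

    pthread_barrier_init(&barrier, NULL, NTHREADS);
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&tid[i], NULL, worker, NULL);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(tid[i], NULL);
    pthread_barrier_destroy(&barrier);

    return 0;
}
```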
Kaz