
Consider the following pseudo-code in C++:

#include <cstddef>
#include <cstdint>

#include <boost/thread/mutex.hpp>

// somewhere in common code, properly scoped
boost::mutex data_ready_lock;
bool data_ready = false;

// Thread 1:
void SomeThreadFunc() {
  // ... push data onto a shared data structure that is properly locked
  data_ready_lock.lock();
  data_ready = true;
  data_ready_lock.unlock();
}

// Thread 2:  (actually a function called from the main() thread)
// Returns the number of bytes written to output_data
size_t RequestData(uint8_t* const output_data) {
  data_ready_lock.lock();
  if (data_ready) {
    // reset the flag, so I don't read out the same data twice
    data_ready = false;
    data_ready_lock.unlock();
    // copy over data, etc.
    return kDataSize;
  } else {
    data_ready_lock.unlock();
    return 0;
  }
}

Is there a better way to accomplish this? I was thinking about condition variables, but I need the ability to reset the flag to ensure that back-to-back calls to RequestData() don't yield the same data.
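
For reference, here is roughly what I had in mind with the condition variable (just a sketch; it blocks instead of returning 0 when nothing is ready, and kDataSize is the same placeholder as above):

#include <cstddef>
#include <cstdint>

#include <boost/thread/condition_variable.hpp>
#include <boost/thread/locks.hpp>
#include <boost/thread/mutex.hpp>

boost::mutex data_ready_lock;
boost::condition_variable data_ready_cond;
bool data_ready = false;

// Thread 1:
void SomeThreadFunc() {
  // ... push data onto a shared data structure that is properly locked
  {
    boost::lock_guard<boost::mutex> guard(data_ready_lock);
    data_ready = true;
  }
  data_ready_cond.notify_one();
}

// Thread 2: blocks until data is available
size_t RequestData(uint8_t* const output_data) {
  boost::unique_lock<boost::mutex> lock(data_ready_lock);
  while (!data_ready) {
    data_ready_cond.wait(lock);
  }
  data_ready = false;  // reset so back-to-back calls don't return the same data
  lock.unlock();
  // copy over data, etc.
  return kDataSize;
}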

As always, thanks in advance for the help.

It'sPete
  • Since you are using boost why don't you use a [lock free queue](http://www.boost.org/doc/libs/1_53_0/doc/html/boost/lockfree/queue.html)? – NathanOliver Oct 05 '15 at 14:57
  • The data isn't strictly a FIFO, and I need the ability to index into the queue. For example, the caller has two ways of calling RequestData: one gives you the whole data set, and the other gives you only the most recent piece of data pushed (i.e. a LIFO). – It'sPete Oct 05 '15 at 14:59
  • What do you mean by better? You want event-driven call schedule for RequestData? – SergeyA Oct 05 '15 at 15:12
  • Just wondering if there's any simpler and/or more efficient way to accomplish my goal of having calls to RequestData not read out duplicated data or return garbage before anything is pushed. – It'sPete Oct 05 '15 at 15:14
  • Why don't you simply remove the data when you read it, and have the writer signal through a condition variable when new stuff comes in? – SergeyA Oct 05 '15 at 15:16
  • So there are two ways the caller can access the data (and maybe I didn't explain this right). First, the caller can request ALL the data. Basically I copy over the entire FIFO into a 2D array that the caller can then use. The other way is that the caller is only interested in the most recent element. As a result, the data structure is acting like a LIFO in this case and I return the last element. However, I still need that last element in the structure for any subsequent calls to get the entire data structure (i.e., return the last N sets of data encountered). – It'sPete Oct 05 '15 at 15:23
  • Did I understand this right, that the second form can only return the one last datum, i.e. you cannot really keep popping with it? If I am correct, I believe you might split up the structures and have the queue and a single data source (see the sketch after this comment thread). – SergeyA Oct 05 '15 at 15:29
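
A minimal sketch of what that split might look like (Sample, kMaxHistory, and the function names below are made up for illustration):

#include <cstddef>
#include <cstdint>
#include <deque>
#include <vector>

#include <boost/thread/locks.hpp>
#include <boost/thread/mutex.hpp>

struct Sample { uint8_t bytes[64]; };   // stand-in for one pushed data set

boost::mutex history_lock;
std::deque<Sample> history;             // last N data sets, oldest first
const size_t kMaxHistory = 128;         // arbitrary bound for the sketch

// Producer side: append and trim to the last N sets.
void PushSample(const Sample& s) {
  boost::lock_guard<boost::mutex> guard(history_lock);
  history.push_back(s);
  if (history.size() > kMaxHistory) {
    history.pop_front();
  }
}

// "Give me everything" access.
std::vector<Sample> GetAll() {
  boost::lock_guard<boost::mutex> guard(history_lock);
  return std::vector<Sample>(history.begin(), history.end());
}

// "Give me only the most recent" access; the element stays in the history.
bool GetLatest(Sample* out) {
  boost::lock_guard<boost::mutex> guard(history_lock);
  if (history.empty()) return false;
  *out = history.back();
  return true;
}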

2 Answers


I don't know what your end goal is, but maybe using an actual thread-safe queue would simplify your code. Here is one:

http://www.boost.org/doc/libs/1_53_0/doc/html/boost/lockfree/queue.html
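
A minimal usage sketch, assuming int elements and a fixed capacity of 128 (both chosen just for illustration):

#include <boost/lockfree/queue.hpp>

boost::lockfree::queue<int> queue(128);  // fixed-size, lock-free

// Producer thread:
void Produce(int value) {
  while (!queue.push(value)) {
    // push() returns false if the queue is full; retry (or drop/handle as needed)
  }
}

// Consumer thread: returns false when nothing is available
bool Consume(int* out) {
  return queue.pop(*out);
}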

David Grayson
  • No go for what I'm trying to do. This question should provide more context: http://stackoverflow.com/questions/32745282/choosing-an-appropriate-fifo-data-structure – It'sPete Oct 05 '15 at 15:11
  • The question you linked to only describes a procedure and data structure you want to use. It doesn't say what your end goal is, or what problem would be solved by your efforts. See the [XY Problem](http://meta.stackexchange.com/questions/66377/what-is-the-xy-problem). – David Grayson Oct 05 '15 at 15:43

If the flag is your only concern, then you might try using an atomic.

#include <cstddef>
#include <cstdint>

#include <boost/atomic.hpp>

// somewhere in common code, properly scoped
boost::atomic<bool> data_ready(false); // can be std::atomic and std::memory_order_* below

// Thread 1:
void SomeThreadFunc() {
  // ... push data onto a shared data structure that is properly locked
  data_ready.store(true, boost::memory_order_release);
}

// Thread 2:  (actually a function called from the main() thread)
// Returns the number of bytes written to output_data
size_t RequestData(uint8_t* const output_data) {
  if (data_ready.exchange(false, boost::memory_order_acquire)) {
    // copy over data, etc.
    return kDataSize;
  } else {
    return 0;
  }
}

However, in real code you will have a race between the 'push data' and 'copy over data' pieces of code unless they are synchronized separately.
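
For example, one way to close that gap is to keep the shared buffer under its own mutex and use the atomic purely as the "new data" signal; a rough sketch, with the buffer names invented here:

#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

#include <boost/atomic.hpp>
#include <boost/thread/locks.hpp>
#include <boost/thread/mutex.hpp>

boost::mutex buffer_lock;
std::vector<uint8_t> shared_buffer;      // the shared data structure
boost::atomic<bool> data_ready(false);

// Thread 1:
void SomeThreadFunc() {
  {
    boost::lock_guard<boost::mutex> guard(buffer_lock);
    // ... push data into shared_buffer ...
  }
  data_ready.store(true, boost::memory_order_release);
}

// Thread 2:
size_t RequestData(uint8_t* const output_data) {
  if (!data_ready.exchange(false, boost::memory_order_acquire)) {
    return 0;
  }
  boost::lock_guard<boost::mutex> guard(buffer_lock);
  std::copy(shared_buffer.begin(), shared_buffer.end(), output_data);
  return shared_buffer.size();
}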

Andrey Semashev