
I am using an std::queue to buffer messages on my network (a CAN bus, in this case). During an interrupt I add the message to the "inbox". Then my main program checks every cycle whether the queue is empty and, if not, handles the messages. The problem is that the queue gets popped until empty (it exits from while (!inbox.empty())), and the next time I push data to it, it works as normal BUT the old data is still hanging out at the back.

For example, the first message pushes a "1" to the queue. The loop reads

  • 1

The next message is "2". The next read is

  • 2
  • 1

If I get in TWO messages before another read, say "3" and "4", then the next read would be

  • 3
  • 4
  • 2
  • 1

I am very confused. I am working with an STM32F0 ARM chip and the mbed online compiler, and I have no idea whether this is misbehaving on the hardware or what!

I was concerned about thread safety, so I added an extra buffer queue and only push to the inbox when it is "unlocked". And since running with this, I have not seen any conflict occur anyway!

Pusher code:

if (bInboxUnlocked) {
    while (! inboxBuffer.empty()) {
        inbox.push (inboxBuffer.front());
        inboxBuffer.pop();
    }
    inbox.push(msg);
} else {
    inboxBuffer.push(msg);
    printf("LOCKED!");
}

Main program read code:

bInboxUnlocked = 0;
while (! inbox.empty()) {
    printf("%d\r\n", inbox.front().data);
    inbox.pop();
}
bInboxUnlocked = 1;

Thoughts, anyone? Am I using this wrong? Are there other easy ways to accomplish what I am doing? I expect the buffers to be small enough to implement as a small circular array, but with std::queue on hand I was hoping not to have to do that.

ptpaterson

1 Answer


Based on what I can figure out from a basic Google search, your CPU is essentially a single-core CPU. If so, there should not be any memory-fencing issues to deal with here.

If, on the other hand, you had multiple CPU cores to deal with, it would be necessary either to insert explicit fences in key places, or to employ C++11 classes like std::mutex, which take care of this for you.
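For illustration only, here is a minimal sketch of that multi-core approach, assuming an ordinary threaded environment rather than a bare-metal interrupt handler (you cannot block on a mutex inside an ISR); post and fetch are placeholder names:

#include <mutex>
#include <queue>

std::queue<int> inbox;
std::mutex inboxMutex;

// Producer side: the lock provides both mutual exclusion and the
// memory fences needed between cores.
void post(int msg) {
    std::lock_guard<std::mutex> lock(inboxMutex);
    inbox.push(msg);
}

// Consumer side: returns false when there is nothing to read.
bool fetch(int &msg) {
    std::lock_guard<std::mutex> lock(inboxMutex);
    if (inbox.empty())
        return false;
    msg = inbox.front();
    inbox.pop();
    return true;
}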

But going with the original use case of a single CPU, and no memory fencing issues, if you can guarantee that:

A) There's some definite upper limit on the number of messages your interrupt-handling code will buffer in the queue before it gets drained, and:

B) the messages you're buffering are PODs

Then a potential alternative to std::queue worth exploring here is to roll your own simple queue, using nothing more than a static std::array (or maybe a std::vector), an int head index, and an int tail index. A Google search should find plenty of examples of implementing this simple algorithm, and one is sketched below:

The puller checks whether head != tail; if so, it reads the message in queue[head] and increments head. Incrementing means head = (head + 1) % queuesize. The pusher checks whether incrementing tail (also modulo queuesize) would result in head; if so, the queue has filled up (something that shouldn't happen, according to the prerequisites of this approach). If not, it puts the message into queue[tail] and increments tail.
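A minimal sketch of that, assuming a hypothetical POD message type CanMsg and a capacity of 16 slots (both placeholders; one slot always stays empty to distinguish full from empty, so size it to your worst-case burst plus one):

#include <array>

struct CanMsg { int data; };            // placeholder POD message type

const int queuesize = 16;               // holds at most queuesize - 1 messages

std::array<CanMsg, queuesize> queue;    // static storage, no heap allocation
volatile int head = 0;                  // index of the oldest unread message
volatile int tail = 0;                  // index of the next free slot

// Pusher, called from the interrupt handler.
bool push(const CanMsg &msg) {
    int nextTail = (tail + 1) % queuesize;
    if (nextTail == head)
        return false;                   // queue full; shouldn't happen, per (A)
    queue[tail] = msg;                  // store the message first...
    tail = nextTail;                    // ...then publish it by moving tail
    return true;
}

// Puller, called from the main loop.
bool pop(CanMsg &msg) {
    if (head == tail)
        return false;                   // queue empty
    msg = queue[head];                  // read the message first...
    head = (head + 1) % queuesize;      // ...then release the slot
    return true;
}

The main loop then becomes: CanMsg m; while (pop(m)) printf("%d\r\n", m.data);. The head and tail indices are volatile so neither side caches them across the interrupt boundary; on a single core that ordering is the whole trick (the strictly paranoid version also adds a compiler barrier between writing the slot and advancing the index).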

If all of these operations are done in the right order, the net effect would be the same as using std::queue but:

1) Without the overhead of std::queue and the heap allocation it uses. This should be a major win on an embedded platform.

2) Since the queue is in contiguous memory, it should take advantage of the CPU caching that traditional CPUs typically provide.

Sam Varshavchik
• I am not convinced that I'm not missing something with the hardware; it might be holding on to old data somehow and calling the interrupt multiple times, thus loading my queue up with old data. It's hard to debug on mbed... In any case, I believe I will use this method to track my messages! – ptpaterson Jan 13 '16 at 04:04