I have below an SPSC queue for my logger.
It is certainly not a general-purpose lock-free SPSC queue.
However, given a number of assumptions about how it will be used, the target architecture, and a number of acceptable trade-offs, all of which I go into below, my question is basically: is it safe, and does it work?
- It will only be used on x86_64 architecture, so writes to `uint16_t` will be atomic.
- Only the producer updates the `tail`.
- Only the consumer updates the `head`.
- If the producer reads an old value of `head`, it will look like there is less space in the queue than there really is, which is an acceptable limitation in the context in which it is used.
- If the consumer reads an old value of `tail`, it will look like there is less data waiting in the queue than there really is, again an acceptable limitation.
The limitations above are acceptable because:
- The consumer may not get the latest `tail` immediately, but eventually the latest `tail` will arrive, and queued data will be logged.
- The producer may not get the latest `head` immediately, so the queue will look more full than it really is. In our load testing, given how much we log versus the size of the queue and the speed at which the logger drains it, this limitation has had no effect: there is always space in the queue. (A small snippet just after this list illustrates why a stale `head` can only make the queue look more full than it really is.)
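To make that concrete, here is a toy snippet (my own numbers, not part of the logger) showing that with the same full-check the queue uses, a stale `head` can only make the producer stop early, never overwrite unread data:

#include <cassert>
#include <cstdint>

int main()
{
    // Purely illustrative values: the consumer has really advanced head to 10,
    // but the producer still sees a stale value of 8.
    uint16_t tail       = 7;
    uint16_t real_head  = 10;
    uint16_t stale_head = 8;

    // Same full-check as the queue below: uint16_t(tail + 1) == head.
    assert(uint16_t(tail + 1) != real_head);   // really not full
    assert(uint16_t(tail + 1) == stale_head);  // looks full to the producer

    // The stale head makes the producer stop writing two slots early;
    // it can never cause an unread slot to be overwritten.
}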
A final point: the use of `volatile` is necessary to prevent the variable that each thread only reads from being optimised out.
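In case it changes the answers: if `volatile` turns out not to be enough, my understanding is that the indices would instead be expressed with `std::atomic`, roughly like the untested sketch below (names are my own):

#include <atomic>
#include <cstdint>

// Sketch only: the same two indices expressed with std::atomic, so the
// cross-thread visibility and ordering are explicit instead of relying on
// volatile plus x86_64 store atomicity.
struct Indices
{
    std::atomic<uint16_t> tail_{0}; // written by the producer, read by the consumer
    std::atomic<uint16_t> head_{0}; // written by the consumer, read by the producer

    // Producer side: publish the new tail after the LogLine slot has been filled.
    void publish_tail(uint16_t new_tail)
    {
        tail_.store(new_tail, std::memory_order_release);
    }

    // Consumer side: this acquire load pairs with the release store above, so
    // the LogLine written before publish_tail() is guaranteed to be visible.
    uint16_t load_tail() const
    {
        return tail_.load(std::memory_order_acquire);
    }

    // The head index would get the symmetric treatment in the other direction.
};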
My questions:
- Is this logic correct?
- Is the queue thread safe?
- Is `volatile` sufficient?
- Is `volatile` necessary?
My queue:
class LogBuffer
{
public:
bool is_empty() const { return head_ == tail_; }
bool is_full() const { return uint16_t(tail_ + 1) == head_; }
LogLine& head() { return log_buffer_[head_]; }
LogLine& tail() { return log_buffer_[tail_]; }
void advance_head() { ++head_; }
void advance_tail() { ++tail_; }
private:
volatile uint16_t tail_ = 0; // write position
LogLine log_buffer_[0xffff + 1]; // relies on the uint16_t overflowing
volatile uint16_t head_ = 0; // read position
};
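For context, this is a simplified sketch of how the two threads are meant to drive it (illustrative only: it assumes LogLine is copy-assignable, and write_to_log is a hypothetical stand-in for the real sink):

// Simplified usage sketch; write_to_log is a made-up name for the actual log sink.
void write_to_log(const LogLine& line);

LogBuffer buffer;

// Producer thread: give up (or spin) when the queue looks full.
void produce(const LogLine& line)
{
    if (buffer.is_full())
        return;               // a stale head_ only makes this trigger early
    buffer.tail() = line;     // fill the slot first...
    buffer.advance_tail();    // ...then publish it by bumping tail_
}

// Consumer thread: drain whatever is currently visible.
void consume()
{
    while (!buffer.is_empty())
    {
        write_to_log(buffer.head());
        buffer.advance_head();    // free the slot
    }
}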