I have a thread running that continuously reads a stream of bytes from a serial port in the background; reads from the stream arrive at separate, unpredictable times. I store the data in a container like so:
using ByteVector = std::vector<std::uint8_t>;
ByteVector receive_queue;
When data comes in from the serial port, I append it to the end of the byte queue:
ByteVector read_bytes = serial_port->ReadBytes(100); // read 100 bytes; returns as a "ByteVector"
receive_queue.insert(receive_queue.end(), read_bytes.begin(), read_bytes.end());
When I am ready to read data in the receive queue, I remove it from the front:
unsigned read_bytes = 100;
// For example, copy the first 100 bytes out (indices or iterators both work):
ByteVector chunk(receive_queue.begin(), receive_queue.begin() + read_bytes);
// Then remove them from the front:
receive_queue.erase(receive_queue.begin(), receive_queue.begin() + read_bytes);
This isn't the full code, but it gives a good idea of how I'm using the vector for this data-streaming mechanism.
My main concern with this implementation is the removal from the front, which requires shifting the elements that remain (I'm not sure how optimized erase() is for vector, but in the worst case, erasing from the front shifts the entire rest of the vector). On the flip side, a vector's contiguous storage makes it a good candidate for CPU cache locality (though that benefit isn't guaranteed).
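To make that concern concrete, here is a rough, self-contained sketch (not my actual code; the queue size and chunk size are arbitrary) that drains a backed-up queue from the front the same way I do:

#include <chrono>
#include <cstdint>
#include <cstdio>
#include <vector>

using ByteVector = std::vector<std::uint8_t>;

int main()
{
    ByteVector queue(1'000'000, 0x55); // pretend the queue has backed up

    const auto start = std::chrono::steady_clock::now();

    // Consume 100 bytes at a time from the front; each erase() shifts every
    // remaining element toward the beginning to keep the storage contiguous.
    while (queue.size() >= 100)
        queue.erase(queue.begin(), queue.begin() + 100);

    const auto elapsed = std::chrono::steady_clock::now() - start;
    std::printf("drained in %lld ms\n",
                static_cast<long long>(
                    std::chrono::duration_cast<std::chrono::milliseconds>(elapsed).count()));
}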
I've thought of maybe using boost::circular_buffer, but I'm not sure if it's the right tool for the job.
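For reference, this is roughly how I picture it fitting in (just a sketch, assuming Boost is available; the capacity is a placeholder value):

#include <boost/circular_buffer.hpp>
#include <cstdint>
#include <vector>

using ByteVector = std::vector<std::uint8_t>;

boost::circular_buffer<std::uint8_t> receive_queue(4096); // fixed capacity (placeholder)

void append(const ByteVector& read_bytes)
{
    // Same range insert as with std::vector, but once the buffer is full,
    // inserting overwrites the oldest bytes instead of growing the storage.
    receive_queue.insert(receive_queue.end(), read_bytes.begin(), read_bytes.end());
}

void consume(std::size_t n)
{
    // Removing from the front doesn't shift the remaining data; pop_front()
    // essentially just advances the buffer's internal read position.
    for (std::size_t i = 0; i < n && !receive_queue.empty(); ++i)
        receive_queue.pop_front();
}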
I have not yet coded an upper limit for the growth of the receive queue, but I could easily call reserve(MAX_RECEIVE_BYTES) somewhere and make sure that size() never exceeds MAX_RECEIVE_BYTES as I continue to append to the back of it.
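Something along these lines is what I have in mind (a sketch only; MAX_RECEIVE_BYTES is an arbitrary value and dropping oversized chunks is just one possible overflow policy):

#include <cstdint>
#include <vector>

using ByteVector = std::vector<std::uint8_t>;

constexpr std::size_t MAX_RECEIVE_BYTES = 4096; // example limit

ByteVector receive_queue;

void init_receive_queue()
{
    receive_queue.reserve(MAX_RECEIVE_BYTES); // one-time allocation up front
}

bool append_to_queue(const ByteVector& read_bytes)
{
    // Refuse the chunk if it would push size() past the limit.
    if (receive_queue.size() + read_bytes.size() > MAX_RECEIVE_BYTES)
        return false;

    receive_queue.insert(receive_queue.end(), read_bytes.begin(), read_bytes.end());
    return true;
}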
Is this approach generally OK? If not, what performance concerns are there? What container would be more appropriate here?