
I have an application that I am currently developing to communicate with a device over a serial connection. For this I am using Boost's basic_serial_port. Right now I am just attempting to read from the device, using async_read_until coupled with an async_wait on a deadline_timer. The code that sets up the read and the timeout looks like this:

async_read_until(port,readData,io_params.delim,
                  boost::bind(&SerialComm::readCompleted,
                  this,boost::asio::placeholders::error,
                  boost::asio::placeholders::bytes_transferred));

timer.expires_from_now(boost::posix_time::seconds(1));
timer.async_wait(boost::bind(&SerialComm::timeoutExpired,this,
                 boost::asio::placeholders::error));
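
For reference, here is a rough sketch of the class context these snippets assume; the member names are inferred from the code above and the real class has more to it:

#include <boost/asio.hpp>
#include <cstddef>

class SerialComm {
    boost::asio::io_service io_service_;
    boost::asio::serial_port port{io_service_};
    boost::asio::deadline_timer timer{io_service_};
    boost::asio::streambuf readData;
    // ... plus io_params, wait_result, bytes_transferred, etc.

    void readCompleted(const boost::system::error_code& error, std::size_t bytesTransferred);
    void timeoutExpired(const boost::system::error_code& error);
};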

The completion handler for the async_read_until looks like this:

void SerialComm::readCompleted(const boost::system::error_code& error,
                               const size_t bytesTransferred){
    if (!error){
        wait_result = success;
        bytes_transferred = bytesTransferred;
    }
    else {
        // 125 is ECANCELED on Linux, i.e. boost::asio::error::operation_aborted
        if (error.value() != 125) wait_result = error_out;
        else wait_result = op_canceled;

        cout << "Port handler called with error code " << error.value() << endl;
    }

}

and the following code runs on a successful read:

string msg;
getline(istream(&readData), msg, '\r');
boost::trim_right_if(msg, boost::is_any_of("\r"));

In the case of this device, every message is terminated with a carriage return, so specifying the carriage return as the delimiter in async_read_until should retrieve a single message. However, what I am seeing is that, while the handler is triggered, new data is not necessarily placed in the buffer. For example, if the handler is triggered 20 times, I might see:

  • one line pumped into the buffer in the first call
  • none in the next 6 calls
  • 6 lines in the next call
  • no data in the next 10
  • 10 lines following ...

I am obviously not doing something correctly, but what is it?

cirrusio

2 Answers


async_read_until does not guarantee that it reads only up to the first delimiter.

Due to the underlying implementation, on most systems it will simply read whatever is available and return once the streambuf contains the delimiter. Any additional data remains in the streambuf. Moreover, EOF might be returned even if you didn't expect it yet.

For background, see Read until a string delimiter in boost::asio::streambuf
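
In practical terms, after the handler runs you should pull out only the bytes the handler reported and leave the rest in the streambuf for the next read. A minimal sketch of that pattern, assuming the readData streambuf from the question (handleLine is a hypothetical helper, not part of the asker's code):

#include <boost/asio.hpp>
#include <cstddef>
#include <string>

void handleLine(boost::asio::streambuf& readData, std::size_t bytes_transferred)
{
    using boost::asio::buffers_begin;

    // bytes_transferred ends at (and includes) the first '\r'.
    // readData.size() may be larger; the surplus stays buffered.
    std::string line(buffers_begin(readData.data()),
                     buffers_begin(readData.data()) + bytes_transferred);
    readData.consume(bytes_transferred);   // remove only what was extracted

    if (!line.empty() && line.back() == '\r')
        line.pop_back();                   // strip the delimiter

    // `line` now holds exactly one message. Anything after it is still in
    // readData, so the next async_read_until can complete immediately
    // without any new bytes arriving from the port.
}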

sehe
  • I am sorry - I don't understand this. The post you linked to suggests that if I have something that is returned that looks like ``cmd1\r``, I should expect ``async_read_until`` to place that into the buffer. This is not the behavior I am seeing. As described above, the handler fires (i.e. a delimiter is reached) but **no data is placed in the buffer** (not even a stray ``\r``). – cirrusio Apr 07 '17 at 14:48
  • The post I linked to suggests that if you have something that is returned that looks like `cmd1\rcmd2\rcmd3\rcmd4\rcmd5\r`, I could expect `async_read_until` to place `cmd1\rcm` in that buffer. Or `cmd1\rcmd2\rcmd3\rc`. Or indeed, the full data if it was already available when you did the read. – sehe Apr 07 '17 at 15:24
  • Depending on how exactly you manage the streambuf, this would explain that you seem to read "no new data" (it's already there). Until, of course, the whole buffer is consumed and you do an actual new read. If you make your post a SSCCE I will show you a fixed sample. – sehe Apr 07 '17 at 15:26
  • SSCCE? Maybe I am just misunderstanding. Here is the sequence of events: 1) I send a request for data, 2) I set up the ``async_read_until`` for reading, 3) I set my timer up for timeouts, 4) when the handler fires, we set a flag to read the buffer, 5) we attempt to read new data from the buffer. What appears to be happening (but maybe I just don't understand ``boost::asio::streambuf``) is that new data is shuffled to the front. But periodically I get garbage in the buffer and this is evident from a null byte at the start. Should I expect new data to load at the front of the buffer? – cirrusio Apr 07 '17 at 16:31
  • I should state that while the functions are asynchronous, the calls are serialized such that we do not send a request for data until either handler has fired (and the handler for ``async_wait`` cancels asynchronous port operations). – cirrusio Apr 07 '17 at 16:37
  • So, I am misunderstanding this. According to [the std::basic_streambuf reference](http://en.cppreference.com/w/cpp/io/basic_streambuf), the input buffer will contain three pointers that define the area. But the documentation does state that the "beginning pointer, always points at the lowest element...". Does this mean that it will point at the zero-th element in the buffer? – cirrusio Apr 07 '17 at 18:12
  • So, I implemented a solution similar to that described in the post for reading from the buffer and it changes nothing (that simply does what should be done using the ``getline`` function). I also verified that every time the handler is triggered, we attempt to read from the buffer. And every time the handler is triggered, it indicates 49 bytes (the size of the packet) were transferred. But when I attempt to transfer this data, it comes back with nothing. – cirrusio Apr 07 '17 at 18:32
  • Looks like I might be stepping on the port by not properly processing events. Hang tight. – cirrusio Apr 07 '17 at 18:42
  • Do you have google? SSCCE means [Simple Selfcontained Complete Correct Example](https://meta.stackexchange.com/questions/22754/sscce-how-to-provide-examples-for-programming-questions). You _certainly_ need one. And it will probably cause you to see the problem yourself. If not, I'll help. – sehe Apr 07 '17 at 23:38

So, I found the problem. The way this program is intended to work is that it should:

  1. Send a request for data
  2. Start an async_read_until to read data on the port.
  3. Start an async_wait on the deadline_timer so that we don't wait forever.
  4. Use io_service::run_one to wait for a timeout or a successful read.

The code for step four looked like this:

for (;;){
    // This blocks until an event on io_service_ is set.
    n_handlers = io_service_.run_one();

    switch (wait_result){
    // Braces in the success case limit the scope of the new variables.
    case success:{

        string delims = "\r";

        std::string msg{buffers_begin(readData.data()),
                        buffers_begin(readData.data()) + bytes_transferred - delims.size()};
        // Consume through the first delimiter.
        readData.consume(bytes_transferred);

        data_out = msg;
        cout << msg << endl;

        data_handler(msg);

        return data_out;
        }
    case timeout_expired:
        // Set up for the next wait and read.
        wait_result = in_progress;
        cout << "Time is up..." << endl;
        return data_out;
    case error_out:
        cout << "Error out..." << endl;
        return data_out;
    case op_canceled:
        return data_out;
    case in_progress:
        cout << "In progress..." << endl;
        break;
    }
}

Only two cases should trigger an exit from the loop: timeout_expired and success. But, as you can see, the loop also exits if an operation is cancelled (op_canceled) or if there is an error (error_out).

The problem is that when an async operation is cancelled (e.g. by deadline_timer::cancel()), it triggers an event that is picked up by io_service::run_one, which sets the state evaluated by the switch statement to op_canceled. Returning at that point can leave async operations stacking up in the event loop. The simple fix is to remove the return statement from every case except success and timeout_expired.
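
For reference, this is roughly what the loop looks like with that change applied (same names as the code above; only the success and timeout_expired cases return):

for (;;){
    // This blocks until a handler runs on io_service_.
    n_handlers = io_service_.run_one();

    switch (wait_result){
    case success:{
        string delims = "\r";

        std::string msg{buffers_begin(readData.data()),
                        buffers_begin(readData.data()) + bytes_transferred - delims.size()};
        // Consume through the first delimiter.
        readData.consume(bytes_transferred);

        data_out = msg;
        data_handler(msg);
        return data_out;
        }
    case timeout_expired:
        wait_result = in_progress;
        return data_out;
    case error_out:
    case op_canceled:
    case in_progress:
        // Keep waiting; returning here would leave handlers queued up.
        break;
    }
}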

cirrusio