
Okay, so I am trying to send a struct with Boost.Asio. The send on the client side works fine and the read_until also seems fine. However, when it tries to deserialize the data back into the struct, it fails whenever the archive is longer than about 475 characters. The rest of the struct gets ignored for some reason and only the data field gets printed. I also added screenshots of the output. Basically, when the whole struct is not received there is an input stream error on the line ba >> frame. I also tested with a larger file and got the same error. I even tried serializing a vector as well, so I'm not sure where my error is.

EDIT:

I figured out the issue. When I was reading from the socket I had something like this...

    boost::asio::read_until(socket, buf, "\0");

This was causing weird issues reading in all the data from the boost binary archive. To fix this issue I made a custom delimiter that I appended to the archive I was sending over the socket like...

    boost::asio::read_until(socket, buf, "StopReadingHere");

This fixed the weird issue of the entire boost archive string not being read into the streambuf.
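
Roughly what the send and receive ends look like with the marker appended (a simplified sketch, not my exact code; the frame/socket names and the text archive are placeholders):

    const std::string marker = "StopReadingHere";

    // sender: serialize the struct, append the marker, then write everything
    std::ostringstream oss;
    {
        boost::archive::text_oarchive oa(oss);
        oa << frame;
    } // the archive flushes when it goes out of scope
    std::string payload = oss.str() + marker;
    boost::asio::write(socket, boost::asio::buffer(payload));

    // receiver: read up to the marker, strip it, then deserialize
    boost::asio::streambuf buf;
    std::size_t n = boost::asio::read_until(socket, buf, marker);
    std::string s(boost::asio::buffers_begin(buf.data()),
                  boost::asio::buffers_begin(buf.data()) + n);
    buf.consume(n);
    s.erase(s.size() - marker.size()); // drop the trailing marker

    std::istringstream iss(s);
    boost::archive::text_iarchive ia(iss);
    ia >> frame;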

AjackX

1 Answer

  1. First Issue

    ostringstream oss;
    boost::archive::text_oarchive ba(oss);
    ba << frame;
    string archived_data = oss.str();
    

    Here you take the string without ensuring that the archive is complete (the archive only finishes writing its data when it is destructed). Fix:

    ostringstream oss;
    {
        boost::archive::text_oarchive ba(oss);
        ba << frame;
    }
    string archived_data = oss.str();
    
  2. Second issue:

    boost::asio::read_until(socket, buf, "\0");
    string s((istreambuf_iterator<char>(&buf)), istreambuf_iterator<char>());
    

    Here you potentially read too much into s: buf may contain additional data after the '\0'. Use the return value from read_until with e.g. std::copy_n, followed by buf.consume(n) (see the first sketch below this list).

    If you then keep the buf instance for subsequent reads, the previously read remaining data will still be in the buffer. If you discard it instead, that leftover data is lost, which will lead to problems deserializing the next message.

  3. Risky Code?

    void write(tcp::socket& socket, string data, int timeout) {
        auto time = std::chrono::seconds(timeout);
        async_write(socket, boost::asio::buffer(data), transfer_all(), [&] (error_code error, size_t bytes_transferred) {   
        });
        service.await_operation(time, socket);
    }
    

    You're using an async operation, but passing a local variable (data) as the buffer. The risk is that data becomes invalid as soon as write returns (one way to keep it alive is shown in the second sketch below this list).

    Are you making sure that async_write has always completed before exiting from write? (It is possible that await_operation achieves this for you.)

    Perhaps you are even using await_operation from my own old answer here: How to simulate boost::asio::write with a timeout. It's possible that, with things having been added since, some assumptions no longer hold. I can always review a larger piece of code to check.
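
A minimal sketch of the second point, i.e. extracting exactly one message from the streambuf using the byte count that read_until returns (this assumes a single '\0' terminator and the same socket/buf as in the question):

    std::size_t n = boost::asio::read_until(socket, buf, '\0');
    // n is the number of bytes up to and including the delimiter;
    // buf may already contain part of the next message beyond that

    std::string s;
    s.reserve(n);
    std::copy_n(boost::asio::buffers_begin(buf.data()), n, std::back_inserter(s));
    buf.consume(n); // remove only this message, keep any extra bytes for the next read
    s.pop_back();   // drop the trailing '\0' before deserializing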
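
For the third point, one common way to keep the buffer alive until the asynchronous write completes is to move it into a shared_ptr that the completion handler captures (a sketch of the write shown above; service.await_operation is assumed to be the helper from the linked answer):

    void write(tcp::socket& socket, string data, int timeout) {
        auto time = std::chrono::seconds(timeout);
        // move the payload into a shared_ptr so it outlives this function
        auto payload = std::make_shared<std::string>(std::move(data));
        async_write(socket, boost::asio::buffer(*payload), transfer_all(),
            [payload](error_code error, size_t bytes_transferred) {
                // capturing payload by value keeps the buffer valid until the write completes
            });
        service.await_operation(time, socket);
    }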

sehe
  • Yeah, I was trying to adapt that old answer you posted to try and make async_read time out if nothing had been received in a certain amount of time. That part is still in progress at the moment. I was trying to send larger files with a bigger packet size when I came across this issue. – AjackX Apr 07 '21 at 04:10
  • just to clarify, the reason I believe the issue is mainly with the server-side is that when I print the archived data string everything is there. Then it seems to be missing when I print out the value of s. This only occurs at a higher range. I will add some additional code snippets. – AjackX Apr 07 '21 at 04:36
  • I'm sorry, the added code really doesn't clarify much, if anything. If you can turn it into a [MVCE](https://stackoverflow.com/help/minimal-reproducible-example)/[SSCCE](http://sscce.org) I'm happy to look more. – sehe Apr 07 '21 at 14:42
  • To the added _"is it possible I am maxing out the data that will send through the socket?"_ the answer is no: you're using composed read operations. It **could** be your buffer is too small, what is `packet_size`? – sehe Apr 07 '21 at 14:43
  • I think I may have fixed the issue. I added a special marker to the end of the archived_data file like ----end of packet---- and then made read_until read until that and everything got read in. – AjackX Apr 07 '21 at 19:05
  • There's something leaving me nervous there. Did you fail to include the `\0` when sending, or did the data sometimes contain a raw NUL byte? Anyhow, glad it works. Consider posting an answer that may help others. Comments are not indexed/guaranteed to exist long. – sehe Apr 08 '21 at 20:39
  • There is no route to that host. This will e.g. happen when you bind to the wrong NIC, or you use a non-routable endpoint address. Try it from a shell just using `ping hostname` or `tracert hostname`. It should tell you. – sehe Apr 09 '21 at 21:06
  • it was a blocked port lol. Anyways thanks for the help, this was my first time using boost for communication. Hopefully, you won't mind one more question. Is there any way to stop a sync read operation since it just sits and waits for data until something is received? I would like to stop this operation after a certain amount of time. I have the time part working but after the thread joins the read stays open. Any tips? – AjackX Apr 10 '21 at 18:31
  • execute `_socket.cancel()` on the strand. You can also be brute and `_socket.shutdown(shutdown_both)` or even `_socket.close()` but beware of race conditions when you do. – sehe Apr 10 '21 at 18:43