I'm developing a cross-platform tool that captures multiple UDP streams with varying bit-rates. boost::asio is used for networking. Is there any way to detect the situation where the UDP buffer was full and data loss on the socket could have taken place? The only way I can see now is reading /proc/%pid%/net/udp, but that's not applicable on Windows, as you know :). Also, I'd like to use Boost features for this if possible.
-
That wouldn't buy you much. Your local buffer is only one of the many places those UDP packets could have been dropped along the route. – Mat Nov 06 '11 at 14:44
-
Thanks, I know what UDP is. But the streams are huge (tens/hundreds of Mbps) and the processing is complicated, so it would be good to detect when there aren't enough resources to handle that amount of data. – nameless Nov 06 '11 at 14:57
2 Answers
If you need this capability, you have to code it into the protocol you are using. UDP is incapable of doing this by itself. For example, you could put a sequence number in each datagram. Missing datagrams would correspond to missing sequence numbers.
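For example, here is a minimal sketch of a gap detector, assuming (hypothetically) that each datagram carries a 32-bit big-endian sequence number in its first four bytes; the class and names are illustrative, not part of any existing protocol:

#include <arpa/inet.h>   // ntohl (use <winsock2.h> on Windows)
#include <cstdint>
#include <cstring>

// Tracks gaps in a stream of sequence-numbered datagrams.
// For brevity, reordered datagrams are not handled: a datagram
// arriving out of order will show up as a spurious gap.
class SequenceTracker {
public:
    // Call once per received datagram; returns how many datagrams
    // were missed immediately before this one.
    uint32_t on_datagram(const char* data, std::size_t len) {
        if (len < sizeof(uint32_t))
            return 0;                          // too short to carry a number
        uint32_t seq;
        std::memcpy(&seq, data, sizeof(seq));
        seq = ntohl(seq);                      // wire format is big-endian
        uint32_t lost = have_last_ ? seq - last_ - 1 : 0; // wraps safely at 2^32
        last_ = seq;
        have_last_ = true;
        return lost;
    }
private:
    uint32_t last_ = 0;
    bool have_last_ = false;
};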

-
Oh, it would be nice to use this solution, but I can't make changes to the protocol. – nameless Nov 06 '11 at 15:46
-
Then check to see if the protocol provides some other way. It's hard to imagine how a protocol would be useful if it provided no way to assess whether data was being dropped. – David Schwartz Nov 06 '11 at 15:48
I've just hit the same issue (although for me it's Linux-specific), and since the question is old, I might as well document my findings for others.
As far as I know, there is no portable way to do this, and nothing directly supported by Boost.
That said, there are some platform-specific ways of doing it. On Linux, it can be done by setting the SO_RXQ_OVFL socket option and then reading the replies with recvmsg(). It's poorly documented, though; http://lists.openwall.net/netdev/2009/10/09/75 may help.
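A minimal Linux-only sketch of that mechanism (with boost::asio you can get the raw descriptor from socket.native_handle()); the kernel attaches its cumulative drop counter as ancillary data:

#include <cstdint>
#include <cstring>
#include <sys/socket.h>

// Ask the kernel to report dropped datagrams (Linux-specific).
void enable_drop_counter(int fd) {
    int on = 1;
    setsockopt(fd, SOL_SOCKET, SO_RXQ_OVFL, &on, sizeof(on));
}

// Receive one datagram; if the kernel attached a drop counter,
// store its cumulative value in *dropped.
ssize_t recv_with_drop_count(int fd, void* buf, size_t len, uint32_t* dropped) {
    char control[CMSG_SPACE(sizeof(uint32_t))];
    iovec iov{buf, len};
    msghdr msg{};
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = control;
    msg.msg_controllen = sizeof(control);

    ssize_t n = recvmsg(fd, &msg, 0);
    if (n < 0)
        return n;
    for (cmsghdr* c = CMSG_FIRSTHDR(&msg); c; c = CMSG_NXTHDR(&msg, c))
        if (c->cmsg_level == SOL_SOCKET && c->cmsg_type == SO_RXQ_OVFL)
            std::memcpy(dropped, CMSG_DATA(c), sizeof(uint32_t));
    return n;
}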
One way to avoid the problem in the first place is to increase the receive buffers (I assume you've investigated this already, but I'm including it for completeness). The SO_RCVBUF option seems fairly well-supported cross-platform: http://pubs.opengroup.org/onlinepubs/7908799/xns/setsockopt.html http://msdn.microsoft.com/en-us/library/windows/hardware/ff570832(v=vs.85).aspx Operating systems put an upper limit on this, though, which an administrator might have to raise; on Linux, e.g., it can be increased via /proc/sys/net/core/rmem_max.
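With boost::asio that looks roughly like this (a sketch; older Boost versions spell io_context as io_service, and the port is arbitrary):

#include <boost/asio.hpp>
#include <iostream>

int main() {
    namespace asio = boost::asio;
    asio::io_context io;
    asio::ip::udp::socket sock(io,
        asio::ip::udp::endpoint(asio::ip::udp::v4(), 12345));

    // Request a large receive buffer; the OS may silently clamp it
    // to its configured maximum (rmem_max on Linux).
    sock.set_option(asio::socket_base::receive_buffer_size(4 * 1024 * 1024));

    // Read back what was actually granted, to detect clamping.
    asio::socket_base::receive_buffer_size granted;
    sock.get_option(granted);
    std::cout << "receive buffer: " << granted.value() << " bytes\n";
}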
Finally, one way for your application to assess its "load", which with large input buffers might serve as early detection of overloading, is to take a timestamp before and after the async operations. In pseudocode (not boost::asio-adapted):
work_time = 0
idle_time = 0
b = clock.now()
while running:
    a = clock.now()
    work_time += a - b
    data = wait_for_input()
    b = clock.now()
    idle_time += b - a
    process(data)
Then every second or so, you can check and reset work_time / (work_time + idle_time). If it approaches 1, you know you're heading for trouble and can send out an alert or take other actions.
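In C++ the same idea could look like this (a sketch; running(), wait_for_input(), process() and the Datagram alias are placeholders for your actual loop):

#include <chrono>
#include <vector>

using Datagram = std::vector<char>;  // stand-in for your buffer type

bool running();                      // placeholder: loop condition
Datagram wait_for_input();           // placeholder: blocking receive
void process(const Datagram&);       // placeholder: your processing

// Returns the fraction of wall time spent working rather than
// blocked waiting for input; values near 1.0 signal overload.
double run_and_measure() {
    using clk = std::chrono::steady_clock;
    clk::duration work{}, idle{};
    auto b = clk::now();
    while (running()) {
        auto a = clk::now();
        work += a - b;               // time spent in process() last iteration
        Datagram data = wait_for_input();
        b = clk::now();
        idle += b - a;               // time spent blocked on input
        process(data);
    }
    return std::chrono::duration<double>(work).count() /
           std::chrono::duration<double>(work + idle).count();
}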

-
Thanks for mentioning SO_RXQ_OVFL. Just to add, there's a helpful example of its use here: https://github.com/linux-can/can-utils/blob/master/candump.c – DavidA Nov 29 '16 at 11:30