
Looking at the select() function, it seems that it is used to examine multiple event sources.

I have a single socket bound to one port on my server.

Basically, I want to time out the recvfrom() call after 500 ms.

Is select() the best/only way to do this, or is it overkill?

thanks!

T.T.T.

2 Answers


select() is the right call for putting a timeout on a socket file descriptor. It is not overkill; it is the proper call, and it puts your program to sleep until data is available or the timeout expires, which means your program won't tie up the system.
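
A minimal sketch of that pattern with the question's 500 ms deadline, assuming sock is the already-bound UDP socket (POSIX-style; the helper name recvfrom_timeout is made up for illustration):

    #include <string.h>
    #include <sys/types.h>
    #include <sys/select.h>
    #include <sys/socket.h>

    /* Wait up to 500 ms for a datagram, then read it.
       Returns the recvfrom() result once data is ready,
       0 if the 500 ms elapse with no data, or -1 on error. */
    ssize_t recvfrom_timeout(int sock, void *buf, size_t len,
                             struct sockaddr *from, socklen_t *fromlen)
    {
        fd_set readfds;
        struct timeval tv;

        FD_ZERO(&readfds);
        FD_SET(sock, &readfds);

        tv.tv_sec = 0;
        tv.tv_usec = 500 * 1000;   /* 500 ms */

        int ready = select(sock + 1, &readfds, NULL, NULL, &tv);
        if (ready <= 0)
            return ready;          /* 0 = timed out, -1 = select() error */

        return recvfrom(sock, buf, len, 0, from, fromlen);
    }

On Windows the first argument to select() is ignored and recvfrom() returns int, but the structure is the same.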

Ahmed Masud
  • Why is it better than SO_RCVTIMEO? – user207421 Oct 22 '11 at 03:29
  • No better actually... SO_RCVTIMEO may be better in this case, I just wanted to point out that select is not overkill in terms of overhead and puts the process to sleep. In either case you would have to check for EAGAIN ... – Ahmed Masud Oct 22 '11 at 12:33

If you use the socket in blocking mode, then using select() to wait for data to arrive before calling recvfrom() is one approach (and the more common one). Another approach is to use setsockopt() to set the socket's SO_RCVTIMEO option, which sets a timeout for blocking read operations (see SO_SNDTIMEO for blocking send operations). You can then call recvfrom() and let it time out internally.
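
A rough sketch of the SO_RCVTIMEO approach (the helper name set_recv_timeout is made up; note that Windows expects the timeout as a DWORD in milliseconds, while POSIX systems expect a struct timeval):

    #ifdef _WIN32
    #include <winsock2.h>

    int set_recv_timeout(SOCKET sock)
    {
        DWORD timeout_ms = 500;   /* Windows: milliseconds as a DWORD */
        return setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO,
                          (const char *)&timeout_ms, sizeof(timeout_ms));
    }
    #else
    #include <sys/socket.h>
    #include <sys/time.h>

    int set_recv_timeout(int sock)
    {
        struct timeval tv = { 0, 500 * 1000 };   /* POSIX: 500 ms as a timeval */
        return setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO,
                          (const char *)&tv, sizeof(tv));
    }
    #endif

Once the option is set, a blocking recvfrom() that sees no data within 500 ms fails, with WSAETIMEDOUT on Windows or EAGAIN/EWOULDBLOCK on POSIX systems.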

If you use the socket in non-blocking mode, then you can receive asynchronous FD_READ notifications using WSAAsyncSelect() or WSAEventSelect(). No need to wait for timeouts.
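
A rough sketch of the WSAEventSelect() variant, assuming sock is the already-bound socket and WSAStartup() has been called (the helper name wait_for_read is made up; the wait is shown with an explicit timeout for simplicity, while a message-driven program using WSAAsyncSelect() would instead handle FD_READ in its window procedure):

    #include <winsock2.h>

    /* Returns 1 if data is readable, 0 if the wait timed out, -1 on error. */
    int wait_for_read(SOCKET sock, DWORD timeout_ms)
    {
        WSAEVENT ev = WSACreateEvent();
        if (ev == WSA_INVALID_EVENT)
            return -1;

        /* Registering for FD_READ also puts the socket into non-blocking mode. */
        if (WSAEventSelect(sock, ev, FD_READ) == SOCKET_ERROR) {
            WSACloseEvent(ev);
            return -1;
        }

        int readable = 0;
        if (WSAWaitForMultipleEvents(1, &ev, FALSE, timeout_ms, FALSE) == WSA_WAIT_EVENT_0) {
            WSANETWORKEVENTS events;
            if (WSAEnumNetworkEvents(sock, ev, &events) == 0 &&
                (events.lNetworkEvents & FD_READ))
                readable = 1;   /* recvfrom() will now complete without blocking */
        }

        WSACloseEvent(ev);
        return readable;
    }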

If you use the socket in overlapped mode, then you can receive asynchronous read notifications from WSARecvFrom() using WSAGetOverlappedResult() or GetQueuedCompletionStatus(). No need to wait for timeouts.
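
A rough sketch of the event-based overlapped variant, assuming sock was created with WSA_FLAG_OVERLAPPED and is already bound (the helper name overlapped_recv is made up; an IOCP-based design would use GetQueuedCompletionStatus() instead of an event):

    #include <winsock2.h>
    #include <string.h>

    int overlapped_recv(SOCKET sock, char *buf, int len)
    {
        WSAOVERLAPPED ov;
        memset(&ov, 0, sizeof(ov));
        ov.hEvent = WSACreateEvent();
        if (ov.hEvent == WSA_INVALID_EVENT)
            return -1;

        WSABUF wsabuf = { (ULONG)len, buf };
        DWORD received = 0, flags = 0;
        struct sockaddr_in from;
        int fromlen = sizeof(from);

        int rc = WSARecvFrom(sock, &wsabuf, 1, &received, &flags,
                             (struct sockaddr *)&from, &fromlen, &ov, NULL);
        if (rc == SOCKET_ERROR && WSAGetLastError() != WSA_IO_PENDING) {
            WSACloseEvent(ov.hEvent);
            return -1;                       /* immediate failure */
        }

        /* The receive completes in the background; a real program would do
           other work here instead of blocking on the event. */
        WSAWaitForMultipleEvents(1, &ov.hEvent, FALSE, WSA_INFINITE, FALSE);
        if (!WSAGetOverlappedResult(sock, &ov, &received, FALSE, &flags))
            received = (DWORD)-1;

        WSACloseEvent(ov.hEvent);
        return (int)received;
    }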

Remy Lebeau