I'm new to networking and I want to write a simple client-side TCP/IP program; however, I run into an issue when using select() to receive the server's answer.

I want to use select because I need the timeout functionality. I am using select() in a custom function with a non-zero timeout value, generating a DLL out of it, then calling the function in a client main().

Everything works as intended, except the timeout. Either I receive the message instantly (which is good), or the select() function times out instantly (which is not). It seems to me like the timeout value is not taken into account at all.

Here is the code I am using. I included the function in which select() is placed, as well as the client-side implementation.

#ifndef WIN32_LEAN_AND_MEAN
#define WIN32_LEAN_AND_MEAN
#endif

#include <Windows.h>
#include <WinSock2.h>
#include <WS2tcpip.h>
#include <iphlpapi.h>
#include <stdio.h>
#include <iostream>
#include <string>
#include <time.h>

#pragma comment(lib, "Ws2_32.lib")

int ReceiveClientMessage()
{
  int iResult = -1;
  
  fd_set ReadSet;
  int Socket_notifs = 0;
  struct timeval timeout;

  timeout.tv_sec = 20000000; // tried values from 20 up to 20000000; intended timeout is 20 s
  timeout.tv_usec = 0;
  FD_ZERO(&ReadSet);
  FD_SET(socket_Client, &ReadSet); // socket_Client: connected SOCKET, defined elsewhere

  // The first parameter (nfds) is ignored by Winsock
  Socket_notifs = select(0, &ReadSet, NULL, NULL, &timeout);
  if (Socket_notifs == SOCKET_ERROR)
  {
    printf("Select returned with error %d\n", WSAGetLastError());
    return 1;
  }
  printf("Select successful\n");

  if (Socket_notifs > 0)
  {
    int receiving_buffer_length = DEFAULT_BUFLEN; // DEFAULT_BUFLEN defined elsewhere
    char receiving_buffer[DEFAULT_BUFLEN] = "";

    // Start receiving
    iResult = recv(socket_Client, receiving_buffer, receiving_buffer_length, 0);
    if (iResult > 0)
    {
      // Message received; display/save it
    }
    else
    {
      // Error receiving
    }
  }
  else if (Socket_notifs == 0)
  {
    // Timeout
    printf("Select timed out\n\n");
    return 2;
  }
  else
  {
    // Other issue with select
    printf("Unknown error with Select\n\n");
    return 3;
  }

  return 0;
}

//----------------------------------------------------------------
// Client main()

string message, response;
int iResult;

while (true)
{
  /* Part where we send a message, irrelevant here */

  // Receive a message on client side
  iResult = ReceiveClientMessage();
  cout << iResult << endl;
  if (iResult != 0)
  {
    // Error: clean up and break
  }
  else
  {
    // Display the message
  }
}

I tried to remove most of the irrelevant parts of my code, leaving only what pertains to the select, timeout and receive implementation.

I have already tried setting various values for timeout.tv_sec and timeout.tv_usec (from 20 to 20000000), in case the value was not interpreted in seconds, but to no avail. Sometimes I send a message and instantly see the "Select timed out" prompt (which, from my understanding, should not happen). Any idea how to solve this (either by finding out why the timeout values are not taken into account, or by using another method that provides timeout functionality)?

Snow
  • And, you didn't forget to call [WSAStartup()](https://learn.microsoft.com/en-us/windows/win32/api/winsock/nf-winsock-wsastartup) in your `main()`? – Scheff's Cat Sep 27 '22 at 09:08
  • It is called. Everything else works (WSA error codes, connecting, sending). Even receiving works, but from time to time it times out so fast the server can't answer in time, and the next send-receive pair will receive 2 messages instead of 1. – Snow Sep 27 '22 at 09:14
  • I guess one work around would be to loop the call to `ReceiveClientMessage()` until it receives something, but if possible I would rather have a working long timeout option – Snow Sep 27 '22 at 09:16
  • It's a bit unusual to expect that a receive will get exactly one message per call. Even in a "handshake" protocol, I wouldn't rely on a message being received all at once. Hence, I usually use another approach: either the message contains a known fixed-size block which reports how much will follow, or the received contents are fed into some kind of parser which consumes them as a stream and reports something (e.g. emits a signal) as often as a complete message could be parsed. With this, I usually use timeouts of a few milliseconds, e.g. to interleave select with GUI event processing. – Scheff's Cat Sep 27 '22 at 13:29
  • With this in mind, I actually never cared about how exact the timeout is considered. Googling a bit, I found [Spurious readiness notification for Select System call](https://stackoverflow.com/q/858282/7478597) Too bad, that this is about Linux (wrong OS) but maybe there is something similar possible on Windows... – Scheff's Cat Sep 27 '22 at 13:32
  • Yes, this is meant for a very specific implementation, where I know for sure that every message I send will send back 1 response. But I also know that this response can take some time to come (up to 2 seconds). Thus, ideally I still want the `select()` to be blocking (I don't have anything else that I want to do before receiving the response), but I also want to make sure I receive the message, even if it is delayed in coming. As such, `timeout` was great for me, but it doesn't work unfortunately. The work around I described does work; it's a bit annoying, but at least the job is done – Snow Sep 27 '22 at 14:32
  • @Snow I've never had any problems with `select()` timeouts not working correctly. Looking at the code provided, I see no way for it to timeout prematurely, which makes me wonder if this is not your real code to begin with. On the other hand, `select()` is not the only way to implement a read timeout. Since you "*don't have anything else that [you] want to do before receiving the response*", then why not simply use `recv()` in a blocking manner with an `SO_RCVTIMEO` timeout applied to it? Then you don't need `select()` at all. Also, what is the point of using a timeout set to **231 days**? – Remy Lebeau Sep 27 '22 at 17:04
  • @RemyLebeau I did not know `recv()` had a timeout to begin with, thanks! The only reason I switched to `select()` was because sometimes it got stuck for what seemed an infinite amount of time (I probably just never was patient enough to hit the default value for `SO_RCVTIMEO`). About the timeout being 231 days: I said I tested ridiculous values in case it was applied in milliseconds or any other unit. I want a 20 s timeout. And it is my code; I removed irrelevant parts for clarity, but didn't modify anything relevant. Hence my confusion, I thought it would work. – Snow Sep 28 '22 at 09:10
  • By relevant parts I mean parts that use/call `select()`. I removed what pertained to connecting sockets, sending messages to the server and logging errors and successes to a file (all things that should not interfere with the `select()` timeout, in my opinion). I will try using `recv()` with custom `SO_RCVTIMEO` though ! – Snow Sep 28 '22 at 09:14

0 Answers