
I'm trying to test how a client should handle sending data after a TCP server crashes. I wrote a simple client and server to provide a visual example. The client connects to the TCP server, sends data, and the server reads it. I added a sleep(20) to both programs so I have time to kill the server process (Ctrl-C). After that, the client calls send() again and it returns the length of the message. Since the server is gone, the client will not receive an ACK; I assume it gets an RST instead, but by then send() has already returned. The client then calls send() a third time, and this time the process ends abruptly without showing any error. The last lines, from `cout << rsize << endl` onwards, are never reached, or at least that's what it looks like.

When I run this, the client prints rsize values for the first two messages, but not the last one. The server prints only the first message received.

My questions are: (1) why is this happening, and (2) how can the client handle a server crash correctly, instead of ending abruptly?

I already read other questions related to the topic, but they don't show actual code demonstrating how to handle this.

Client code

#include <iostream>
#include <cerrno>
#include <cstring>      // strlen, strerror
#include <unistd.h>     // sleep
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

using namespace std;

int main()
{
  int serverPort = 65003;
  char serverHost[] = "127.0.0.1";
  int mySocket;
  int bufferSize = 524388;
  char responseBuffer[bufferSize];

  struct sockaddr clientAddr;
  struct sockaddr_in serverAddr;
  struct in_addr ipv4addr;

  serverAddr.sin_family = AF_INET;
  mySocket = socket(AF_INET, SOCK_STREAM, 0);
  serverAddr.sin_port = htons (serverPort);
  inet_aton(serverHost, &serverAddr.sin_addr);

  connect(mySocket, (struct sockaddr*) &serverAddr, sizeof(serverAddr));
  int addrLen = sizeof(serverAddr);
  getsockname(mySocket, &clientAddr, (socklen_t*)&addrLen);

  ssize_t rsize;

  char requestMsg [] = "<This is my test Msg>";
  rsize = send(mySocket, requestMsg, strlen(requestMsg), 0);
  cout << rsize << endl;
  if (rsize != (ssize_t)strlen(requestMsg))
  {
    cout << strlen(requestMsg) << endl;
    cout << strerror(errno) << endl;
  }

  sleep(20);
  char requestMsg2 [] = "<This is my 2nd test Msg>";
  rsize = send(mySocket, requestMsg2, strlen(requestMsg2), 0);
  cout << rsize << endl;
  if (rsize != (ssize_t)strlen(requestMsg2))
  {
    cout << strlen(requestMsg2) << endl;
    cout << strerror(errno) << endl;
  }

  char requestMsg3 [] = "<This is my 3rd test Msg>";
  rsize = send(mySocket, requestMsg3, strlen(requestMsg3), 0);
  cout << rsize << endl;
  if (rsize != (ssize_t)strlen(requestMsg3))
  {
    cout << strlen(requestMsg3) << endl;
    cout << strerror(errno) << endl;
  }

  return 0;
}

Server code

#include <iostream>
#include <cerrno>
#include <cstring>      // memset
#include <unistd.h>     // read, sleep
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

using namespace std;

int main()
{
  int mySocket, connSocket, serverPort;
  socklen_t clientLen;
  struct sockaddr_in serverAddr, clientAddr;
  int bufferSize = 256;
  char buffer[bufferSize];
  ssize_t rsize;

  mySocket = socket(AF_INET, SOCK_STREAM, 0);
  memset(&serverAddr, 0, sizeof(serverAddr));
  serverPort = 65003;
  serverAddr.sin_family = AF_INET;
  serverAddr.sin_addr.s_addr = INADDR_ANY;
  serverAddr.sin_port = htons(serverPort);

  bind(mySocket, (struct sockaddr *) &serverAddr, sizeof(serverAddr));
  listen(mySocket,1);
  clientLen = sizeof(clientAddr);
  connSocket = accept(mySocket, (struct sockaddr *) &clientAddr, &clientLen);

  memset(&buffer, 0, sizeof(buffer));
  rsize = read(connSocket, buffer, 255);
  cout << buffer << endl;

  sleep(20);

}
  • Aside from a complete lack of error handling on both sides, if the server process crashes, the client process will not also crash. If you are not seeing output then something else is going on. For instance, you are using blocking sockets, so it could be that the last `send()` is simply blocking if the send buffer has filled up waiting for an ACK/RST to arrive. Killing the server abruptly does not guarantee a timely reply, so you may just have to wait for the socket to timeout. Consider using `SO_SNDTIMEO` or `select()`/`epoll()` for timeout handling. – Remy Lebeau Oct 11 '16 at 16:23
  • @RemyLebeau I removed the error handling on purpose; I didn't want to add unnecessary code. The process ends as it reaches the last send. It's not blocking, that's my issue here. – Ed Rivera Oct 11 '16 at 16:29
  • @RemyLebeau I was expecting to get -1 in the 2nd or 3rd send() function but I don't get anything. – Ed Rivera Oct 11 '16 at 16:33
  • There is no possible way the *client* process can just terminate abruptly like you describe when the *server* process is killed. Something else has to be going on, you need to actually debug your code. When the server is killed, `send()` on the client will happily continue buffering outgoing data until the socket's buffer is full, and then it will block subsequent sends, until the client's socket stack eventually detects the lost connection, which may take a while. You could try enabling `SO_KEEPALIVE` to speed up that detection, for instance. – Remy Lebeau Oct 11 '16 at 19:02
  • @RemyLebeau - I don't know if you have run my code, but you'll see the behavior I'm trying to explain. This is just for testing. If the client calls read() after I terminate the server process, it returns 0, so that way the client knows the connection is closed. The purpose of this test is to understand more about what happens when the client calls send() after a server crash. I also added SO_SNDTIMEO and still see the same behavior. If I don't terminate the server process, the client works as expected, returning -1 with the error: connection reset by peer. – Ed Rivera Oct 11 '16 at 20:24
  • I don't have to run your code to know how `send()` behaves. I've been using BSD socket APIs for almost 20 years, and what you describe is NOT how `send()` behaves on any platform. `send()` **WILL NOT** crash the calling process when the connected peer is killed. If you are experiencing a real crash, it has to be caused by something else, not `send()` itself. Unless you have a bad socket driver or something else at the lower levels. But how are you getting a "connection reset" error if you don't terminate the server? What is resetting the connection? – Remy Lebeau Oct 11 '16 at 20:31
  • @RemyLebeau - I just found out what's happening. The third time the client calls send(), it is trying to write data to a socket that is already disconnected. This causes SIGPIPE, which by default terminates the process. The client can avoid this by passing MSG_NOSIGNAL to send(), or by handling/ignoring the signal. Thanks Remy for trying to help and reading my question. – Ed Rivera Oct 11 '16 at 21:03
  • Or, enable the `SO_NOSIGPIPE` option on the socket. See [How to prevent SIGPIPEs (or handle them properly)](http://stackoverflow.com/questions/108183/). – Remy Lebeau Oct 11 '16 at 21:10

0 Answers