I have a TCP client and server implemented in C, where the client continuously sends data from a file and the server continuously reads it. The two connect without issue, and the client sends data using the following snippets:
/* File: client.c */
while (fread(buffer, 1, TCP_BUFFER_SIZE, in_file) > 0)
{
    send_socket_data(socket_desc, buffer, TCP_BUFFER_SIZE);
}
where TCP_BUFFER_SIZE = 2 << 20 /* approx. 2 MB */ and send_socket_data is defined as:
/* File: client.c */
void send_socket_data(int socket_desc, void *buffer, int buffer_size)
{
    /* Send data to server */
    if (send(socket_desc, buffer, buffer_size, 0) < 0)
    {
        fprintf(stderr, "Send failed\n");
        exit(EXIT_FAILURE);
    }
}
... and in the server I do the following:
/* File: server.c */
while ((read_size = recv(new_socket, buffer, TCP_BUFFER_SIZE, 0)) > 0)
{
    /* Write to binary output file */
    fwrite(buffer, TCP_BUFFER_SIZE, 1, out_file);
}
I also check for read errors, client disconnection, etc. elsewhere in that file.
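For context, the checks after that loop look roughly like this (simplified from what I actually have):
/* File: server.c -- simplified version of my post-loop checks */
if (read_size == 0)
{
    /* recv() returned 0: the client performed an orderly shutdown */
    puts("Client disconnected");
}
else if (read_size < 0)
{
    /* recv() returned -1: a read error occurred */
    perror("recv failed");
}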
However, my problem is that over the course of a run, recv() ends up being called multiple times for a single send() call, and after timing both sides with clock() I could see that the receiving side runs much faster than the sending side. As a result, a 322 MB file I send ends up being stored as a 1 GB file on the server end.
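In case it matters, the timing itself was nothing elaborate, just clock() around each loop, roughly like this (simplified; requires <time.h>):
/* Rough timing around the send/recv loop (simplified) */
clock_t start = clock();
/* ... the send or recv loop shown above ... */
double elapsed = (double)(clock() - start) / CLOCKS_PER_SEC;
printf("Loop took %f seconds\n", elapsed);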
How can I resolve this problem? Or is my implementation completely wrong?
I've seen people talk about implementing an application-level protocol on top of TCP, somewhat like what HTTP does. Can anyone recommend a path I should go down?
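From what I can tell, they mean something like length-prefixed framing, where each chunk is preceded by a header saying how many bytes follow. Below is only a rough sketch of my understanding, not code I have tested; the 4-byte header and the use of htonl() are my own guesses (requires <stdint.h> and <arpa/inet.h>):
/* File: client.c -- hypothetical length-prefixed framing, not my actual code */
size_t bytes_read;
while ((bytes_read = fread(buffer, 1, TCP_BUFFER_SIZE, in_file)) > 0)
{
    /* Send a 4-byte length header in network byte order,
       then exactly that many bytes of payload */
    uint32_t header = htonl((uint32_t)bytes_read);
    send_socket_data(socket_desc, &header, sizeof(header));
    send_socket_data(socket_desc, buffer, (int)bytes_read);
}
I assume the server would then read the 4-byte header first and keep calling recv() until it has exactly that many bytes before writing them out. Is that the right direction, or is there a simpler approach? Thanks.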