I'm using C to implement a client-server application. The client sends some information to the server, and the server uses it to send information back. I'm currently writing the code that handles receiving data on the server, to ensure all of it is, in fact, received.
The issue I'm having is easiest to explain with some code:
int totalRead = 0;
char *pos = pBuffer;
while (totalRead < 6) {
    if (int byteCount = read(hSocket, pos, BUFFER_SIZE - (pos-pBuffer)>0)) {
        printf("Read %d bytes from client\n", byteCount);
        pos += byteCount;
        totalRead += byteCount;
    } else {
        return -1;
    }
}
The code above runs on the server side and prints "Read 1 bytes from client" six times, after which the program continues to work fine. I've hard-coded 6 here because I know I'm writing 6 bytes from the client side, but eventually my protocol will require the first byte sent to be the length of the rest of the message.
int byteCount = read(hSocket, pBuffer, BUFFER_SIZE);
printf("Read %d bytes from client", byteCount);
The code above, used in place of the first code segment, prints "Read 6 bytes from client" and also continues to work fine, but it doesn't guarantee I've received every byte; read() could have returned only 5 of them, for instance.
Can anyone explain to me why this is happening, and suggest a possible solution? I guess the first method does ensure all the bytes are delivered, but reading one byte at a time seems inefficient...
Oh, and this is taking place in a forked child process, and I'm using TCP/IP.
Note: my goal is to get the first code segment working correctly so I can be sure I'm reading all the bytes; I'm just having trouble implementing it.
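To make that goal concrete, this is roughly the shape I'm aiming for (just an untested sketch; read_full and read_message are names I made up, and I'm assuming the length prefix always fits in a single unsigned byte):

#include <unistd.h>
#include <sys/types.h>

/* Keep calling read() until `want` bytes have arrived, since a single
   read() on a TCP socket can return fewer bytes than requested. */
static ssize_t read_full(int fd, char *buf, size_t want)
{
    size_t total = 0;

    while (total < want) {
        ssize_t n = read(fd, buf + total, want - total);
        if (n < 0)
            return -1;              /* read error */
        if (n == 0)
            break;                  /* peer closed the connection early */
        total += n;
    }
    return (ssize_t)total;
}

/* Planned usage: read the 1-byte length prefix first, then read exactly
   that many payload bytes into pBuffer. */
static int read_message(int hSocket, char *pBuffer, size_t bufSize)
{
    unsigned char len;

    if (read_full(hSocket, (char *)&len, 1) != 1)
        return -1;                  /* couldn't read the length byte */
    if (len > bufSize)
        return -1;                  /* message too large for the buffer */
    if (read_full(hSocket, pBuffer, len) != (ssize_t)len)
        return -1;                  /* connection dropped mid-message */
    return len;
}

The idea is that the server never has to guess how many bytes to expect, and the loop handles the case where read() returns only part of the message.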