if send() returns x bytes, does recv() get the same amount of bytes in one call?
In general, certainly not!
For example, for TCP/IP sockets (see tcp(7) & socket(7)) going through Wi-Fi routers and/or intercontinental routers, packets could be fragmented and/or reassembled. So a given send() can correspond to several recv() calls, and vice versa, and the "boundaries" of messages are not respected. Hence, for applications, TCP is a stream of bytes without any message boundaries. Read also about the sliding window protocol and the congestion control used inside TCP.
In practice, you might observe, e.g. between two computers on the same Ethernet cable, that packets are not fragmented or reassembled. But you should not code under that assumption.
Concretely, application-level protocols like HTTP, SMTP, JSONRPC, or X11 should be designed to define message boundaries, and both the server and the client side should do buffering.
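As an illustration, here is a minimal sketch of the receiving side's buffering, assuming a hypothetical framing convention of a 4-byte big-endian length prefix before each message (the convention itself is just an assumption for the example); notice that a single recv(2) may return fewer bytes than asked for:

    #include <arpa/inet.h>   /* ntohl */
    #include <errno.h>
    #include <stdint.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    /* Keep calling recv(2) until exactly len bytes have arrived.
       Returns 0 on success, -1 on error or premature end of stream. */
    static int recv_exactly(int fd, void *buf, size_t len)
    {
        size_t got = 0;
        while (got < len) {
            ssize_t n = recv(fd, (char *)buf + got, len - got, 0);
            if (n == 0)
                return -1;        /* peer closed the connection */
            if (n < 0) {
                if (errno == EINTR)
                    continue;     /* interrupted by a signal, retry */
                return -1;
            }
            got += (size_t)n;     /* recv may return less than requested */
        }
        return 0;
    }

    /* Read one application-level message: 4-byte big-endian length, then payload. */
    static ssize_t recv_message(int fd, char *payload, size_t maxlen)
    {
        uint32_t netlen;
        if (recv_exactly(fd, &netlen, sizeof netlen) < 0)
            return -1;
        uint32_t len = ntohl(netlen);
        if (len > maxlen)
            return -1;            /* message too big for our buffer */
        if (recv_exactly(fd, payload, len) < 0)
            return -1;
        return (ssize_t)len;
    }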
You'll want to use poll(2); see this answer.
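A sketch of how poll(2) could be used to wait until the socket has data before calling recv(2); the 5-second timeout and buffer size are arbitrary choices for illustration:

    #include <poll.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    /* Wait up to 5 seconds for the socket to become readable, then recv what is there. */
    static void wait_and_receive(int fd)
    {
        struct pollfd pfd = { .fd = fd, .events = POLLIN };
        int rc = poll(&pfd, 1, 5000);       /* timeout in milliseconds */
        if (rc < 0) {
            perror("poll");
        } else if (rc == 0) {
            fprintf(stderr, "timed out, no data yet\n");
        } else if (pfd.revents & POLLIN) {
            char buf[4096];
            ssize_t n = recv(fd, buf, sizeof buf, 0);
            if (n > 0)
                printf("got %zd bytes (maybe only part of a message)\n", n);
        }
    }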
if the send() returns 10 bytes, are those ten bytes still only at the receiver side, but ready to be sent.
It is not easy to define what "still being at the receiver side" really means (because you don't really care about what happens inside the kernel, inside the network controller, or on intercontinental cables). Therefore the above sentence is meaningless.
Your application code should only care about system calls (listed in syscalls(2)...) like poll(2), send(2) and related, write(2), recv(2) and related, read(2), socket(2), accept(2), connect(2), bind(2) etc...
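To show what reasoning at the system-call level looks like, here is a sketch of a helper that keeps calling send(2) until the whole buffer has been handed to the kernel, since a single send may accept fewer bytes than requested (MSG_NOSIGNAL is Linux-specific and avoids SIGPIPE):

    #include <errno.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    /* Hand the whole buffer to the kernel, retrying on partial sends and EINTR.
       Returns 0 on success, -1 on error (check errno). */
    static int send_all(int fd, const void *buf, size_t len)
    {
        size_t sent = 0;
        while (sent < len) {
            ssize_t n = send(fd, (const char *)buf + sent, len - sent, MSG_NOSIGNAL);
            if (n < 0) {
                if (errno == EINTR)
                    continue;      /* interrupted by a signal, retry */
                return -1;         /* e.g. EPIPE, ECONNRESET, ... */
            }
            sent += (size_t)n;     /* send(2) may have taken fewer bytes than asked */
        }
        return 0;
    }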
You might want to use messaging libraries like 0mq.
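With libzmq (0mq), the library does the framing for you, so one zmq_send corresponds to one zmq_recv on the other side. A minimal REQ-client sketch; the endpoint tcp://localhost:5555 is just an assumption for the example:

    #include <stdio.h>
    #include <zmq.h>

    int main(void)
    {
        void *ctx = zmq_ctx_new();
        void *req = zmq_socket(ctx, ZMQ_REQ);
        zmq_connect(req, "tcp://localhost:5555");   /* hypothetical endpoint */

        /* One zmq_send is one message: 0mq preserves message boundaries. */
        zmq_send(req, "hello", 5, 0);

        char reply[256];
        int n = zmq_recv(req, reply, sizeof reply - 1, 0);  /* may truncate long messages */
        if (n >= 0) {
            if (n > (int)(sizeof reply - 1))
                n = (int)(sizeof reply - 1);   /* zmq_recv reports the full message size */
            reply[n] = '\0';
            printf("reply: %s\n", reply);
        }

        zmq_close(req);
        zmq_ctx_destroy(ctx);
        return 0;
    }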
The network cable got eaten by a dog the moment after my send() function returned 1.
Why do you care that much about such a scenario? Your dog could also have dropped your laptop, or peed on it. Once send() has told your application that ten bytes have been emitted, you should trust your kernel. But the receiving program might not yet have gotten these bytes (on another continent, you'll need to wait dozens of milliseconds, which is quite a big delay for a computer). Very probably, the ten bytes were in the middle of the ocean when your dog bit your Ethernet cable (and you can reasonably code as if they had been emitted).
The TCP protocol will detect that the link has been interrupted, but that error will be reported to your program much later (perhaps as an error for the next call to send() happening ten seconds later).
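A sketch of what that looks like in code: the send issued when the cable was cut may still "succeed" (the bytes are merely queued in the kernel), and only a later send(2) might fail, e.g. with ECONNRESET or EPIPE, once TCP has given up retransmitting; the 10-second pause and the messages are just for illustration:

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Illustrative only: the error caused by a broken link typically shows up
       on a *later* send(2), not on the one issued when the cable was cut. */
    static void send_twice(int fd)
    {
        if (send(fd, "first", 5, MSG_NOSIGNAL) == 5)
            puts("first send accepted by the kernel (no guarantee it was received)");

        sleep(10);  /* give TCP time to notice the broken link */

        if (send(fd, "second", 6, MSG_NOSIGNAL) < 0)
            fprintf(stderr, "later send failed: %s\n", strerror(errno));
    }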
(There are some large macroscopic delays in the TCP definition, perhaps as large as 128 seconds, I forgot the details; and these delays are too small for interplanetary communication, so TCP can't be used to reach Mars.)
You should (most of the time) simply reason at the system call level.
(Of course, in some cases, such as remote neurosurgical robots, that might not be enough.)
I surely have misunderstood the benefits of TCP vs UDP.
If you just used UDP, a given datagram could be lost, duplicated (received several times), or delivered out of order. With TCP, that cannot reasonably happen, because the kernel retransmits and reorders segments for you (at least as long as the connection has not broken and the following packets have been successfully sent and received).
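To make the contrast concrete: with UDP, each successful recvfrom(2) returns exactly one whole datagram (never half of one), but that datagram might never arrive, arrive twice, or arrive after a later one; with TCP, recv(2) just returns the next chunk of the byte stream. A minimal UDP receive sketch; port 5555 is an arbitrary choice for the example:

    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);          /* UDP socket */
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(5555);                      /* arbitrary port */
        bind(fd, (struct sockaddr *)&addr, sizeof addr);

        char buf[65536];
        /* Each successful recvfrom returns one whole datagram, never a fragment of one;
           but a datagram can be lost, duplicated, or arrive out of order. */
        ssize_t n = recvfrom(fd, buf, sizeof buf, 0, NULL, NULL);
        if (n >= 0)
            printf("one datagram of %zd bytes\n", n);

        close(fd);
        return 0;
    }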