I know this is an old thread, but for the benefit of those who stumble onto this via search engine, I will answer the question, as it hasn't really been answered above.
Before I start, get over the system-call hang-up - you cannot interact with kernel-based (*nix) network stacks without switching in and out of kernel space. Your goal should be to understand the stack's features, so you can get the best out of your system.
How can I tell if a read socket buffer is full
This part has been answered - you don't because it's not how you should be thinking.
If the sender is (badly) fragmenting its TCP frames (usually due to not buffering marshaled data on output, and having the Nagle algorithm turned off with TCP_NODELAY), your idea of reducing the number of system calls you make is a good one. The approach you should be using involves setting a "low watermark" for reading. First, establish what you think is a reasonable receive buffer size by setting SO_RCVBUF using setsockopt(). Then read back the actual buffer size using getsockopt(), as you might not get what you ask for. :) Unfortunately, not all implementations allow you to read SO_RCVBUF back again, so your mileage may vary. Next, decide how much data you want to be present before you read it, and set SO_RCVLOWAT to that size using setsockopt(). Now the socket's file descriptor will only select as readable when there is at least that amount of data ready to read.
or a write socket buffer is empty?
This is an interesting one, as I needed to do this recently to ensure that my MODBUS/TCP ADUs each occupied their own TCP frames, which the MODBUS specification requires (@steve: controlling fragmentation is one time you do need to know when the send buffer is empty!). As far as the original poster is concerned, I doubt very much that he really wants this, and believe he would be much better served by knowing the send buffer size before he starts, and checking the amount of data in the send buffer periodically during sending, using techniques already described. That would provide finer-grained information about the proportion of the send buffer in use, which could be used to throttle production more smoothly.
For those still interested in how to detect (asynchronously) when the send buffer is empty (once you're sure it's really what you want), the answer is simple - you set the send low-watermark (SO_SNDLOWAT) equal to the send buffer size. That way the socket's file descriptor will only select as writable when the send buffer is empty.
It's no coincidence that my answers to your questions revolve around the use of select().
In almost all cases (and I realize I'm heading into religious territory now!) apps that need to move a lot of data around (intra- and inter-host) are best structured as single-threaded state machines, using selection masks and a processing loop based around pselect(). These days some OSes (Linux, to name one) even allow you to manage your signal handling using file descriptor selections. What luxury - when I was a boy... :)
Peter