A long-lingering CLOSE_WAIT is really a programming error: the peer has closed its end and the OS has completed its half of the shutdown, but your application doesn't remember to free the socket in a timely manner -- or at all.
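For illustration, here is a minimal C sketch of where this leak usually hides (handle_client() is a hypothetical per-connection handler): when read() returns 0 the peer has sent its FIN, and forgetting the close() at that point is exactly what leaves the socket stuck in CLOSE_WAIT.

    #include <unistd.h>

    /* Hypothetical per-connection handler. When read() returns 0 the
     * peer has closed its end; our socket sits in CLOSE_WAIT from that
     * moment until we call close(). Missing the close() -- e.g. on an
     * early-return error path -- is the bug described above. */
    static void handle_client(int fd)
    {
        char buf[4096];
        ssize_t n;

        while ((n = read(fd, buf, sizeof buf)) > 0) {
            /* ... process n bytes ... */
        }
        /* n == 0: peer closed; n < 0: error. Either way, release the
         * socket so the kernel can leave CLOSE_WAIT. */
        close(fd);
    }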
TIME_WAIT, however, is not really an exceptional condition. It is necessary to provide a clean close of a connection that might have lost the very last ACK segment during the normal connection shutdown. Without it, the retransmission of the FIN+ACK segment would be answered with a connection reset, and some sensitive applications might not like that.
The most common way to reduce the number of sockets in the TIME_WAIT state is to shorten its duration globally, by tuning an OS-level parameter. IIRC, there is also a way to disable it completely on a single socket through setsockopt() (I don't remember which option, however), but then you might occasionally send unwanted RST segments to peers that lose packets during connection shutdown.
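If the option in question is SO_LINGER -- an assumption on my part, but it is the one I know of with this effect -- enabling it with a zero timeout makes close() abort the connection with an RST instead of going through the normal FIN exchange, so the socket skips TIME_WAIT entirely. A minimal sketch:

    #include <sys/socket.h>
    #include <unistd.h>

    /* Force an abortive close on one socket, skipping TIME_WAIT.
     * With l_onoff = 1 and l_linger = 0, close(fd) discards any unsent
     * data and emits RST instead of FIN. */
    static int close_with_rst(int fd)
    {
        struct linger lg = { .l_onoff = 1, .l_linger = 0 };

        if (setsockopt(fd, SOL_SOCKET, SO_LINGER, &lg, sizeof lg) < 0)
            return -1;
        return close(fd);
    }

Note that with a zero linger the RST is sent on every close, not only when packets happen to be lost, which is why this is usually a last resort.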
As to why you see them on only one side of the connection: it is probably the side that requested to close the connection first. That side sends the first FIN, receives the FIN+ACK, and sends the last ACK. If that last ACK is lost, it will receive the FIN+ACK again and should resend the ACK, not an RST. The other side, however, knows for sure that the connection is completely finished when the last ACK arrives, and then there is no need to wait for anything else on that socket -- if anything arrives at that host with the same pair of address+TCP port endpoints as the just-closed socket, then it should either be a new connection request (in which case a new connection might be opened), or a TCP state machine violation (which must be answered with an RST, or maybe some ICMP prohibited message).
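If you want to see that asymmetry for yourself, a quick loopback experiment works (a rough sketch, error handling elided). The side that calls close() first owns the TIME_WAIT entry, and the entry outlives the process, so you can inspect it afterwards with netstat -an or ss -tan:

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        struct sockaddr_in addr;
        socklen_t len = sizeof addr;
        int lsn = socket(AF_INET, SOCK_STREAM, 0);
        int cli = socket(AF_INET, SOCK_STREAM, 0);
        int srv;

        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
        addr.sin_port = 0;                      /* pick any free port */

        bind(lsn, (struct sockaddr *)&addr, sizeof addr);
        listen(lsn, 1);
        getsockname(lsn, (struct sockaddr *)&addr, &len);

        connect(cli, (struct sockaddr *)&addr, sizeof addr);
        srv = accept(lsn, NULL, NULL);

        close(cli);     /* active close: this endpoint enters TIME_WAIT */
        close(srv);     /* passive close: no TIME_WAIT on this side */

        printf("look for a TIME_WAIT entry whose foreign port is %d\n",
               ntohs(addr.sin_port));
        return 0;
    }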