Is it possible to receive a zero-byte message? My app is a kind of message router: it receives, inspects and forwards messages. I use `Socket.Poll` to wait for incoming messages. The app works; there are no issues during normal communication. The bad things happen after about 15 minutes of inactivity. I register a readable state on the socket, yet the socket reports `IsConnected = true` and `Available = 0`. Since the protocol requires packets to be X.690 compliant, my app has no clue what to do with a zero-byte message and crashes.
Since I had no clue what this weird state means, I decided to ignore it as a possible .NET bug and just re-enter the socket polling loop. As I expected, it waits there patiently, but then, when I request some activity from the client, my router reports being forcibly disconnected. What's going on?
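For what it's worth, a socket that reports readable and then yields zero bytes is not specific to .NET: it is how the underlying TCP stack signals that the peer has closed (or half-closed) the connection with a FIN. A quick Python demonstration of the OS-level behavior (these are the same semantics that `Socket.Poll` and `Stream.Read()` surface in .NET):

```python
import select
import socket

# A connected TCP pair on localhost stands in for the router and its client.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
client = socket.create_connection(server.getsockname())
conn, _ = server.accept()

client.close()  # graceful close: the peer sends a FIN

# The socket now reports "readable" even though no payload is coming...
readable, _, _ = select.select([conn], [], [], 1.0)
is_readable = conn in readable

# ...and the read completes immediately with zero bytes: end-of-stream.
data = conn.recv(4096)

conn.close()
server.close()
```

So a zero-byte result after a successful poll is not garbage to be skipped: it is the end-of-stream marker, and looping back to poll on such a socket will spin or hang until the OS finally reports the connection reset.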
My app inspects LDAP traffic. I was told LDAP servers don't time out, and I tested it: they don't, at least not after a mere 15 minutes. So something is wrong either with my application (that is, with my understanding of the transport protocol) or with .NET itself.
I already found one bug in .NET which makes `Socket.Poll` unreliable when it is used before reading data from a `NetworkStream` or `SslStream`. The workaround is to wait 1 ms before executing the read. There's no reason this should be necessary in my app, since it's carefully designed to use only synchronous calls and as few, meticulously synchronized, threads as possible. I even tested that 5 microseconds is enough for it to work, but to achieve full stability I decided to wait a whole millisecond, since it doesn't introduce significant lag into my process. I tried synchronizing everything with everything, making the exchange fully simplex, and inserting waits on every possible line of code, all for nothing, except in exactly one place: directly between registering the readable state and executing `Stream.Read()`. I tested this for over a month. I have no clue why it happens, but the 1 ms wait solves the problem; there hasn't been a single crash since I added it.
So I don't rule out another bug in .NET itself yet. What else? Could it be the client? The client is the LDP application, the server is LDS, all on my localhost.
What should I do when I receive something like this? Disconnect the session? Wouldn't that cause the client to throw an error message? And, most importantly, WHY does it happen? All sockets in my app have their timeouts set to zero (which means wait indefinitely). Obviously the session is a `Task`, run as `LongRunning`. Could it somehow be timed out by the system? But if that were the case, how could I receive an `IOException` from this task?
What I want to achieve is that my app never times out. When a session is started, it must stay alive until explicitly disconnected. My app detects disconnection properly: every decent client sends a special packet (a normal one, with non-zero length) meaning "disconnect please". But in this weird timeout case I don't detect ANYTHING except a readable state with no content to read.
About the proposed duplicate, the other question about a zero-length packet: the client doesn't seem disconnected at all; it sees an active session. It looks like my app, not the client, drops the connection. It's not in my code, so it's in .NET or Windows. The answer "you received a disconnect request" doesn't solve my problem; I'm back at square one. I don't request disconnection. I want my connection to be kept alive, but `Socket.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.KeepAlive, true);` didn't help at all. What's more, when I tried to drop my connection after receiving this packet, the client crashed on its first communication attempt, as it didn't expect to be disconnected at that point. I will try to notify the client before dropping the connection as a workaround, but I STILL DON'T KNOW what causes that disconnection request. This is why my question is different: it's not about what a zero-byte packet means, it's about WHY I get it, and even more about how to prevent it from happening.
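On the keep-alive attempt: enabling `SocketOptionName.KeepAlive` only switches on TCP keep-alive with the operating system's defaults, and on Windows the default idle time before the first probe is two hours, far too long to matter against a roughly 15-minute idle drop. The probe timing has to be tuned explicitly (on .NET/Windows via `Socket.IOControl(IOControlCode.KeepAliveValues, ...)`). A small Python sketch showing that the plain option is just an on/off switch at the OS level:

```python
import socket

s = socket.socket()
# Merely enabling keep-alive uses the OS default timers (2 hours idle on
# Windows before the first probe), so by itself it cannot defeat a
# ~15-minute idle timeout somewhere on the path.
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
enabled = s.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE)

# Tuning the timers is platform-specific: IOControlCode.KeepAliveValues on
# Windows/.NET, TCP_KEEPIDLE / TCP_KEEPINTVL socket options on Linux.
s.close()
```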
So, about how it ended: when I receive the zero-length state, I emit a "null packet" to the other endpoint of my application (let's call it B). Then my endpoint A disconnects. A couple of clock ticks later the other endpoint B receives the "null packet" and also disconnects. When the client tries to send another request, it detects the state properly, reconnects and does its stuff. No errors reported whatsoever; it works flawlessly. I was a little surprised, because when I disconnected the client manually (even by killing its process from Task Manager) it sent a special end-session message, while here I received something different. I learned that it doesn't matter which side requested the disconnection; I just need to handle it by closing the socket. At least it worked.
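For readers hitting the same thing: the handling described above amounts to the standard proxy teardown pattern. A zero-byte read on one leg of the relay means EOF from that peer, so the relay propagates the close to the other leg and shuts both sockets down. A minimal Python sketch of that pattern (the function name is illustrative, not from my router):

```python
import socket

def relay_until_closed(a: socket.socket, b: socket.socket) -> None:
    """Copy bytes from a to b until a's peer closes, then tear both down."""
    while True:
        chunk = a.recv(4096)
        if chunk == b"":                   # zero-byte read: a's peer is gone
            b.shutdown(socket.SHUT_RDWR)   # the "null packet" to endpoint B
            break
        b.sendall(chunk)
    a.close()
    b.close()
```

A well-behaved client then sees EOF on its own connection, reconnects, and carries on, which matches what I observed with LDP.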