I am a beginner when it comes to TCP/IP applications. I'm coding a game, and I recently added network multiplayer using the TcpListener and TcpClient classes from System.Net.Sockets. The game works great when I test on localhost or over a LAN. Later I tested it over a greater distance, between my PC and my Azure VM, and the results are shocking: the client received only about 7% of the messages sent by the server, while the server received 84% of the messages sent by the client. I know that TCP/IP doesn't understand what a message is, because it sends data as a stream. This is what I consider a message:
NetworkStream networkStream = ClientSocket.GetStream();
networkStream.Write(_bytes, 0, _bytes.Length); // _bytes holds the serialized message bytes
networkStream.Flush();
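Since TCP delivers a byte stream rather than discrete messages, I've been wondering whether I need to frame messages myself, e.g. with a length prefix. A rough sketch of what I mean (untested, and WriteMessage is just a name I made up, not code I'm running yet):

using System;
using System.Net.Sockets;

// Hypothetical helper: prefix each payload with its 4-byte length so the
// receiver can find message boundaries in the byte stream.
static void WriteMessage(NetworkStream stream, byte[] payload)
{
    byte[] lengthPrefix = BitConverter.GetBytes(payload.Length);
    stream.Write(lengthPrefix, 0, lengthPrefix.Length);
    stream.Write(payload, 0, payload.Length);
}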
My server sends about 20-40 messages per second, and 99% of them are 10-15 bytes long. The client sends ~4 messages per second. My machine has a fast and reliable internet connection, and I assume the Azure data center has a good connection as well. How can I improve the network performance of my application?
EDIT: This is how the client receives messages:
NetworkStream serverStream = ClientSocket.GetStream();
byte[] inStream = new byte[10025];
serverStream.Read(inStream, 0, inStream.Length); // note: I don't check how many bytes Read actually returned
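I've also read that Read returns the number of bytes it actually received, which my code ignores. A loop that keeps reading until a full message has arrived might look like this (a sketch; ReadExact is a hypothetical helper, not in my game yet):

using System.IO;
using System.Net.Sockets;

// Keep calling Read until 'count' bytes have arrived, because a single
// Read may return fewer bytes than requested.
static void ReadExact(NetworkStream stream, byte[] buffer, int count)
{
    int offset = 0;
    while (offset < count)
    {
        int read = stream.Read(buffer, offset, count - offset);
        if (read == 0)
            throw new IOException("Connection closed before the full message arrived.");
        offset += read;
    }
}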
I just realized that it might be an interpretation error, meaning the data is received but somehow misinterpreted. For instance, each message also contains a number representing the total count of messages sent so far. This number is interpreted fine in the 7% of messages received by the client. However, the messages received by the server contain some strange numbers: for example, I received 31, 32, 33, then 570425344, then 35, and then 0. So I guess the bytes might be offset. I don't know how or why that would happen.
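To check my hunch, I did the arithmetic: the missing counter value is 34, and 570425344 is exactly 34 << 24, i.e. the counter's low byte shifted into the high byte of the int, which is what you'd get by decoding four bytes starting three positions too early. A tiny demo of this (assuming little-endian byte order, as on x86; the buffer contents are my guess, not captured data):

using System;

// Little-endian encoding of the counter 34 is { 34, 0, 0, 0 }.
// Suppose three stray bytes precede it in the receive buffer:
byte[] buffer = { 0, 0, 0, 34, 0, 0, 0 };

Console.WriteLine(BitConverter.ToInt32(buffer, 3)); // 34 (correct offset)
Console.WriteLine(BitConverter.ToInt32(buffer, 0)); // 570425344 (three bytes early)

That matches my guess that the bytes are offset, but I still don't understand how the offset happens or how to prevent it.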