I have an asynchronous console .NET socket client application that establishes dozens of connections to a remote socket server.
It's a stress-test application, so each connection sends as many messages as possible, waiting for an acknowledgement from the remote server after each message is sent.
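For reference, here is a minimal sketch of what one connection's loop looks like; the names (`RunClientAsync`, the one-byte ack) are illustrative assumptions, not my actual code:

```csharp
using System.Net.Sockets;
using System.Text;
using System.Threading.Tasks;

static async Task RunClientAsync(string host, int port)
{
    using var client = new TcpClient();
    await client.ConnectAsync(host, port);
    NetworkStream stream = client.GetStream();

    byte[] payload = Encoding.UTF8.GetBytes("message");
    byte[] ack = new byte[1];

    while (true)
    {
        // Send one message, then block until the server acknowledges it
        // before sending the next one.
        await stream.WriteAsync(payload);
        int read = await stream.ReadAsync(ack);
        if (read == 0) break; // server closed the connection
    }
}
```

Dozens of these loops run concurrently, differing only in whether `host` is the internal or the external IP.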
The socket server has a public IP, but since it's on the same network I can also use its internal IP.
As the number of connections increases, the messages-per-minute rate drops sharply, and I've been trying to find out why.
But for some reason, the performance only seems to go down for the connections sharing the same destination IP. In other words, if I connect 100 users to the internal IP and 5 users to the external IP, the latter get top-notch performance while the former suffer.
But this makes no sense, because all the communication is happening between exactly the same two machines (my desktop and the one and only server).
The amount of data being transferred doesn't seem high enough for this to be some kind of network flooding. Honestly, it looks like an artificial cap on the number of simultaneous sockets waiting for answers on a given destination IP. I've profiled my code, and that's where it hangs: waiting for the remote server to answer. Is there a setting like that somewhere in the .NET config files?
UPDATE: there's just one NIC in the server; the external IP we use to reach it works via port forwarding.
More internal IPs were added to the server, but they ended up competing with each other for resources, i.e. only the external IP behaves differently. This makes me suspect the bottleneck is on the server side, maybe some kind of per-client-IP load balancing (when we connect via the external IP, the client IP the server sees is different as well).