I'm writing a client application that has to connect to a server application via a TCP socket. The framework of choice is .NET Core 2.0 (it is not ASP.NET Core, just a console app). I'm using the TcpClient class and its .BeginConnect() and .EndConnect() methods so that I can set a connection timeout. Here is the code:
using System;
using System.Net.Sockets;

public class Program
{
    public static void Main(String[] args)
    {
        var c = new TcpClient();
        int retryCount = 0;
        var success = false;
        IAsyncResult res;
        do
        {
            if (retryCount > 0) Console.WriteLine("Retry: {0}", retryCount);
            retryCount++;

            // Drop the previous client and start a fresh connection attempt.
            c.Close();
            c = new TcpClient();
            res = c.BeginConnect("10.64.4.49", 13000, null, null);

            // Wait up to two seconds for this attempt to complete.
            success = res.AsyncWaitHandle.WaitOne(TimeSpan.FromSeconds(2));
            Console.WriteLine(success.ToString());
        }
        while (!c.Connected);

        c.EndConnect(res);
        Console.WriteLine("Connected");
        Console.ReadLine();
    }
}
When I compile, publish and run this console app while nothing is listening on that IP address and port, the results differ depending on whether the app runs on Windows or Linux.
Here are the results on Windows:
Here is what it looks like on Linux:
The results are pretty much the same; the only difference is that on Windows it retries roughly every two seconds, while on Linux it acts as if those two seconds are ignored and goes on what I call a "rampage connection session". I'm not sure whether this is a .NET Core issue or some Linux tuning that Windows already has predefined. Can anyone advise what the problem might be and possibly propose a solution?
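The only workaround I can think of is to enforce the pause between attempts myself instead of relying on the wait-handle timeout. Here is a minimal sketch of that idea (the host, port and two-second values are just the ones from my code above; Thread.Sleep is my own addition):

    // Sketch: retry loop with an explicit pause between attempts,
    // so the retry interval does not depend on how fast BeginConnect fails.
    using System;
    using System.Net.Sockets;
    using System.Threading;

    public class Program
    {
        public static void Main(String[] args)
        {
            TcpClient c;
            IAsyncResult res;
            var connected = false;
            do
            {
                c = new TcpClient();
                res = c.BeginConnect("10.64.4.49", 13000, null, null);

                // Wait up to two seconds for this attempt to finish.
                connected = res.AsyncWaitHandle.WaitOne(TimeSpan.FromSeconds(2)) && c.Connected;

                if (!connected)
                {
                    c.Close();
                    // Force a fixed pause so a quickly failing attempt
                    // (as on Linux) does not retry immediately.
                    Thread.Sleep(TimeSpan.FromSeconds(2));
                }
            }
            while (!connected);

            c.EndConnect(res);
            Console.WriteLine("Connected");
            Console.ReadLine();
        }
    }

That keeps the retry rate the same on both platforms, but I would still like to understand why the two-second wait behaves so differently in the first place.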
Thanks in advance,
Julian Dimitrov