
Following some tutorial code, I connected a C# web application to a Java socket server through a WebMethod in my web application's web service. Unfortunately, the communication is quite slow. For example, when the Java server echoes data back to the C# client, I get the following results:

  • Size of data sent = 32 MB, total time = 980 ms (no problem; this is throughput-bound)
  • Size of data sent = 4 MB, total time = 530 ms (becoming disproportionately slow)
  • Size of data sent = 1 MB, total time = 520 ms (clearly bottlenecked by something other than throughput)
  • Size of data sent = 1 kB, total time = 516 ms (this must be some constant latency)

I've read that people achieve real-time communication (~60 messages/s) and sometimes even millions of streams per second with some server applications. What could be the problem with my implementation? It sends multiple messages over a single open connection, so object-creation overhead should only show up for the first message. Why am I getting ~500 ms of overhead per message?
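
(For a baseline: below is a minimal, hypothetical Java sketch of how per-message round-trip time over one persistent connection could be measured. It assumes an echo server on localhost:9999 that echoes back exactly the bytes it receives; class name, message size, and iteration count are illustrative.)

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.net.Socket;

public class EchoLatencyProbe {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("localhost", 9999)) {
            socket.setTcpNoDelay(true); // disable Nagle batching for small messages
            DataOutputStream out = new DataOutputStream(socket.getOutputStream());
            DataInputStream in = new DataInputStream(socket.getInputStream());

            byte[] payload = new byte[1024];        // 1 kB test message
            byte[] echo = new byte[payload.length];

            for (int i = 0; i < 100; i++) {         // repeated sends over one open connection
                long start = System.nanoTime();
                out.write(payload);
                out.flush();
                in.readFully(echo);                 // block until the full echo is back
                System.out.println("round trip: "
                        + (System.nanoTime() - start) / 1_000 + " us");
            }
        }
    }
}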

The C# web method below is initialized when the web app starts; it connects to the Java server once, on the first call, and reuses the same static socket for every subsequent call to this web method.

public static IPHostEntry ipHostInfo = Dns.Resolve(Dns.GetHostName());
public static IPAddress ipAddress = ipHostInfo.AddressList[0];
public static IPEndPoint remoteEP = new IPEndPoint(ipAddress, 9999);

// Create a TCP/IP socket.
public static Socket sender = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
public static int z = 0;
public static object lck = new object(); // lock object guarding the one-time Connect

[WebMethod]
public BenchmarkData_ StartClient()
{
    lock(lck)
    {
        z++;
        if (z == 1)
        {
            sender.Connect(remoteEP);
        }
    }
    int bytesRec = 0;
    int boy = 0;
    byte[] bytes = new byte[1024 * 1024];
    int bytesSent = 0;
    SocketFlags sf = new SocketFlags();
    Stopwatch sw = new Stopwatch();
    Stopwatch sw2 = new Stopwatch();

    #region r
    lock (lck)
    {
        sw.Start();
        try
        {
            // The remote endpoint (port 9999) and the socket are the static
            // fields above; here we only tune the receive buffer and timeout.
            sender.ReceiveBufferSize = 1024 * 1024;
            sender.ReceiveTimeout = 1;

            // The socket was already connected on the first call; send a request and read the echo.
            try
            {
                Console.WriteLine("Socket connected to {0}", sender.RemoteEndPoint.ToString());
                // Encode the data string into a byte array.
                byte[] msg = Encoding.ASCII.GetBytes("This is a test<EOF>");

                // Send the data through the socket.
                bytesSent = sender.Send(msg);

                // Receive the response from the remote device.
                sw.Stop();

                sw2.Start();
                while ((bytesRec = sender.Receive(bytes)) > 0)
                {
                    boy += bytesRec;
                }

                Console.WriteLine("Echoed test = {0}", Encoding.ASCII.GetString(bytes, 0, bytesRec));

                // Release the socket.
                // sender.Shutdown(SocketShutdown.Both);
                // sender.Close();
                sw2.Stop();
            }
            catch (ArgumentNullException ane)
            {
                Console.WriteLine("ArgumentNullException : {0}", ane.ToString());
            }
            catch (SocketException se)
            {
                Console.WriteLine("SocketException : {0}", se.ToString());
            }
            catch (Exception e)
            {
                Console.WriteLine("Unexpected exception : {0}", e.ToString());
            }
        }
        catch (Exception e)
        {
            Console.WriteLine(e.ToString());
        }
    }
    #endregion

    return new BenchmarkData_() { .... };
}

Here is the Java code (half pseudocode):

serverSocket=new ServerSocket(port); // in listener thread
Socket socket=serverSocket.accept(); // in listener thread

// in a dedicated thread per connection made:
out = new BufferedOutputStream(socket.getOutputStream());
in = new DataInputStream(socket.getInputStream());

boolean reading=true;
ArrayList<Byte> incoming=new ArrayList<Byte>();

while (in.available() == 0)
{
    Thread.sleep(3);    
}

while (in.available() > 0)
{
    int bayt=-2;
    try {
        bayt=in.read();
    } catch (IOException e) { e.printStackTrace(); }

    if (bayt == -1)
    {
        reading = false;
    }
    else
    {
        incoming.add((byte) bayt);                      
    }
}

byte [] incomingBuf=new byte[incoming.size()];
for(int i = 0; i < incomingBuf.length; i++)
{
    incomingBuf[i] = incoming.get(i);
}

String msg = new String(incomingBuf, StandardCharsets.UTF_8);
if (msg.length() < 8192)
    System.out.println("Socket Thread:  "+msg);
else
    System.out.println("Socket Thread: long msg.");

OutputStreamWriter outW = new OutputStreamWriter(out);
System.out.println(socket.getReceiveBufferSize());
outW.write(testStr.toString()); // 32MB, 4MB, ... 1kB versions
outW.flush();
  • I assume you are including the time to transfer the actual data in your measured time. You could be running a bunch of these at the same time, but transferring 32 MB of data can legitimately take close to a second even over a decent local area network. – MK. Aug 05 '15 at 20:43
  • It re-runs the same function via JavaScript callback functions as a warm-up sequence; right now only a single instance of this runs at a time. – huseyin tugrul buyukisik Aug 05 '15 at 20:44
  • Don't use that `while(in.available()==0) { Thread.sleep(3); }`. A well-designed program should never need sleep. (BTW, you will never get an exact 3 ms sleep; at least 16 ms, depending on the Windows version you are using.) (A blocking-read sketch follows this comment thread.) – Eser Aug 05 '15 at 20:45
  • No, I'm talking about your claim of millions of streams -- the fact that 1Mb transfer takes a second doesn't mean you couldn't be running thousands of these in parallel, all doing 1Mb/sec (if you have bandwidth). – MK. Aug 05 '15 at 20:46
  • The thread constantly reads the buffer without this sleep/wait part and doesn't get a single byte from the client. – huseyin tugrul buyukisik Aug 05 '15 at 20:47
  • @MK, what about the 1 kB case? It's 500 ms too. – huseyin tugrul buyukisik Aug 05 '15 at 20:48
  • what is your ping time between these 2? – MK. Aug 05 '15 at 20:48
  • What would a minimal example of pinging between them look like? – huseyin tugrul buyukisik Aug 05 '15 at 20:51
  • @huseyintugrulbuyukisik see this minimalist code about TCP http://stackoverflow.com/a/21510978/932418 – Eser Aug 05 '15 at 20:52
  • It says zero milliseconds on Windows cmd. – huseyin tugrul buyukisik Aug 05 '15 at 20:54
  • OK, how about `ping -l 1024 192.168.1.1`? – MK. Aug 05 '15 at 20:59
  • Also, your Java code reads one byte at a time. You should be reading into a buffer of at least 8192 bytes at a time. – MK. Aug 05 '15 at 21:01
  • @MK Zero ms again. Tried with different buffer sizes, but it did not change the latency. – huseyin tugrul buyukisik Aug 05 '15 at 21:02
  • Are you sure we are talking about the same thing? I'm saying this `bayt=in.read();` needs to be replaced with `int nbytes = in.read(buffer);` where buffer is `byte[] buffer = new byte[8192];` – MK. Aug 05 '15 at 21:06
  • Yes, the code is messy, so I turned it into pseudocode and left some parts out; sorry. – huseyin tugrul buyukisik Aug 05 '15 at 21:14
  • Well, your next step is to replace both sides with nc (netcat) or some similar tool and confirm that the network speed is actually good. Then replace the client and the server one by one and see if one of them is fast when talking to netcat. This will at least help you isolate which side is broken. – MK. Aug 06 '15 at 17:33
  • The lagging part is reading the echo from the server, but I don't know why. A Java-to-Java socket takes microseconds, just as fast as C#-to-C#. – huseyin tugrul buyukisik Aug 06 '15 at 17:46
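
(Following Eser's and MK's comments above, here is a sketch of what the Java read loop could look like with a blocking bulk read instead of `available()`/`sleep` polling and byte-at-a-time reads. `in` is the existing `DataInputStream`; the 8192-byte buffer follows MK's suggestion. Note that without a delimiter or length header this loop only ends when the peer closes the connection, which is the framing issue that comes up in the answer below.)

import java.io.ByteArrayOutputStream;
import java.io.IOException;

byte[] buffer = new byte[8192];
ByteArrayOutputStream received = new ByteArrayOutputStream();

try {
    int nbytes;
    // read(buffer) blocks until at least one byte is available, so no
    // sleep-polling is needed; it returns -1 when the peer closes the socket.
    while ((nbytes = in.read(buffer)) != -1) {
        received.write(buffer, 0, nbytes);
    }
} catch (IOException e) {
    e.printStackTrace();
}

byte[] incomingBuf = received.toByteArray(); // replaces the ArrayList<Byte> copy loop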

1 Answer


The problem was solved by replacing

while ((bytesRec = sender.Receive(bytes)) > 0)
{
    boy += bytesRec;
}

with

while (sender.Available <= 0) ;   // spin until the first bytes of the reply arrive

while (sender.Available > 0)      // then drain whatever has arrived so far
{
    bytesRec = sender.Receive(bytes);
    boy += bytesRec;
}

Now it takes microseconds for 1 kB reads instead of 500 ms. Is that because it checks a single integer instead of trying to read into the whole buffer? Maybe. But it now doesn't read the whole message sent by the server; it reads only several kilobytes even when the server sends megabytes. It needs some kind of header to know how much to read, as sketched below.
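
(A common scheme for such a header is length-prefixed framing: the sender writes the payload size first, then the payload, and the receiver reads exactly that many bytes. Here is a minimal sketch of the Java side, assuming the streams are wrapped in `DataOutputStream`/`DataInputStream`; the C# client would mirror this by reading a 4-byte big-endian length and then looping `Receive` until that many bytes have arrived. Method names are illustrative.)

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Sender: prefix each message with its length.
static void sendFramed(DataOutputStream out, byte[] payload) throws IOException {
    out.writeInt(payload.length); // 4-byte big-endian length header
    out.write(payload);
    out.flush();
}

// Receiver: read the header, then exactly that many bytes.
static byte[] receiveFramed(DataInputStream in) throws IOException {
    int length = in.readInt();     // blocks until the 4-byte header arrives
    byte[] payload = new byte[length];
    in.readFully(payload);         // blocks until the whole message has arrived
    return payload;
}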

When the server sends 3 MB and the client reads exactly the same amount, it takes 30 ms (both on the same machine). Trying to read more than the server has sent (even a single byte) raises an exception, so the client really does have to read exactly the amount it needs.
