
I wrote a server that accepts connections and bombards them with ~100-byte messages using a text protocol, and my implementation can send about 400K msg/sec over loopback to a 3rd-party client. I picked Netty for this task, on SUSE 11 Real Time with JRockit RTS. But when I started developing my own client based on Netty, I faced a drastic throughput reduction (down from 400K to 1.3K msg/sec). The code of the client is pretty straightforward. Could you please give advice or show examples of how to write a much more effective client? I actually care more about latency, but I started with throughput tests, and I don't think it is normal to get 1.5K msg/sec on loopback. P.S. The client's only purpose is to receive messages from the server; it sends heartbeats very seldom.

Client.java

import java.net.InetSocketAddress;
import java.util.concurrent.Executors;

import org.jboss.netty.bootstrap.ClientBootstrap;
import org.jboss.netty.channel.Channel;
import org.jboss.netty.channel.ChannelFactory;
import org.jboss.netty.channel.ChannelFuture;
import org.jboss.netty.channel.socket.nio.NioClientSocketChannelFactory;
import org.jboss.netty.handler.execution.ExecutionHandler;
import org.jboss.netty.handler.execution.OrderedMemoryAwareThreadPoolExecutor;

public class Client {

    private static ClientBootstrap bootstrap;
    private static Channel connector;

    public static boolean start() {
        ChannelFactory factory =
            new NioClientSocketChannelFactory(
                    Executors.newCachedThreadPool(),
                    Executors.newCachedThreadPool());
        ExecutionHandler executionHandler = new ExecutionHandler(
                new OrderedMemoryAwareThreadPoolExecutor(16, 1048576, 1048576));

        bootstrap = new ClientBootstrap(factory);

        // Pass the execution handler to the pipeline factory, which requires it.
        bootstrap.setPipelineFactory(new ClientPipelineFactory(executionHandler));

        bootstrap.setOption("tcpNoDelay", true);
        bootstrap.setOption("keepAlive", true);
        bootstrap.setOption("receiveBufferSize", 1048576);

        ChannelFuture future = bootstrap
                .connect(new InetSocketAddress("localhost", 9013));
        if (!future.awaitUninterruptibly().isSuccess()) {
            System.out.println("--- CLIENT - Failed to connect to server at " +
                               "localhost:9013.");
            bootstrap.releaseExternalResources();
            return false;
        }

        connector = future.getChannel();
        return connector.isConnected();
    }

    public static void main(String[] args) {
        if (start())
            System.out.println("Client connected to the server");
    }
}

ClientPipelineFactory.java

import static org.jboss.netty.channel.Channels.pipeline;

import org.jboss.netty.channel.ChannelPipeline;
import org.jboss.netty.channel.ChannelPipelineFactory;
import org.jboss.netty.handler.codec.frame.DelimiterBasedFrameDecoder;
import org.jboss.netty.handler.codec.frame.Delimiters;
import org.jboss.netty.handler.execution.ExecutionHandler;

public class ClientPipelineFactory implements ChannelPipelineFactory {

    private final ExecutionHandler executionHandler;

    public ClientPipelineFactory(ExecutionHandler executionHandler) {
        this.executionHandler = executionHandler;
    }

    @Override
    public ChannelPipeline getPipeline() throws Exception {
        ChannelPipeline pipeline = pipeline();
        // Split the byte stream into text messages on line delimiters.
        pipeline.addLast("framer", new DelimiterBasedFrameDecoder(
                1024, Delimiters.lineDelimiter()));
        // Hand decoded messages to the executor so the NIO worker is not blocked.
        pipeline.addLast("executor", executionHandler);
        pipeline.addLast("handler", new MessageHandler());
        return pipeline;
    }
}

MessageHandler.java

import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.ExceptionEvent;
import org.jboss.netty.channel.MessageEvent;
import org.jboss.netty.channel.SimpleChannelHandler;

public class MessageHandler extends SimpleChannelHandler {

    private static final long NANOS_IN_SEC = 1000000000L;

    long max_msg = 10000;
    long cur_msg = 0;
    long startTime = System.nanoTime();

    @Override
    public void messageReceived(ChannelHandlerContext ctx, MessageEvent e) {
        cur_msg++;

        if (cur_msg == max_msg) {
            // Report throughput every max_msg messages, then reset the counters.
            System.out.println("Throughput (msg/sec) : "
                    + max_msg * NANOS_IN_SEC / (System.nanoTime() - startTime));
            cur_msg = 0;
            startTime = System.nanoTime();
        }
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, ExceptionEvent e) {
        e.getCause().printStackTrace();
        e.getChannel().close();
    }
}

Update: On the server side there is a periodic thread that writes to the accepted client channel, and the channel soon becomes unwritable. Update 2: I added an OrderedMemoryAwareThreadPoolExecutor to the pipeline, but the throughput is still very low (about 4K msg/sec).

Fixed: I put the execution handler in front of the whole pipeline stack, and it worked out!
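
For reference, a minimal sketch of what that reordering looks like, assuming the ClientPipelineFactory above (not the author's exact code): with the ExecutionHandler registered first, every upstream event, including frame decoding, is handed off to the OrderedMemoryAwareThreadPoolExecutor, so the NIO worker thread is free to keep draining the socket.

@Override
public ChannelPipeline getPipeline() throws Exception {
    ChannelPipeline pipeline = pipeline();
    // Executor first: decoding and the business handler now run on the
    // OrderedMemoryAwareThreadPoolExecutor's threads instead of the NIO worker.
    pipeline.addLast("executor", executionHandler);
    pipeline.addLast("framer", new DelimiterBasedFrameDecoder(
            1024, Delimiters.lineDelimiter()));
    pipeline.addLast("handler", new MessageHandler());
    return pipeline;
}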

Egor Lakomkin
  • I would send the timestamp in the messages and get the latency of each message. This may give you more detail as to what the delay is. If you are only communicating on the same host, and latency is critical to you, you might consider using shared memory instead. – Peter Lawrey Jan 24 '12 at 15:43
  • Actually, the server will be remote. I implemented the emulator to test how many messages per second the client can process. And it turns out that a naive Netty client implementation is slow. – Egor Lakomkin Jan 24 '12 at 18:43
  • Your code says ItchClientPipelineFactory, but the pasted code is for ClientPipelineFactory. Is this just a naming error, or is it the case that ItchClientPipelineFactory has non-optimal code in it (i.e. is not using the right message handler) and you forgot you were still using it? – user467257 Jan 24 '12 at 20:41
  • Did you see this post, and try its suggestions: http://stackoverflow.com/questions/8444267/how-to-write-a-high-performance-netty-client – user467257 Jan 26 '12 at 10:31
  • Thanks for the link, but I've seen it and it doesn't help. It seems the server writes much faster than the client can process (the channel on the server eventually becomes unwritable). But I cannot figure out what is wrong with the client, and why it can't process many more messages... – Egor Lakomkin Jan 26 '12 at 10:57

1 Answer


If the server is sending messages of a fixed size (~100 bytes), you can set a ReceiveBufferSizePredictor on the client bootstrap; this will optimize the reads:

bootstrap.setOption("receiveBufferSizePredictorFactory",
            new AdaptiveReceiveBufferSizePredictorFactory(MIN_PACKET_SIZE, INITIAL_PACKET_SIZE, MAX_PACKET_SIZE));
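
(MIN_PACKET_SIZE, INITIAL_PACKET_SIZE, and MAX_PACKET_SIZE are placeholders for the minimum, initial, and maximum buffer sizes the predictor may choose; with ~100-byte messages, a small minimum and an initial size of a few hundred bytes would be a plausible starting point.)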

According to the code segment you posted, the client's NIO worker thread is doing everything in the pipeline, so it is busy with decoding as well as executing the message handlers. You have to add an execution handler.

You have said that the channel is becoming unwritable on the server side, so you may have to adjust the watermark sizes in the server bootstrap. You can also periodically monitor the write buffer size (write queue size) to make sure the channel is becoming unwritable because messages cannot be written to the network fast enough. That can be done with a util class like the one below (a sketch of the watermark adjustment follows it).

// This class lives in Netty's own package so it can read the
// package-private writeBufferSize field of NioSocketChannel.
package org.jboss.netty.channel.socket.nio;

import org.jboss.netty.channel.Channel;

public final class NioChannelUtil {
  public static long getWriteTaskQueueCount(Channel channel) {
    NioSocketChannel nioChannel = (NioSocketChannel) channel;
    return nioChannel.writeBufferSize.get();
  }
}
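
As for the watermarks themselves, a minimal sketch of adjusting them on a Netty 3 server bootstrap; the sizes here are illustrative guesses to tune against your own traffic, not recommendations:

// Hypothetical sizes: a higher high-water mark keeps a fast writer from
// flipping the accepted channel to unwritable as quickly; the low-water
// mark controls when writability is restored.
serverBootstrap.setOption("child.writeBufferHighWaterMark", 512 * 1024);
serverBootstrap.setOption("child.writeBufferLowWaterMark", 256 * 1024);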
Jestan Nirojan
  • Jestan, thank you for the answer. I added the ExecutionHandler in the pipeline before the actual business-logic handler (where I measure throughput), but it didn't work out. The throughput is still very low, and the server channel is not writable. I tried to find the class NioSocketChannel, but failed. – Egor Lakomkin Jan 30 '12 at 08:26
  • @EgorLakomkin, so your issue is not solved yet? Is NioSocketChannel missing, or is the package `org.jboss.netty.channel.socket.nio` missing? What is the Netty version? – Jestan Nirojan Feb 02 '12 at 09:40
  • Netty 4.x doesn't seem to allow access to a writeBufferSize object, and it seems the package has changed as well. – Scot Aug 06 '14 at 22:59