
I see, from the server side, that the benefit of NIO is the ability to manage multiple network connections with fewer threads, compared to the one-thread-per-connection approach of blocking IO.

However, if I have an IO client which connects to thousands of servers at the same time, can I use a similar approach to manage these connections with fewer threads? I tried the approach from "Netty 4 multiple client" and found that it spawns a "Reader" thread for each channel it creates.

So, my questions are:

1) What are the benefits of using Netty/NIO on the client side?
2) Is it possible to manage multiple connections with fewer threads on the client side?

Thanks!

I have uploaded the code samples to GitHub: https://github.com/hippoz/ogop-lseb

The sample server/client classes are moc.ogop.ahsp.demo.nio.MultipleConnectionNioMain and moc.ogop.ahsp.demo.nio.NettyNioServerMain.
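
To make the setup concrete, here is a minimal sketch of what I mean by a single client holding many outbound connections (simplified, with placeholder host names and handler; it is not the exact code from the repo above):

```java
import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;

public class ManyConnectionsClient {
    public static void main(String[] args) throws Exception {
        // One event-loop group shared by every outbound connection.
        EventLoopGroup group = new NioEventLoopGroup();
        try {
            Bootstrap bootstrap = new Bootstrap()
                    .group(group)
                    .channel(NioSocketChannel.class)
                    .handler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        protected void initChannel(SocketChannel ch) {
                            // Application handlers would be added here.
                        }
                    });

            // One client process connecting to ~1000 servers at the same time.
            for (int i = 0; i < 1000; i++) {
                bootstrap.connect("server-" + i + ".example.com", 9000);
            }
            Thread.sleep(Long.MAX_VALUE); // keep the connections open
        } finally {
            group.shutdownGracefully();
        }
    }
}
```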

  • None, unless you're planning on having a large number of outbound connections, which isn't usual. – user207421 Jul 19 '16 at 11:37
  • Yes, actually, we are using a persistent connection between each client/server pair, so each client can possibly connect to about 1000 servers at the same time. – James Zheng Jul 19 '16 at 12:31
  • Well in that case (which doesn't follow from 'persistent connection'), you can conserve threads, at the expense of significantly more complex coding. I would get it working with `java.net` first and then see if you have a scalability problem. – user207421 Jul 20 '16 at 03:44
  • Even I have the same question in mind but did not get an answer yet. Please share if you got your answer. – T-Bag Mar 18 '17 at 06:16

1 Answer


Having lots of threads creates a context-switching problem in the kernel: much more memory has to be loaded and unloaded on each core as the kernel tries to reschedule the threads across the cores.

The benefit of NIO anywhere is performance; that's pretty much the only reason we use it. Using blocking IO is much simpler. With the worker model and NIO you can limit the number of threads (and the potential computational time) the process uses, so if you have two workers and they go bonkers using 100% CPU time, the whole system won't slow to a crawl, because you still have 2-4 more cores available.
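
As a rough illustration of that worker model on the client side (a minimal sketch; the two-thread cap, host names, and handler are placeholders, not anything from the question's repo), a single small, named NioEventLoopGroup can carry every outbound channel:

```java
import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;
import io.netty.util.concurrent.DefaultThreadFactory;

public class BoundedThreadsClient {
    public static void main(String[] args) throws Exception {
        // Cap the client at two IO threads, named so they stand out in a thread dump.
        NioEventLoopGroup workers =
                new NioEventLoopGroup(2, new DefaultThreadFactory("client-io"));
        try {
            Bootstrap bootstrap = new Bootstrap()
                    .group(workers)
                    .channel(NioSocketChannel.class)
                    .handler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        protected void initChannel(SocketChannel ch) {
                            ch.pipeline().addLast(new ChannelInboundHandlerAdapter() {
                                @Override
                                public void channelActive(ChannelHandlerContext ctx) {
                                    // Every connection reports "client-io-1-1" or "client-io-1-2",
                                    // showing that all channels share the two event-loop threads.
                                    System.out.println(ctx.channel() + " served by "
                                            + Thread.currentThread().getName());
                                    ctx.fireChannelActive();
                                }
                            });
                        }
                    });
            for (int i = 0; i < 1000; i++) {
                bootstrap.connect("some-server.example", 9000 + i);
            }
            Thread.sleep(10_000);
        } finally {
            workers.shutdownGracefully();
        }
    }
}
```

With a setup like this, a thread dump should show only the two `client-io` threads doing the network IO, no matter how many connections are open.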

Have fun!

– Johnny V