6

In every single tutorial and example I have seen on the internet for Linux/Unix sockets, the server-side code always involves an infinite loop that checks for a client connection on every iteration. Example:

http://www.thegeekstuff.com/2011/12/c-socket-programming/

http://tldp.org/LDP/LG/issue74/tougher.html#3.2

Is there a more efficient way to structure the server-side code so that it does not involve an infinite loop, or to code the infinite loop in a way that takes up fewer system resources?

leorex
  • Most (efficient, anyway, i.e. not including create/terminate/join) threaded code is written as infinite loops with a blocking call somewhere at the top. Don't worry about it :) – Martin James Jul 27 '12 at 14:25

7 Answers

7

The infinite loop in those examples is already efficient. The call to accept() is a blocking call: the function does not return until a client connects to the server. Execution of the thread that called accept() is suspended, and it does not take any processing power.

Think of accept() as a call to join(), or as a wait on a mutex/lock/semaphore.

Of course, there are many other ways to handle incoming connections, but those other ways deal with the blocking nature of accept(). This function is difficult to cancel, so non-blocking alternatives exist that allow the server to perform other actions while waiting for an incoming connection. One such alternative is select(). Other alternatives are less portable, as they involve low-level operating system calls that signal the connection through a callback function, an event, or some other asynchronous mechanism handled by the operating system.
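
For illustration, here is a minimal sketch of the select() approach; listen_fd is assumed to be a socket that is already bound and listening, as in the tutorials linked from the question, and error handling is trimmed:

```c
#include <stddef.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

/* One round of serving: wait up to five seconds for a connection
   on listen_fd, then fall through to do other work. */
void serve_one_round(int listen_fd)
{
    fd_set readable;
    FD_ZERO(&readable);
    FD_SET(listen_fd, &readable);

    struct timeval timeout = { .tv_sec = 5, .tv_usec = 0 };

    /* select() blocks (no busy-waiting) until listen_fd is readable,
       meaning a connection is pending, or the timeout expires. */
    int ready = select(listen_fd + 1, &readable, NULL, NULL, &timeout);

    if (ready > 0 && FD_ISSET(listen_fd, &readable)) {
        int client = accept(listen_fd, NULL, NULL);  /* will not block now */
        if (client >= 0) {
            /* ... handle the client ... */
            close(client);
        }
    } else {
        /* Timed out: the server is free to do other work here. */
    }
}
```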

Adrien Plisson
  • Exactly. From the operating system's perspective (for Linux specifically), when you call `accept()` (a blocking call) in your process and there are no incoming connections, your process goes to sleep: its state changes from TASK_RUNNING to TASK_(UN)INTERRUPTIBLE and it is attached to one of the kernel's wait queues. The scheduler is then called to pick the next process to run, so from the OS perspective no processing time is wasted. Your process will be woken up when a new connection request arrives. – mzet Jul 27 '12 at 12:28
2

For C++ you could look into boost.asio. You could also look into asynchronous I/O functions. There is also SIGIO.

Of course, even when using these asynchronous methods, your main program still needs to sit in a loop, or the program will exit.
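
For completeness, here is a rough, Linux-oriented sketch of the SIGIO route; the handler and flag names are illustrative choices of my own, and a real program would need more care (only async-signal-safe work belongs in a signal handler):

```c
#include <fcntl.h>
#include <signal.h>
#include <unistd.h>

static volatile sig_atomic_t connection_pending = 0;

/* Signal handler: just record that the socket became readable. */
static void on_sigio(int signo)
{
    (void)signo;
    connection_pending = 1;
}

/* Ask the kernel to deliver SIGIO when listen_fd becomes readable. */
void enable_sigio(int listen_fd)
{
    signal(SIGIO, on_sigio);
    fcntl(listen_fd, F_SETOWN, getpid());        /* send SIGIO to this pid */
    fcntl(listen_fd, F_SETFL,
          fcntl(listen_fd, F_GETFL) | O_ASYNC);  /* enable signal-driven I/O */
}
```

The main program can then sleep in pause() and call accept() whenever connection_pending is set, which is still a loop, as noted above, just one driven by signals instead of a blocking accept().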

Some programmer dude
1

The infinite loop is there to maintain the server's running state, so that when a client connection is accepted, the server won't quit immediately afterwards; instead, it goes back to listening for another client connection.
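
A minimal sketch of that canonical loop (error handling and the client-handling logic are trimmed, and the port number is arbitrary):

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int srv = socket(AF_INET, SOCK_STREAM, 0);

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(5000);              /* arbitrary port */

    bind(srv, (struct sockaddr *)&addr, sizeof(addr));
    listen(srv, 10);

    for (;;) {                                /* the "infinite" loop */
        /* The thread sleeps here, using no CPU, until the kernel
           has a completed connection to hand over. */
        int client = accept(srv, NULL, NULL);
        if (client < 0)
            continue;

        /* ... talk to the client ... */
        close(client);                        /* then loop back and wait */
    }
}
```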

The listen() call is a blocking one - that is to say, it waits until it receives data. It does this in an extremely efficient way, using zero system resources (until a connection is made, of course), by making use of the operating system's network drivers, which trigger an event (or hardware interrupt) that wakes the listening thread up.

gbjbaanb
  • Second paragraph is mostly incorrect. listen() does not wait, and it does not receive data. accept() waits, but it does not receive data either; it receives connections. This is not done via hardware interrupts or the network drivers but via the TCP/IP stack and a backlog queue. – user207421 Jul 27 '12 at 21:57
1

Here's a good overview of what techniques are available - The C10K problem.

Nikolai Fetissov
0

When you are implementing a server that listens for a potentially unlimited number of connections, there is IMO no way around some sort of infinite loop. Usually this is not a problem at all, because when your socket is not marked as non-blocking, the call to accept() will block until a new connection arrives. Due to this blocking, no system resources are wasted.

Libraries that provide something like an event-based system are ultimately implemented in the way described above.

cli_hlt
0

In addition to what has already been posted, it's fairly easy to see what is going on with a debugger. You will be able to single-step through until you execute the accept() line, upon which the 'single-step' highlight will disappear and the app will run on - the next line is not reached. If you put a breakpoint on the next line, it will not fire until a client connects.

Martin James
0

We need to follow best practice when writing client-server programs. The best guide I can recommend at this time is The C10K Problem. There are specific things we need to follow here: we can go for select, poll, or epoll. Each has its own advantages and disadvantages.

If you are running your code on a recent kernel version, then I would recommend going with epoll; a minimal sample of the pattern is sketched below.

If you are using select, poll, or epoll, you will be blocked until you get an event/trigger, so your server will not spin in an infinite loop consuming system time.
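
Here is a hedged, Linux-only sketch of that epoll pattern; listen_fd is assumed to be already bound and listening, and error handling is trimmed:

```c
#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>

/* Event loop multiplexing a listening socket and its clients. */
void epoll_server(int listen_fd)
{
    int epfd = epoll_create1(0);

    struct epoll_event ev = { .events = EPOLLIN, .data.fd = listen_fd };
    epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev);

    struct epoll_event ready[64];
    for (;;) {
        /* Blocks, using no CPU, until at least one fd has activity. */
        int n = epoll_wait(epfd, ready, 64, -1);
        for (int i = 0; i < n; i++) {
            if (ready[i].data.fd == listen_fd) {
                /* New connection: register the client fd as well. */
                int client = accept(listen_fd, NULL, NULL);
                struct epoll_event cev = { .events = EPOLLIN,
                                           .data.fd = client };
                epoll_ctl(epfd, EPOLL_CTL_ADD, client, &cev);
            } else {
                /* Activity on a client: read/write ready[i].data.fd
                   here, and close() it when done. */
            }
        }
    }
}
```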

In my personal experience, epoll is the best way to go: I observed that the load on my server machine with 80k ACTIVE connections was far lower than with select and poll. The load average of my server machine was just 3.2 with 80k active connections :)

On testing with poll, I found my server's load average went up to 7.8 on reaching 30k active client connections :(.

Viswesn