
I am writing a Python TCP/IP server that accepts messages from one client. Once I receive a message, I parse it, generate a response based on its message type, and send the generated response to the client.

My application works very well until multiple messages come in very quickly. My server seems to block on I/O while it is in the middle of processing a message. For example, while I am processing the 3rd message, the 4th message is never accepted by my server, since I am still processing the 3rd one. As a result, the 4th message gets lost.

I was thinking about handling this issue using threading. Thread 1 would do nothing but accept incoming messages from the client and put them into a queue, while Thread 2 would read each request message from the queue and generate a response to be sent to the client. After it sends the response, Thread 2 would pop that message from the queue.

Is this the right way to go about this issue?
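For reference, here is a minimal sketch of the two-thread design described above, using `queue.Queue` as the hand-off between the receiver and the responder. The `handle_request` function is a hypothetical stand-in for the real parse-and-respond logic:

```python
import queue
import socket
import threading


def handle_request(data: bytes) -> bytes:
    # Hypothetical placeholder for the real message parsing and
    # response generation.
    return b"ACK:" + data


def receiver(conn: socket.socket, q: "queue.Queue") -> None:
    # Thread 1: only reads from the socket and enqueues messages.
    while True:
        data = conn.recv(4096)
        if not data:
            q.put(None)  # peer closed; signal the responder to stop
            break
        q.put(data)


def responder(conn: socket.socket, q: "queue.Queue") -> None:
    # Thread 2: dequeues messages, generates responses, sends them.
    while True:
        data = q.get()
        if data is None:
            break
        conn.sendall(handle_request(data))
```

Note that this only decouples receiving from processing; as the comments below point out, it does not by itself make processing any faster.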

EDIT: After some more testing, I realized that I am always receiving the client's messages; however, the client times out after 5-10 seconds if it does not receive a response. I need to speed up the message processing: my server actually receives all of the messages and does not lose any, as I thought before. The problem is that the client is impatient. Any tips on how to speed up performance? A process per request that runs in parallel, so the work is divided equally? The only issue I see with that is that it is rare for a message to come in while I'm processing another one.
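One way to process several messages concurrently without spawning a whole process per request is a thread pool. This is a sketch only; `handle_request` is again a hypothetical stand-in for the real (slow) processing:

```python
import socket
from concurrent.futures import ThreadPoolExecutor


def handle_request(data: bytes) -> bytes:
    # Hypothetical stand-in for the real, slow message processing.
    return b"ACK:" + data


def serve_connection(conn: socket.socket, workers: int = 4) -> None:
    # Each received message is handed to the pool, so one slow message
    # does not delay responses to the messages behind it.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        while True:
            data = conn.recv(4096)
            if not data:
                break
            pool.submit(lambda d=data: conn.sendall(handle_request(d)))
```

Be aware that with concurrent workers the responses may go out in a different order than the requests arrived, so the protocol needs a way for the client to match responses to requests.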

paxtonjf
  • That's how you could do it. – Klaus D. Dec 14 '18 at 20:18
  • Why will the 4th message get lost? You'll just process it after you finish processing the 3rd message. – Barmar Dec 14 '18 at 20:18
  • @Barmar It seems as if the socket is doing blocking I/O – paxtonjf Dec 14 '18 at 20:20
  • TCP has built-in flow control. If messages are coming in faster than the application can read them from the kernel's socket buffer, the TCP window closes and the senders slow down. – Barmar Dec 14 '18 at 20:22
  • Google "python threading tutorial" – Barmar Dec 14 '18 at 20:23
  • Requests for off-site resources are not allowed on SO. – Klaus D. Dec 14 '18 at 20:24
  • @Barmar I should add that the client waits for 'x' seconds to receive a response from the server. If it doesn't get a response within 'x' seconds, it times out, which results in the client sending me a timeout reversal the next time it needs to send data. – paxtonjf Dec 14 '18 at 20:24
  • Using two threads with a queue between them won't help, then. Thread 2 won't respond to the client until it finishes processing the previous message in the queue. You're just replacing the socket buffer with the queue. – Barmar Dec 14 '18 at 20:27
  • What you really want is a worker pool, where you have multiple worker threads that can all process messages concurrently. Thread 1 puts the message in the queue, all the workers pull from the queue and process it. – Barmar Dec 14 '18 at 20:29
  • See https://stackoverflow.com/questions/3033952/threading-pool-similar-to-the-multiprocessing-pool – Barmar Dec 14 '18 at 20:29
  • TCP isn't message-based. It is a stream I/O. If you `send(b'abc')` and `send(b'def')`, you could `recv(1024)` and get `b'abcdef'`. Check that this isn't occurring. You can buffer your receives and extract only complete messages. You'll need a protocol to define what a complete message is. – Mark Tolonen Dec 15 '18 at 00:12
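The last comment deserves emphasis: because TCP is a byte stream, the application needs its own framing to know where one message ends and the next begins. A common approach, sketched here under the assumption that the protocol can be changed, is a length prefix on each message:

```python
import struct


def encode_message(payload: bytes) -> bytes:
    # Prefix each message with a 4-byte big-endian length so the
    # receiver can find message boundaries in the TCP byte stream.
    return struct.pack(">I", len(payload)) + payload


def extract_messages(buffer: bytearray) -> list:
    # Pull every complete length-prefixed message out of the buffer,
    # leaving any partial trailing data in place for the next recv().
    messages = []
    while len(buffer) >= 4:
        (length,) = struct.unpack(">I", bytes(buffer[:4]))
        if len(buffer) < 4 + length:
            break  # incomplete message; wait for more bytes
        messages.append(bytes(buffer[4:4 + length]))
        del buffer[:4 + length]
    return messages
```

The receive loop appends each `recv()` result to the `bytearray` and calls `extract_messages` to get only whole messages, which fixes both coalesced and split reads.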

1 Answer


The threading solution should work, but if you're using Python 3.4 or later, you might prefer to look into asyncio, which uses coroutines rather than threads and is, in my opinion, easier to use. However, this will also depend on what server library you're using.

Asyncio works well with Sanic but not with CherryPy. It all depends on your tools.
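If plain `asyncio` streams are an option, a single-client server along these lines would keep the socket responsive. `handle_request` is a hypothetical placeholder; genuinely CPU-heavy work would still need to be pushed to a thread (e.g. via `loop.run_in_executor`) so it doesn't block the event loop:

```python
import asyncio


async def handle_request(data: bytes) -> bytes:
    # Hypothetical stand-in for the real message processing.
    return b"ACK:" + data


async def handle_client(reader: asyncio.StreamReader,
                        writer: asyncio.StreamWriter) -> None:
    # Serve one connection: read, process, respond, repeat.
    while True:
        data = await reader.read(4096)
        if not data:
            break
        writer.write(await handle_request(data))
        await writer.drain()
    writer.close()
    await writer.wait_closed()


async def main() -> None:
    server = await asyncio.start_server(handle_client, "127.0.0.1", 8888)
    async with server:
        await server.serve_forever()
```

Running `asyncio.run(main())` starts the server; the host, port, and handler names above are illustrative, not fixed by the question.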

Gil Hamilton
Silas Coker