
I'm writing a QTcpServer. I used telnet.exe as a client for testing. Upon a new client connection my server sends a Hi! message to the client which is displayed - all is well and fine so far.

But when I type something in the telnet.exe window, a readyRead() is emitted for each character. I only want it to be sent after \r\n! What's the problem? Is it the nature of telnet.exe in Windows? Cause I've used telnet on my linux box and it only sends the string after \r\n, as expected.

zb226
Neel Basu

3 Answers


Unfortunately, that's how the Windows telnet.exe client works and there's no way to change that.

You must not rely on client-specific behavior like this when handling TCP streams. TCP does not guarantee message boundaries, but it does guarantee that, from your point of view, the data is delivered in the same order it was written by the client. You must take this into account when designing your protocol.

You'll need to buffer incoming data and handle message framing at the application protocol level. Common solutions include:

  • defining a message terminator sequence (plus a mechanism for escaping that sequence if it can appear inside normal messages) - for example, \r\n could serve as the terminator in this scenario;
  • prefixing each message with its length (length-prefix framing);
  • using a dedicated messaging library such as ZeroMQ or ActiveMQ - though then you can't use Qt's networking, unfortunately.
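To illustrate the terminator approach, here is a minimal sketch in plain C++, independent of Qt (the function name and structure are my own, not from any library): each received chunk is appended to a persistent buffer, every complete \r\n-terminated message is extracted, and any partial tail stays buffered for the next read.

```cpp
#include <string>
#include <vector>

// Hypothetical helper for terminator-based framing: append the chunk that
// just arrived to a persistent per-connection buffer, then pull out every
// complete "\r\n"-terminated message. Incomplete data remains in `buffer`.
std::vector<std::string> extractMessages(std::string &buffer,
                                         const std::string &chunk)
{
    buffer += chunk;  // data may arrive split at arbitrary points
    std::vector<std::string> messages;
    std::size_t pos;
    while ((pos = buffer.find("\r\n")) != std::string::npos) {
        messages.push_back(buffer.substr(0, pos));  // message, terminator stripped
        buffer.erase(0, pos + 2);                   // drop message + "\r\n"
    }
    return messages;
}
```

In a Qt server, you would call something like this from the slot connected to readyRead(), feeding it the result of socket->readAll(); no matter how the bytes are split in transit, complete messages come out only once their terminator has arrived.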

Mihai Limbășan

Instead of typing your message, press CTRL + ], and then type send YOURMESSAGE\r\n

Himanshu
Chris

Yes, there are some differences between Windows and Linux with CR LF; it's "normal".

One approach that works nicely is to make use of a buffer and then wait for your data to be ready or for a timeout. For example, your separator token can be \r, and if you get an \n right after it, just drop it.

Here is an example expecting a token from a custom protocol:

int Connection::readDataIntoBuffer(int maxSize)
{
    if (maxSize > MaxBufferSize)
        return 0;

    int numBytesBeforeRead = buffer.size();
    if (numBytesBeforeRead == MaxBufferSize) {
        // Buffer is full but no complete message arrived: give up.
        abort();
        return 0;
    }

    // Read one byte at a time until the separator token arrives
    // or no more data is available.
    while (bytesAvailable() > 0 && buffer.size() < maxSize) {
        buffer.append(read(1));
        if (buffer.endsWith(SeparatorToken))
            break;
    }
    return buffer.size() - numBytesBeforeRead;
}

See http://doc.qt.nokia.com/stable/network-network-chat-connection-cpp.html

Depending on what you need, another suggestion is to try and stick to some standard protocol. That way you can test with different types of clients.

If you want to stick to your custom protocol, I suggest you write your own client and proper test cases to exercise your server. Qt makes it easy and fast ;) Take a look at the network examples.

Edit:

You might consider readLine() instead of read() on your QTcpSocket, which is a QIODevice. Unlike read(), it waits for a newline (see the doc excerpt below). However, this gives you less control over where your line ends:

qint64 QIODevice::readLine ( char * data, qint64 maxSize )

From the doc:

Data is read until either of the following conditions are met:

  • The first '\n' character is read.
  • maxSize - 1 bytes are read.
  • The end of the device data is detected.

The secret ingredient in Qt is asynchronous, signal-driven design. See the Networking / State Machines section in the article Threads, Events and QObjects for some ideas.

Derick Schoonbee
  • Here I need to use the RFB protocol. But libvnc is GPL, so I think it's better to create a homegrown protocol than to write the RFB protocol from scratch. – Neel Basu Apr 04 '11 at 14:10
  • @Neel: I've edited a part of the response that might be more specific to your question. If you have a relatively simple need, your homegrown protocol can work. However, since you were looking at RFB, it's not a small feat to implement that one! – Derick Schoonbee Apr 04 '11 at 22:33
  • +1 for your help. In a real-world scenario I'll use my own client program. So is it guaranteed even then that the bytes I send will go atomically? E.g. if my client sends `hallo`, is it guaranteed to arrive as `hallo`, or may it arrive as `h` `a` `llo` even with my own client program, due to the internal nature of sockets? At the moment I am using the `Hercules` TCP client program for testing instead of telnet, and it sends data as I want, not character-wise. – Neel Basu Apr 05 '11 at 07:47
  • @NeelBasu A rather old topic, but worth noting that you can't know if you will receive `hallo` or `ha` `llo` or any other combination of splits. That's a transmission detail. What you can be sure of is that if you receive `ha` and then block until you receive `llo`, and this was the rest of the message in that order, that `ha` ++ `llo` will be `hallo`. So receipt of a message is not the same thing as message delineation -- which is why these are called *streams*. – zxq9 Aug 13 '15 at 05:33