
I have a threaded server, usually one thread per client, so whatever packets I receive will be from the same source.

I am designing a protocol based on

struct Packet
{
    int  Data;
    char Data2[size];
} Packet;

and any other permutations I may need.

The only way I can distinguish between packets so far is by their size. Since the server and the client share the same struct declarations, sizeof(Packet) on the server will be the same as sizeof(Packet) on the client (assuming identical hardware), and when I call

 int bytesReceived = recv(...);

 switch (bytesReceived) { (...) }

I can pass on the buffer to a packet-specific function to handle it.

This is imperfect at best, because

  1. Datatype sizes may differ per platform --> a mismatch can occur between server and client
  2. I may have two different packets of identical size.

What is a good workaround for this problem? How can I design the protocol in a better way?

Aroll605
  • 376
  • 1
  • 4
  • 12
  • http://stackoverflow.com/a/20248772/412080 – Maxim Egorushkin Oct 22 '14 at 16:36
  • According to what I read there, the easiest way would be to rely on the fact that TCP sends and receives data in order, and have the first element of the struct be its intended size? – Aroll605 Oct 22 '14 at 16:39
  • Well, you may not have to store the size in your structures if you can calculate them on the fly. – Maxim Egorushkin Oct 22 '14 at 16:41
  • Well, I can't, because sizes may differ per platform. According to the C datatype specifications, for example, an int has to be *at least* 16 bits, but on most implementations today it's 32 bits or even 64 bits. Packet sizes are not static because of that. – Aroll605 Oct 22 '14 at 16:43
  • Is this "protocol" supposed to be based on UDP? BTW: having two members in a struct with the same name ("Data") is not allowed. – wildplasser Oct 22 '14 at 17:20
  • Why don't you use TLV: Type-Length-Value? – ninjalj Oct 22 '14 at 17:47
  • @wildplasser, the only reason I'm using struct packets is for structured parsing of information. It's a control measure. If there is better way, I'd like to consider it as well. – Aroll605 Oct 22 '14 at 19:34
  • @ninjalj, I've never heard of that, I'll look into it, thank you! – Aroll605 Oct 22 '14 at 19:35
  • 2
    In TCP there are no "message boundaries". In UDP there are. So, in TCP you will have problems to isolate the packets, either by type prefixing them or by length-prefixing them. And there still is the endianness-issue, even if you have the sizes correct ... BTW: the most stable protocols are still line based plain ASCII. For example, take a look at SMTP (rfc#822) – wildplasser Oct 22 '14 at 19:37
  • @wildplasser, that does look much simpler than what I'm designing. I think I'll keep my structs for UDP, and use a text-based protocol instead. Thank you! – Aroll605 Oct 22 '14 at 19:53
  • Does ASCII require host-to-network byte translation, and vice versa? – Aroll605 Oct 22 '14 at 19:57
  • 1
    No of course not. In the case of ASCII, it is just a stream of bytes (octeets), and a byte has no endianness. – wildplasser Oct 22 '14 at 20:09
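As the comments note, TCP is a byte stream with no message boundaries, so a receiver must loop until it has collected a complete frame; a single recv() call may return fewer bytes than requested. A minimal sketch of such a loop (recv_full is a hypothetical helper name, not a standard function):

```c
#include <stddef.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Read exactly n bytes from a connected stream socket.
   recv() may return a short read, so keep looping until done.
   Returns 0 on success, -1 on error or if the peer closed the connection. */
static int recv_full(int fd, unsigned char *buf, size_t n)
{
    size_t got = 0;
    while (got < n) {
        ssize_t r = recv(fd, buf + got, n - got, 0);
        if (r <= 0)
            return -1; /* error or orderly shutdown */
        got += (size_t)r;
    }
    return 0;
}
```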

1 Answer


Datatype sizes may differ per platform --> a mismatch can occur between server and client

Use fixed-width types from <stdint.h>, e.g. uint32_t. Also, make sure you maintain your protocol's byte order (little- or big-endian), so that if a platform's native byte order differs, you convert integers before sending and after receiving.
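One way to keep the wire format endianness-independent is to serialize integers byte by byte in a fixed (here big-endian) order. A sketch (put_u32/get_u32 are illustrative names, not a standard API):

```c
#include <stdint.h>

/* Write a 32-bit integer in network (big-endian) byte order,
   one byte at a time, so this works regardless of host endianness. */
static void put_u32(unsigned char *buf, uint32_t v)
{
    buf[0] = (unsigned char)(v >> 24);
    buf[1] = (unsigned char)(v >> 16);
    buf[2] = (unsigned char)(v >> 8);
    buf[3] = (unsigned char)v;
}

/* Read a 32-bit big-endian integer back from the buffer. */
static uint32_t get_u32(const unsigned char *buf)
{
    return ((uint32_t)buf[0] << 24) | ((uint32_t)buf[1] << 16)
         | ((uint32_t)buf[2] << 8)  |  (uint32_t)buf[3];
}
```

This avoids sending raw struct memory, so padding and alignment differences between platforms no longer matter.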

I may have two different packets of identical size.

Send packet length along with the packet type in your packet header. Something like:

+----------------+--------------+----------------------------+
| message-length | message-type | message-payload            |
| 4 bytes        | 2 bytes      | (message-length - 6) bytes |
+----------------+--------------+----------------------------+
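One possible encoding of that header, assuming big-endian wire order (struct Header, write_header, and parse_header are illustrative names):

```c
#include <stdint.h>

/* Wire header: message-length (4 bytes) then message-type (2 bytes),
   both big-endian. message-length covers the whole message, header included. */
struct Header {
    uint32_t length;
    uint16_t type;
};

#define HEADER_SIZE 6

static void write_header(unsigned char *buf, struct Header h)
{
    buf[0] = (unsigned char)(h.length >> 24);
    buf[1] = (unsigned char)(h.length >> 16);
    buf[2] = (unsigned char)(h.length >> 8);
    buf[3] = (unsigned char)h.length;
    buf[4] = (unsigned char)(h.type >> 8);
    buf[5] = (unsigned char)h.type;
}

static struct Header parse_header(const unsigned char *buf)
{
    struct Header h;
    h.length = ((uint32_t)buf[0] << 24) | ((uint32_t)buf[1] << 16)
             | ((uint32_t)buf[2] << 8)  |  (uint32_t)buf[3];
    h.type   = (uint16_t)(((uint16_t)buf[4] << 8) | buf[5]);
    return h;
}
```

The receiver first reads exactly HEADER_SIZE bytes, parses them, then reads (length - HEADER_SIZE) more bytes of payload, and only then dispatches on type.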
Maxim Egorushkin
  • 131,725
  • 17
  • 180
  • 271