I'm creating a Linux device driver that creates a character device. The data it returns on reads is logically divided into 16-byte units.

I was planning on implementing this division by returning however many units fit into the read buffer, but I'm not sure what to do if the read buffer is too small (<16 bytes).
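Roughly, the read handler I had in mind looks like this (just a sketch -- my_fetch_unit() is a placeholder for wherever the driver actually gets its data):

    #include <linux/fs.h>
    #include <linux/kernel.h>
    #include <linux/uaccess.h>

    #define UNIT_SIZE 16

    /* Placeholder: fills `unit` with the next 16-byte unit, returns 0 on
     * success or a negative errno when nothing is available. */
    static int my_fetch_unit(char *unit);

    static ssize_t my_read(struct file *filp, char __user *buf,
                           size_t count, loff_t *ppos)
    {
        size_t nunits = count / UNIT_SIZE;   /* whole units that fit */
        size_t copied = 0;
        char unit[UNIT_SIZE];

        /* ??? nunits == 0 when count < 16 -- what should happen here? */

        while (nunits--) {
            int err = my_fetch_unit(unit);
            if (err)
                return copied ? copied : err;
            if (copy_to_user(buf + copied, unit, UNIT_SIZE))
                return copied ? copied : -EFAULT;
            copied += UNIT_SIZE;
        }
        return copied;
    }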

What should I do here? Or is there a better way to achieve the division I'm trying to represent?

Gavin S. Yancey

1 Answer


You could act like datagram sockets do: a read always returns just a single datagram. If the read buffer is smaller, the excess is discarded -- it's the caller's responsibility to provide enough space for a whole datagram (typically, the application protocol specifies the maximum datagram size).
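A read handler with those semantics might look roughly like this (a sketch only; my_fetch_unit() stands in for however your driver produces the next 16-byte unit, and module boilerplate is omitted):

    #include <linux/fs.h>
    #include <linux/kernel.h>
    #include <linux/uaccess.h>

    #define UNIT_SIZE 16

    /* Stand-in for however the driver obtains the next 16-byte unit. */
    static int my_fetch_unit(char *unit);

    static ssize_t my_read(struct file *filp, char __user *buf,
                           size_t count, loff_t *ppos)
    {
        char unit[UNIT_SIZE];
        size_t n = min_t(size_t, count, UNIT_SIZE);
        int err;

        if (n == 0)
            return 0;

        err = my_fetch_unit(unit);      /* a whole unit is consumed... */
        if (err)
            return err;

        if (copy_to_user(buf, unit, n)) /* ...but only n bytes are delivered */
            return -EFAULT;

        return n;  /* at most one unit per read; any excess is discarded */
    }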

The documentation of your device should specify that it works in 16-byte units, so there's no reason for a caller to provide a buffer smaller than that. Any data lost to the discarding described above can therefore be considered a bug in the calling application.

However, it would also be reasonable to return more than 16 bytes at a time if the caller asks for it -- that suggests the application will split the data into units itself. This can perform better, since it minimizes system calls. If the buffer size isn't a multiple of 16, you could discard the remainder of the last unit. Just make sure this is documented, so callers know to use a multiple of 16.
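Sketched out (same hypothetical my_fetch_unit() as above), that could look like:

    static ssize_t my_read(struct file *filp, char __user *buf,
                           size_t count, loff_t *ppos)
    {
        char unit[UNIT_SIZE];
        size_t copied = 0;

        while (copied < count) {
            size_t n = min_t(size_t, count - copied, UNIT_SIZE);
            int err = my_fetch_unit(unit);        /* always consumes a whole unit */

            if (err)
                return copied ? copied : err;     /* return what we have, or the error */
            if (copy_to_user(buf + copied, unit, n))
                return copied ? copied : -EFAULT;
            copied += n;                          /* a short final n drops the unit's tail */
        }
        return copied;
    }

Here the error path doubles as the "no more data right now" path: if the fetch returns something like -EAGAIN when the device is empty, the read simply returns however many bytes were already copied.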

If you're worried about generic applications like cat, I don't think you need to be: I would expect them to use very large input buffers, simply for performance reasons.

Barmar
    Could this have a negative performance impact compared to filling the buffer as much as possible? – Gavin S. Yancey Jun 09 '16 at 22:27
  • Possibly. Since your messages are fixed size, you could return as much data as possible and allow the application to split it up into units. – Barmar Jun 09 '16 at 23:38