
I have a development board that runs a Linux distribution; the board has some UART peripherals that are mapped into the system as tty-like files.

On a specific UART port I have connected a LIN* transceiver which is connected to a LIN bus.

The LIN transceiver sends me frames (two types: one type of frame has 3 bytes, and the other one has between 6 and 12 bytes) with a minimum of ~20 ms of space between them.

[figure: frames and the space between them]

Now I want to write an application that is able to read these individual frames as whole data buffers (not byte-by-byte or in arbitrary chunks).

For setting the communication parameters (baud rate, parity, start/stop bits, etc.), I'm using the stty** utility. I have played a bit with the min and time [***] special settings, but I didn't obtain the right behavior: big frames are always split into at least three chunks.

Is there any way to achieve this?

[*] LIN: https://en.wikipedia.org/wiki/Local_Interconnect_Network

[**] stty: http://linux.die.net/man/1/stty

[***] I have used the following modes:

MIN == 0, TIME > 0 (read with timeout)

This won't work because I will always receive at least one byte on its own (and then the rest of the frame as a buffer).

MIN > 0, TIME > 0 (read with interbyte timeout)

In this mode, setting MIN to 3 (the smallest frame has 3 bytes) and the TIME parameter to some higher value like 90 also won't do the trick: the short frames are received correctly in this case (at once), but the longer frames are split into 3 parts (the first part with 3 bytes, the second one with 3 bytes, and the last one with 5 bytes).

  • A goal of always getting a complete and aligned message from **read()** is probably misguided IMO. See http://stackoverflow.com/questions/38140875/non-blocking-read-with-fixed-data-in-input#comment63713847_38140875 Robust code needs to be able to handle any data loss and fragmented messages, and regain message alignment. If a VTIME=90 (i.e. 9 seconds?) only returns VMIN bytes, then your initialization seems suspect (i.e. non-blocking mode?). – sawdust Jul 07 '16 at 18:17
  • @sawdust The problem is that I can't do a lot of frame validation when I receive data; the goal is to immediately send it when it is received. What I'm looking for is some way to tell the driver to cache bytes in its internal buffer until there has been no data received for, let's say, 5 ms (assuming that the inter-byte delays are smaller than 5 ms). Is there any possibility to do this? I'm using blocking reads. – mariusmmg2 Jul 10 '16 at 12:35
  • In my experience, the `VMIN>0 and VTIME>0` mode doesn't quite work as advertised. The timer seems to be a very short interval, much less than a 1/10th second. I haven't seen it work on ARM with 2.6, and I just tested Linux 3.13 on x86. At a fast baudrate (115200) and VMIN=1, VTIME=1, a **read()** sometimes returns 10 or more bytes. But more often it's just a partial read regardless of the VTIME value. Maybe this brokenness is preferred/desired. A minimum 0.1 sec message separation is simply too long (and not practical) at modern fast baudrates. – sawdust Jul 11 '16 at 08:42
  • *"the goal is to immediately send it when it is received"* -- Looks like a **read()** is going to return with a partial message simply because it doesn't wait (despite the man page). So you simply have to perform another **read()** to wait for more bytes, and reconstruct the message. – sawdust Jul 11 '16 at 08:52

0 Answers