
This question pertains to programming on embedded systems: I'm working on an experimental communication stack on an embedded device. The stack receives stream data from the underlying channel, detects discrete packets, reassembles fragmented data, and so on.

Each function is implemented in a separate layer. Some layers delay processing of packets (because data arrived in an interrupt handler and further processing is offloaded onto the main context). Some layers merge multiple incoming packets into a single packet forwarded to the next upper layer (i.e. reassembly of fragmented data). Conversely, some layers split one incoming packet into multiple packets forwarded to the next lower layer. Of course, any layer may at any point drop a packet without further notice because, for example, a checksum didn't match the data.

My question is about memory allocation for these data packets.

Currently, I'm using malloc on each layer. Specifically, I allocate memory for the packet to be forwarded to the next upper layer, pass the pointer to the handler of the next layer and free the memory again after the call. It is the next layer's handler's responsibility to copy the required data. Thus, each layer maintains ownership of the data it allocated, and it is hard to forget to free allocated memory. This works very well but leads to a lot of unnecessary copies.
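Roughly, each layer currently does something like this (the layer names are just placeholders):

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

void upper_layer_handle(const uint8_t *data, size_t len);  /* next layer up */

void lower_layer_handle(const uint8_t *data, size_t len)
{
    uint8_t *pkt = malloc(len);     /* buffer owned by this layer            */
    if (pkt == NULL) { return; }    /* drop the packet on allocation failure */

    memcpy(pkt, data, len);
    /* ... validate checksum, strip framing, etc. ... */

    upper_layer_handle(pkt, len);   /* callee must copy whatever it needs    */
    free(pkt);                      /* ownership never leaves this layer     */
}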

Alternatively, I could forward ownership of the buffer to the next layer. Then the next layer can do its work directly on the buffer and forward the same buffer to the next layer, and so on. I suppose this is somewhat trickier to get right so that no memory is leaked.
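With ownership transfer it would look more like this (again with placeholder names); whichever layer drops the packet is the one that has to free it:

#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

bool checksum_ok(const uint8_t *pkt, size_t len);   /* placeholder validity check */
void upper_layer_handle(uint8_t *pkt, size_t len);  /* takes over ownership       */

void lower_layer_handle(uint8_t *pkt, size_t len)   /* takes over ownership */
{
    if (!checksum_ok(pkt, len))
    {
        free(pkt);                    /* dropping the packet means freeing it here */
        return;
    }
    upper_layer_handle(pkt, len);     /* ownership moves up; no free in this layer */
}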

Ultimately, because it is an embedded device, I want to find a solution without dynamic memory allocation. If each layer keeps ownership of its own memory, then an implementation without malloc should be easy enough. But if ownership is passed on, then it seems more complicated.

Do you have any input?

Andy
  • I'd simply avoid unnecessary copies. But the question is too broad IMO. – Jabberwocky Mar 05 '19 at 08:17
  • If dealing with packets of similar size, you can reduce fragmentation using a memory pool. If you are simply transferring ownership, I don't see the reason for copying the data. And, as you wrote, the third option would be to use reference counting. – vgru Mar 05 '19 at 08:21
  • Perhaps you could try to ask on https://softwareengineering.stackexchange.com, this would be considered off-topic on SO. – vgru Mar 05 '19 at 08:24
  • I removed the last part of your question since tool/library recommendations are off-topic here and asking for such might get your question closed. – Lundin Mar 05 '19 at 09:27

2 Answers


Look into LwIP packet buffers (pbuf); they cover the cases described in your scenarios: http://www.nongnu.org/lwip/2_0_x/group__pbuf.html To make the code executed by the ISR more robust, you can implement memory pools instead of using malloc.
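A rough sketch of the idea (exact usage depends on your lwIP version and configuration, but pbuf_alloc, pbuf_ref and pbuf_free are the core of it):

#include "lwip/pbuf.h"

void example(void)
{
    /* Allocate a 128-byte packet buffer from a static pool, not the heap. */
    struct pbuf *p = pbuf_alloc(PBUF_RAW, 128, PBUF_POOL);
    if (p == NULL)
    {
        return;                 /* pool exhausted: drop the packet */
    }

    /* A layer that defers processing takes an extra reference
       instead of copying the data. */
    pbuf_ref(p);

    /* Each owner releases its reference; the buffer goes back to the
       pool when the reference count drops to zero. */
    pbuf_free(p);
    pbuf_free(p);
}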

Nabuchodonozor

Allocate memory in one place. Since it is an embedded system, you'll have to use a static memory pool, wrapped in a classic ADT implemented as an opaque type:

// buffer.h

typedef struct buffer_t buffer_t;

buffer_t* buffer_create (/*params*/);

/* setter & getter functions here */

// buffer.c

#include <stddef.h>   /* size_t */
#include "buffer.h"

#define MEMPOOL_SIZE 32u  /* pool capacity - pick a size that fits the application */

struct buffer_t
{
  /* private contents */
};

static buffer_t mempool [MEMPOOL_SIZE];
static size_t mempool_size = 0;

buffer_t* buffer_create (/*params*/)
{
  if(mempool_size == MEMPOOL_SIZE)
  { /* out of memory, handle error */ return NULL; }

  buffer_t* obj = &mempool[mempool_size]; 
  mempool_size++;

  /* initialize obj here */

  return obj;
}

/* setter & getter functions here */

Now all your various application layers and processes only get to pass around copies of a pointer. In case you do need to actually make a deep copy, you implement a buffer_copy function in the above ADT.
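For example, a deep copy could be as simple as this, placed in buffer.c (given that buffer_create returns NULL when the pool is full, as above):

buffer_t* buffer_copy (const buffer_t* src)
{
  buffer_t* obj = buffer_create(/*params*/);
  if(obj != NULL)
  {
    *obj = *src;   /* struct assignment copies the private contents */
  }
  return obj;
}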

In case of a multi-process system, you will have to consider re-entrancy in case multiple processes are allowed to allocate buffers at the same time.
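One way to sketch that, with ENTER_CRITICAL/EXIT_CRITICAL standing in for whatever your target provides (interrupt masking, an RTOS mutex, etc.), buffer_create could look like:

buffer_t* buffer_create (/*params*/)
{
  buffer_t* obj = NULL;

  ENTER_CRITICAL();              /* placeholder for your platform's lock */
  if(mempool_size < MEMPOOL_SIZE)
  {
    obj = &mempool[mempool_size];
    mempool_size++;
  }
  EXIT_CRITICAL();

  /* initialize obj here, outside the critical section */

  return obj;
}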

Lundin