6

I am an embedded software developer and I want to interface with an external device. This device sends data via SPI. The structure of that data is predefined by the device manufacturer and can't be changed. The manufacturer provides header files with many typedefs for all the data sent via SPI, and also offers an API to handle the received packets in the correct way (I have access to the source of that API).

Now to my problem: the typedef'd structures contain many uint8_t members. Unfortunately, our MCU doesn't support uint8_t, because its smallest type is 16 bits wide (so even a char has 16 bits).

To use the API correctly, the structures must be filled with the data received via SPI. Since the incoming data is byte-packed, we can't just copy it into the structs, because our structs use 16 bits for those 8-bit types. As a result, we need many bit-shift operations to assign the received data correctly.

EXAMPLE (manufacturer's typedef struct):

typedef struct NETX_COMMUNICATION_CHANNEL_INFOtag
{
  uint8_t   bChannelType;              //uint16_t in our system
  uint8_t   bChannelId;                //uint16_t in our system
  uint8_t   bSizePositionOfHandshake;  //uint16_t in our system
  uint8_t   bNumberOfBlocks;           //uint16_t in our system
  uint32_t  ulSizeOfChannel;           
  uint16_t  usCommunicationClass;      
  uint16_t  usProtocolClass;           
  uint16_t  usProtocolConformanceClass;
  uint8_t   abReserved[2];             //uint16_t in our system
} NETX_COMMUNICATION_CHANNEL_INFO;

Can anybody think of an easy workaround for this problem? I really don't want to write a separate set of bit-shift operations for every received packet type (that wastes performance, time, and code space).
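
For context, the kind of per-packet unpacking I want to avoid looks roughly like this (my own sketch; the receive buffer rx of 16-bit words, the byte order within each word, and the word order of ulSizeOfChannel are all assumptions):

/* Sketch: one hand-written unpacker per packet type.
   rx holds the SPI stream packed two bytes per 16-bit word;
   high-byte-first within each word is assumed here. */
void unpack_channel_info(const uint16_t *rx,
                         NETX_COMMUNICATION_CHANNEL_INFO *out)
{
  out->bChannelType               = (rx[0] >> 8) & 0xFF;
  out->bChannelId                 =  rx[0]       & 0xFF;
  out->bSizePositionOfHandshake   = (rx[1] >> 8) & 0xFF;
  out->bNumberOfBlocks            =  rx[1]       & 0xFF;
  out->ulSizeOfChannel            = ((uint32_t)rx[2] << 16) | rx[3];
  out->usCommunicationClass       =  rx[4];
  out->usProtocolClass            =  rx[5];
  out->usProtocolConformanceClass =  rx[6];
  out->abReserved[0]              = (rx[7] >> 8) & 0xFF;
  out->abReserved[1]              =  rx[7]       & 0xFF;
}

Writing one of these for every packet type is exactly the effort I want to save.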

My idea (using bitfields to pack two uint8_t into a uint16_t, or four uint8_t into a uint32_t):

typedef struct NETX_COMMUNICATION_CHANNEL_INFOtag
{
  struct packet_uint8{
    uint32_t  bChannelType              :8;
    uint32_t  bChannelId                :8;
    uint32_t  bSizePositionOfHandshake  :8;
    uint32_t  bNumberOfBlocks           :8;
  }packet_uint8;
  uint32_t  ulSizeOfChannel;               
  uint16_t  usCommunicationClass;          
  uint16_t  usProtocolClass;               
  uint16_t  usProtocolConformanceClass;    
  uint16_t  abReserved;                    
} NETX_COMMUNICATION_CHANNEL_INFO;

Now I am not sure this solution is going to work, since the order of the members inside a bitfield is not necessarily their order in the source file (or is it, if all the members have the same width?).
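
One experiment I could run to check the layout (my own sketch; what counts as the "correct" result depends on how the raw SPI words are copied into the struct):

/* Sketch: startup check that the compiler lays out the bitfield
   members in source order within the 32-bit unit. Type-punning
   through a union is well-defined in C. */
union layout_check {
  struct packet_uint8 bf;
  uint32_t            raw;
};

int bitfield_layout_ok(void)
{
  union layout_check u;
  u.raw = 0;
  u.bf.bChannelType = 0xAB;        /* first member in source order */
  return (u.raw & 0xFF) == 0xAB;   /* expected in the lowest byte here */
}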

I hope I described the problem well enough for you to understand.

Thanks and Regards.

  • You're saying that your MCU doesn't support uint8_t, but from your description it rather sounds as if your C compiler has severe limitations. Even if your MCU has some limitations, the C compiler could still use a variety of instructions to implement byte operations. It's very difficult to help you without knowing the limitations of the C compiler. Do you have a link to the description of the compiler limitations? What MCU are you using? – Codo Nov 08 '18 at 08:42
  • You're **either** writing C++ **or** C. Pick your pill :D – Antti Haapala -- Слава Україні Nov 08 '18 at 08:56
  • This is the compiler description: http://www.ti.com/lit/ug/spru514q/spru514q.pdf. See 6.4 for the compiler's types, and maybe 16.15.4 for some compiler typedefs that might help, but I don't understand those. The MCU is the TMS320F28379D. – Muperman Nov 08 '18 at 08:59
  • Chapter 6.15.6 in that manual suggests the same approach of using bitfields. If you don't need the properties of the `bp_16` type as described in chapter 6.15.6, you can likely still use uint16_t for the bitfields. The manual does not seem to describe the order of the bitfields, though; you will have to determine that yourself by experimentation, although examples from the same chapter suggest bitfields are laid out as they are in source code. – nos Nov 08 '18 at 09:11
  • According to the documentation, the compiler supports C++03: "The compiler uses the C++03 version of the C++ standard". – darune Nov 08 '18 at 10:22
  • The solution is to not use exotic junk from TI. Or better yet, don't use anything at all from TI, since they have non-existent support and hate customers. If it was a part from someone else, you could simply have asked their support. – Lundin Nov 08 '18 at 13:51

3 Answers

5

Your compiler manual should describe how the bitfields are laid out; read it carefully. There is also something called `__attribute__((byte_peripheral))` that should help with packing bitfields sanely for memory-mapped devices.
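
If that attribute applies here, the declaration might look something like this (a sketch only; the exact spelling and semantics are assumptions to check against the TI manual, see the `bp_16` type mentioned in the comments above):

/* Assumed syntax for a TI byte-peripheral type; verify against the
   compiler manual before relying on it. */
typedef volatile unsigned int bp_16 __attribute__((byte_peripheral));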


If you're unsure about the bitfields, just use uint16_t for these fields along with access macros that do the bit shifts, for example:

#define FIRST(x) ((x) >> 8)
#define SECOND(x) ((x) & 0xFF)

...
    uint16_t channel_type_and_id;
...

int channel_type = FIRST(info->channel_type_and_id);   /* info points to the struct */
int channel_id   = SECOND(info->channel_type_and_id);

Then you just need to be sure of the byte order of the platform. If you need to change endianness (which the MCU seems to support?), you can simply redefine these macros.
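
For example, a byte-swapped variant could be as simple as (sketch):

/* Swapped variant: the logical first byte sits in the low half. */
#define FIRST(x)  ((x) & 0xFF)
#define SECOND(x) ((x) >> 8)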


A bitfield would most probably still be implemented in terms of bit shifts, so there wouldn't be much savings. And if there are byte-access instructions for registers, a compiler would know to optimize `x & 0xff` to use them.

0

According to the linked compiler documentation, byte access is done through intrinsics:

To access data in increments of 8 bits, use the __byte() and __mov_byte() intrinsics described in Section 7.5.6.
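
A minimal usage sketch, assuming the signature described in that section (byte_index counts 8-bit bytes, not 16-bit words):

/* Sketch: pull two 8-bit values out of one 16-bit word with the
   TI __byte() intrinsic. */
void read_bytes(int *rx)
{
  int channel_type = __byte(rx, 0);   /* first byte of rx[0] */
  int channel_id   = __byte(rx, 1);   /* second byte of rx[0] */
  __byte(rx, 2) = 0x5A;               /* the intrinsic is also an lvalue */
  (void)channel_type; (void)channel_id;
}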

If you wanted to, you could make a new type that encapsulates how bytes should be accessed; something like a pair of bytes, or a TwoByte type that has a size of 16 bits.

For inspiration, take a look at how the std::bitset template class is implemented in the standard library, which solves an analogous problem: https://en.cppreference.com/w/cpp/utility/bitset
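
In C, such an encapsulation could look roughly like this (my sketch; the type and function names are made up):

/* Sketch of a TwoByte type: one 16-bit storage unit holding two
   logical bytes, with accessors hiding the shifts. Index 0 maps to
   the high byte here, an assumption to match the wire format. */
typedef struct { uint16_t w; } TwoByte;

static inline uint16_t twobyte_get(TwoByte t, unsigned i)
{
  return (i == 0) ? ((t.w >> 8) & 0xFF) : (t.w & 0xFF);
}

static inline void twobyte_set(TwoByte *t, unsigned i, uint16_t v)
{
  if (i == 0) t->w = (uint16_t)((t->w & 0x00FFu) | ((v & 0xFFu) << 8));
  else        t->w = (uint16_t)((t->w & 0xFF00u) | (v & 0xFFu));
}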

As I posted in my other answer, I still believe your bitfield could work, even though it might be platform-specific. Basically, if it works out, the compiler should put in the correct bit-shift operations.

darune
  • I will first try the bitfield approach. It will be platform-dependent, but if it works, the compiler will do all the work for me. And the __byte() intrinsic is also platform-dependent. – Muperman Nov 08 '18 at 09:40
  • @Muperman: I'd suggest writing a macro which chains to `__byte()` but could be replaced with something more appropriate for other platforms if the code is ported. – supercat Mar 08 '20 at 21:38
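
A sketch of the macro @supercat describes (the GET_BYTE name and the __TMS320C28XX__ guard are assumptions; check your toolchain's predefined macros):

/* Hypothetical portability shim: GET_BYTE(buf, i) reads the i-th
   8-bit byte from a buffer of 16-bit words on TI C28x, and falls
   back to plain byte addressing elsewhere. */
#ifdef __TMS320C28XX__
#define GET_BYTE(buf, i)  __byte((int *)(buf), (unsigned int)(i))
#else
#define GET_BYTE(buf, i)  (((const unsigned char *)(buf))[(i)])
#endif
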
-4

The bitfield approach may work in practice, although you do need some way to verify that it is packed in the correct way for your target platform. The bitfield approach will not be portable; as you state yourself, the order of bitfields is platform-dependent.

darune
  • I can't find the specific paragraph in the compiler description again, but it stated that the result of sizeof is calculated in 16-bit units, so char has 16 bits and sizeof(char) still returns 1. – Muperman Nov 08 '18 at 09:03
  • I just found it: http://www.ti.com/lit/ug/spru514q/spru514q.pdf, section 6.4, the note below the table. – Muperman Nov 08 '18 at 09:04
  • That paragraph also reads "To access data in increments of 8 bits, use the __byte() and __mov_byte() intrinsics described in Section 7.5.6." – darune Nov 08 '18 at 09:09
  • Btw, that also tells us it is not a standard-conformant C++ compiler... – darune Nov 08 '18 at 09:15
  • "A byte is at least large enough to contain any member of the basic execution character set (2.3) and the eight-bit code units of the Unicode UTF-8 encoding form and is composed of a contiguous sequence of bits, the number of which is implementation-defined." – Antti Haapala -- Слава Україні Nov 08 '18 at 13:35