
I am writing a utility that reads and parses third-party files, generally byte by byte. The utility deals with some basic encryption (XORing each byte) and switches between 1-byte and 2-byte fields (where the 2-byte fields are read as little-endian). There are a few ad hoc "compression" schemes along the way, and a compressed form of ASCII strings (instead of storing spaces, the leftmost bit of a character is set to indicate a following space).
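To give a sense of what I mean, here is a rough sketch of those operations (the XOR key and the function names are placeholders, not the real file format):

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Undo the simple XOR "encryption" on each byte (0x5A is a placeholder key).
std::uint8_t decrypt_byte(std::uint8_t b, std::uint8_t key = 0x5A) {
    return static_cast<std::uint8_t>(b ^ key);
}

// Assemble a 2-byte little-endian field from two consecutive bytes.
std::uint16_t read_u16_le(std::uint8_t lo, std::uint8_t hi) {
    return static_cast<std::uint16_t>(lo | (hi << 8));
}

// Expand the compressed ASCII: a set leftmost (high) bit means the character
// is followed by a space.
std::string expand_string(const std::vector<std::uint8_t>& in) {
    std::string out;
    for (std::uint8_t b : in) {
        out += static_cast<char>(b & 0x7F);  // low 7 bits are the character
        if (b & 0x80) {                      // high bit flags a trailing space
            out += ' ';
        }
    }
    return out;
}
```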

Anyway, depending on the byte or bytes read, I can translate them into meaningful data using the documented file structures. The utility runs on Windows, but I would like to support Mac and Linux operating systems too.

I am using uint8_t as the type when working at the byte level. In C# I can use the byte data type, but in C++ (on which this utility is now based) I do not have that luxury.

Originally I was using an unsigned char, but I am not sure it will always be 8 bits across other platforms, so I switched over to uint8_t.
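For clarity, the trade-off as I currently understand it (illustrative snippet only):

```cpp
#include <cstdint>

unsigned char byte_a = 0xFF;  // always exactly 1 byte, but not guaranteed to be 8 bits
std::uint8_t  byte_b = 0xFF;  // exactly 8 bits where it exists, but an optional typedef
```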

I am sorry if this seems like a poor question but I am not as confident in C++ as I am in C#.

Class Skeleton
  • I think on my platforms `uint8_t` and `unsigned char` are the same type. – Barry Jan 25 '15 at 05:10
  • [uint8_t vs unsigned char](http://stackoverflow.com/questions/1725855) has both the practical and pedantic answer to your question. – Drew Dormann Jan 25 '15 at 05:13
  • `std::uint8_t` is guaranteed to be 8-bit, but not guaranteed to exist. `unsigned char` is guaranteed to be 1-byte, but not guaranteed to be 8-bit. – Jan 25 '15 at 05:18
  • I have reverted my code back to using char and added the following line in my header: `static_assert(CHAR_BIT == 8, "Unsupported char length - must be 8 bits");` - this is a fail-safe for future work on other platforms. This has removed most of the casting from my code. – Class Skeleton Jan 25 '15 at 10:09

0 Answers