I am trying to understand what would be the best way to define the BYTE, WORD and DWORD macros that are mentioned in the answers to this question:
    #define LOWORD(l) ((WORD)(l))
    #define HIWORD(l) ((WORD)(((DWORD)(l) >> 16) & 0xFFFF))
    #define LOBYTE(w) ((BYTE)(w))
    #define HIBYTE(w) ((BYTE)(((WORD)(w) >> 8) & 0xFF))
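To make sure I read the bit operations correctly, here is a small self-contained example of how I understand these macros would be used. The BYTE/WORD/DWORD definitions in it are only my own guess, written with the standard uint*_t names from <stdint.h>, not copied from any real header:

    #include <stdio.h>
    #include <stdint.h>

    /* My own guess at the underlying types -- not taken from any real header. */
    #define BYTE  uint8_t
    #define WORD  uint16_t
    #define DWORD uint32_t

    #define LOWORD(l) ((WORD)(l))
    #define HIWORD(l) ((WORD)(((DWORD)(l) >> 16) & 0xFFFF))
    #define LOBYTE(w) ((BYTE)(w))
    #define HIBYTE(w) ((BYTE)(((WORD)(w) >> 8) & 0xFF))

    int main(void)
    {
        DWORD value = 0x12345678;

        /* Expect HIWORD = 0x1234, LOWORD = 0x5678 */
        printf("HIWORD = 0x%04X, LOWORD = 0x%04X\n",
               (unsigned)HIWORD(value), (unsigned)LOWORD(value));

        /* Expect HIBYTE = 0x56, LOBYTE = 0x78 for the low word 0x5678 */
        printf("HIBYTE = 0x%02X, LOBYTE = 0x%02X\n",
               (unsigned)HIBYTE(LOWORD(value)), (unsigned)LOBYTE(LOWORD(value)));
        return 0;
    }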
Would it be correct to assume that (a short sketch of this assumption follows below the list):

- BYTE is a macro defined as #define BYTE __uint8_t
- WORD is a macro defined as #define WORD __uint16_t
- DWORD is a macro defined as #define DWORD __uint32_t
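Put differently, if my assumption is right, I would expect a check along these lines to compile cleanly; the exact sizes here are part of my assumption, and I use the standard <stdint.h> names rather than the double-underscore ones:

    #include <stdint.h>

    /* My assumed definitions, written with the standard <stdint.h> names. */
    #define BYTE  uint8_t
    #define WORD  uint16_t
    #define DWORD uint32_t

    /* If BYTE/WORD/DWORD really are 8-, 16- and 32-bit types, these hold. */
    _Static_assert(sizeof(BYTE)  == 1, "BYTE is expected to be 1 byte");
    _Static_assert(sizeof(WORD)  == 2, "WORD is expected to be 2 bytes");
    _Static_assert(sizeof(DWORD) == 4, "DWORD is expected to be 4 bytes");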
If that assumption is correct, why cast to another macro instead of casting directly to __uint8_t, __uint16_t or __uint32_t? Is it written like that to increase clarity?
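In other words, I do not see what would break if the casts used the fixed-width types directly. This is my own rewrite of the same macros (using the standard uint*_t names rather than the double-underscore ones), just to illustrate what I mean:

    #include <stdint.h>

    /* Same extraction logic, but casting straight to the fixed-width types
       instead of going through the BYTE/WORD/DWORD names. */
    #define LOWORD(l) ((uint16_t)(l))
    #define HIWORD(l) ((uint16_t)(((uint32_t)(l) >> 16) & 0xFFFF))
    #define LOBYTE(w) ((uint8_t)(w))
    #define HIBYTE(w) ((uint8_t)(((uint16_t)(w) >> 8) & 0xFF))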
I also found another question whose answers use typedef, and with a little more research I found answers to a question comparing #define and typedef. Would typedef be better to use in this case?
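For reference, the typedef variant I have in mind would look roughly like this (again my own sketch, not taken from any header):

    #include <stdint.h>

    /* typedef makes BYTE/WORD/DWORD real type names known to the compiler,
       instead of textual substitutions done by the preprocessor. */
    typedef uint8_t  BYTE;
    typedef uint16_t WORD;
    typedef uint32_t DWORD;

    #define LOWORD(l) ((WORD)(l))
    #define HIWORD(l) ((WORD)(((DWORD)(l) >> 16) & 0xFFFF))
    #define LOBYTE(w) ((BYTE)(w))
    #define HIBYTE(w) ((BYTE)(((WORD)(w) >> 8) & 0xFF))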