Is it good code style to cast -1 to unsigned int? For example:
#define MAX_UINT8 ((uint8_t) -1)
compared to
#define MAX_UINT8 0xff
Style and Usefulness
Is it good code style to cast -1 to unsigned int?

Not when coding the maximum uint8_t. Use UINT8_MAX from <stdint.h> (or <inttypes.h>, which includes it).
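A minimal sketch of what UINT8_MAX buys you, assuming a hosted implementation (the variable name and output format below are illustrative, not from the original answer): it is a plain integer constant expression, so it works both in pre-processor conditionals and as an ordinary run-time value.

#include <stdint.h>
#include <stdio.h>

#if UINT8_MAX != 0xff      /* usable in pre-processor arithmetic */
#error "unexpected uint8_t range"
#endif

int main(void) {
    uint8_t x = UINT8_MAX; /* usable as an ordinary constant */
    printf("%u\n", (unsigned) x); /* prints 255 */
    return 0;
}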
@MikeCAT: #define MAX_UINT8 0xff is OK for pre-processor operations. Do not rely on casts in pre-processor math. Thus #define MAX_UINT8 ((uint8_t) -1) is less useful (and can lead to unexpected pre-processing) than #define MAX_UINT8 0xff.
Try

#include <stdint.h>
#define MAX_UINT8 ((uint8_t) -1)
#if MAX_UINT8 < 0
#error "MAX_UINT8 < 0"
#endif

The #error fires because the pre-processor replaces the identifier uint8_t (which is not a macro) with 0, so the condition evaluates as ((0) -1) < 0, which is true.
Type Differences
((uint8_t) -1) is usually type unsigned char. uint8_t is an optional type¹, yet very commonly implemented. 0xff is type int.
Using one or the other leads to a difference with _Generic and sizeof.
¹ uintN_t: "These types are optional. However, if an implementation provides integer types with widths of 8, 16, 32, or 64 bits, no padding bits, and (for the signed types) that have a two's complement representation, it shall define the corresponding typedef names." C17dr § 7.20.1.1 3
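Given that optionality, one way to fail fast when uint8_t is absent is to test the corresponding limit macro, which <stdint.h> defines only for the typedef names the implementation actually provides (a sketch, not part of the quoted text):

#include <stdint.h>

#ifndef UINT8_MAX
#error "uint8_t is not provided by this implementation"
#endif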