What is the difference between using literal suffix on a constant:
#define MY_LONG 0x1UL
and casting a constant:
#define MY_LONG (unsigned long)0x1
When will you choose the former and when will you choose the latter?
What is the difference between using a literal suffix on a constant ... and casting a constant?
Let us try using the 2 defines to steer the pre-processor. Pre-processor math does not understand various type widths. As part of the token processing:
For the purposes of this token conversion and evaluation, all signed integer types and all unsigned integer types act as if they have the same representation as, respectively, the types intmax_t and uintmax_t. ... C11dr §6.10.1 4
#include <stdio.h>

#define MY_LONG1 0x123456789UL
#define MY_LONG2 (unsigned long)0x123456789

int main(void) {
#if MY_LONG1 == 0x123456789u
    puts("MY_LONG1 == 0x123456789u");
#endif
#if MY_LONG2 == 0x123456789u
    puts("MY_LONG2 == 0x123456789u");
#endif
}
The second test,
#if MY_LONG2 == 0x123456789u
results in a compiler error:
error: missing binary operator before token "long"
because
#define MY_LONG2 (unsigned long)0x123456789
expands to a cast, which pre-processor arithmetic cannot parse.
Note: the L serves no purpose with pre-processing.
Note: the U does serve a purpose with pre-processing.
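A small sketch of why the U matters in pre-processor math, assuming nothing beyond the §6.10.1 rule quoted above: a u-suffixed operand pulls the whole comparison into uintmax_t.

#include <stdio.h>

int main(void) {
#if -1 > 0u
    /* -1 converts to uintmax_t in pre-processor math, so this branch is taken */
    puts("-1 > 0u during pre-processing");
#endif
#if -1 < 0
    /* all-signed comparison behaves as expected */
    puts("-1 < 0 during pre-processing");
#endif
    return 0;
}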
With a value of 1, no difference. Yet if the value exceeds ULONG_MAX (example: 32-bit unsigned long), the below is unsigned long long. The U ensures some unsigned type, the L ensures at least long.
#define MY_LONG 0x123456789UL
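A hedged way to see which type the constant actually landed on is C11 _Generic; the TYPE_NAME macro below is a hypothetical helper, not anything from the question.

#include <stdio.h>

#define MY_LONG 0x123456789UL

/* hypothetical helper: names the unsigned type an expression resolved to */
#define TYPE_NAME(x) _Generic((x), \
    unsigned long: "unsigned long", \
    unsigned long long: "unsigned long long", \
    default: "other")

int main(void) {
    /* prints "unsigned long" when long is 64-bit, "unsigned long long" when long is 32-bit */
    puts(TYPE_NAME(MY_LONG));
    return 0;
}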
A cast can change the value. In the case below (with 32-bit unsigned long), the value becomes 0x23456789 and the type unsigned long.
#define MY_LONG (unsigned long)0x123456789
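A minimal sketch of that truncation, assuming a platform where unsigned long is 32-bit:

#include <stdio.h>

#define MY_LONG ((unsigned long)0x123456789)

int main(void) {
    /* with 32-bit unsigned long this prints 0x23456789: the cast discarded the top bit */
    printf("0x%lx\n", MY_LONG);
    return 0;
}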
When will you choose the former and when will you choose the latter?
The L nudges the type to be at least long; the cast enforces the type long.
Use L when MY_LONG should be at least long, or when the value is small, within the minimal range of long [-2147483647 ... 2147483647].
Do not use the cast if the macro may be used with pre-processing.
Otherwise use the cast (long) to enforce a long. IMO, this is rarely the goal. An exception would be if the constant must be unsigned long regardless of its value.
// Works well if unsigned long is 64-bit or less
#define MY_ULONG_EVERY_OTHER_BIT ((unsigned long) 0xAAAAAAAAAAAAAAAAu)
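A sketch of how that "64-bit or less" assumption could be guarded at compile time, using C11 _Static_assert (the message text is mine):

#include <limits.h>

#define MY_ULONG_EVERY_OTHER_BIT ((unsigned long) 0xAAAAAAAAAAAAAAAAu)

/* the 16-digit constant covers 64 bits; a wider unsigned long would leave its upper bits clear */
_Static_assert(CHAR_BIT * sizeof(unsigned long) <= 64,
               "MY_ULONG_EVERY_OTHER_BIT assumes unsigned long is 64-bit or less");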
Note: in general, I avoid using L in constants and let the implementation determine the type. The below will be the narrowest type that fits among unsigned, unsigned long, unsigned long long. Appending an L would fit to the narrowest of unsigned long, unsigned long long.
#define MY_BIGU 0x123456789u
There is a subtle problem with OP's define.
#define MY_LONG (unsigned long)0x1
It should have been the below to ensure tight binding.
#define MY_LONG ((unsigned long)0x1)
Only a few operators have higher precedence and could mess up code. Pathological example: MY_LONG[a] would be (unsigned long)(0x1[a]) and not ((unsigned long)0x1)[a]. Still, it is good practice to enclose a macro's replacement in () unless there is zero chance of evaluation problems.
There is no practical difference; these are 100% equivalent.
The only formal difference is that UL is part of the integer constant, while the cast could be put in front of any expression, including a run-time evaluated one.
The main reason for UL is to enforce the type of an integer constant, where it matters. It is often more readable than a cast. Example: 1UL << n versus (unsigned long)1 << n.
Casts, on the other hand, have a wider use: they can not only force a type at compile time, but also trigger a type conversion at run time.
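A sketch of where the 1UL << n case bites, assuming 32-bit int and 64-bit unsigned long:

#include <stdio.h>

int main(void) {
    int n = 40;
    /* 1 << n would be undefined behavior here: the shift count exceeds a 32-bit int's width */
    /* 1UL << n is well-defined when unsigned long is 64-bit */
    unsigned long bit = 1UL << n;
    printf("bit %d = 0x%lx\n", n, bit);
    return 0;
}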
There is absolutely no difference. Both are compile-time evaluable constant expressions of the same value and type, although you might be wise to write the second version as ((unsigned long)0x1).
Personally I'd choose the first one as it's clearer; in my mind at least, a cast is a run-time operator, not a compile-time one.
There is very little difference between the two: they both are constant expressions with a value of 1 and type unsigned long.
One notable difference is that you can use 0x1UL as part of a preprocessor test expression, but you cannot use (unsigned long)1:
#include <stdio.h>

#if 0x1UL
int main() {
    return 0;
}
#else
#error 0x1UL should be true
#endif
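For contrast, a sketch of the cast form in the same position; this is expected to fail with a diagnostic along the lines of "missing binary operator":

#define MY_LONG (unsigned long)0x1

/* identifiers that survive macro expansion become 0 inside #if,
   so this reads as (0 0)0x1: a syntax error */
#if MY_LONG
int main(void) {
    return 0;
}
#endif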
Technically, there is a way to tell between them, but you really have to try pretty hard:
char a[10];
printf("size is %zu\n", sizeof(MY_LONG[a]));
In the above example, the literal (0x1UL) version will have the array take the unsigned long literal as an index, yielding 1 because it is sizeof(char). The cast version will have the array take an int literal as an index, then cast the resulting char to unsigned long before passing it to sizeof(); sizeof(unsigned long) is typically 8.
This is because the indexing operator [] outranks the cast operator ().