Coming to this late, but I thought of another approach.
If you know your system uses IEEE754 floating-point format, but not how big the floating-point types are relative to the integer types, you could do something like this:
#include <climits>   // for CHAR_BIT

bool isFloatIEEE754Negative(float f)
{
    float d = f;
    // Reinterpret the float's bytes as an unsigned integer of the same size,
    // then shift everything except the sign bit away.
    if (sizeof(float) == sizeof(unsigned short int)) {
        return (*(unsigned short int *)(&d) >> (sizeof(unsigned short int)*CHAR_BIT - 1)) == 1;
    }
    else if (sizeof(float) == sizeof(unsigned int)) {
        return (*(unsigned int *)(&d) >> (sizeof(unsigned int)*CHAR_BIT - 1)) == 1;
    }
    else if (sizeof(float) == sizeof(unsigned long)) {
        return (*(unsigned long *)(&d) >> (sizeof(unsigned long)*CHAR_BIT - 1)) == 1;
    }
    else if (sizeof(float) == sizeof(unsigned char)) {
        return (*(unsigned char *)(&d) >> (sizeof(unsigned char)*CHAR_BIT - 1)) == 1;
    }
    else if (sizeof(float) == sizeof(unsigned long long)) {
        return (*(unsigned long long *)(&d) >> (sizeof(unsigned long long)*CHAR_BIT - 1)) == 1;
    }
    return false; // should never get here if you've covered all the potential types!
}
Essentially, you reinterpret the float's bytes as an unsigned integer type of the same size, then right-shift all but one of the bits (the sign bit) out of existence. Because '>>' operates on the integer's value rather than on its byte layout, and the float and the integer share the machine's native endianness, the float's sign bit always ends up as the integer's most significant bit, so endianness isn't an issue here.
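If you'd rather avoid the pointer cast (which technically runs afoul of strict-aliasing rules), the same trick can be done by copying the bytes with memcpy. Here's a minimal sketch of that variant, assuming a 32-bit float and the availability of std::uint32_t; the name isFloatIEEE754NegativeMemcpy is just for illustration:

#include <climits>   // CHAR_BIT
#include <cstdint>   // std::uint32_t
#include <cstring>   // std::memcpy

// Same idea, but the reinterpretation is done with memcpy instead of a cast.
// Assumes sizeof(float) == sizeof(std::uint32_t), which the static_assert checks.
bool isFloatIEEE754NegativeMemcpy(float f)
{
    static_assert(sizeof(float) == sizeof(std::uint32_t),
                  "float is not 32 bits on this platform");
    std::uint32_t bits;
    std::memcpy(&bits, &f, sizeof(bits));                  // copy the raw bytes
    return (bits >> (sizeof(bits) * CHAR_BIT - 1)) == 1;   // keep only the sign bit
}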
If you can determine ahead of time which unsigned integer type is the same size as the floating-point type, you can abbreviate this:
#define FLOAT_EQUIV_AS_UINT unsigned int  // or whatever type matches on your platform

bool isFloatIEEE754Negative(float f)
{
    float d = f;
    return (*(FLOAT_EQUIV_AS_UINT *)(&d) >> (sizeof(FLOAT_EQUIV_AS_UINT)*CHAR_BIT - 1)) == 1;
}
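For example (assuming unsigned int really is the float-sized type on the machine in question), a quick check might look like this:

#include <cstdio>

int main()
{
    std::printf("%d\n", isFloatIEEE754Negative(-1.5f));  // 1
    std::printf("%d\n", isFloatIEEE754Negative( 1.5f));  // 0
    std::printf("%d\n", isFloatIEEE754Negative(-0.0f));  // 1 -- negative zero has its sign bit set
    std::printf("%d\n", isFloatIEEE754Negative( 0.0f));  // 0
    return 0;
}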
This worked on my test systems; anyone see any caveats or overlooked 'gotchas'?