float sqrt_approx(float z) {
int val_int = *(int*)&z; /* Same bits, but as an int */
/*
* To justify the following code, prove that
*
 * ((((val_int / 2^m) - b) / 2) + b) * 2^m = ((val_int - 2^m) / 2) + (((b + 1) / 2) * 2^m)
*
* where
*
* b = exponent bias
* m = number of mantissa bits
*
* .
*/
val_int -= 1 << 23; /* Subtract 2^m. */
val_int >>= 1; /* Divide by 2. */
val_int += 1 << 29; /* Add ((b + 1) / 2) * 2^m. */
return *(float*)&val_int; /* Interpret again as float */
}
I was reading a wiki article on methods of computing square roots. I came across this code and stared at this line:

int val_int = *(int*)&z; /* Same bits, but as an int */

Why are they casting z to an int pointer and then dereferencing it? Why not directly say val_int = z;?
Why use pointers at all? PS: I'm a beginner.