The problem at hand arises from the following code, which is example 16.15 (Chapter 16: The Preprocessor and the C Library) in Stephen Prata's book "C Primer Plus" (6th edition). I took the utmost care to copy the code as-is from the hardcopy.
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#define RAD_TO_DEG (180/(4 * atanl(1)))
// generic square root function
#define SQRT(X) _Generic((X),\
long double: sqrtl,\
default: sqrt,\
float: sqrtf)(X)
// generic sine function, angle in degrees
#define SIN(X) _Generic((X),\
long double: sinl((X)/RAD_TO_DEG),\
default: sin((X)/RAD_TO_DEG),\
float: sinf((X)/RAD_TO_DEG)\
)
/*
*
*/
int main(void)
{
    float x = 45.0f;
    double xx = 45.0;
    long double xxx = 45.0L;
    long double y = SQRT(x);
    long double yy = SQRT(xx);
    long double yyy = SQRT(xxx);
    printf("%.17Lf\n", y);   // matches float
    printf("%.17Lf\n", yy);  // matches default
    printf("%.17Lf\n", yyy); // matches long double
    int i = 45;
    yy = SQRT(i); // matches default
    printf("%.17Lf\n", yy);
    yyy = SIN(xxx); // matches long double
    printf("%.17Lf\n", yyy);
    return 0;
}
When I build and run the program, the following results are printed:
0.00000000000000000
0.00000000000000000
0.00000000000000000
0.00000000000000000
0.00000000000000000
as opposed to the results tabulated in the book:
6.70820379257202148
6.70820393249936942
6.70820393249936909
6.70820393249936942
0.70710678118654752
The only(?) way I found to approximate most of the book's results is to cast the variables y, yy, and yyy in the printf statements (save for the third one), as follows:
printf("%.17Lf\n", (float)y); // matches float
printf("%.17Lf\n", (double)yy); // matches default (double)
printf("%.17Lf\n", yyy); // matches long double
int i = 45;
yy = SQRT(i); // matches default
printf("%.17Lf\n", (double)yy);
yyy = SIN(xxx); // matches long double
printf("%.17Lf\n", (double)yyy);
With these casts, the results I get are, as expected, close to those of the hardcopy:
6.70820379257202150
6.70820393249936940
0.00000000000000000
6.70820393249936940
0.70710678118654757
My question is twofold:
- Why do I have to cast in order to get the computed values printed?
- Why does the code not produce the correct results as-is?
I'm new to C programming. I have typed every example from this book and solved every exercise on my own up to Chapter 15, and I haven't yet come across anything that I didn't grasp in due time, given my developing but currently limited understanding of the language.
I'm using the NetBeans IDE (version 8.2) with GCC 9.2.0 (64-bit, built by MinGW) on a 64-bit Windows 10 PC with 8 GB RAM. All code in this question was compiled against the C11 standard (gcc -c -g -std=c11 -MMD -MP -MF). The GCC version was verified with the following code:
#include <stdio.h>
#include <stdlib.h>
/*
*
*/
int main(void)
{
    printf("gcc version: %d.%d.%d\n", __GNUC__, __GNUC_MINOR__, __GNUC_PATCHLEVEL__);
    return 0;
}