People, how are you doing? I am trying to assign the value 0.000010 to a variable, but it becomes a huge number, and it shouldn't be a case of overflow, given the type. It is important that it really be 0.000010, because it is used in a condition.
In the code below, it is the variable dif. During debugging, as a double, 0.000010 becomes 4.571853192736056e-315; as a float, it becomes 9.99999975e-06. If I print it after the assignment, it gives me the right value (0.000010), but the debugger shows those other values.
EDIT TO HELP COMPREHENSION:
What am I supposed to do? I have a Pi value calculated with the Gregory-Leibniz series (Pi = 4 - 4/3 + 4/5 - 4/7 + ...). Each operation (-4/3 and +4/5, for example) is one iteration. I need to approximate this Pi to the constant M_PI, from the math.h library, with a maximum difference of X (a number entered by the user). For example, 100002 iterations of the series are needed to approximate Pi to M_PI within a difference of 0.000010. So, in this example, the user chose dif = 0.000010 and got 100002 iterations.
The problem, as I said, is that the variable dif (as double or float) never actually holds 0.000010 (DEBUG IMAGES AFTER THE CODE).
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

int main()
{
    long int n = 0, iteractions = 0;
    float Pi1 = 4.0, Pi2 = 0.0, sub = 0.0, sum = 0.0;
    double dif = 0.0;

    printf("Type the difference to be observed: ");
    scanf("%f", &dif);

    Pi1 = 4;
    sub = Pi1 - M_PI;
    for (n = 1; sub >= dif; n++) {
        Pi2 = (pow(-1, n) * 4) / (2 * n + 1);
        sum = Pi1 + Pi2;
        Pi1 = sum;
        sub = Pi1 - M_PI;
        iteractions = iteractions + 1;
    }
    printf("Iteractions: %ld \n", iteractions);
    return 0;
}
Image: (debugger screenshots showing the value of dif; not reproduced here)