When I input a decimal number like 12.45, it gets decreased by a tiny amount (something like 0.00001 or less), which makes my function misbehave.
For example: if x is 12.45 and div is 0.1, watching x in the debugger shows that it becomes 12.449999999.
BUT if x is 12.455 and div is 0.01, x is not reduced.
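Here is a minimal check (nothing beyond iostream and iomanip) that shows what is actually stored for the literal 12.45:

#include <iostream>
#include <iomanip>
using namespace std;

int main(){
    // 12.45 has no exact binary representation, so the compiler stores the
    // nearest double, which happens to be slightly below 12.45.
    cout << setprecision(17) << 12.45 << endl;  // prints 12.449999999999999 on IEEE-754 systems
}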
#include <iostream>
using namespace std;

double round(double x, double div){
    if (div != 0){
        int temp = x;               // integer part of x
        double dec = x - temp;      // fractional part of x
        dec = dec / div;            // number of div-sized steps in the fraction
        temp = dec;                 // whole steps
        double temp2 = dec - temp;  // leftover fraction of a step
        temp2 = temp2 * div;        // leftover expressed in x's units
        cout << x << endl << endl;
        if (temp2 >= div / 2){
            x += (div - temp2);     // round up to the next multiple of div
        } else {
            x -= temp2;             // round down
        }
        cout << temp << " " << dec << " " << x << " " << temp2 << " " << div / 2;
        return x;
    } else {
        cout << "div can't be equal to zero" << endl;
        return x;                   // every path must return a value; falling off the end is undefined behavior
    }
}
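A minimal call that reproduces the problem (this driver is my own test harness, not part of the function):

int main(){
    // 12.45 is stored as a value slightly below 12.45, so temp2 lands just
    // below div/2 and the function rounds down to 12.4 instead of up to 12.5.
    round(12.45, 0.1);
}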
I was trying to make a function that rounds decimal numbers to the nearest multiple of div. I know it's probably not the best way to do it, but it works except for the problem I described earlier.
To fix it, I tried limiting the number of decimal places at the input, which didn't work. I also tried other approaches instead of the double/integer combination, without any results.
I expect the output to be 12.5 when x is 12.45 and div is 0.1, but it isn't, because roughly 0.000001 of the input gets lost.
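For what it's worth, the behavior I'm after would look something like the sketch below: scale by the step, add a small tolerance so near-halfway values count as ties, and floor. The name roundWithTolerance and the 1e-9 epsilon are my own arbitrary choices, not from any library:

#include <cmath>

double roundWithTolerance(double x, double step){
    double scaled = x / step;                       // 12.45 / 0.1 becomes about 124.4999...
    // Adding 0.5 plus a tiny tolerance before flooring makes values that are
    // "meant" to be exact halves round up despite the representation error.
    return std::floor(scaled + 0.5 + 1e-9) * step;  // gives 12.5 for (12.45, 0.1)
}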