I am trying to write a program in C to accomplish the following task.
Input: Three double-precision numbers, a, b, and c.
Output: All the numbers from b down to a that can be reached by decrements of c.
Here is a simple program (filename: range.c).
#include <stdlib.h>
#include <stdio.h>

int main()
{
    double high, low, step, var;

    printf("Enter the <lower limit> <upper limit> <step>\n>>");
    scanf("%lf %lf %lf", &low, &high, &step);
    printf("Numbers in the requested range\n");
    for (var = high; var >= low; var -= step)
        printf("%g\n", var);
    return 0;
}
However, the for loop behaves rather bizarrely for some inputs. For instance:
10-236-49-81:stackoverflow pavithran$ ./range.o
Enter the <lower limit> <upper limit> <step>
>>0.1 0.9 0.2
Numbers in the requested range
0.9
0.7
0.5
0.3
10-236-49-81:stackoverflow pavithran$
I cannot figure out why the loop quits before printing var = 0.1, while for another input it behaves as expected:
10-236-49-81:stackoverflow pavithran$ ./range.o
Enter the <lower limit> <upper limit> <step>
>>0.1 0.5 0.1
Numbers in the requested range
0.5
0.4
0.3
0.2
0.1
10-236-49-81:stackoverflow pavithran$
Does the weird behaviour in the first situation have something to do with numeric precision?
How can I ensure that the range will always contain floor((high - low)/step) + 1 numbers?
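One way I can think of to test the precision theory is to replay the first input while printing far more digits than %g displays. The hard-coded 0.9, 0.2 and five iterations below simply mirror that first run:

#include <stdio.h>

int main()
{
    double var = 0.9;
    int i;

    /* mimic the failing run: subtract 0.2 repeatedly, printing full precision */
    for (i = 0; i < 5; i++) {
        printf("%.17g\n", var);
        var -= 0.2;
    }
    return 0;
}

Given that the original loop stopped after 0.3, I would expect the value that "should" be 0.1 to come out as something just below 0.1 here, which would make var >= low false.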
I have tried an alternative method of looping over floats, where I scale the loop variables to integers and print the loop variable divided by the scale factor, roughly as in the sketch below. But perhaps there's a better way...
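Concretely, something like this is what I meant. The scale factor of 10 is an assumption that works for inputs with one decimal place; a general program would have to derive it from the input:

#include <stdio.h>

int main()
{
    double high = 0.9, low = 0.1, step = 0.2;
    double scale = 10.0;  /* assumed: enough to make all three inputs whole numbers */
    long ihigh, ilow, istep, i;

    /* round, rather than truncate, when converting to integers */
    ihigh = (long)(high * scale + 0.5);
    ilow  = (long)(low  * scale + 0.5);
    istep = (long)(step * scale + 0.5);

    for (i = ihigh; i >= ilow; i -= istep)
        printf("%g\n", i / scale);   /* prints 0.9 0.7 0.5 0.3 0.1 */
    return 0;
}

The integer loop counter cannot drift, so this does produce floor((high - low)/step) + 1 numbers; the downside is having to pick a suitable scale.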