
Below is my code to convert the Fahrenheit scale to Celsius:


#include <stdio.h>
#define CONVERT_TO_CELSIUS(far) (5.0/9.0)*(far-32.0)

int main()
{
        int f;
        float c;
        int l=0,u=200,s=20;
        f=l;
        while(f<=u)
        {
                c = CONVERT_TO_CELSIUS(f);
                printf("%3.0f\t%6.1f\n",f,c);
                f=f+s;
        }
        return 1;
}
       
Output seen:
-18  32.00
 -7  32.00
  4  32.00
 16  32.00
 27  32.00
 38  32.00
 49  32.00
 60  32.00
 71  32.00
 82  32.00
 93  32.00

Output expected:
  0 -17.78
 20  -6.67
 40   4.44
 60  15.56
 80  26.67
100  37.78
120  48.89
140  60.00
160  71.11
180  82.22
200  93.33               

I am seeing that when I specify %f as the format specifier for variable f the output is wrong, but when I use %d instead the output is as expected. What is the code doing here?
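For concreteness, the two calls being compared are sketched below (f is an int and c is a float, exactly as in the code above):

printf("%3.0f\t%6.1f\n", f, c);   /* the version above: %3.0f applied to the int f */
printf("%3d\t%6.1f\n", f, c);     /* the %d variant that prints the expected table */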

Suzanno Hogwarts
  • `f` is not a float so the `%3.0f` format specifier is incorrect. `%d` is the correct format for an int. – Retired Ninja Dec 17 '21 at 04:01
  • Yeah, that's correct. But internally, what is happening here that is affecting the output? At least the c variable's output should be correct, right, since it is a float? – Suzanno Hogwarts Dec 17 '21 at 04:02
  • 3
  • Once you pass the wrong type, you have invoked "undefined behavior" and anything becomes possible. There is no requirement that the results be logical. Depending on the processor and the ABI, you might get consistent garbage, or uninitialized garbage, or it might even crash. – Raymond Chen Dec 17 '21 at 04:04
  • Thanks @RaymondChen, I see different output on another machine, and that confirms that this behavior is undefined. – Suzanno Hogwarts Dec 17 '21 at 04:27
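To make the undefined behavior described in the comments concrete, here is a minimal sketch of what a printf-style function does when it sees %f: it fetches a double from the variable argument list no matter what the caller actually passed. The helper name show_as_double is hypothetical and only for illustration; it is not part of the original code.

#include <stdarg.h>
#include <stdio.h>

/* Hypothetical helper mimicking printf's handling of %f:
   it always fetches a double from the argument list. */
static void show_as_double(int count, ...)
{
        va_list ap;
        va_start(ap, count);
        for (int i = 0; i < count; i++)
                printf("%f\n", va_arg(ap, double)); /* undefined behavior if the caller passed an int */
        va_end(ap);
}

int main(void)
{
        show_as_double(1, 42);   /* int passed, double fetched: undefined behavior, likely garbage */
        show_as_double(1, 42.0); /* double passed, double fetched: prints 42.000000                */
        return 0;
}

This is the same mismatch as in the question: %3.0f tells printf to fetch a double, but only an int f was passed, so what appears depends on whatever the calling convention leaves where a double argument would have been. Using %d for f (or passing (double)f and keeping %f) removes the mismatch.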

0 Answers