
When I print the value of x, it gives zero as output, whereas when I print y, I get the correct value (a random number between 0 and 1). The typecast seems to make the difference. Why do I need to typecast here?

double x, y;
x = rand() / RAND_MAX;
printf("X=%f\n", x);
y = (double)rand() / RAND_MAX;
printf("Y=%f\n", y);

Output

X=0.000000
Y=0.546745
47aravind

2 Answers


When you divide an integer by an integer, you get truncating integer division. Since rand() returns an int between 0 and RAND_MAX, the quotient rand() / RAND_MAX is 0 in virtually every case (it would be 1 only if rand() happened to return exactly RAND_MAX). So using

y = (double)rand() / RAND_MAX;

is absolutely the right way to get the result you want.

Steve Summit

Different types of division yield different results.

// Integer division
x=rand()/RAND_MAX;

// Floating-point division
y=(double)rand()/RAND_MAX;

It isn't that a cast is needed; it is just one of several ways to ensure floating-point division. I like the last one below, as it ensures the division is done at least to the precision of x, be it float, double, or long double, without changing the code.

x = (double)rand()/RAND_MAX;

x = 1.0*rand()/RAND_MAX;

x = rand()/(1.0*RAND_MAX);

x = rand();
x /= RAND_MAX;

BTW, code often needs to generate a number in [0.0 ... 1.0): from 0.0 up to, but not including, 1.0.

x = rand();
x /= RAND_MAX + 1.0;
chux - Reinstate Monica