I was running a simple program on my laptop (Ubuntu MATE, 64-bit):

# include <stdio.h>

int main()
{
    int i,j,l=0,swap,n,k,a[100],b[100],count;

    printf("%d", count);
}

As expected, this should print a garbage value, and it did exactly that. I ran it several times and got a different result every time, which suggests that fresh memory was being allocated on each run. The output was something like this:

32576
33186
0
29318
0
32111
0

However, notice that in some instances I got zero.

Now the same code was executed on a Solaris server; I ran it from a thin-client machine connected to that server. This time the program printed only 0, no matter how many times I executed it. I logged into the server from a different account and still got the same result. I thought the Solaris machine had some sort of inherent garbage collector, but that was not true: when I ran another program, I got garbage values.

# include <stdio.h>

int main()
{
    int i;

    printf("%d", i);
}

My question is: is there some difference between the two platforms, or am I just a victim of randomness?

xavier666

1 Answer

is there some difference between the two platforms

Of course there are many differences. You can't run an executable compiled for one on the other, so that should be obvious. And that's even if you're running both on x86 platforms. The differences are even greater if you're running Linux on x86 and Solaris on SPARC hardware.

And reading an uninitialized variable is undefined behavior anyway; see (Why) is using an uninitialized variable undefined behavior? In practice, the pattern you saw is not surprising: the OS hands a process freshly zeroed pages, so a stack slot that has never been written often reads back as 0, while a slot that has already been reused by startup or library code holds whatever was left there. None of that is guaranteed, though, so the only portable fix is to initialize the variable before reading it.
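
Not part of the original answer, but here is a minimal sketch of that portable fix (it reuses the variable name count from the question's code, which is the only thing assumed here):

#include <stdio.h>

int main(void)
{
    int count = 0;            /* explicitly initialized: the value is now well-defined */

    printf("%d\n", count);    /* prints 0 on Linux and Solaris alike */
    return 0;
}

GCC can also diagnose the original code: compiling with gcc -Wall (uninitialized-use analysis works best with optimization enabled) typically warns that count is used uninitialized.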

Andrew Henle