#include <stdio.h>

int main(void)
{
    int a;

    a = 100000000;
    printf("%d\n", a);
    return 0;
}

// Why does the value of a get printed even if it is larger than the range of int?
Because the range of the int type in C is from -32767 to 32767.
You have a false assumption there. The range of type int is not fixed, and an int is not necessarily 16 bits. The C standard doesn't define a fixed range for int; it only requires that a conforming implementation support at least the range -32767 to 32767 for int, and an implementation is free to support a wider range. On most systems, an int is 32 bits wide, so there's nothing unexpected in your output.

If you want to know the exact limits, you can use the macros INT_MIN and INT_MAX from <limits.h>.
Relevant C-FAQ: How should I decide which integer type to use?
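For example, a minimal sketch that prints the actual limits on whatever implementation compiles it (the output shown in the comment is what a typical platform with a 32-bit int would produce):

#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* INT_MIN and INT_MAX expand to this implementation's actual limits. */
    printf("INT_MIN = %d\n", INT_MIN);
    printf("INT_MAX = %d\n", INT_MAX);
    /* On a typical system with a 32-bit int this prints
       INT_MIN = -2147483648 and INT_MAX = 2147483647. */
    return 0;
}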
That number fits in a 32-bit integer, which can hold values up to 2,147,483,647 (signed); 100,000,000 is well below that limit.
The size of "int" can vary across different CPU architectures. If you want to be absolutely sure how large your integer is, use the fixed-width types from <stdint.h> instead:

int32_t a;  // a 32-bit signed integer
uint16_t b; // a 16-bit unsigned integer
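A short sketch of how these types are used in practice (the variable names and values are just for illustration); note that printing them portably needs the format macros from <inttypes.h>:

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    int32_t a = 100000000;  /* exactly 32 bits on every platform */
    uint16_t b = 65535;     /* exactly 16 bits, maximum value    */

    /* PRId32 and PRIu16 expand to the correct printf conversion
       specifiers for these types on the current platform. */
    printf("a = %" PRId32 "\n", a);
    printf("b = %" PRIu16 "\n", b);
    return 0;
}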
In C the size of an integer can be 2, 4, or even 8 bytes, depending on the compiler you are using and on your system.
The above was true to some extent for old compilers, but it is not only a historical issue. Cross compilation is needed (among other things) to get around the host CPU's int being different from the int used on the target CPU, and this can cause problems when porting code for use on multiple platforms. Some C compilers, particularly for embedded targets, allow the size of integers to be configured by the user.
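If code depends on a particular int width, one defensive option is to make that assumption explicit so a port to a platform with a narrower int fails at compile time rather than misbehaving at run time. A minimal sketch using C11's _Static_assert:

#include <stdio.h>
#include <limits.h>

/* Fail the build if int is narrower than 32 bits on this target.
   CHAR_BIT is the number of bits in a byte, usually 8. */
_Static_assert(sizeof(int) * CHAR_BIT >= 32,
               "this code assumes at least a 32-bit int");

int main(void)
{
    printf("sizeof(int) = %zu bytes\n", sizeof(int));
    return 0;
}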