I have a question about the number of bytes a computer normally uses to do calculations. First, please take a look at the source code below.
Source code:
printf("%d\n", sizeof(444444444));
printf("%d\n", 444444444);
printf("%d\n", sizeof(4444444444));
printf("%llu\n", 4444444444);
Output:
4
444444444
8
4444444444
As you can see, the computer never loses the value. If a constant is too big to fit in an int, the compiler automatically extends its type. I think the reason the value is never lost is that the computer already operates on a big type to begin with, at least something bigger than an 8-bit container.
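For reference, here is a minimal sketch of how I could ask the compiler directly which type it picks for each constant. It assumes a C11 compiler (the TYPE_NAME macro built on _Generic is just something I wrote for illustration), and the type reported for the second constant may be long or long long depending on the platform:

#include <stdio.h>

/* Illustrative helper: maps an expression's type to a readable name (C11 _Generic). */
#define TYPE_NAME(x) _Generic((x), \
    int: "int", \
    long: "long", \
    long long: "long long", \
    default: "other")

int main(void)
{
    /* An unsuffixed decimal constant gets the first of int, long, long long
       that can represent its value. */
    printf("444444444  is %s (%zu bytes)\n", TYPE_NAME(444444444), sizeof(444444444));
    printf("4444444444 is %s (%zu bytes)\n", TYPE_NAME(4444444444), sizeof(4444444444));
    return 0;
}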
Could you let me know the overall mechanism? Thank you in advance for your help.