This is a beginner's question and deserves to be answered properly. (I'm startled by the downvotes and the non-constructive comments.)
Let me give a walk-through, line by line:
int a;
The first line declares a variable with the name a. The variable is of type integer, and with common compilers (Microsoft Visual C++ on Windows, GCC on Linux, Clang on Mac) it is usually 32 bits wide. The integer variable is signed because you did not specify unsigned. This means it can represent values ranging from –2,147,483,648 to 2,147,483,647.
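If you want to check these properties on your own system, a short program like the following (a minimal sketch using the standard <limits.h> macros) prints the actual size and range of int:

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        /* sizeof(int) is the width in bytes; INT_MIN and INT_MAX give the representable range */
        printf("int is %zu bytes wide\n", sizeof(int));
        printf("range: %d to %d\n", INT_MIN, INT_MAX);
        return 0;
    }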
a =10;
The second line assigns the value 10 to that variable.
printf ("%d", &a);
The third line is where you get the surprising result. "%d" is the "format string": it defines how the variables, given as further arguments, are formatted and subsequently printed. The format string consists of normal text (which will be printed as-is) and conversion specifications. The conversion specifications start with the character % and end with a letter. The letter specifies the argument type that is expected; d in the above case expects an integer value.
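To illustrate how the format string mixes normal text with conversion specifications, here is a small sketch (the variable names are made up for the example):

    #include <stdio.h>

    int main(void)
    {
        int apples = 3;
        int oranges = 7;
        /* each %d is replaced by the matching integer argument, in order */
        printf("I have %d apples and %d oranges\n", apples, oranges);
        return 0;
    }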
The problem with your code is that you do not pass an integer value; you pass the address of an integer value (taken with the address-of operator &). The correct statement would be:
printf ("%d", a);
Simple.
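Put together as a complete program (and, in case you really did want to print the address, %p is the conversion specification for pointers), it could look like this:

    #include <stdio.h>

    int main(void)
    {
        int a;
        a = 10;
        printf("%d\n", a);            /* prints the value: 10 */
        printf("%p\n", (void *) &a);  /* prints the address of a; %p expects a void pointer */
        return 0;
    }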
I recommend that you read a good C book. A good choice is "The C Programming Language", written by the original authors of the language. You can find this book on Amazon, but you can also find it online.
You can find the same answers by reading the standard, but to be honest, these documents are not easy reading. You can find a draft of the C11 standard here. The description of the formatting options starts on page 309. (Drafts are usually good enough for programming purposes, and are usually available for free.)