I do not understand the practical use of the `.code16` directive (or the other `.code*` directives). From what I understood from the answers to this question on StackOverflow, when someone puts `.code16` in their assembly code and then runs:
$ gcc -c -m32 -o main.o main.s
the `.code16` is ignored and the output is meant to run on a 32-bit platform. If no `-m` flag is specified, gcc seems to default to whatever target it was configured for on the host. Hence, to conclude, the `.code*` directives are always ignored and superseded by the `-m` flag.
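For concreteness, this is the kind of minimal `main.s` I have been experimenting with (the instruction choice is arbitrary, and the byte encodings in the comments are what I would expect from the instruction-set manuals rather than something I have confirmed):

```asm
    .code16                 # tell the assembler the CPU will run this in 16-bit mode
    .text
    .globl  start
start:
    movw    $1, %ax         # expected 16-bit encoding: b8 01 00 (no prefix)
    # without .code16, assembled via `gcc -c -m32`, I would expect the same
    # instruction to be encoded as 66 b8 01 00 (66 = operand-size override)
```

My assumption was that comparing `objdump -d main.o` against `objdump -d -m i8086 main.o` would reveal whether the directive actually changed the encoding, but I am not sure I am interpreting the results correctly.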
Can someone please correct me if my understanding is wrong? And in what situation would I actually use `.code16`, given that I can always specify the mode with `-m16`, and `.code*` is apparently going to be ignored depending on the target mode anyway?
Is `.code16` (and the others) only meant to throw errors when data cannot fit in 16-bit registers, and otherwise to remain dormant?