NASM allows you to freely change the code size (using the `bits` directive) anywhere you like, which means you can have 16-bit or 32-bit code in an `elf64` output file, or 64-bit code in an `elf32`. For this reason, detecting the output format is the wrong thing to do (it's fragile).
Instead, you want to detect the current code size. For this purpose NASM has a standard macro called `__BITS__`, which means you can do `%if __BITS__ = 64`.
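
For example, here is a minimal sketch of a macro that uses `__BITS__` to pick an appropriately sized register (`zero_counter` is a hypothetical name, purely for illustration):

```nasm
; zero_counter: clear the counter register using whatever width
; matches the current code size (hypothetical example macro)
%macro zero_counter 0
  %if __BITS__ = 64
    xor rcx, rcx          ; 64-bit code
  %elif __BITS__ = 32
    xor ecx, ecx          ; 32-bit code
  %else
    xor cx, cx            ; 16-bit code
  %endif
%endmacro

bits 64
zero_counter              ; expands to "xor rcx, rcx" here
```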
However, 32-bit code and 64-bit code typically use completely different calling conventions; and 64-bit code has more registers, wider registers and other differences (where not using the extra registers or the extra width means the code would be extremely poor quality - far worse than compiler-generated code). This means that (beyond a few small macros) attempting to support both 32-bit and 64-bit in the same code is extremely silly. You just end up with completely different code for completely different cases, slapped into the same file for no sane reason and then separated again (using preprocessor magic) to work around the fact that it never made sense in the first place.
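
To illustrate how different the two cases really are, here is a sketch of the same trivial function under both conventions (assuming the System V ABIs used on Linux; `add2_32` and `add2_64` are hypothetical names):

```nasm
bits 32                       ; 32-bit cdecl: arguments arrive on the stack
add2_32:
    mov eax, [esp + 4]        ; first argument
    add eax, [esp + 8]        ; second argument
    ret

bits 64                       ; System V AMD64: arguments arrive in registers
add2_64:
    lea eax, [rdi + rsi]      ; first argument in rdi, second in rsi
    ret
```

The two bodies share nothing except the `ret`, so keeping them in one file gains you nothing.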
More sensible is to have separate files for separate cases (e.g. a `src/arch/x86_32` directory and a `src/arch/x86_64` directory), and in that case you can just let the build script or makefile sort it out (e.g. tell the assembler to use a different "include path").
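
As a sketch of how that can work (assuming a hypothetical `entry.asm` that exists in both directories), the same top-level file can be assembled for either architecture just by changing the `-I` option on the command line:

```nasm
; kernel.asm - which directory "entry.asm" comes from is decided
; entirely by the include path, e.g. (from the build script/makefile):
;   nasm -f elf32 -I src/arch/x86_32/ kernel.asm
;   nasm -f elf64 -I src/arch/x86_64/ kernel.asm
%include "entry.asm"
```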