43

Is there a C macro, or some other way, to check at compile time whether my C program is being compiled as 64-bit or 32-bit?

Compiler: GCC. Operating systems I need to do the checks on: Unix/Linux.

Also, how could I check at runtime whether the OS is capable of 64-bit?

Daniel
  • Why do you want to know? – Fred Foo Mar 11 '11 at 12:33
  • How to detect if a program was compiled (in c) as 32 or 64bit – Daniel Mar 11 '11 at 12:36
  • Do you want to examine a binary executable file and determine what compiler options were used to create that file? – pmg Mar 11 '11 at 12:44
  • @pmg An executable I created, so not one that is already compiled – Daniel Mar 11 '11 at 12:45
  • Wait a sec... you mean you already have the binary and then want to check it? (Since you mentioned "was compiled") Or **during** compile time (since you mentioned a C macro)? – Derick Schoonbee Mar 11 '11 at 12:46
  • Sorry, it's late where I am right now; I meant "how can I check if this C file is going to be compiled as 32-bit or 64-bit" (COMPILE TIME) – Daniel Mar 11 '11 at 12:48
  • @Daniel: I understand what you want to do, the question is just *why*. Your question isn't entirely valid since "64-bit architecture" isn't a very well-defined term (do you want 64-bit registers, a 64-bit data bus, 64-bit pointers?), and are you programming only for x86 or portably... – Fred Foo Mar 11 '11 at 12:50
  • PS: Maybe add more detail such as OS(es) and compiler. For example, in gcc you can view and specify some switches that can give hints if you do not need to do the check in code. – Derick Schoonbee Mar 11 '11 at 12:51
  • @Derick check question again please – Daniel Mar 11 '11 at 12:57
  • Have a look at the [following question](https://stackoverflow.com/questions/682934/is-there-a-gcc-preprocessor-directive-to-check-if-the-code-is-being-compiled-on-a/682955#682955). It outlines the use of the `__LP64__` gcc preprocessor directive – mdec Mar 11 '11 at 12:28
  • This is a [near-duplicate of another question](https://stackoverflow.com/questions/163058/how-can-i-detect-if-im-compiling-for-a-64bits-architecture-in-c/32717129) (it deals with C++), some of those answers apply here as well. – DarkDust Apr 03 '20 at 18:02

9 Answers

42

Since you tagged this "gcc", try

#if __x86_64__
/* 64-bit */
#endif
Anomie
  • Another macro to test is `__LP64__`, which will work on non-x86-64 architectures. – Gunther Piez Mar 11 '11 at 12:35
  • +1 for `__LP64__`, but note this will not work for some of the more obscure 64 bit architectures which do not use the LP64 model. – Paul R Mar 11 '11 at 12:51
  • Testing any macro beginning with `_[A-Z]` or `__` is almost surely the wrong answer. – R.. GitHub STOP HELPING ICE Mar 11 '11 at 13:15
  • @R..: No, it's almost surely the *right* answer. The macros beginning with `_[A-Z]` or `__` are reserved by the implementation (i.e. the compiler/preprocessor), which means you can't define them yourself, but you can certainly test their existence to query the implementation. – Adam Rosenfield Aug 04 '11 at 15:29
  • @Adam: And the result will only be meaningful on some implementations. If you instead test a *standard* macro like `UINTPTR_MAX`, it's reliable across all implementations. (Hint: A valid implementation could happily predefine `__LP64__` on 32-bit machines, or as an even more extreme example, it could treat **all** macro names beginning with `__` as defined unless they're explicitly undefined.) – R.. GitHub STOP HELPING ICE Aug 04 '11 at 16:39
  • @R..: OTOH, the C99 standard guarantees that uintptr_t is large enough to hold a pointer, but it doesn't guarantee that it is not larger than needed. An implementation could use a 64-bit uintptr_t even though all pointers are 32 bits. Or, for that matter, since uintptr_t is optional in C99 your "standard" macro may not be defined anyway. – Anomie Aug 04 '11 at 17:07
  • See the comments on my answer for a discussion of that issue. – R.. GitHub STOP HELPING ICE Aug 04 '11 at 19:01
  • @R..: I see nothing there about the possibility that the size of uintptr_t is larger than the size of any actual pointer. – Anomie Aug 04 '11 at 19:05
  • I'm not sure why it would matter. In any case, the question of whether a system "is 64-bit" is rather ambiguous. Do you want to know if you have fast 64-bit arithmetic? If you have large virtual address space? Or what? To answer these individual questions, there are various `stdint.h` types whose limits you could test. – R.. GitHub STOP HELPING ICE Aug 04 '11 at 19:50
  • Where are those documented in the cpp docs? I tried http://gcc.gnu.org/onlinedocs/cpp/Predefined-Macros.html but it explicitly says there that system specific defines will not be documented there... where are they then? – Ciro Santilli OurBigBook.com Jun 09 '13 at 10:49
  • I am downvoting. See my answer for the reason. Generally speaking none of these scenarios can be relied upon to give any reliable indication of whether a 64-bit address space and non-emulated 64-bit arithmetic are available, thus they are basically useless except in the context of a build system that is not agnostic. Thus it is preferred to set build macros so that the build system can select which variant is compiled. – Shelby Moore III Nov 24 '15 at 00:09
  • @Shelby - *"non-emulated 64-bit arithmetic is available"* - That's the important one for me. We have two specific implementations, each optimized for a specific platform, and we need to know which one to use. – jww Dec 11 '18 at 11:16
38

Here is the correct and portable test which does not assume x86 or anything else:

#include <stdint.h>
#if UINTPTR_MAX == 0xffffffff
/* 32-bit */
#elif UINTPTR_MAX == 0xffffffffffffffff
/* 64-bit */
#else
/* wtf */
#endif
R.. GitHub STOP HELPING ICE
  • I know this question is for C, but since it's mixed with (or included from) C++ a lot of the time, here is a C++ caveat: C99 requires that to get limit macros defined in C++, you have to have `__STDC_LIMIT_MACROS` defined before you include the header. As it may have been already included, the only way to ensure the correct definition is to force the client to always include it as the first header in the source file, or add `-D__STDC_LIMIT_MACROS` to your compile options for all files. – Alex B Mar 11 '11 at 13:41
  • Portability is theoretically limited by the fact that `uintptr_t` is an optional type. I suspect it would be perverse though for a 64 bit implementation to omit it, since `unsigned long long` is a big enough integer type. – Steve Jessop Aug 04 '11 at 15:16
  • My view is that a system that omits `uintptr_t` probably has very good reason for doing so (a very pathological or at least atypical memory model, for instance) and that any assumptions made on the basis that this is "a 32-bit system" or "a 64-bit system" would be invalid on such an implementation. As such, the "wtf" case in my answer should probably either contain `#error` or else hyper-portable code that's completely agnostic to traditional assumptions about memory models, type sizes, etc. – R.. GitHub STOP HELPING ICE Aug 04 '11 at 15:35
  • This doesn't work on Linux PAE kernels. Kernels with PAE activated are 32-bit but can address RAM like a 64-bit system. This code determines the architecture by checking the maximum addressable RAM. A 32-bit PAE kernel machine would be seen as 64-bit with this, so the inserted source code (possibly some inline assembler instruction) would not work. – Kenyakorn Ketsombut Jun 16 '14 at 03:27
  • @KenyakornKetsombut: No they cannot. PAE has nothing to do with the size of the address space. It's merely an extension for the kernel to access more physical memory, but the virtual address space is always, inherently, permanently 32-bit on a 32-bit system. – R.. GitHub STOP HELPING ICE Jun 16 '14 at 03:56
  • @LưuVĩnhPhúc: In what sense is x32 "64-bit"? If having N-bit registers available when they're needed makes an implementation N-bit, why isn't i686 "128-bit"? After all you have 128-bit SSE registers. For most purposes, "N-bit" means "address space is an N-bit space". If you have another purpose in mind you need to clarify what it is; from this perspective, x32 is 32-bit. – R.. GitHub STOP HELPING ICE Feb 24 '15 at 01:46
  • From my perspective, any architecture that can do 64-bit arithmetic natively is a 64-bit architecture. And there are several architectures with only a 24-bit address bus that are still called "32-bit" because their registers are 32 bits. The same goes for 8-bit MCUs, although their address buses are often 14 to 16 bits or more – phuclv Feb 24 '15 at 06:28
  • @LưuVĩnhPhúc: "Natively" is not an observable aspect of a C implementation. Whether arithmetic takes place as one instruction or in some other form in the machine code is not observable. In any case I don't see anyone calling i686 a 128-bit architecture, which would be the obvious consequence of your criterion... – R.. GitHub STOP HELPING ICE Feb 24 '15 at 17:36
  • @R.. i686 can't do 128-bit arithmetic, only 128-bit SSE registers, so no one calls it a 128-bit architecture anyway. – phuclv Feb 26 '15 at 05:46
  • and you can't call the above machines 14, 16 or 24-bit right? – phuclv Feb 26 '15 at 06:07
  • While I agree there's a range of ways you could go about making the classification (and the whole classification is rather stupid except in the context of concepts like ILP32 model/LP64 model/etc.), I would not go by the number of wired bits on the physical address bus but rather the logical (or virtual, on archs with MMU) address space. If pointers take 32 bits of storage and addressing instructions use 32-bit registers for addresses, I would call that 32-bit even if 8 of the pins go nowhere on the metal. – R.. GitHub STOP HELPING ICE Feb 26 '15 at 07:27
  • On 8-bit architectures, 16-bit pointers are still stored as 16 bits, not 8. And classifying by GPR size is probably more common, as Pascal Cuoq says in his answer: [“64-bit machine” is an ambiguous term but usually means that the processor's General-Purpose Registers are 64-bit wide](http://stackoverflow.com/a/28297443/995714) – phuclv Feb 28 '15 at 06:22
  • @LưuVĩnhPhúc: Note that he said 64-bit *machine*. That's a concept that has nothing to do with the C implementation or the compilation environment. C code compiled on a 32-bit implementation with a target like x86 or arm or mips-o32 could run on a 64-bit machine (like x86_64 or aarch64 or mips64, respectively). But this whole conversation is rather pointless. If you want to use your definition of 64-bit, nobody is stopping you, but it's not useful from a standpoint of C. – R.. GitHub STOP HELPING ICE Feb 28 '15 at 13:07
  • I downvoted this answer because it assures 64-bit pointers (thus probably address space), but it doesn't assure an `int` is 64-bit. Many cases of testing for 64-bit are to ensure that 64-bit integer arithmetic is fast because it is not emulated. For example Emscripten might provide 64-bit pointers but it emulates 64-bit integer arithmetic because the Javascript output target doesn't support 64-bit integers. – Shelby Moore III Nov 23 '15 at 09:25
  • @ShelbyMooreIII: I accept your reasoning but the question is not clear on what "64-bit" even means. In the absence of a specific definition I generally assume address space size because it's the only thing that affects what your program can **do** and not just performance. – R.. GitHub STOP HELPING ICE Nov 23 '15 at 11:31
  • But another issue is that 64-bit pointers don't even guarantee a 64-bit address space. See [Anomie's comment](http://stackoverflow.com/questions/5272825/detecting-64bit-compile-in-c/33867847?noredirect=1#comment8278599_5272888) and my answer for an example. Thus the correct answers are, "Do not detect 64-bit from the preprocessor and instead use the build system to define a macro". – Shelby Moore III Nov 24 '15 at 00:37
14

A compiler and platform neutral solution would be this:

// C
#include <stdint.h>

// C++
#include <cstdint>

#if INTPTR_MAX == INT64_MAX
// 64-bit
#elif INTPTR_MAX == INT32_MAX
// 32-bit
#else
#error Unknown pointer size or missing size macros!
#endif

Avoid macros that start with one or more underscores. They are not standard and might be missing on your compiler/platform.

DarkDust
  • This is actually the best practice! – Jackie Yeh May 18 '20 at 09:32
  • Just thought it was a good idea to mention... it took Microsoft 11 years to add stdint.h to its c99 support. If _MSC_VER is less than 1600, it doesn't exist. (Granted it's old, but it may still be encountered) – Jimmio92 Jan 19 '22 at 02:59
10

An easy one that will make language lawyers squirm:

#include <limits.h>

if (sizeof (void *) * CHAR_BIT == 64) {
...
}
else {
...
}

As it is a constant expression, an optimizing compiler will drop the test and put only the right code in the executable.

Patrick Schlüter
  • It usually is true, but please, please stop making assertions like "... so an optimizing compiler will ...". Preprocessor is preprocessor, and often the code following "else" will not compile when the condition is true. – Tomasz Gandor Aug 08 '14 at 18:52
  • I don't see what the preprocessor has to do with anything? The OP asked for a method to detect the memory model used (64 or 32 bit), he didn't ask for a preprocessor solution. Nobody asked for a way to replace conditional compilation. Of course my solution requires that both branches are syntactically correct. The compiler will compile them always. If the compiler is optimizing it will remove the generated code, but even if it doesn't there's no problem with that. Care to elaborate what you mean? – Patrick Schlüter Aug 09 '14 at 10:27
  • OK, you're right. The exact wording was "a C macro or some kind of way". I didn't notice the "some kind of way" at first. – Tomasz Gandor Aug 09 '14 at 16:00
  • I downvoted this answer because it assures 64-bit pointers (thus probably address space), but it doesn't assure an `int` is 64-bit. Many cases of testing for 64-bit are to ensure that 64-bit integer arithmetic is fast because it is not emulated. For example Emscripten might provide 64-bit pointers but it emulates 64-bit integer arithmetic because the Javascript output target doesn't support 64-bit integers. – Shelby Moore III Nov 23 '15 at 09:27
  • @ShelbyMooreIII: Ummmmm... excuse me? The distinction of a 32-bit vs 64-bit target has absolutely nothing to do with the size of `int` (indeed, its size _differs_ e.g. in LP64 as used in Linux/BSD vs. LLP64 as used in Windows, while both are very clearly 64-bit). It also has nothing to do with how fast a compiler might optimize a particular operation (or how fast Javascript performs). – Damon Mar 10 '16 at 16:29
  • @Damon true, but obviously that is irrelevant to the point I made. Try reading again. The question didn't specify a 64-bit address space. It asks whether the program will be compiled at 64-bit, which is a general question. You are presuming the question meant what you want it to mean, but I read the question literally. Your "ummmmm..." drama :rolleyes: – Shelby Moore III Apr 08 '16 at 14:36
  • Doesn't detect ILP32 ABIs on 64-bit architectures, e.g. [the Linux x32 ABI](https://en.wikipedia.org/wiki/X32_ABI) or the [AArch64 ILP32 ABI](http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.dai0490a/ar01s01.html). That's 32-bit pointers in 64-bit mode. So `long long` is still efficient on those targets, unlike on 32-bit CPUs where 64-bit integers take 2 instructions per operation, and 2 registers. – Peter Cordes Mar 28 '18 at 16:35
6

Use a compiler-specific macro.

I don't know what architecture you are targeting, but since you don't specify it, I will assume run-of-the-mill Intel machines, so most likely you are interested in testing for Intel x86 and AMD64.

For example:

#if defined(__i386__)
// IA-32
#elif defined(__x86_64__)
// AMD64
#else
# error Unsupported architecture
#endif

However, I prefer putting these in a separate header and defining my own compiler-neutral macro.

Alex B
  • Use a standard macro (see my answer), not a compiler-specific one. – R.. GitHub STOP HELPING ICE Mar 11 '11 at 13:17
  • @R.. Yes, I know of that one, and it breaks with C++ code, so I usually stick with compiler-specific ones. – Alex B Mar 11 '11 at 13:41
  • Then use `ULONG_MAX` instead of `UINTPTR_MAX`. On any real-world unixy system they'll be the same. It's surely a lot more portable to assume `long` and pointers are the same size than to assume some particular compiler's macros are present. – R.. GitHub STOP HELPING ICE Mar 11 '11 at 20:38
  • @R.. And it's still wrong on 64-bit Windows. I prefer that my code fails to compile, rather than silently compile the wrong thing. – Alex B Mar 31 '11 at 23:35
  • I am downvoting. See my answer for the reason. Generally speaking none of these scenarios can be relied upon to give any reliable indication of whether a 64-bit address space and non-emulated 64-bit arithmetic are available, thus they are basically useless except in the context of a build system that is not agnostic. Thus it is preferred to set build macros so that the build system can select which variant is compiled. – Shelby Moore III Nov 24 '15 at 00:12
2

GLIBC itself uses this (in inttypes.h):

#if __WORDSIZE == 64
Jonathon Reinhart
1

Use the `UINTPTR_MAX` value to check the build type.

#include <stdio.h>
#include <stdint.h>

#if UINTPTR_MAX == 0xffffffffffffffffULL               
# define BUILD_64   1
#endif

int main(void) {

    #ifdef BUILD_64
    printf("Your Build is 64-bit\n");

    #else
    printf("Your Build is 32-bit\n");

    #endif
    return 0;
}
Haseeb Mir
0

The same program source can (and should be able to) be compiled on 64-bit computers, 32-bit computers, 36-bit computers, ...

So, just by looking at the source, if it is any good, you cannot tell how it will be compiled. If the source is not so good, it may be possible to guess what environment the programmer assumed it would be compiled for.

My answer to you is:

There is a way to check the number of bits a source file needs only for bad programs.

You should strive to make your programs work no matter how many bits they are compiled for.

pmg
  • If you need to use inline assembly, you have to use architecture-specific macros. – Alex B Mar 11 '11 at 13:12
  • If you need to use inline assembly, knowing the number of bits is not helpful. You need to know the name of the arch and adjust your build system/macros/etc. accordingly. – R.. GitHub STOP HELPING ICE Mar 11 '11 at 13:18
  • @R..: Meh, often the number of bits is good enough. Especially if you know your app is destined exclusively for x86 hardware, then knowing whether the compiler is 32 or 64 bit is often all you need to code the correct assembly source. – deltamind106 Jul 21 '15 at 19:07
  • @deltamind106: Are you really still producing x86-only products in 2015? How long do you expect that line of business to be around? :-) – R.. GitHub STOP HELPING ICE Jul 21 '15 at 22:46
  • I am upvoting. My answer goes into more detail as to why your answer is correct. – Shelby Moore III Nov 24 '15 at 00:14
-3

The question is ambiguous because it doesn't specify whether the requirement is for 64-bit pointers or 64-bit native integer arithmetic, or both.

Some other answers have indicated how to detect 64-bit pointers. Even though the question literally stipulates "compiled as", note this does not guarantee a 64-bit address space is available.

For many systems, detecting 64-bit pointers is equivalent to detecting that 64-bit arithmetic is not emulated, but that is not guaranteed for all potential scenarios. For example, although Emscripten emulates memory using Javascript arrays, which have a maximum size of 2^32 − 1, to provide compatibility for compiling C/C++ code targeting 64-bit, I believe Emscripten is agnostic about the limits (although I haven't tested this). Whereas, regardless of the limits stated by the compiler, Emscripten always uses 32-bit arithmetic. So it appears that Emscripten would take LLVM byte code that targeted 64-bit int and 64-bit pointers and emulate them to the best of Javascript's ability.

I had originally proposed detecting 64-bit "native" integers as follows, but as Patrick Schlüter pointed out, this only detects the rare case of ILP64:

#include <limits.h>
#if UINT_MAX >= 0xffffffffffffffff
// 64-bit "native" integers
#endif

So the correct answer is that generally you shouldn't be making any assumptions about the address space or arithmetic efficiency of the nebulous "64-bit" classification based on the values of the limits the compiler reports. Your compiler may support non-portable preprocessor flags for a specific data model or microprocessor architecture, but given the question targets GCC and per the Emscripten scenario (wherein Clang emulates GCC) even these might be misleading (although I haven't tested it).

Generally speaking, none of these scenarios can be relied upon to give any reliable indication of whether a 64-bit address space and non-emulated 64-bit arithmetic are available, thus they are basically useless (w.r.t. said attributes) except in the context of a build system that is not agnostic. Thus, for said attributes, it is preferred to set build macros so that the build system can select which variant is compiled.

Shelby Moore III
  • Except for classic Crays and the defunct HAL nobody uses ILP64 (SILP64 even for Cray). So trying to find out if `int` arithmetic is 64-bit has not much practical value. – Patrick Schlüter Nov 23 '15 at 16:43
  • @PatrickSchlüter you are correct that 32-bit `int` does not guarantee that `uint64_t` arithmetic is emulated with 32-bit arithmetic. I will correct my answer. – Shelby Moore III Nov 23 '15 at 22:54