In the Wikipedia article on C_data_types, there's a passage explaining how any of the integer types can be 64 bits wide:
The actual size of the integer types varies by implementation. The standard only requires size relations between the data types and minimum sizes for each data type:
The relation requirements are that the long long is not smaller than long, which is not smaller than int, which is not smaller than short. As char is always the smallest supported data type, no other data type (except bit-fields) can be smaller.
The minimum size for char is 8 bits, the minimum size for short and int is 16 bits, for long it is 32 bits, and long long must contain at least 64 bits.
The type int should be the integer type that the target processor works with most efficiently. This allows great flexibility: for example, all types can be 64-bit. However, several different integer width schemes (data models) are popular. Because the data model defines how different programs communicate, a uniform data model is used within a given operating system application interface.[8]
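For illustration, here's a minimal sketch I can compile with any C11 compiler to see what a given implementation actually chose. The static assertions just encode the size relations quoted above; the printed numbers depend on the platform's data model:

```c
#include <stdio.h>
#include <limits.h>

/* The ordering the quoted passage describes: each type is at least as
 * wide as the previous one. These hold on every conforming
 * implementation (_Static_assert requires C11). */
_Static_assert(sizeof(short) >= sizeof(char), "short >= char");
_Static_assert(sizeof(int) >= sizeof(short), "int >= short");
_Static_assert(sizeof(long) >= sizeof(int), "long >= int");
_Static_assert(sizeof(long long) >= sizeof(long), "long long >= long");

int main(void)
{
    /* The printed values depend on the data model: a typical LP64
     * system (64-bit Linux/macOS) prints 1 2 4 8 8, while LLP64
     * (64-bit Windows) prints 1 2 4 4 8. */
    printf("char:      %zu (CHAR_BIT = %d)\n", sizeof(char), CHAR_BIT);
    printf("short:     %zu\n", sizeof(short));
    printf("int:       %zu\n", sizeof(int));
    printf("long:      %zu\n", sizeof(long));
    printf("long long: %zu\n", sizeof(long long));
    return 0;
}
```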
Yet I consistently observe fixed data type sizes when I write C on most machines. Is the article simply saying that most operating systems enforce a uniform data model, which fixes the type sizes so that programs and machines can communicate with each other easily?
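To make that concrete, my understanding is that a sketch like the following, rebuilt against a different ABI (e.g. `gcc -m32` versus `gcc -m64` on x86, assuming a typical toolchain), would report a different data model even though the C source is unchanged:

```c
#include <stdio.h>

int main(void)
{
    /* Rough classification of the common data models by the sizes
     * the ABI fixed at compile time. Covers only the usual cases. */
    if (sizeof(long) == 4 && sizeof(void *) == 4)
        puts("ILP32 (typical 32-bit systems)");
    else if (sizeof(long) == 8 && sizeof(void *) == 8)
        puts("LP64 (typical 64-bit Unix-like systems)");
    else if (sizeof(long) == 4 && sizeof(void *) == 8)
        puts("LLP64 (64-bit Windows)");
    else
        puts("some other data model");
    return 0;
}
```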