
Does C++'s long long int being 64 bits limit this program to running on a 64-bit OS only, or would it still handle a 64-bit value when executed on a 32-bit OS?

I have a program that passes an integer seed to a random generator, using the seed as a key for the starting position within the random algorithm. I compiled my code with long long int and it compiles and runs with no problems; however, my system is running Windows 7 64-bit and I don't have a 32-bit system to test it on. While this program will mainly be run on my 64-bit system, the reason behind this question is to understand whether long long int is only for a 64-bit OS, or whether 32-bit systems can handle 64-bit ints too, say by taking 2 or more clock cycles to process 64 bits where a 64-bit CPU/OS can handle it in fewer cycles. Maybe I am comparing apples to oranges with a 64-bit int vs. a 64-bit CPU/OS? I'm thinking that even if a 32-bit CPU/OS can handle 64-bit ints, it may be inefficient at them?

Dave L
  • Please share the program for better understanding – zoho_deployment Jan 23 '16 at 13:46
  • useful to see how 64 bit types and operations are implemented in a 32-bit environment: http://stackoverflow.com/a/20773254/2805305 – bolov Jan 23 '16 at 14:12
  • take a look at http://en.cppreference.com/w/cpp/types/integer when you need a variable which must be large enough to handle particular number of bits, it's better to use those 'numbered' int types. when you simply need an integer then you should stay with 'ordinary' int types, but the only assumption you can make in this case, is that sizeof(int) <= sizeof(long) <= sizeof(long long), notice <=, not <. – user3159253 Jan 23 '16 at 15:10
  • It's based on this source, but the IF logic has been drastically corrected to just tap into ASCII calls instead of assigning my own mapping for characters to be called from an array. I just wanted to expand the scope of keys that can be used to influence the algorithm's starting position for the random generator. Not knowing the total length of the random generator algorithm before the sequence repeats, a long long int seed might be overkill? http://stackoverflow.com/questions/34698580/trying-to-figure-out-why-when-writing-to-file-the-last-character-written-is-bei?noredirect=1#comment57157130_34698580 – Dave L Jan 23 '16 at 16:51

7 Answers


Short answer - No.

The terms 32-bit and 64-bit refer to the way a computer's processor handles information. The 64-bit version of Windows handles large amounts of random-access memory (RAM) more effectively than a 32-bit system.
If you have a long long int, it simply means that it takes up 64 bits in the memory.

Refer to What is the difference between a 32-bit and 64-bit processor? for a complete understanding of the differences between 32-bit and 64-bit processors.

Also take a look at - https://en.wikipedia.org/wiki/64-bit_computing

novice

In short, no.

Using long long types does not limit your choice of operating system. If your compiler supports long long and targets a 32-bit operating system (or even a 16-bit operating system), then the compiler or library sorts out the details of how to support longer types.

Using long long types does limit you to compilers (and libraries) that support such a type, no matter what operating system you use. In C++, the 2011 standard introduced them, but some older compilers support long long types as an extension (e.g. because C has had them since 1999). So, compilers predating the 2011 standard may not support long long types.

Peter

The integer size in memory is independent of the number of bits a CPU uses. However, the length of an int can vary, because a processor tries to run in its most efficient way, so on a 32-bit processor an int will typically default to 32 bits. Using long long int guarantees at least 64 bits, and the fixed-width types std::int64_t and std::uint64_t (from <cstdint>) guarantee exactly 64 bits on every platform that provides them. It's good to keep that in mind.


You are worried if your program will run the same when you run it on a 32-bit machine. You shouldn't.

If it's compiled as a 32-bit application it will always run the same, it just won't use the features of 64-bit processors. So you might have guessed long long was fine with 32-bit compilers because otherwise you wouldn't be able to build your program.

If it's compiled as a 64-bit application, it will not run on 32-bit machines at all.

kamilk
  • So if I'm using a 32-bit IDE and running on a 64-bit OS, does the IDE being 32-bit mean that whatever is compiled with it is always 32-bit compatible? – Dave L Jan 23 '16 at 16:41
  • @DaveL The IDE doesn't matter. When you compile a program, you specify whether you want it to be 32- or 64-bit. An IDE would have an option for it in the project settings; when running a compiler from the command line, you specify it as a parameter. There is always some default setting, depending on the compiler. The Visual Studio IDE is 32-bit but you can compile both 32- and 64-bit programs with it. – kamilk Jan 23 '16 at 17:13
  • Cool thanks for clarifying that. =) I'm using Bloodshed Dev C++ 4.9.9.2 and when I get home I will have to poke around and see where the option is. – Dave L Jan 23 '16 at 20:38
  • Just an edit to my past post here about Bloodshed Dev C++ 4.9.9.2, it looks like there are at least 350 bugs that I was not aware of so I found online a reference to Orwell Dev-C++ as the fork that spawned off from the original project as a suggestion to move over to. I have been using Bloodshed Dev-C++ for years. Never considered switching to newer until I read an article about it lacking C++11 and Orwell Dev-C++ supports it. Prior to this I was using Borland C++ 5.02 for many years that came in the back of a book I bought many years ago ( 1998 @ Waldenbooks ), switched to Dev C++ in 2007. =) – Dave L Jan 23 '16 at 21:37

No; it depends entirely on the data model used by the OS and the compiler. Since C99 (and, for C++, C++11), long long int is at least 64 bits. In fact, most Linux/Unix implementations define long as a 64-bit type, but it is only 32 bits on Windows, because they use different data models (LP64 vs. LLP64). Have a look at the data models described under 64-bit computing.

Abhishek Rathore

Edit: long long int always exists in a C++11 implementation, and it has at least 64 bits because the C standard in 5.2.4.2.1 (not the C++ standard) demands it (by defining a minimum value of LLONG_MAX). That means you should be fine. On 32 bit systems the library may be slow or not available though.

Peter - Reinstate Monica

long long int is not guaranteed to be a 64 bit integer. At least not by the C++ standard:

3.9.1 Fundamental types [basic.fundamental]

 ...

2 There are five standard signed integer types : “signed char”, “short int”, “int”, “long int”, and “long long int”. In this list, each type provides at least as much storage as those preceding it in the list. There may also be implementation-defined extended signed integer types. The standard and extended signed integer types are collectively called signed integer types. Plain ints have the natural size suggested by the architecture of the execution environment; the other signed integer types are provided to meet special needs.

In other words, the only guarantee you have is that a long long int will be at least as big as a long int. That's it.

Now, on most modern execution environments, a long long int is a 64 bit value, and this is true even on native 32-bit hardware platforms.

But, as far as the C++ standard goes, you have no guarantees whatsoever. So, you might find that on a particular 32-bit platform, a long int and a long long int are both 32 bit integer values. And this will be perfectly compliant with the C++ standard.

Sam Varshavchik
  • `long long int` must be at least 64 bits in size. – interjay Jan 23 '16 at 14:31
  • What is your cite for that? The C++ Standard seems pretty clear on this. – Sam Varshavchik Jan 23 '16 at 16:20
  • Some minimal googling gives many sources, some of them citing the standard: http://stackoverflow.com/questions/589575/what-does-the-c-standard-state-the-size-of-int-long-type-to-be http://en.cppreference.com/w/cpp/language/types https://en.wikipedia.org/wiki/C_data_types – interjay Jan 23 '16 at 17:24
  • `std::numeric_limits<long long>::min()` (aka `LLONG_MIN` in C, the implementation-defined minimum value of a `long long int`) is required to be `-9223372036854775807` or less, and `std::numeric_limits<long long>::max()` (aka `LLONG_MAX`) is required to be `9223372036854775807` or more. A basic property of all basic integral types (`char`, `int`, ... `long long`, etc.) is the ability to uniquely represent all values between their minimum and maximum. No 32-bit type can ever uniquely represent all values between `-9223372036854775807` and `9223372036854775807`. – Peter Jan 23 '16 at 21:33
  • Just found an oddity when testing the input with IF statement to avoid an overflow for IF (seed1<'-9223372036854775807' || seed1>'9223372036854775807'){ //Require user to input a number within range.} It seems that I am unable to get any numbers greater than 10 digits in length, because any in 11 or greater digit length is flagged as acting as if it exceeds 9223372036854775807 which is 19 digits in length. Maybe its an issue of Bloodshed Dev C++ 4.9.9.2 one of the 350+ bugs maybe or maybe its caused by some other reason. I am not casting the long long int in any way to cause it to lose max val – Dave L Jan 23 '16 at 23:31