
I’ve noticed that whenever any integer surpasses 2^31-1, my number-heavy code suffers a large slowdown, even though I’m using a 64-bit build of Python on a 64-bit version of Windows. This seems to be true on both Python 2.7 and Python 3. I’ve read that Windows made its `long` type 32 bits, but that doesn’t suggest to me that it’s impossible to use 64-bit numbers.

Is there a way to use 64-bit integers, whether through a class or module, or even a different build of Python?
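For illustration, this is roughly the kind of micro-benchmark where I see the difference (the variable names and repeat count are just placeholders; timings obviously vary by machine):

import timeit

below = 2**31 - 10   # just below the 2**31 - 1 boundary
above = 2**31 + 10   # just above it, where the slowdown appears

# Time the same trivial arithmetic on either side of the boundary.
print(timeit.timeit("x + 1", setup="x = %d" % below, number=10**7))
print(timeit.timeit("x + 1", setup="x = %d" % above, number=10**7))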

Status

3 Answers

import numpy
my_array = numpy.array(my_list, dtype=numpy.int64)

maybe?
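For example, a rough sketch of how that might look (the input list here is made-up data just for illustration):

import numpy

my_list = list(range(10**6))                        # hypothetical input data
my_array = numpy.array(my_list, dtype=numpy.int64)

# Whole-array arithmetic runs on fixed-width 64-bit machine integers rather than
# per-object Python integers (note: int64 overflow wraps instead of promoting).
result = my_array * 3 + 1
print(result.dtype)   # int64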

Joran Beasley
  • Looking at numpy and scipy is the correct answer, especially since the questioner seems interested in the "larger world" of higher-performance computation in Python. – Peter M Jul 23 '17 at 22:50

I'm not aware of any Windows build of Python 2.7 that uses a 64-bit native type for `int`. On Windows, all the common C compilers define `long` as a 32-bit type, and changing Python to use `long long` for the internal representation of `int` would likely break extension modules.
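As a quick sanity check, something along these lines shows the underlying C type sizes on a given build (just an illustration, using ctypes):

import ctypes
import sys

print(ctypes.sizeof(ctypes.c_long))      # 4 on Windows (LLP64), even in 64-bit builds
print(ctypes.sizeof(ctypes.c_longlong))  # 8 everywhere
print(sys.maxsize)                       # 2**63 - 1 on any 64-bit build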

On Python 3.x, the only integer type is the arbitrary-precision type (known as `long` under Python 2.x). On 64-bit systems, the arbitrary-precision type stores numbers in 30-bit chunks (i.e. it works in base 2^30). On 32-bit systems, it uses 15-bit chunks (base 2^15). The values 15 and 30 would be difficult to change.
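On Python 3 you can see the chunk size used by your particular build via `sys.int_info`, for example:

import sys

# Reports the internal "digit" size of the arbitrary-precision int type.
print(sys.int_info)   # e.g. bits_per_digit=30, sizeof_digit=4 on a 64-bit build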

As for external libraries: I maintain the gmpy2 library. It provides access to the arbitrary-precision GMP/MPIR library. The gmpy2.mpz integer type is usually more efficient once numbers reach ~128 bits in length. YMMV.
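A minimal sketch of using it, assuming gmpy2 is installed (the values here are arbitrary):

from gmpy2 import mpz

# mpz is an arbitrary-precision integer backed by GMP/MPIR.
x = mpz(2) ** 300 + 12345
y = x * x + 1
print(y.bit_length())   # mpz supports the usual int-like methods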

casevh
  • Yes, but even in 3.x there are a few places where `PyLong` values get unboxed into machine `long` values to improve performance (e.g. to implement built-in `sum`). In these cases the Windows build would have to unbox to `long long` values by calling `PyLong_AsLongLongAndOverflow`. That makes the code harder to maintain. – Eryk Sun May 21 '15 at 00:29

Python 2 has four numeric types; for integers there are `int` and `long`. Long integers have unlimited precision. You get a `long` automatically when you enter a big enough number, or you can specify one explicitly by appending an "L" (or lowercase "l", as below) suffix:

>>> s = 1000
>>> type(s)
<type 'int'>
>>> s = 1000l
>>> type(s)
<type 'long'>
Wyrmwood
  • I don't think this answers the question; there's already an awareness of the difference between `int` and `long`. The question is how to get Python itself to utilize the 64-bit capabilities of the processor for the `int` type so that you don't get the slowness inherent in `long`. – Mark Ransom May 20 '15 at 18:34
  • Maybe. The first link shows that long takes "longer" because the calculations are "bigger". Makes sense. The second link shows that maxint is 32-bit on Windows (which jibes with the Python documentation). The question was how to "force 64-bit", not how to make it faster, although there is some mention of speed in the pretext, so the OP appears to suspect that the longer calculation is due to 32 bit being used by default. Like I said, "maybe" :) – Wyrmwood May 20 '15 at 18:41