A ValueError indicates that the requested size is too large to allocate, not that there is not enough memory. On my laptop, running 64-bit Python, I can allocate the array if I use a smaller dtype:
In [16]: a=np.arange(2708000000)
---------------------------------------------------------------------------
MemoryError Traceback (most recent call last)
<ipython-input-16-aaa1699e97c5> in <module>()
----> 1 a=np.arange(2708000000)
MemoryError:
# Note I don't get a ValueError
In [17]: a = np.arange(2708000000, dtype=np.int8)
In [18]: a.nbytes
Out[18]: 2708000000
In [19]: a.nbytes * 1e-6
Out[19]: 2708.0
In your case, arange uses the default int64 dtype, which is 8 bytes per element instead of 1, so the array needs 8 times more memory, or around 21.7 GB. A 32-bit process can only address around 4 GB of memory.
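The byte counts can be checked directly; the element count below is the one from the session above, and the 4 GB ceiling is just 2**32 bytes:

```python
import numpy as np

n = 2_708_000_000  # number of elements, as in the arange calls above

# bytes required = itemsize of the dtype * number of elements
int8_bytes = np.dtype(np.int8).itemsize * n    # 1 byte per element
int64_bytes = np.dtype(np.int64).itemsize * n  # 8 bytes per element

print(int8_bytes / 1e9)   # ~2.7 GB
print(int64_bytes / 1e9)  # ~21.7 GB

# a 32-bit process can address at most 2**32 bytes (~4.3 GB)
print(2**32 / 1e9)
```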
The underlying reason is the size of the pointers used to address the data, and how many distinct values you can represent with that many bits:
In [26]: np.iinfo(np.int32)
Out[26]: iinfo(min=-2147483648, max=2147483647, dtype=int32)
In [27]: np.iinfo(np.int64)
Out[27]: iinfo(min=-9223372036854775808, max=9223372036854775807, dtype=int64)
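As a small check (not part of the original session), you can inspect np.intp, the integer type NumPy uses for indexing and array sizes; its width follows the platform's pointer size, so it is 8 bytes on a 64-bit interpreter and 4 bytes on a 32-bit one:

```python
import numpy as np

# np.intp matches the platform pointer width, so it bounds
# how large an array index the interpreter can represent
itemsize = np.dtype(np.intp).itemsize
print(itemsize)                 # 8 on a 64-bit interpreter, 4 on 32-bit
print(np.iinfo(np.intp).max)    # largest representable index
```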
Note that I can replicate your ValueError if I try to create an absurdly large array:
In [29]: a = np.arange(1e350)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-29-230a6916f777> in <module>()
----> 1 a = np.arange(1e350)
ValueError: Maximum allowed size exceeded
If your machine has a lot of memory, as you said, it will be a 64-bit machine, so you should install 64-bit Python to be able to use it. On the other hand, for datasets this big, you should consider the possibility of using out-of-core computations.
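As a minimal sketch of the out-of-core idea, np.memmap keeps the array on disk and pages data in on demand, so the array can be larger than available RAM. The path, shape, and chunk size below are illustrative, not from the original question:

```python
import os
import tempfile

import numpy as np

# a memory-mapped array lives on disk; only the pages that are
# touched get loaded into RAM
path = os.path.join(tempfile.mkdtemp(), "big_array.dat")
a = np.memmap(path, dtype=np.int64, mode="w+", shape=(1_000_000,))

# fill the array chunk by chunk instead of materializing it all at once
chunk = 100_000
for start in range(0, a.shape[0], chunk):
    a[start:start + chunk] = np.arange(start, start + chunk, dtype=np.int64)

a.flush()  # push pending writes to disk
print(a[-1])  # 999999
```

Libraries such as dask build on the same principle, splitting a large array into chunks and processing them one at a time.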