I have a huge CSV file which cannot be loaded into memory. Transforming it to libsvm format may save some memory. There are many NaNs in the CSV file. If I read lines and store them as np.array, with np.nan as NULL, will the array still occupy too much memory? Does the np.nan in the array also occupy memory?
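For context, the kind of chunked conversion I have in mind looks roughly like this (only a sketch: the file names, the "label" column and the chunk size are placeholders, and it assumes the NaNs can be treated as zeros so the libsvm format can simply drop them):

```python
import pandas as pd
from sklearn.datasets import dump_svmlight_file

with open("data.libsvm", "wb") as out:
    # Read the CSV in fixed-size chunks so the whole file is never in memory.
    for chunk in pd.read_csv("huge.csv", chunksize=100_000):
        y = chunk["label"].values
        # Replace NaN with 0 so those entries are omitted from the sparse libsvm output.
        X = chunk.drop(columns=["label"]).fillna(0).values
        dump_svmlight_file(X, y, out)
```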
- *Does the np.nan in array also occupy memory?* A `numpy` array is a homogeneous fixed-size record data structure, i.e. the same amount of memory is allocated for each of its elements (e.g. 4 bytes for `float32` and 8 bytes for `float64`). `numpy.nan` is simply represented by a special (reserved) bit pattern. – Leon Jun 19 '17 at 07:26
- Numpy arrays are contiguous (assuming C ordering and no transpose) blocks of memory. No matter what you store in it, it will occupy space equivalent to its shape and data type. Scipy has sparse matrices that you could use to ignore NaNs. – Imanol Luengo Jun 19 '17 at 07:26
- You might find this [question](https://stackoverflow.com/questions/1938894/csv-to-sparse-matrix-in-python) helpful, which constructs a sparse scipy matrix from a CSV. – Jan Trienes Jun 19 '17 at 07:31
- `scikit-learn` does work with `(lib)svm`: http://scikit-learn.org/stable/modules/svm.html. But you'll need to read its docs to see whether that helps with your memory issues. – hpaulj Jun 19 '17 at 08:13
3 Answers
When working with floating point representations of numbers, non-numeric values (NaN and inf) are also represented by a specific binary pattern occupying the same number of bits as any numeric floating point value. Therefore, NaNs occupy the same amount of memory as any other number in the array.
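A quick way to verify this, assuming NumPy's default `float64` dtype (my own illustration, not part of the original answer):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([np.nan, np.nan, np.nan])

print(a.itemsize, b.itemsize)  # 8 8  -> 8 bytes per float64 element either way
print(a.nbytes, b.nbytes)      # 24 24 -> identical total buffer size
```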

As far as I know, yes: NaN and zero values occupy the same memory as any other value. However, you can address your problem in other ways:
Have you tried using a sparse matrix? They are intended for data with many zero values, and memory consumption is optimized for that case.
There you have some info about SVMs and sparse matrices; if you have further questions, just ask.
Edited to provide an answer as well as a solution.
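For example, a minimal sketch along these lines (it assumes the NaNs really stand for missing/zero values, since a sparse format only drops zeros; in practice you would build the sparse matrix chunk by chunk rather than from one big dense array):

```python
import numpy as np
from scipy import sparse

dense = np.array([[1.0, np.nan, 0.0],
                  [np.nan, 2.0, np.nan]])

dense = np.nan_to_num(dense)   # replace NaN with 0.0
sp = sparse.csr_matrix(dense)  # only the non-zero entries are stored

print(sp.nnz)          # 2 stored values instead of 6
print(sp.data.nbytes)  # 16 bytes for the stored values only
```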

- I am not sure the sparse matrix will work with xgboost, because my goal is to train a model on it. – yanachen Jun 19 '17 at 07:30
- Do not use the sparse matrix code unless your learning/training code explicitly says you can. Some scikit-learn methods do. – hpaulj Jun 19 '17 at 07:45
- I have sometimes used one to train an SVM with scipy; if you are interested, I can look for my code and post it. – Kailegh Jun 19 '17 at 07:51
According to the `getsizeof()` function from the `sys` module, it does. A simple and fast example:

```python
import sys
import numpy as np

x = np.array([1, 2, 3])
y = np.array([1, np.nan, 3])

x_size = sys.getsizeof(x)
y_size = sys.getsizeof(y)

print(x_size)
print(y_size)
print(y_size == x_size)
```

This should print out

```
120
120
True
```

so my conclusion was that a NaN uses as much memory as a normal entry.

Instead you could use sparse matrices (`scipy.sparse`), which do not store zero/null entries at all and are therefore more memory efficient. But SciPy strongly discourages applying NumPy functions directly to sparse matrices (https://docs.scipy.org/doc/scipy/reference/sparse.html), since NumPy may not handle them correctly.
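As a small illustration of that recommendation (only a sketch, not part of the original answer): prefer the sparse matrix's own methods, and convert explicitly when a dense array is really needed.

```python
import numpy as np
from scipy import sparse

m = sparse.csr_matrix(np.array([[0.0, 1.0],
                                [2.0, 0.0]]))

print(m.sum())      # 3.0 -- the matrix's own reduction method
print(m.dot(m.T))   # sparse-aware matrix product
print(m.toarray())  # convert explicitly if a dense ndarray is required
```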

- On my machine, this prints `108, 120, False`, because `x.dtype == np.int32`. To make this a useful example, you should use `1.0, 2.0, 3.0`, which will make the arrays have the same type. – Eric Jun 19 '17 at 11:26
- Okay, sorry, I didn't know that there might be a difference between machines for that example. To be fair, on my machine it works like that. Furthermore, `x.dtype == np.int64` and analogously `y.dtype == np.float64` in my case. – Marvin Taschenberger Jun 20 '17 at 06:56