
I have a data frame, data, with shape (10000, 257). I need to preprocess this dataframe so that I can use it in an LSTM, which requires a 3-dimensional input of shape (nrows, ntimesteps, nfeatures). I am working with the following code snippet:

import numpy as np

def univariate_processing(variable, window):
    # create empty 2D matrix from variable
    V = np.empty((len(variable) - window + 1, window))

    # take each row/time window
    for i in range(V.shape[0]):
        V[i, :] = variable[i : i + window]

    V = V.astype(np.float32)  # set common data type
    return V

def RNN_regprep(df, y, len_input, len_pred):  # , test_size):
    # create 3D matrix for multivariate input
    X = np.empty((df.shape[0] - len_input + 1, len_input, df.shape[1]))

    # Iterate univariate preprocessing on all variables - store them in X
    for i in range(df.shape[1]):
        X[:, :, i] = univariate_processing(df[:, i], len_input)

    # create 2D matrix of y sequences
    y = y.reshape((-1,))  # reshape to 1D if needed
    Y = univariate_processing(y, len_pred)

    ## Trim dataframes as explained
    X = X[:-(len_pred + 1), :, :]
    Y = Y[len_input:-1, :]

    # Set common datatype
    X = X.astype(np.float32)
    Y = Y.astype(np.float32)

    return X, Y

X, y = RNN_regprep(data, label, len_input=200, len_pred=1)

Running this produces the following error:

numpy.core._exceptions._ArrayMemoryError: Unable to allocate 28.9 GiB for an array with shape (10000, 200, 257) and data type float64

I do understand that this is really a memory limitation on my server. Is there anything I can change within my code to avoid this error, or at least reduce the memory consumption?


1 Answer


This is what windowed views are for. Using my recipe here:
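The recipe itself sits behind that link, so here is a minimal sketch of what such a window_nd can look like; it assumes a step of 1 along a single windowed axis and builds the view with np.lib.stride_tricks.as_strided:

import numpy as np
from numpy.lib.stride_tricks import as_strided

def window_nd(a, window, axis=0):
    # sliding windows of length `window` along `axis`, step 1, returned as a view
    a = np.asarray(a)
    n = a.shape[axis] - window + 1  # number of windows
    # replace the windowed axis with (n, window); the window dimension
    # reuses that axis' stride, so no data is copied
    shape = a.shape[:axis] + (n, window) + a.shape[axis + 1:]
    strides = a.strides[:axis] + (a.strides[axis],) * 2 + a.strides[axis + 1:]
    return as_strided(a, shape=shape, strides=strides)

Applied to an array of your shape: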

var = np.random.rand(10000, 257)
w = window_nd(var, 200, axis=0)

Now you have a windowed view over var:

w.shape
Out[]: (9801, 200, 257)

But, importantly, it's using the exact same data as var, just looking into it in a windowed way:

w.__array_interface__['data'] #This is the memory's starting address
Out[]: (1448954720320, False)

var.__array_interface__['data']
Out[]: (1448954720320, False)

np.shares_memory(var, w)
Out[]: True

w.base.base.base is var  #(lots of rearranging views in the background)
Out[]: True

So you can do:

def univariate_processing(variable, window):
    return window_nd(variable, window, axis=0)

That should significantly reduce your memory allocation, no "magic" required :)
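One caveat with the rest of your code: the .astype(np.float32) calls at the end of RNN_regprep copy the full 3D array, which would re-allocate exactly the memory you just saved. Casting the small 2D and 1D sources up front avoids that. A sketch of RNN_regprep rewritten over views, with the same trimming as your version and assuming the window_nd sketched above:

def RNN_regprep(df, y, len_input, len_pred):
    # cast the small 2D/1D sources once; casting the 3D result would copy it
    data = np.asarray(df, dtype=np.float32)
    y = np.asarray(y, dtype=np.float32).reshape(-1)

    # windowed views: X is (n_windows, len_input, n_features),
    # Y is (n_windows, len_pred); neither copies the underlying data
    X = window_nd(data, len_input, axis=0)
    Y = window_nd(y, len_pred, axis=0)

    # trim as in the original code - slicing keeps these as views
    X = X[:-(len_pred + 1)]
    Y = Y[len_input:-1]
    return X, Y

Whatever consumes X afterwards may still make its own copy, but the preprocessing step itself stays at roughly the size of the original array instead of tens of GiB.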

You can also try

from skimage.util import view_as_windows
w = np.squeeze(view_as_windows(var, (200, 1)))

which does almost the same thing. In this case, your function would be:

def univariate_processing(variable, window):
    from skimage.util import view_as_windows
    window = (window,) + (1,) * (len(variable.shape) - 1)
    return np.squeeze(view_as_windows(variable, window))
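One thing to double-check with this variant: view_as_windows appends the window dimensions at the end of the output shape, so for the (10000, 257) example the squeezed result comes out as (9801, 257, 200) rather than (9801, 200, 257). If that is what you see, a np.moveaxis (still a view, no copy) restores the (nrows, ntimesteps, nfeatures) layout:

w = np.squeeze(view_as_windows(var, (200, 1)))  # (9801, 257, 200) after squeeze
w = np.moveaxis(w, 1, 2)                        # (9801, 200, 257), still a view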