I am running into a MemoryError at around 2 GB in my Python implementation of a split_train_val function on the MNIST data set.
Task Manager barely reaches 50% of total memory usage, including the other apps open on my machine. I have 16 GB of RAM.
I see most people point towards a 64- vs 32-bit or Python 2 vs 3 problem; however, both VS Code and Windows 10 are 64-bit, and View > Command Palette > Python: Select Interpreter shows that I am using Python 3.7.1 64-bit from anaconda3/conda.
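For reference, this is the quick check I would run in the interpreter to confirm its bitness (both modules are in the standard library):

import platform
import sys

# '64bit' / True on a 64-bit build; '32bit' / False on a 32-bit build,
# which on Windows is limited to roughly 2 GB of address space per process
print(platform.architecture()[0])
print(sys.maxsize > 2**32)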
I know the code itself works because I have used its output in Jupyter after importing the .py file.
import pandas as pd

def split_train_val(val_frac=0.3, size=1):
    """Splits the training set into training and validation sets.

    param val_frac: fraction of the total training set to be used for validation
    param size: fraction of rows to sample from the full training set
    """
    # Read converted csv
    X_raw = pd.read_csv('Data/csv/X_train.csv')
    Y_raw = pd.read_csv('Data/csv/y_train.csv')
    # Rename label column, concat to X set
    Y_raw.columns = ['Label']
    df = pd.concat([Y_raw, X_raw], axis=1).sample(frac=size)
    # Split training set into train and val
    N = df.shape[0]
    n = round(val_frac * N)
    train = df.iloc[n:, :]
    val = df.iloc[:n, :]
    x_train = train.drop(['Label'], axis=1)
    x_val = val.drop(['Label'], axis=1)
    y_train = train.Label
    y_val = val.Label
    # Return training and validation sets
    return x_train, y_train, x_val, y_val

x_train, y_train, x_val, y_val = split_train_val()
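For scale, this is roughly how I would measure how much memory the concatenated frame actually needs (same CSV paths as above; read_csv loads the integer pixel columns as int64 by default, which inflates the footprint):

import pandas as pd

# Load the same files and report the in-memory size of the combined frame;
# deep=True counts the actual bytes held by each column
X_raw = pd.read_csv('Data/csv/X_train.csv')
Y_raw = pd.read_csv('Data/csv/y_train.csv')
Y_raw.columns = ['Label']
df = pd.concat([Y_raw, X_raw], axis=1)
print(df.memory_usage(deep=True).sum() / 1024 ** 2, 'MiB')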
Error Message:
Traceback (most recent call last):
  File "preprocessing.py", line 71, in <module>
    x_train, y_train, x_val, y_val = split_train_val()
  File "preprocessing.py", line 53, in split_train_val
    df = pd.concat([Y_raw, X_raw], axis=1).sample(frac=size)
  File "C:\Users\...\AppData\Local\Programs\Python\Python37-32\lib\site-packages\pandas\core\reshape\concat.py", line 229, in concat
    return op.get_result()
  File "C:\Users\...\AppData\Local\Programs\Python\Python37-32\lib\site-packages\pandas\core\reshape\concat.py", line 426, in get_result
    copy=self.copy)
  File "C:\Users\...\AppData\Local\Programs\Python\Python37-32\lib\site-packages\pandas\core\internals\managers.py", line 2052, in concatenate_block_managers
    values = values.copy()
MemoryError
Lastly, I tried changing the jedi.memoryLimit setting to -1 as suggested by some VS Code documentation. This did not help either.
I have imported and then run the function in Jupyter, and I have also run this exact code in my Anaconda Prompt; neither results in any error.
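To compare the environments directly, I could drop something like this at the top of preprocessing.py and run it from VS Code, Jupyter, and the Anaconda Prompt to see which interpreter each front end actually uses:

import sys

# Shows exactly which python.exe is executing this file in each front end
print(sys.executable)
print(sys.version)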