I have a neural network that takes in arrays of 1024 doubles and returns a single value. I've trained it, and now I want to use it on a set of test data.
The file that I want to test it on is very large (8.2 GB), and whenever I try to load it in Google Colab the session runs out of RAM and crashes.
I'm reading it in using read_csv as follows:
import pandas as pd
testsignals = pd.read_csv("/content/drive/My Drive/MyFile.txt", delimiter="\n", header=None)
What I'd like to know is whether there's a more memory-efficient way of reading in the file, or whether I'll just have to work with smaller data sets.
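One thing I've considered is reading the file in chunks with read_csv's chunksize parameter and running the network on each chunk, so the whole 8.2 GB never has to sit in memory at once. I haven't been able to test this yet, I'm not sure whether I still need the delimiter="\n" argument, and the chunk size of 10,000 rows is just a guess:

import pandas as pd

# chunksize makes read_csv return an iterator of DataFrames instead of one big frame
reader = pd.read_csv("/content/drive/My Drive/MyFile.txt", header=None, chunksize=10000)

for chunk in reader:
    values = chunk.to_numpy()  # one chunk's worth of rows as a NumPy array
    # here I would run my trained network on `values` and keep only its outputs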
Edit: following Prune's comment, I looked at the advice from the other commenters and tried this:
import csv

testsignals = []
with open('/content/drive/MyDrive/MyFile') as csvfile:
    reader = csv.reader(csvfile)
    for line in reader:
        testsignals.append(line)  # still accumulates every row in memory
But it still exceeds the available RAM.
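I'm starting to wonder whether the real problem is that I'm appending every row to a list, and whether I should instead run the network on each row as it's read and keep only the outputs. Something along these lines is what I have in mind, but it's only a sketch, and the float conversion assumes each row parses into the 1024 values of one signal:

import csv

outputs = []  # one result per signal instead of the raw 1024-value rows

with open('/content/drive/MyDrive/MyFile') as csvfile:
    reader = csv.reader(csvfile)
    for line in reader:
        signal = [float(x) for x in line]  # assuming each row is one signal's values
        # here I'd run my trained network on `signal` and append only its single output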
If anyone can give me some help I'd be very grateful!