
For my project I have a large amount of data, about 60GB spread across .npy files, each holding about 1GB and containing about 750k records and labels.

Each record is 345 float32 values and each label is 5 float32 values.

I read the tensorflow dataset documentation and the queues / threads documentation as well, but I can't figure out the best way to handle the input for training, and then how to save the model and weights for future prediction.

My model is pretty straightforward and looks like this:

import tensorflow as tf

# Inputs: each record has 345 float32 features, each label has 5 float32 values
x = tf.placeholder(tf.float32, [None, 345], name='x')
y = tf.placeholder(tf.float32, [None, 5], name='y')
wi, bi = weight_and_bias(345, 2048)
hidden_fc = tf.nn.sigmoid(tf.matmul(x, wi) + bi)
wo, bo = weight_and_bias(2048, 5)
out_fc = tf.nn.sigmoid(tf.matmul(hidden_fc, wo) + bo)
# Mean squared error loss, minimized with Adam
loss = tf.reduce_mean(tf.squared_difference(y, out_fc))
train_op = tf.train.AdamOptimizer().minimize(loss)
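
Here weight_and_bias is just a small helper that I haven't shown; a minimal sketch of such a helper (with truncated-normal initialization as one possible choice) would be:

# Sketch of a weight/bias helper: returns a trainable weight matrix and bias vector
def weight_and_bias(n_in, n_out):
    w = tf.Variable(tf.truncated_normal([n_in, n_out], stddev=0.1))
    b = tf.Variable(tf.constant(0.1, shape=[n_out]))
    return w, b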

The way I was training my neural net was to read the files one at a time in random order, then use a shuffled numpy index array over each file and manually build each batch to feed the train_op via feed_dict. From everything I read this is very inefficient and I should replace it with datasets or queues and threads, but as I said the documentation was of no help.
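
Roughly, my current loop looks like this (a simplified sketch; num_epochs, batch_size, file_names and sess are placeholders):

# Simplified sketch of the current manual-batching approach
for epoch in range(num_epochs):
    np.random.shuffle(file_names)
    for fname in file_names:
        with open(fname, 'rb') as fp:
            data = np.load(fp)      # load one ~1GB file at a time
            labels = np.load(fp)
        idx = np.random.permutation(len(data))
        for start in range(0, len(data), batch_size):
            batch = idx[start:start + batch_size]
            sess.run(train_op, feed_dict={x: data[batch], y: labels[batch]})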

So, what is the best way to handle large amounts of data in tensorflow?

Also, for reference, my data was saved to each numpy file in a two-step operation:

import numpy as np

with open('datafile1.npy', 'wb') as fp:
    np.save(fp, data)
    np.save(fp, labels)
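
Reading them back just reverses that; np.load returns the arrays in the order they were saved:

with open('datafile1.npy', 'rb') as fp:
    data = np.load(fp)    # first array saved: records
    labels = np.load(fp)  # second array saved: labels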
  • This is probably exactly what you are looking for: [Import Data (with the `Dataset` API)](https://www.tensorflow.org/programmers_guide/datasets) – kww Oct 19 '17 at 00:04
  • With large datasets you should not pass everything at once; use mini-batches, and more importantly, don't bring everything into memory in the first place. – ParmuTownley Oct 19 '17 at 06:36
  • Karhy, I did read the dataset documentation, but most of it seems to assume the data is preloaded into memory. Paramdeep, I am using mini-batches; this is just how I load the data from the numpy files, and later I shuffle it and manually build mini-batches to feed the x and y placeholders. This is exactly what I am trying to do more efficiently. – Joao Paulo Farias Oct 20 '17 at 00:51

1 Answer


The utilities for .npy files indeed load the whole array into memory. I'd recommend converting all of your numpy arrays to the TFRecords format and using those files in training. This is one of the most efficient ways to read a large dataset in tensorflow.

Convert to TFRecords

def array_to_tfrecords(X, y, output_file):
  # Write one Example per (record, label) pair so examples can be parsed individually
  writer = tf.python_io.TFRecordWriter(output_file)
  for record, label in zip(X, y):
    feature = {
      'X': tf.train.Feature(float_list=tf.train.FloatList(value=record)),
      'y': tf.train.Feature(float_list=tf.train.FloatList(value=label))
    }
    example = tf.train.Example(features=tf.train.Features(feature=feature))
    writer.write(example.SerializeToString())
  writer.close()
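
For example, you could convert your existing files like this (the file names and count are just placeholders for your layout):

import numpy as np

# Convert each .npy file (records + labels) into one TFRecord file
for i in range(1, 61):
  with open('datafile%d.npy' % i, 'rb') as fp:
    data = np.load(fp)
    labels = np.load(fp)
  array_to_tfrecords(data, labels, 'file%d.tfrecord' % i)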

A complete example that deals with images can be found here.

Read TFRecordDataset

def parse_proto(example_proto):
  # Each serialized Example holds one 345-float record and one 5-float label
  features = {
    'X': tf.FixedLenFeature((345,), tf.float32),
    'y': tf.FixedLenFeature((5,), tf.float32),
  }
  parsed_features = tf.parse_single_example(example_proto, features)
  return parsed_features['X'], parsed_features['y']

def read_tfrecords(file_names=("file1.tfrecord", "file2.tfrecord", "file3.tfrecord"),
                   buffer_size=10000,
                   batch_size=100):
  dataset = tf.contrib.data.TFRecordDataset(file_names)
  dataset = dataset.map(parse_proto)       # deserialize each example
  dataset = dataset.shuffle(buffer_size)   # shuffle within a buffer, not the whole 60GB
  dataset = dataset.repeat()               # loop over the data indefinitely
  dataset = dataset.batch(batch_size)
  iterator = tf.contrib.data.Iterator.from_structure(dataset.output_types, dataset.output_shapes)
  init_op = iterator.make_initializer(dataset)
  return iterator, init_op
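
A rough sketch of how to drive training from the iterator and save the weights afterwards (file names, step count and checkpoint path are placeholders; x, y and train_op are from your model):

iterator, init_op = read_tfrecords(file_names=['file%d.tfrecord' % i for i in range(1, 61)])
next_X, next_y = iterator.get_next()

saver = tf.train.Saver()
with tf.Session() as sess:
  sess.run(tf.global_variables_initializer())
  sess.run(init_op)
  for step in range(100000):
    batch_X, batch_y = sess.run([next_X, next_y])
    sess.run(train_op, feed_dict={x: batch_X, y: batch_y})
  saver.save(sess, 'model.ckpt')  # restore later with saver.restore for prediction

Ideally you'd build the model directly on next_X and next_y instead of round-tripping through feed_dict, but this keeps your placeholder model unchanged.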

The full guide to the Dataset API can be found here.
