
Assuming the TensorFlow GPU library is being used for computation, which operations are offloaded to the GPU (and how often)? What is the performance impact of:

  1. CPU core count (since the CPU is no longer actively involved in the computation)
  2. RAM size
  3. GPU VRAM (what is the benefit of owning a higher-memory GPU?)

Say I'd like to decide on one or more of these hardware choices. Can someone explain, with an example, which aspect of a machine learning model impacts which particular hardware constraint?

(I need a little elaboration on exactly which ops are offloaded to the GPU and which to the CPU, based on the TensorFlow GPU library, for example.)

Karan Shah

1 Answer


One way of using TensorFlow to spread work efficiently between the CPU and the GPU is to use Estimators.

For example:

    import tensorflow as tf

    # model_fn and input_fn are user-defined functions
    model = tf.estimator.Estimator(model_fn=model_fn,
                                   params=params,
                                   model_dir="./models/model-v0-0")

    model.train(lambda: input_fn(train_data_path), steps=1000)

In the function 'input_fn', batch loading and batch preparation are offloaded to the CPU, while the GPU works on the model as declared in the function 'model_fn'.
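For illustration, here is a minimal sketch of what such an 'input_fn' could look like using the tf.data API. The file format is an assumption (a CSV of four float features plus a float label), so the parsing would need to be adapted to your actual data:

    def parse_line(line):
        # Assumed format: four comma-separated float features followed by a float label
        fields = tf.decode_csv(line, record_defaults=[[0.0]] * 5)
        return {"x": tf.stack(fields[:-1])}, fields[-1]

    def input_fn(data_path):
        dataset = tf.data.TextLineDataset(data_path)
        dataset = dataset.map(parse_line, num_parallel_calls=4)  # parsing runs on CPU threads
        dataset = dataset.shuffle(buffer_size=10000)
        dataset = dataset.batch(32)
        return dataset.prefetch(1)  # prepare the next batch while the GPU trains on the current one

The 'prefetch' call is what lets the CPU-side pipeline run in parallel with the GPU, so the GPU is not left idle waiting for data.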

If you are concerned about RAM constraints, then you should look at using the TFRecord format, as this avoids loading the whole dataset into RAM.

See tensorflow.org/tutorials/load_data/tf_records.
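As a rough sketch of that approach (the feature names and shapes here are hypothetical and have to match the schema the records were written with), an input function reading TFRecord files streams examples from disk instead of holding the whole dataset in memory:

    def tfrecord_input_fn(file_pattern):
        files = tf.data.Dataset.list_files(file_pattern)
        dataset = tf.data.TFRecordDataset(files)

        def parse_example(serialized):
            # Hypothetical feature spec; match it to how the records were written
            features = tf.parse_single_example(
                serialized,
                {"image": tf.FixedLenFeature([784], tf.float32),
                 "label": tf.FixedLenFeature([], tf.int64)})
            return {"image": features["image"]}, features["label"]

        # Records are read and decoded batch by batch on the CPU
        return dataset.map(parse_example).shuffle(1000).batch(32).prefetch(1)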

NiallJG