
I have two GPUs installed in my Windows 7 64-bit computer: an NVIDIA GeForce GTX 750 Ti and a GeForce GTX 570. The former has a compute capability of 5.0 and the latter a compute capability of 2.0.

For one of my projects, I would like to use MatConvNet, a library for training convolutional neural networks (CNNs) in MATLAB in a style similar to Caffe or TensorFlow. The package supports using both graphics cards, but cuDNN, NVIDIA's library of deep-learning primitives, is only compatible with cards that have a compute capability of 3.0 or greater. If I use only the 750 Ti, I can compile MatConvNet with the enableCudnn option set to true; if I use only the 570, I have to compile it with that option set to false.
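For reference, this is roughly how I invoke the two builds with vl_compilenn (the CUDA and cuDNN paths below are placeholders for my local installs):

```matlab
% Build used when running only the 750 Ti (compute capability 5.0):
% cuDNN is enabled, so MatConvNet can use its fast convolution routines.
vl_compilenn('enableGpu', true, ...
             'enableCudnn', true, ...
             'cudaRoot', 'C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v7.5', ...
             'cudnnRoot', 'local\cudnn');

% Build used when running the 570 (compute capability 2.0) or both cards together:
% cuDNN must be disabled because the 570 is below the compute capability 3.0 requirement.
vl_compilenn('enableGpu', true, ...
             'enableCudnn', false, ...
             'cudaRoot', 'C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v7.5');
```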

On a simple CNN I created to classify handwritten digits, with three convolutional layers, three pooling layers, a fully connected layer, and a softmax layer, training time is shortest for the 750 Ti alone, followed by the two cards together, followed by the 570 alone. This is because when I use both cards I have to compile MatConvNet with enableCudnn set to false, which prevents it from using the fast convolution routines in cuDNN. Still, using both GPUs is faster than using the 570 alone.
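For context, here is a minimal sketch of how I select the cards during training. I use MatConvNet's example trainer cnn_train; the network definition and other options are omitted, and the device indices are simply whatever gpuDevice assigns on my machine:

```matlab
% Training options for the digit-classification CNN (other fields omitted).
% The 'gpus' field tells cnn_train which devices to use.
trainOpts.gpus = [1 2];   % both cards: requires the build with enableCudnn = false
% trainOpts.gpus = 1;     % 750 Ti only: can use the cuDNN-enabled build
% trainOpts.gpus = 2;     % 570 only: must use the non-cuDNN build

[net, info] = cnn_train(net, imdb, @getBatch, trainOpts);
```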

I was wondering whether there is a way to compile MatConvNet separately for each graphics card, so that the 750 Ti uses cuDNN while the 570 does not. Ignoring the overhead of distributing the workload between the cards, this should in theory be faster than using the 750 Ti alone. Has anyone done something like this? Is it possible, and if so, how?

Vivek Subramanian
