
I have little knowledge of using a GPU to train models. I am using K-means from scikit-learn to train my model. Since my data is very large, is it possible to train this model on a GPU to reduce computation time? Or could you please suggest any methods to use GPU power?

The other question: if I use TensorFlow to build K-means as shown in this blog,

https://blog.altoros.com/using-k-means-clustering-in-tensorflow.html

will it use the GPU or not?

Thank you in advance.

Yaoi Dirty
  • You need a CUDA-enabled GPU; not just any GPU will do. – zed Feb 01 '17 at 07:35
  • Yes. I wonder how to use it to train my model, since my model is not a deep learning model built with TensorFlow. – Yaoi Dirty Feb 01 '17 at 07:38
  • I believe TensorFlow uses the GPU by default. See this: http://stackoverflow.com/questions/37660312/run-tensorflow-on-cpu – zed Feb 01 '17 at 07:40
  • scikit-learn has no support for GPU. Implementing k-means in TensorFlow is possible, though. – sascha Feb 03 '17 at 00:18

2 Answers


To check if your GPU supports CUDA: https://developer.nvidia.com/cuda-gpus

Scikit-learn does not support CUDA so far. You may want to use TensorFlow instead: https://www.tensorflow.org/install/install_linux
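Since scikit-learn runs only on the CPU, here is a minimal sketch of k-means written with plain TensorFlow 1.x ops, similar in spirit to the blog post linked in the question. The random data, k = 3, and the iteration count are placeholders for illustration; with the GPU build of TensorFlow installed, these ops are placed on the GPU automatically.

```python
import numpy as np
import tensorflow as tf

# Illustrative assumptions: 3 clusters, 100 iterations, random 2-D data.
k, n_iters = 3, 100
data = np.random.rand(10000, 2).astype(np.float32)

points = tf.constant(data)                                 # (N, d)
# Initialize centroids from k randomly chosen points.
centroids = tf.Variable(tf.slice(tf.random_shuffle(points), [0, 0], [k, -1]))

# Squared distance from every point to every centroid: shape (k, N).
expanded_points = tf.expand_dims(points, 0)                # (1, N, d)
expanded_centroids = tf.expand_dims(centroids, 1)          # (k, 1, d)
distances = tf.reduce_sum(tf.square(expanded_points - expanded_centroids), 2)
assignments = tf.argmin(distances, 0)                      # nearest centroid per point

# Recompute each centroid as the mean of its assigned points
# (no handling of empty clusters in this sketch).
means = tf.stack([
    tf.reduce_mean(
        tf.gather(points, tf.reshape(tf.where(tf.equal(assignments, c)), [-1])),
        axis=0)
    for c in range(k)])
update = tf.assign(centroids, means)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(n_iters):
        sess.run(update)
    final_centroids, final_assignments = sess.run([centroids, assignments])
```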

I hope this helps.

Tai Christian

If you have a CUDA-enabled GPU with compute capability 3.0 or higher and install the GPU-supported version of TensorFlow, then it will definitely use the GPU for training.
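For example, a quick way to check that the GPU build is installed and that ops are actually placed on the GPU (TensorFlow 1.x API; the toy constants below are just for illustration):

```python
import tensorflow as tf
from tensorflow.python.client import device_lib

# List all devices TensorFlow can see; a GPU build should include a GPU entry.
print(device_lib.list_local_devices())

# log_device_placement makes TensorFlow print which device each op runs on.
a = tf.constant([1.0, 2.0, 3.0])
b = tf.constant([4.0, 5.0, 6.0])
c = a + b
with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
    print(sess.run(c))
```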

For additional information on NVIDIA's requirements to run TensorFlow with GPU support, check the following link:

https://www.tensorflow.org/install/install_linux#nvidia_requirements_to_run_tensorflow_with_gpu_support

Nandeesh