I have little knowledge of using a GPU to train models. I am using K-means from scikit-learn to train my model. Since my data is very large, is it possible to train this model on a GPU to reduce computation time? Or could you please suggest any other way to take advantage of GPU power?
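For reference, this is roughly what my current CPU-only code looks like (the array `X` here is just a random placeholder standing in for my real, much larger dataset):

```python
import numpy as np
from sklearn.cluster import KMeans

# Placeholder for my real data, which has many more rows/columns.
X = np.random.rand(1_000_000, 20)

# scikit-learn's KMeans runs on the CPU only, which is why it is slow for me.
kmeans = KMeans(n_clusters=8, n_init=10, random_state=0)
kmeans.fit(X)

print(kmeans.inertia_)        # within-cluster sum of squares
print(kmeans.cluster_centers_.shape)
```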
My other question is: if I build K-means with TensorFlow as shown in this blog post,
https://blog.altoros.com/using-k-means-clustering-in-tensorflow.html
will it use the GPU or not?
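In case it matters, this is the small check I was planning to run to see whether the ops actually land on the GPU (a sketch assuming TensorFlow 1.x, which is what that blog post uses; the `points` tensor is just a toy example):

```python
import tensorflow as tf

# Is a GPU visible to TensorFlow at all?
print(tf.test.is_gpu_available())

# Toy data standing in for the clustering inputs from the blog post.
points = tf.constant([[1.0, 1.0], [2.0, 2.0], [10.0, 10.0]])
centroids = tf.Variable(points[:2, :])

# log_device_placement prints whether each op runs on /device:GPU:0 or the CPU.
config = tf.ConfigProto(log_device_placement=True)
with tf.Session(config=config) as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(centroids))
```

Is checking the device placement log like this the right way to confirm GPU usage, or is there a better approach?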
Thank you in advance.