
My PC has an AMD Ryzen 5 3600X with 6 cores and no GPU. I'm running my trained deep learning model on this machine for prediction, and it is slow. What makes it even slower is that I found it only uses about 1/20 of the total CPU time. My code looks like the following:

from time import process_time, time

start, start1 = process_time(), time()  # CPU-time and wall-clock baselines
res = []
for i in range(cnt):  # cnt trained learners stored in learn_opt
    print("learner ", i, " : ", process_time() - start, time() - start1)
    res.append(learn_opt[i].get_preds())  # run prediction with each learner

and the output is:

learner  0  :  0.140625 0.14303159713745117
learner  1  :  1.328125 18.774247884750366
learner  2  :  2.390625 37.759544372558594
timeused: 3.484375 56.34675097465515 

How can I make my program use more of the CPU? I tried psutil.nice(psutil.HIGH_PRIORITY_CLASS), but I didn't observe any change.
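In case it matters, the priority change I attempted was along these lines (a minimal sketch; psutil exposes priority through a Process object, and HIGH_PRIORITY_CLASS is Windows-only). As I understand it, priority only affects scheduling against other processes, not how many cores a single process uses:

import psutil

# Raise the scheduling priority of the current process.
# HIGH_PRIORITY_CLASS is Windows-only; on Linux a niceness value would be passed instead.
p = psutil.Process()                 # current process
p.nice(psutil.HIGH_PRIORITY_CLASS)
# Note: this changes scheduling priority only; it does not make a
# single-threaded workload run on more cores.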

Thanks!

jerron
  • CPython will only use 1 thread/core unless it calls into a C extension that 1) uses threads and 2) releases the GIL over threaded work. (Or otherwise uses additional parallel processing approaches.) – user2864740 Oct 28 '20 at 02:05
  • 1
  • Maybe take a look into [how-to-do-parallel-programming-in-python](https://stackoverflow.com/questions/20548628/how-to-do-parallel-programming-in-python) and, thus, [the multiprocessing package](https://docs.python.org/2/library/multiprocessing.html) (a minimal sketch follows after these comments). However, I don't think you will be able to easily implement your DL algorithm in parallel using your CPU. If you are just trying to train it, maybe take a look into [Google Colab](https://colab.research.google.com/). – Felipe Whitaker Oct 28 '20 at 02:52
  • What neural network library are you using? TensorFlow, for example, should already parallelize each prediction call very well and use the whole CPU. Also, what size of data are you giving the neural network for prediction? If you feed data one sample at a time (e.g. one image at a time), then you can significantly speed up the code by feeding a batch, i.e. 20 images at a time, so that your library has a chance for more massive parallelization (see the batching sketch below). – Arty Oct 28 '20 at 03:09
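
A minimal sketch of the multiprocessing suggestion above, reusing learn_opt and cnt from the question; it assumes each learner can be pickled and that get_preds() needs nothing from the parent process (neither assumption is verified here):

from multiprocessing import Pool

def predict_one(learner):
    # Each call runs in a separate worker process, so the GIL of the main
    # interpreter is not a bottleneck.
    return learner.get_preds()

if __name__ == "__main__":
    # One worker per physical core of the Ryzen 5 3600X.
    with Pool(processes=6) as pool:
        res = pool.map(predict_one, learn_opt[:cnt])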

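And a minimal sketch of the batching suggestion; model, images, and the Keras-style model.predict call are stand-ins, since the question does not say which library is used:

import numpy as np

# One forward pass per image (slow):
# preds = [model.predict(img[None, ...]) for img in images]

# Batched: stack 20 images and make a single call, giving the library a
# larger chunk of work to parallelize across cores.
batch = np.stack(images[:20])   # shape (20, H, W, C)
preds = model.predict(batch)    # one call instead of 20
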