
Please forgive me if this question is too basic. I am familiar with neither the idea of parallelization nor with HPC systems; I have never used one before.

I am training a deep learning model which takes really long on my PC. It takes approximately 2 days on my i5 with 12 GB RAM.

So I decided to use an HPC system, but one of the tutorials I watched says that if I do not write my code properly, the HPC will not be any faster than a regular PC. What does that really mean? Should I adjust my original code so that I can benefit from the HPC?

Secondly, can we say that using 30 cores should be 5 times faster than using 6 cores? Are speed and the number of cores proportional?

Arwen

2 Answers


Yes, that's true. If your code takes a very long time, even an HPC system won't necessarily run it fast. You benefit from an HPC system's performance when the code is hard to run on a regular PC because of limited resources, for example a slow processor or too little RAM.

But if your code is close to an intractable problem, one with no known polynomial-time algorithm (i.e. with very high time complexity), then even an HPC system won't be enough for it. It will make a difference, but not the one you want: for example, code with very high time complexity that takes a regular computer 2 months to execute might still take an HPC system 1 month.
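To see why a constant-factor hardware speedup barely helps against very high time complexity, here is a small sketch. The `2**n` runtime model and the `max_size_within` helper are illustrative assumptions, not anything from the question:

```python
# Hypothetical runtime model: an exponential-time algorithm needs 2**n
# steps for input size n. A machine that is k times faster only shifts
# the feasible problem size by log2(k) -- constant-factor hardware
# gains barely move the needle against exponential growth.
import math

def max_size_within(budget_steps: float, speedup: float = 1.0) -> int:
    """Largest n with 2**n <= budget_steps * speedup."""
    return int(math.floor(math.log2(budget_steps * speedup)))

budget = 2 ** 40                          # steps a regular PC can do in time
print(max_size_within(budget))            # 40
print(max_size_within(budget, 1024.0))    # a 1024x faster machine: only 50
```

So a machine a thousand times faster only extends the solvable input size by about 10 units under this model; the exponential term, not the hardware, dominates.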

Karam Mohamed
  • I have edited my question. Actually, my code is nowhere near a non-polynomial problem. It takes approximately 2 days on my i5 with 12 GB RAM. I guess the answer to my first question is 'No, I do not need to change any code; I can just run whatever I run on my PC,' right? – Arwen May 21 '20 at 04:38
  • @Arwen To be honest, I don't really know exactly how it will affect the (calculated) time; I was speaking in general. – Karam Mohamed May 21 '20 at 11:22

Q : "can we say that using 30 cores should be 5 times faster than using 6 cores?"

No, we can not.

Q : "Are speed and the number of cores proportional?"

No, they are not.

There is an ultimate ceiling for any (potential) speedup: Amdahl's Law, even in its original, overhead-naive formulation that ignores the atomicity of work.
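A minimal sketch of the classic formulation (the parallel fraction `p = 0.95` below is an assumed example value, not measured from the question's workload):

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Classic Amdahl's Law: p is the parallelizable fraction of the
    work, n the number of cores; the serial (1 - p) part never shrinks."""
    return 1.0 / ((1.0 - p) + p / n)

# Assume 95% of the training is parallelizable:
print(amdahl_speedup(0.95, 6))       # ~4.8
print(amdahl_speedup(0.95, 30))      # ~12.2 -- only ~2.6x more, not 5x
print(amdahl_speedup(0.95, 10**6))   # approaches 20, the ceiling 1/(1 - p)
```

Even under these generous assumptions, 5x more cores buys well under 5x the speed, and no core count can push past the `1 / (1 - p)` ceiling.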


Better to use the revised, overhead-strict, resources-aware re-formulation of Amdahl's Law.
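A minimal sketch of what an overhead-aware variant looks like. The `setup`/`teardown` terms and their example values below are my simplification of the revised formulation, expressed as fractions of the original serial runtime:

```python
def overhead_strict_speedup(p: float, n: int,
                            setup: float = 0.0, teardown: float = 0.0) -> float:
    """Overhead-aware speedup: the parallel section additionally pays
    one-off setup/teardown costs (process spawning, data transfers),
    expressed here as fractions of the original serial runtime."""
    return 1.0 / ((1.0 - p) + setup + p / n + teardown)

# With 10% of the runtime burned on parallelization overheads,
# 30 cores deliver far less than the overhead-naive formula promises:
print(overhead_strict_speedup(0.95, 30))              # ~12.2 (no overhead)
print(overhead_strict_speedup(0.95, 30, 0.05, 0.05))  # ~5.5
```

The overhead terms are paid regardless of the core count, which is why they can devastate the speedup on any machine, HPC included.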



Seeking to improve performance?

Start with this, ideally spending some time tuning the core parameters in the INTERACTIVE TOOL (URL there).

Converting a classical library (like TF or others) into an HPC-efficient tool is not easy and does not come for free. Add-on overhead costs may easily (ref. the results in the INTERACTIVE TOOL) devastate any potential HPC powers, simply due to poor scaling: going from costs in the range of a few ns to costs above a few ms kills the game at whatever HPC budget you may spend, doesn't it?

user3666197