I have 32 threads whose input parameters I know ahead of time; nothing changes inside the thread function (other than the memory buffer that each thread works on).
In pseudo-C, this is my design pattern:
#include <pthread.h>

// 32 pthreads and their parameters, declared as globals
pthread_t thread_id[32];
void *thread_params[32];

void *thread_function(void *param);   // does the actual work

void dispatch_32_threads(void) {
    for (int i = 0; i < 32; i++) {
        pthread_create(&thread_id[i], NULL, thread_function, thread_params[i]);
    }
    // wait until all 32 threads are finished
    for (int j = 0; j < 32; j++) {
        pthread_join(thread_id[j], NULL);
    }
}

int main(void) {
    // init the 32 thread params / buffers here
    for (int n = 0; n < 4000; n++) {
        for (int x = 0; x < 100; x++) {
            for (int y = 0; y < 100; y++) {
                dispatch_32_threads();
                // modify buffers here
            }
        }
    }
}
I am calling dispatch_32_threads 100*100*4000 = 40,000,000 times. thread_function and thread_params[i] do not change between calls. I think pthread_create keeps creating and destroying threads. I have 32 cores, but none of them reaches 100% utilization; they hover around 12%. Moreover, when I reduce the number of threads to 10, all 32 cores stay at 5-7% utilization and I see no slowdown in runtime. Running fewer than 10 threads slows things down.
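To get a sense of how much of this is pure thread start-up cost, the create/join cycle could be timed on its own with an empty worker. A minimal, self-contained sketch (empty_worker, the 10,000-iteration count, and the timing code are placeholders I made up for the measurement, not part of my actual code):

#include <pthread.h>
#include <stdio.h>
#include <time.h>

static void *empty_worker(void *arg) { (void)arg; return NULL; }

int main(void) {
    pthread_t tid[32];
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int iter = 0; iter < 10000; iter++) {
        // one "dispatch": create and join 32 threads that do nothing
        for (int i = 0; i < 32; i++)
            pthread_create(&tid[i], NULL, empty_worker, NULL);
        for (int j = 0; j < 32; j++)
            pthread_join(tid[j], NULL);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("average cost per 32-thread dispatch: %.1f us\n", secs / 10000 * 1e6);
    return 0;
}

If that per-dispatch cost is comparable to one round of thread_func, the create/join overhead alone would explain the low utilization.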
Running with 1 thread, however, is extremely slow, so multithreading is helping. I profiled my code: I know it is thread_func that is slow, and thread_func is parallelizable. This leads me to believe that pthread_create keeps spawning and destroying threads on different cores, and that beyond 10 threads I lose efficiency and it gets slower; in essence, thread_func is "less complicated" than the cost of spawning more than 10 threads.
Is this assessment true? What is the best way to utilize 100% of all cores?
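The alternative I have been wondering about is creating the 32 threads once and waking them for each of the 40,000,000 rounds instead of recreating them every time. A rough sketch of that idea, assuming a counted barrier (pthread_barrier_t) is a sensible way to synchronize each round; do_work stands in for the body of my real thread_func:

#define _POSIX_C_SOURCE 200809L
#include <pthread.h>
#include <stdatomic.h>

#define NTHREADS 32

pthread_barrier_t start_barrier, done_barrier;   // each initialized with count NTHREADS + 1
void *thread_params[NTHREADS];
atomic_int stop_flag = 0;

// placeholder for the per-thread work that thread_func currently does
static void do_work(void *param) { (void)param; }

static void *worker(void *arg) {
    for (;;) {
        pthread_barrier_wait(&start_barrier);    // wait for main to start a round
        if (atomic_load(&stop_flag))
            break;
        do_work(arg);                            // same work as before, same param every round
        pthread_barrier_wait(&done_barrier);     // report this round as finished
    }
    return NULL;
}

int main(void) {
    pthread_t tid[NTHREADS];
    pthread_barrier_init(&start_barrier, NULL, NTHREADS + 1);
    pthread_barrier_init(&done_barrier, NULL, NTHREADS + 1);

    // init thread_params here, then create the workers exactly once
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&tid[i], NULL, worker, thread_params[i]);

    for (int n = 0; n < 4000; n++)
        for (int x = 0; x < 100; x++)
            for (int y = 0; y < 100; y++) {
                pthread_barrier_wait(&start_barrier);  // release all 32 workers
                pthread_barrier_wait(&done_barrier);   // block until all 32 are done
                // modify buffers here
            }

    atomic_store(&stop_flag, 1);
    pthread_barrier_wait(&start_barrier);        // wake the workers one last time so they can exit
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(tid[i], NULL);
    return 0;
}

The idea is that pthread_create/pthread_join is paid only once, and each round costs two barrier waits instead of 32 thread creations, which I assume is much cheaper. Is something along these lines the right direction?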