You are experiencing the effects of false sharing. On x86 a single cache line is 64 bytes long and therefore holds 64 / sizeof(double) = 8 array elements. When one thread updates its element, the cache coherency protocol invalidates the same cache line in all the other cores. When another thread then updates its element, instead of operating directly on its cached copy, its core has to reload the cache line from an upper-level data cache or from main memory. This significantly slows down the program execution.
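For illustration, the kind of access pattern that triggers this is a per-thread accumulator array indexed directly by the thread number, so that neighbouring threads keep writing to adjacent doubles in the same cache line. This is only a sketch of the assumed problematic layout (the identifiers local_sum, f, a, h, n, x, i and thread_id are taken from your code):
/* Assumed problematic layout: adjacent threads write adjacent doubles,
   so 8 threads keep fighting over one 64-byte cache line */
double local_sum[16];
#pragma omp parallel for shared(h,n,a) private(x, thread_id)
for (i = 1; i < n; i++) {
    thread_id = omp_get_thread_num();
    x = a + i*h;
    local_sum[thread_id] += f(x);   /* false sharing happens here */
}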
The simplest solution is to insert padding so that the array elements accessed by different threads end up in distinct cache lines. On x86 that means 7 padding double elements between consecutive per-thread slots, i.e. each slot occupies a full 64-byte cache line. Your code should then look like:
double local_sum[8*16];   /* room for up to 16 threads, one cache line (8 doubles) per thread */
//Initializations....
#pragma omp parallel for shared(h,n,a) private(x, thread_id)
for (i = 1; i < n; i++) {
    thread_id = omp_get_thread_num();
    x = a + i*h;
    local_sum[8*thread_id] += f(x);   /* each thread now writes to its own cache line */
}
Don't forget to sum only every 8th element of the array at the end (or initialise all array elements to zero so that the unused padding slots contribute nothing to the total).
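For example, the final reduction could look something like this (a sketch, assuming the 16-thread sized array from above was zero-initialised; total_sum is a hypothetical name):
double total_sum = 0.0;
for (int t = 0; t < 16; t++)
    total_sum += local_sum[8*t];   /* only every 8th slot was actually written by a thread */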