
As far as I know, false sharing occurs when several threads read and write small, adjacent pieces of data that fall within the same cache line:

#include <omp.h>
#define NUM_THREADS 4

int main() {
    int arr[NUM_THREADS];

#   pragma omp parallel num_threads(NUM_THREADS)
    {
        const int id = omp_get_thread_num();
        arr[id] = id; // doing something with it
    }
}

What if I create the array dynamically?

int *arr = new int[NUM_THREADS];

Will false sharing take place if I have my array on the heap? Are there some cache line restrictions in this case?

Kaiyakha
  • Short answer: it will be the same. – prapin Jun 01 '22 at 09:59
  • False sharing = concurrent updates (+reads) of different memory locations that are mapped to the same (or even adjacent, due to prefetching) cache line. This has nothing to do with the way the memory is allocated (on the stack, on the heap, in the data segment, ...). – Daniel Langr Jun 01 '22 at 10:14
  • You need to add padding to avoid this, or use thread-local storage. – Jérôme Richard Jun 01 '22 at 10:32
  • C++17 has [`std::hardware_destructive_interference_size`](https://en.cppreference.com/w/cpp/thread/hardware_destructive_interference_size) for avoiding false sharing. It wasn't implemented by GCC until recently (GCC 12), though. Clang hasn't implemented it yet, but Microsoft did so fairly early. See "Hardware interference size" in the table "C++17 library features" [here](https://en.cppreference.com/w/cpp/17). – paleonix Jun 01 '22 at 10:46
  • @paleonix This related post may also be helpful: [Understanding std::hardware_destructive_interference_size and std::hardware_constructive_interference_size](https://stackoverflow.com/q/39680206/580083). – Daniel Langr Jun 02 '22 at 07:09

1 Answer


Memory is memory. To the CPU, an array on the stack is exactly the same as an array on the heap, so any false sharing problem remains the same.

Goswin von Brederlow