
I was reading about false sharing and cache ping-ponging, where multiple threads on different cores use the same cache line but for different data (like two int values next to each other in an array). In this case the cache line has to move back and forth between the cores. What I am confused about is that I thought each core has its own L1 cache, so why does it need to share that cache line with the other cores? Wouldn't it just keep its own copy and update that? Also, if the CPU is forced to keep all of the caches consistent between cores, what is the point of having keywords like volatile in C++ (other than maybe preventing the compiler from storing values in registers)?
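
A minimal sketch of the scenario I mean (the names and the 64-byte line size are just assumptions for illustration): two threads each increment their own element of a small int array, yet both elements almost certainly sit on the same cache line.

    #include <thread>

    int counters[2]; // two adjacent ints, almost certainly on the same 64-byte cache line

    void work(int idx) {
        // Each thread touches only its own element, but every write still
        // affects the cache line that both elements share.
        for (int i = 0; i < 1000000; ++i)
            ++counters[idx];
    }

    int main() {
        std::thread t0(work, 0);
        std::thread t1(work, 1);
        t0.join();
        t1.join();
    }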

NathanOliver
chasep255

1 Answer


First off, you should not use volatile for thread synchronization. For a detailed explanation of why, see Why is volatile not considered useful in multithreaded C or C++ programming?

Secondly, the reason false sharing is a problem is that when a variable in a cache line is updated, the entire cache line is marked as dirty. The copies of that cache line held by the other cores are then invalidated: those cores do not know which part of the line changed, only that the line was modified, so it needs to be synchronized.
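
To make that concrete, here is a minimal sketch of the usual fix: align each thread's data so it lands on its own cache line. The alignas(64) assumes a 64-byte line, which is typical on x86 but not guaranteed for every CPU.

    #include <thread>

    struct PaddedCounter {
        alignas(64) int value = 0; // assume a 64-byte cache line: each counter gets its own line
    };

    PaddedCounter counters[2];

    void work(int idx) {
        // With the alignment, the two threads no longer dirty each other's cache line.
        for (int i = 0; i < 1000000; ++i)
            ++counters[idx].value;
    }

    int main() {
        std::thread t0(work, 0);
        std::thread t1(work, 1);
        t0.join();
        t1.join();
    }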

Intel has a very good article on this called Avoiding and Identifying False Sharing Among Threads.

NathanOliver
  • Ok, but for instance, there is the flush construct in OpenMP. I thought the purpose of this was to force the cache to be consistent across all cores of the CPU. From the sound of it, such a feature would not be necessary if the CPU synchronizes its cache lines anyway. – chasep255 Oct 11 '15 at 01:22
  • Or is this only something that occurs when the cache is being written to? Say one core is writing to a cache line and another is reading from it. Then maybe it would not synchronize and the reading core would not see the update? – chasep255 Oct 11 '15 at 01:24
  • Sorry, I am not familiar with OpenMP so I can't comment on how it works. Yes, this only happens when data is written. Once a core writes into a cache line, the CPU forces that cache line to be re-synchronized to all other caches that hold that line. – NathanOliver Oct 11 '15 at 01:27
  • Ok. But if one core is only reading that line and not writing to it, will it wait until it wants to write before resynchronizing? – chasep255 Oct 11 '15 at 01:31
  • No. Once one core writes to that cache line then they all need to be synced. The core doing the reading has to re-sync as it does not know if the variable it was reading was changed or not. I suggest you read the attached article in my answer. It really details what is going on. – NathanOliver Oct 11 '15 at 01:33