I know that memory_order_consume
has been deprecated, but I'm trying to understand the logic that went into the original design and how [[carries_dependency]]
and kill_dependency
were supposed to work. To that end, I would like a specific example of code that would break on an IBM PowerPC or a DEC Alpha, or even on a hypothetical architecture with a hypothetical compiler that fully implemented consume semantics in C++11 or C++14.
The best I can come up with is an example like this:
#include <atomic>
#include <cassert>

int v;
std::atomic<int*> ap;
void
thread_1()
{
v = 1;
ap.store(&v, std::memory_order_release);
}
int
f(int *p [[carries_dependency]])
{
return v;
}
void
thread_2()
{
int *p;
while (!(p = ap.load(std::memory_order_consume)))
;
int v2 = f(p);
assert(*p == v2);
}
I understand that the assertion could fail in this code. However, is it the case that the assertion is not supposed to fail if you remove [[carries_dependency]]
from f
? If so, why is that the case? After all, you only requested memory_order_consume
, so why would you expect an access that doesn't depend on p
, like the read of v
inside f
, to reflect acquire semantics? If removing [[carries_dependency]]
does not make the code correct, then what's an example where [[carries_dependency]]
(or making [[carries_dependency]]
the default for every parameter and return value) breaks otherwise correct code?
The only thing I can think of is that maybe this has to do with register spills. If a function spills a register onto the stack and later reloads it, that could break the dependency chain. So maybe [[carries_dependency]]
makes things efficient in some cases (it says there is no need to issue a memory barrier in the caller before calling this function) but also requires the callee to issue a memory barrier before any register spill or before calling another function, which could be less efficient in other cases. I'm grasping at straws here, though, so I would still love to hear from someone who understands this stuff...