The barrier prevents reordering (or optimization) wherever you put it. There is no magical "scope". Just look at the inline assembly instruction:
asm volatile (""::: "memory");
The volatile keyword means: put the asm statement exactly where I put it, and do not optimize it away (i.e. remove it). After the third colon comes the list of clobbers, so "memory" means "I have clobbered the memory." You are basically telling the compiler: "I have done something that affects memory, even if you cannot see what."
In your example, you have something like
y[0] += 1;
y[0] += 1;
The compiler is very clever and knows this is not as efficient as it could be. It will probably compile this into something like
load y[0] from memory to register
add 2 to this register
store result to y[0]
For pipelining reasons, it may also be more efficient to combine this with other load/modify/store operations, so the compiler may reorder even further and merge it with nearby operations.
To prevent this, you can place a memory barrier between them:
y[0] += 1;
asm volatile (""::: "memory");
y[0] += 1;
This tells the compiler: after the first statement, "I have done something to memory; you may not know what, but it happened." The compiler can no longer apply its usual logic and assume that adding one twice to the same memory location is the same as adding two to it once, since something may have changed in between. So this would be compiled into something more like
load y[0] from memory to register
add 1 to this register
store result to y[0]
load y[0] from memory to register
add 1 to this register
store result to y[0]
Again, it could possibly reorder things on each side of the barrier, but not across it.
Another example: once, I was working with memory-mapped I/O on a microcontroller. The compiler saw that I was writing different values to the same address with no read in between, so it kindly optimized that into a single write of the last value. Of course, the I/O device never saw the intermediate values, so it did not behave as expected. Placing a memory barrier between the writes told the compiler not to do this.