Let's say I define the following C++ object:
#include <cstdint>

class AClass
{
public:
    AClass() : foo(0) {}
    uint32_t getFoo() { return foo; }
    void changeFoo() { foo = 5; }
private:
    uint32_t foo;
} aObject;
The object is shared by two threads, T1 and T2. T1 constantly calls getFoo() in a loop to obtain a number (which will always be 0 if changeFoo() has not been called before). At some point, T2 calls changeFoo() to change it (without any thread synchronization).
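For concreteness, here is a minimal sketch of how the two threads could be driven (it assumes the AClass definition above; the iteration count and the printout are purely illustrative, not part of my real code):

#include <cstdio>
#include <thread>

int main()
{
    std::thread t1([] {
        // T1: repeatedly read foo through the racy getter.
        for (int i = 0; i < 1000000; ++i) {
            uint32_t value = aObject.getFoo();
            if (value != 0 && value != 5)
                std::printf("unexpected value: %u\n", (unsigned)value);
        }
    });

    std::thread t2([] {
        // T2: a single unsynchronized write.
        aObject.changeFoo();
    });

    t1.join();
    t2.join();
}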
Is there any practical chance that the values obtained by T1 will ever be different from 0 or 5 on modern computer architectures and compilers? All the assembler code I have investigated so far used 32-bit memory reads and writes, which seems to preserve the integrity of the operation.
What about other primitive types?
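For instance, would a hypothetical 64-bit variant like the one below (the class name and the stored value are made up) be allowed to produce a torn read on a 32-bit target, where the compiler may have to split each access into two 32-bit operations?

#include <cstdint>

class BClass
{
public:
    BClass() : bar(0) {}
    // On a 32-bit target these accesses may be compiled into two
    // separate 32-bit loads/stores, so a concurrent reader could
    // observe one half of the old value and one half of the new one.
    uint64_t getBar() { return bar; }
    void changeBar() { bar = 0x0000000500000005ULL; }
private:
    uint64_t bar;
} bObject;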
Practical means that you can give an example of an existing architecture or a standards-compliant compiler where this (or a similar situation with different code) is possible, at least in theory. I leave the word modern somewhat subjective.
Edit: I can see many people pointing out that I should not expect 5 to ever be read. That is perfectly fine with me, and I did not say I expect it (though thanks for pointing this aspect out). My question was more about what kind of data integrity violation can happen with the above code.
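For contrast, here is a sketch of what I understand to be the data-race-free alternative (names are again made up): with std::atomic the individual loads and stores cannot be torn, and the compiler is no longer free to assume that foo never changes behind T1's back.

#include <atomic>
#include <cstdint>

class AtomicClass
{
public:
    AtomicClass() : foo(0) {}
    // Relaxed atomic accesses: no ordering guarantees with respect to
    // other memory, but each individual load/store is indivisible.
    uint32_t getFoo() { return foo.load(std::memory_order_relaxed); }
    void changeFoo() { foo.store(5, std::memory_order_relaxed); }
private:
    std::atomic<uint32_t> foo;
} atomicObject;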