
Consider the following code that runs in thread B (that's all that runs in that thread):

    private void KeepValueCurrent(WaitHandle mre)
    {
        while (mre.WaitOne(50))
        {
            var newAddressOffset = LengthOfLastLogEntry;
            if (newAddressOffset > _currentAddressOffset)
            {
                //Only update if new value is larger than existing
                lock(_locker) {_currentAddressOffset = newAddressOffset;}
            }
        }
    }

Will I be able to access _currentAddressOffset in thread A, like below, or will the lock block me because the loop in thread B runs so fast? My program has so many other dependencies that I have not been able to test this in isolation.

    lock (_locker) { currentAddressOffset = _currentAddressOffset; }

Note: _currentAddressOffset is a class-level int field shared between the threads, declared with the volatile modifier to prevent the compiler from caching its value.


UPDATE: Follow-On Question

From the answers received it has become apparent that I do not need to lock around the int if the only place I write to _currentAddressOffset is in the loop in thread B. What if, however, thread A also writes to that variable? Then a race condition becomes possible, and I will change my while loop in thread B to this:

    private void KeepValueCurrent(WaitHandle mre)
    {
        while (mre.WaitOne(50))
        {
            var newAddressOffset = LengthOfLastLogEntry;
            lock(_locker)
            {
                if (newAddressOffset > _currentAddressOffset)
                {
                    //Only update if new value is larger than existing
                    _currentAddressOffset = newAddressOffset;
                }
            }
        }
    }

In thread A, I will now read and write to it like this:

    lock (_locker) { currentAddressOffset = _currentAddressOffset; } //Read
    lock (_locker) { _currentAddressOffset = newValue; } //Write

Are the locks needed in this scenario, to avoid race conditions?

Thanks a lot.

Anders
  • You don't need a lock when writing an `int`. But you do have a race condition between the `if` and the `lock`. – SLaks Jan 28 '15 at 18:01
  • You should at least wrap the whole `if` clause in that `lock`, or you may get race conditions. – Lucas Trzesniewski Jan 28 '15 at 18:01
  • How are you accessing it on thread A? Are you locking there too? – Lasse V. Karlsen Jan 28 '15 at 18:09
  • Thanks gents. I read this on race conditions: http://stackoverflow.com/questions/34510/what-is-a-race-condition. It seems to say that they only happen if the value of `_currentAddressOffset` is also written to somewhere else. In the above it is only written to in the while loop. Everywhere else it is just read. Is it still an issue? – Anders Jan 28 '15 at 18:09
  • @LasseV.Karlsen I show above how I access it (it is currently in a lock, but it might not be necessary as some have pointed out). – Anders Jan 28 '15 at 18:20
  • If the people who vote down could give some kind of explanation, it would be incredibly helpful. – Anders Jan 28 '15 at 19:01

1 Answer


Assignments to an int are already atomic, and given that it's volatile, memory barriers are already in place. The lock is doing literally nothing.
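
For illustration, here is a minimal sketch of that single-writer case (not the original class: OffsetTracker is a made-up name and LengthOfLastLogEntry is stubbed out). Only thread B writes the volatile int, so neither the writer nor the reader needs the lock:

    using System.Threading;

    // Minimal sketch of the single-writer case: thread B is the only writer of the
    // volatile int, so plain reads and writes of it are atomic and visible.
    class OffsetTracker
    {
        private volatile int _currentAddressOffset;

        // Stand-in for the real property from the question.
        private int LengthOfLastLogEntry { get { return 42; } }

        // Thread B: the only writer; the plain assignment is atomic.
        public void KeepValueCurrent(WaitHandle mre)
        {
            while (mre.WaitOne(50))
            {
                var newAddressOffset = LengthOfLastLogEntry;
                if (newAddressOffset > _currentAddressOffset)
                {
                    _currentAddressOffset = newAddressOffset;
                }
            }
        }

        // Thread A: a plain read can never observe a torn value, and volatile
        // keeps the JIT from caching a stale copy in a register.
        public int CurrentAddressOffset
        {
            get { return _currentAddressOffset; }
        }
    }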

Servy
  • If that is true, why are there methods like Interlocked.Increment? https://msdn.microsoft.com/en-us/library/system.threading.interlocked.increment%28v=vs.110%29.aspx – Steve Wellens Jan 28 '15 at 18:07
  • @SteveWellens Because incrementing an integer is radically different from assigning it a value. Incrementing it requires reading the value, performing an addition, and then assigning the value back to the variable. That whole sequence is not atomic (without using `Interlocked`), even if the assignment alone would be. – Servy Jan 28 '15 at 18:09
  • Increment and Decrement are single steps in assembly language (INC and DEC). I believe all languages map their instructions directly to these operators. – Steve Wellens Jan 28 '15 at 18:17
  • Servy, a couple of commentators brought up the point of race conditions. If the only place I write to `_currentAddressOffset` is in the loop in thread B, no race condition can exist right? Now instead, if thread A also writes to the variable, I need the lock around the full if statement right? Will my original issue of not being able to access `_currentAddressOffset` now be a concern due to speed of loop? – Anders Jan 28 '15 at 18:19
  • @SteveWellens No, they *don't* map directly. The CPU needs to fetch the value from memory, increment *the register*, and then write the value back to memory. The "increment the register" is going to be an atomic operation, but the other two are not. "Things" need to be done to ensure that no other threads access the value after the memory read and before the memory write. – Servy Jan 28 '15 at 18:19
  • @Anders One would need to examine the entire program to definitively say whether or not a particular program has undesirable race conditions. It's not something that can be determined by looking at a snippet out of context. (And analyzing your whole program would be out of scope for an SO question btw.) – Servy Jan 28 '15 at 18:21
  • Understood, thanks. Let's say, just to be on the safe side, I lock around the `if statement` in `thread B` and the read and writes in `thread A`. Will I have an issue with the reads and writes not easily being able to access the variable, due to that lock being taken on and off so quickly in the loop? Or does it work so that it is 100% first-come first serve, and the longest the operations in `thread A` have to wait is 50ms? – Anders Jan 28 '15 at 18:25
  • @Anders Locks are not first come first serve. That said, you'd want to actually test the code and do a detailed performance analysis to determine if the lock is actually in high contention before being very concerned. – Servy Jan 28 '15 at 18:28
  • @Servy - Increments DO map directly to assembly code, here is a C# disassembly: http://i.imgur.com/5c2nQrO.png I would guess the Interlocked.Increment family of functions was introduced with multiple core processors. – Steve Wellens Jan 28 '15 at 22:29
  • @SteveWellens The fact that there is a single MSIL instruction for it doesn't mean that there's a single CPU instruction for it. MSIL will be translated into native code by the JITter. Additionally, even if there were a single chip instruction, it doesn't necessarily mean that that instruction is atomic. – Servy Jan 29 '15 at 15:19
  • @Servy That is not MSIL. It is native assembly language. In your answer you state that assignments are atomic and now you are saying increments are not? Sorry, I can't buy into that. – Steve Wellens Jan 29 '15 at 16:27
  • @SteveWellens So me saying that assignments are atomic means that increments must also be atomic? In what possible way does that follow? Assignments (of integers and certain other types) are atomic and increments are not atomic. The C# specs specifically state that assignments are atomic and it will guarantee that that is the case. It's trivial to write a program that demonstrates that increments aren't atomic (without using `Interlocked`); just create two threads that increment the same integer in a loop some number of times and you'll see that the total is not equal to what it should be. – Servy Jan 29 '15 at 16:31
  • @Servy Assignments are more 'complicated'. They are taking information from one location of memory and putting it in a different location. Increments are 'simpler'. They are only touching one area of memory. When multi-threading on single core processors, there is no issue, no single assembly instruction is ever going to be interrupted in the middle of its operation. It's the multi-core processors that make things trickier. – Steve Wellens Jan 29 '15 at 16:52
  • @SteveWellens No, that's not true. Increments are not atomic, even on a single core processor. Assignments *are* atomic, even on a multi-core processor. This is trivial enough to test on any single core processor. Again, just spin up a bunch of threads and go around incrementing the same integer without any synchronization. The result will be less than the number of increments performed. But try as you might, even in a multi-core processor, you won't be able to perform an assignment that isn't consistent with *some* sequential ordering of the various operations. – Servy Jan 29 '15 at 16:56
  • @Servy From the Intel manual: _B.3.4.3 An atomic instruction is either an XCHG instruction involving a memory address or one of the following instructions with memory destination and lock prefix: ADD, ADC, AND, BTC, BTR, BTS, CMPXCHG, CMPXCH8B, DEC, INC, NEG, NOT, OR, SBB, SUB, XOR or XADD._ So, INC instructions are atomic. And yet there is the Interlocked family of instructions which imply INC and other atomic functions are not safe in a multi-threaded environment. The conclusion is that NO atomic functions that modify global variables are safe in a multi-threaded environment. – Steve Wellens Jan 29 '15 at 23:20
  • @SteveWellens Apparently you didn't notice the relevant snippet, "one of the following instructions with [...] lock prefix". So you can add a prefix that will make the instruction atomic, but it won't be atomic without it. Forcing it to be atomic will unquestionably slow down the execution of the operation, potentially dramatically so (by inhibiting certain other operations from being run while it's running). It is the means by which `Interlocked` can *make* it atomic. That lock flag *won't* be used for the non-interlocked methods however, due to its costs. – Servy Jan 30 '15 at 15:02
  • @SteveWellens The C# spec is very clear about what operations it makes atomic and which it doesn't. Much of this is merely exposing what is already atomic by the underlying CPUs, but not exclusively. Some atomic operations are made non-atomic because they don't map 1:1, some non-atomic operations can be made atomic through added synchronization. – Servy Jan 30 '15 at 15:03
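
The quickest way to see the difference discussed in these comments is the experiment Servy suggests: two threads incrementing the same int, once with a plain `++` and once with `Interlocked.Increment`. A minimal sketch (the class name and iteration count are arbitrary):

    using System;
    using System.Threading;

    class IncrementRace
    {
        static int _plain;        // incremented with ++ (read-modify-write, not atomic)
        static int _interlocked;  // incremented with Interlocked.Increment (atomic)

        static void Main()
        {
            const int iterations = 1000000;

            ThreadStart body = delegate
            {
                for (int i = 0; i < iterations; i++)
                {
                    _plain++;                                // three steps: read, add, write back
                    Interlocked.Increment(ref _interlocked); // single atomic read-modify-write
                }
            };

            var t1 = new Thread(body);
            var t2 = new Thread(body);
            t1.Start();
            t2.Start();
            t1.Join();
            t2.Join();

            // Both counters "should" reach 2,000,000; the plain one typically comes up
            // short because the two threads' read-modify-write cycles interleave.
            Console.WriteLine("plain:       " + _plain);
            Console.WriteLine("interlocked: " + _interlocked);
        }
    }

On a typical multi-core machine the plain counter falls short of 2,000,000, while the interlocked one always reaches it.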