MPgetlock_edx:                      # %edx = address of the lock word
1:
    movl    (%edx), %eax            # load current lock value
    movl    %eax, %ecx
    andl    $CPU_FIELD, %ecx        # extract the owner-CPU bits
    cmpl    _cpu_lockid, %ecx       # do we already own the lock?
    jne     2f
    incl    %eax                    # yes: bump the recursion count
    movl    %eax, (%edx)            # plain store is safe: only the owner writes
    ret
2:
    movl    $FREE_LOCK, %eax        # expected value: lock is free
    movl    _cpu_lockid, %ecx
    incl    %ecx                    # desired value: our CPU id, count = 1
    lock
    cmpxchg %ecx, (%edx)            # atomic CAS: claim the lock if still free
    jne     1b                      # lost the race: spin and retry
    GRAB_HWI
    ret
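(For context, the assembly above can be sketched in C with `<stdatomic.h>`. This is an illustrative reconstruction, not the kernel's actual code: the constants `CPU_FIELD` and `FREE_LOCK` and the function name `mp_getlock` are assumptions chosen to mirror the assembly's symbols, assuming the lock word packs an owner-CPU id in its upper bits and a recursion count in its low bits.)

```c
#include <stdatomic.h>

/* Hypothetical lock-word layout mirroring the assembly: upper bits
 * (CPU_FIELD) hold the owning CPU's id, low bits hold a recursion
 * count. FREE_LOCK is the value of an unowned lock. The concrete
 * constants here are illustrative, not the kernel's real ones. */
#define CPU_FIELD 0xffffff00u
#define FREE_LOCK 0x00000000u

static _Atomic unsigned mp_lock = FREE_LOCK;

/* cpu_lockid: this CPU's id already shifted into CPU_FIELD,
 * with the count bits zero (like the _cpu_lockid variable). */
static void mp_getlock(unsigned cpu_lockid)
{
    for (;;) {
        unsigned old = atomic_load(&mp_lock);
        if ((old & CPU_FIELD) == cpu_lockid) {
            /* We already own the lock: just bump the recursion
             * count. A plain store suffices, because no other CPU
             * modifies a lock word it does not own. */
            atomic_store(&mp_lock, old + 1);
            return;
        }
        /* Owned by another CPU or free: try to claim it with one
         * atomic compare-and-swap, the `lock cmpxchg` in the asm. */
        unsigned expected = FREE_LOCK;
        if (atomic_compare_exchange_strong(&mp_lock, &expected,
                                           cpu_lockid + 1))
            return;         /* acquired: owner = us, count = 1 */
        /* CAS failed: another CPU holds it; spin and retry. */
    }
}
```

Because every CPU must win the same single CAS on one global word before entering the kernel's protected region, this one word serves as a Big Kernel Lock; the owner-id check on top makes it recursive.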
  1. Why can the function above implement a Big Kernel Lock (BKL)?
  2. Isn't `cmpxchg` already atomic? Why is the `lock` prefix needed before it? (This part is a duplicate of "Is x86 CMPXCHG atomic, if so why does it need LOCK?")
  3. Why not `movl (%edx), %ecx` directly?
    The first part of the function checks whether this CPU already holds the lock. If it does, the counter is incremented. If not, a CAS (`lock cmpxchg`) is used to try to acquire the lock. `cmpxchg` is not atomic without a `lock` prefix (you were probably thinking of `xchg`), and `mov` is entirely different from `cmpxchg` (see Compare-And-Swap). – Margaret Bloom May 11 '21 at 07:56
  • Part 2 is a duplicate of [Is x86 CMPXCHG atomic, if so why does it need LOCK?](https://stackoverflow.com/a/44273130), but we can't close multi-part questions as duplicates of only one part. (That's one reason Stack Overflow discourages multi-part questions.) – Peter Cordes May 11 '21 at 11:04

0 Answers