FWIW, in the WWDC 2016 video Concurrent Programming with GCD, they point out that, while you might have historically used pthread_mutex_t, they now discourage its use. They show how you can use traditional locks (recommending os_unfair_lock as a more performant solution, one that doesn’t suffer the power problems of the old, deprecated spin lock), but if you want to do that, they advise that you derive an Objective-C base class with the struct-based lock as an ivar. They warn that you can’t safely use the old C struct-based locks directly from Swift.
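If you do want to use os_unfair_lock from Swift, one common workaround (different from the Objective-C base class approach the video suggests; the UnfairLock wrapper name below is just illustrative) is to heap-allocate the lock so it has a stable address. A minimal sketch:

import os

// A sketch: heap-allocate the C struct so the lock has a stable address,
// because you can’t safely use os_unfair_lock as a plain Swift property.
final class UnfairLock {
    private let pointer: UnsafeMutablePointer<os_unfair_lock>

    init() {
        pointer = UnsafeMutablePointer<os_unfair_lock>.allocate(capacity: 1)
        pointer.initialize(to: os_unfair_lock())
    }

    deinit {
        pointer.deinitialize(count: 1)
        pointer.deallocate()
    }

    func synchronized<T>(_ closure: () throws -> T) rethrows -> T {
        os_unfair_lock_lock(pointer)
        defer { os_unfair_lock_unlock(pointer) }
        return try closure()
    }
}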
But there’s no need for pthread_mutex_t locks any more. I personally find the simple NSLock to be quite performant, so I use an extension (based upon a pattern Apple used in their “Advanced Operations” example):
extension NSLocking {
    func synchronized<T>(_ closure: () throws -> T) rethrows -> T {
        lock()
        defer { unlock() }
        return try closure()
    }
}
Then I can define a lock and use this method:
class Synchronized<T> {
    private var _value: T
    private var lock = NSLock()

    var value: T {
        get { lock.synchronized { _value } }
        set { lock.synchronized { _value = newValue } }
    }

    init(value: T) {
        _value = value
    }
}
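Usage is then as simple as (the string value here is just for illustration):

let name = Synchronized(value: "initial")

name.value = "updated"   // synchronized write
print(name.value)        // synchronized read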
That video (being about GCD) shows how you might do it with GCD queues. A serial queue is the easiest solution (a sketch of that follows the next example), but you can also use a reader-writer pattern on a concurrent queue, with the reader using sync, but the writer using async with a barrier:
class Synchronized<T> {
    private var _value: T
    private var queue = DispatchQueue(label: Bundle.main.bundleIdentifier! + ".synchronizer", attributes: .concurrent)

    var value: T {
        get { queue.sync { _value } }
        set { queue.async(flags: .barrier) { self._value = newValue } }
    }

    init(value: T) {
        _value = value
    }
}
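For comparison, the serial-queue approach mentioned above might look like this sketch (simpler, because every access, read or write, just uses sync on a serial queue):

class Synchronized<T> {
    private var _value: T
    private let queue = DispatchQueue(label: Bundle.main.bundleIdentifier! + ".synchronizer")  // serial by default

    var value: T {
        get { queue.sync { _value } }
        set { queue.sync { _value = newValue } }
    }

    init(value: T) {
        _value = value
    }
}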
I’d suggest benchmarking the various alternatives for your use case and see which is best for you.
Please note that I’m synchronizing both reads and writes. Synchronizing only the writes would guard against simultaneous writes, but not against simultaneous reads and writes (where the read might therefore yield an invalid result).
Make sure to synchronize all interaction with the underlying object.
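For instance, a hypothetical variant that synchronized only the setter would still allow a read to race with an in-flight write:

// Insufficient (illustrative anti-pattern, not from the video or the code above):
// the getter is unsynchronized, so a read can overlap a concurrent write.
class WriteOnlySynchronized<T> {
    private var _value: T
    private let lock = NSLock()

    var value: T {
        get { _value }                                   // unsynchronized read
        set { lock.synchronized { _value = newValue } }  // synchronized write
    }

    init(value: T) {
        _value = value
    }
}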
All of this having been said, doing this at the accessor level (like you’ve done and like I’ve shown above) is almost always insufficient to achieve thread safety. Invariably, synchronization must be at a higher level of abstraction. Consider this trivial example:
let counter = Synchronized(value: 0)

DispatchQueue.concurrentPerform(iterations: 1_000_000) { _ in
    counter.value += 1
}
This will almost certainly not leave counter at 1,000,000. That’s because the synchronization is at the wrong level. See Swift Tip: Atomic Variables for a discussion of what’s wrong.
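The problem is that counter.value += 1 is not a single atomic operation: it is a synchronized read, an unsynchronized increment, and a separate synchronized write, and another thread can interleave between them. Conceptually it expands to something like:

// Illustrative expansion of counter.value += 1:
let current = counter.value    // synchronized read
let incremented = current + 1  // another thread may update counter.value here
counter.value = incremented    // synchronized write clobbers any interleaved update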
You can fix this by adding a synchronized method to wrap whatever needs synchronization (in this case, the retrieval of the value, the incrementing of it, and the storing of the result):
class Synchronized<T> {
    private var _value: T
    private var lock = NSLock()

    var value: T {
        get { lock.synchronized { _value } }
        set { lock.synchronized { _value = newValue } }
    }

    func synchronized(block: (inout T) throws -> Void) rethrows {
        try lock.synchronized { try block(&_value) }
    }

    init(value: T) {
        _value = value
    }
}
And then:
let counter = Synchronized(value: 0)

DispatchQueue.concurrentPerform(iterations: 1_000_000) { _ in
    counter.synchronized { $0 += 1 }
}
Now, with the whole operation synchronized, we get the correct result. The example is trivial, but it illustrates why burying the synchronization in the accessors is frequently insufficient.