I have a unit test set up to prove that concurrently performing multiple heavy tasks is faster than performing them serially.
Now... before everyone in here loses their minds over the fact that the above statement is not always correct because multithreading comes with many uncertainties, let me explain.
I know from reading the Apple documentation that you cannot guarantee you get multiple threads when asking for them. The OS (iOS) will assign threads however it sees fit. If the device only has one core, for example, it will assign one core, and serial will be slightly faster, because the setup code for the concurrent operation takes some extra time while delivering no performance improvement on a single core.
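(Side note: one way to confirm how many cores the environment actually exposes to the test would be to print ProcessInfo's core count at the start of a run; a minimal sketch of that check:)

import Foundation

// Sketch: query how much hardware parallelism is actually available
// to the process (the logical cores exposed to the simulator).
print("Active processor count: \(ProcessInfo.processInfo.activeProcessorCount)")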
However: that difference should only be slight. In my POC setup, though, the difference is massive; the concurrent version consistently takes around 50% longer than the serial one, or worse.
If serial completes in 6 seconds, concurrent will complete in 9 seconds.
This trend continues even with heavier loads: if serial completes in 125 seconds, concurrent will complete in 215 seconds. And this happens not just once, but consistently, every single time.
I wonder if I made a mistake in creating this POC, and if so, how should I prove that concurrently performing multiple heavy tasks is indeed faster than serial?
My POC, in Swift unit tests:
func performHeavyTask(_ completion: (() -> Void)?) {
    var counter = 0
    while counter < 50000 {
        print(counter)
        counter = counter.advanced(by: 1)
    }
    completion?()
}
// MARK: - Serial
func testSerial() {
    let start = DispatchTime.now()
    let _ = DispatchQueue.global(qos: .userInitiated)
    let mainDPG = DispatchGroup()
    mainDPG.enter()
    DispatchQueue.global(qos: .userInitiated).async { [weak self] in
        guard let self = self else { return }
        for _ in 0...10 {
            self.performHeavyTask(nil)
        }
        mainDPG.leave()
    }
    mainDPG.wait()
    let end = DispatchTime.now()
    let nanoTime = end.uptimeNanoseconds - start.uptimeNanoseconds // <<<<< Difference in nanoseconds (UInt64)
    print("NanoTime: \(nanoTime / 1_000_000_000)")
}
// MARK: - Concurrent
func testConcurrent() {
    let start = DispatchTime.now()
    let _ = DispatchQueue.global(qos: .userInitiated)
    let mainDPG = DispatchGroup()
    mainDPG.enter()
    DispatchQueue.global(qos: .userInitiated).async {
        let dispatchGroup = DispatchGroup()
        let _ = DispatchQueue.global(qos: .userInitiated)
        DispatchQueue.concurrentPerform(iterations: 10) { index in
            dispatchGroup.enter()
            self.performHeavyTask({
                dispatchGroup.leave()
            })
        }
        dispatchGroup.wait()
        mainDPG.leave()
    }
    mainDPG.wait()
    let end = DispatchTime.now()
    let nanoTime = end.uptimeNanoseconds - start.uptimeNanoseconds // <<<<< Difference in nanoseconds (UInt64)
    print("NanoTime: \(nanoTime / 1_000_000_000)")
}
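(For what it's worth, I know XCTest also has a built-in measure { } block that runs its body several times and reports statistics; if that turns out to be the more reliable tool here, a rough sketch of the serial case using it, with an illustrative test name, would be:)

// Sketch only: XCTest's built-in measurement API applied to the same
// serial workload. `measure` runs the block multiple times and reports
// the average and standard deviation in the test log.
func testSerialMeasured() {
    measure {
        for _ in 0...10 {
            performHeavyTask(nil)
        }
    }
}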
Details:
OS: macOS High Sierra
Model Name: MacBook Pro
Model Identifier: MacBookPro11,4
Processor Name: Intel Core i7
Processor Speed: 2.2 GHz
Number of Processors: 1
Total Number of Cores: 4
Both tests were run on the iPhone XS Max simulator, and both were run straight after a full reboot of the Mac (to avoid the Mac being busy with applications other than this unit test, which would blur the results).
Also, both unit tests are wrapped in an async DispatchWorkItem so that the main (UI) queue is not blocked. This prevents the serial test case from gaining an advantage by consuming the main queue while the concurrent test case runs on a background queue.
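(Roughly, that wrapping looks like the sketch below; the test name, expectation description, and timeout are illustrative placeholders, not my exact code.)

// Sketch of the wrapping described above: run the timing off the main
// queue and keep XCTest alive until it finishes.
func testSerialOffMainQueue() {
    let done = expectation(description: "serial timing finished")
    let work = DispatchWorkItem {
        // ... the body of testSerial() goes here ...
        done.fulfill()
    }
    DispatchQueue.global(qos: .userInitiated).async(execute: work)
    waitForExpectations(timeout: 300)
}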
I'll also accept an answer that shows a POC reliably testing this. It does not have to show that concurrent is faster than serial every time (see the explanation above as to why not), but it should at least show it some of the time.