I have a std::unordered_map which is subjected to a very read-heavy workload from multiple threads. I could use a std::mutex for synchronization, but since concurrent reads should be fine, I wanted to use a boost::shared_mutex instead. To test the performance improvement, I first pre-populate the map with values and then have several threads run a read test:
for (int i = 0; i < iters; ++i) map.count(random_uint(0, key_max));
I run this for my coarse-lock implementation, where count is protected by a std::lock_guard<std::mutex>, and for my shared-lock implementation, where it is protected by a boost::shared_lock<boost::shared_mutex>.
On my Arch Linux x86_64 system with GCC 6.1.1, the boost::shared_lock version is always slower! On my friend's Windows 10 system with MSVC 2013, the boost::shared_lock version is always faster!
The complete, compilable code is on github: https://github.com/silverhammermba/sanity
Edit
This seems to be a platform-specific issue. See above. I would really appreciate it if anyone else could build and run this code and report whether they saw a positive result (shared_lock is faster) or a negative one (coarse mutex is faster), and what platform they're using.