Okay, I couldn't find any authoritative source, so I figured I'd give it a go myself.
#include <pthread.h>
#include <sched.h>
#include <atomic>
#include <cstdint>
#include <iostream>

// 128-byte alignment keeps each global on its own pair of cache lines,
// so the only line that can ping-pong between cores is the one holding
// the variable under test.
alignas(128) static uint64_t data[SIZE];
alignas(128) static std::atomic<unsigned> shared;
#ifdef EMPTY_PRODUCER
alignas(128) static std::atomic<unsigned> unshared;
#endif
alignas(128) static std::atomic<bool> stop_producer;
alignas(128) static std::atomic<uint64_t> elapsed;
// Read the time-stamp counter. Not serialized, but good enough when
// measuring runs of billions of cycles.
static inline uint64_t rdtsc()
{
    unsigned int l, h;
    __asm__ __volatile__ (
        "rdtsc"
        : "=a" (l), "=d" (h)
    );
    return ((uint64_t)h << 32) | l;
}
static void * consume(void *)
{
    uint64_t value = 0;
    uint64_t start = rdtsc();
    for (unsigned n = 0; n < LOOPS; ++n) {
        for (unsigned idx = 0; idx < SIZE; ++idx) {
            value += data[idx] + shared.load(std::memory_order_relaxed);
        }
    }
    elapsed = rdtsc() - start;
    // Return the accumulated value so the compiler cannot drop the loads.
    return reinterpret_cast<void*>(value);
}
static void * produce(void *)
{
    do {
#ifdef EMPTY_PRODUCER
        unshared.store(0, std::memory_order_relaxed);
#else
        shared.store(0, std::memory_order_relaxed);
#endif
    } while (!stop_producer);
    return nullptr;
}
int main()
{
    pthread_t consumerId, producerId;
    pthread_attr_t consumerAttrs, producerAttrs;
    cpu_set_t cpuset;

    for (unsigned idx = 0; idx < SIZE; ++idx) { data[idx] = 1; }
    shared = 0;
    stop_producer = false;

    // Pin each thread to its own CPU via the thread attributes.
    pthread_attr_init(&consumerAttrs);
    CPU_ZERO(&cpuset);
    CPU_SET(CONSUMER_CPU, &cpuset);
    pthread_attr_setaffinity_np(&consumerAttrs, sizeof(cpuset), &cpuset);

    pthread_attr_init(&producerAttrs);
    CPU_ZERO(&cpuset);
    CPU_SET(PRODUCER_CPU, &cpuset);
    pthread_attr_setaffinity_np(&producerAttrs, sizeof(cpuset), &cpuset);

    pthread_create(&consumerId, &consumerAttrs, consume, NULL);
    pthread_create(&producerId, &producerAttrs, produce, NULL);
    pthread_attr_destroy(&consumerAttrs);
    pthread_attr_destroy(&producerAttrs);

    pthread_join(consumerId, NULL);
    stop_producer = true;
    pthread_join(producerId, NULL);

    std::cout << "Elapsed cycles: " << elapsed << std::endl;
    return 0;
}
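An aside before the build instructions: the alignas(128) on every global matters. Without it, two of these variables could land on the same cache line and interfere even in the supposedly independent configurations (false sharing). A minimal sketch of what the alignment buys (my own illustration, not part of the benchmark):

#include <atomic>
#include <cstdint>
#include <iostream>

// Without explicit alignment, two atomics laid out back to back will
// usually share a 64-byte cache line, so a core writing 'a' keeps
// invalidating the line another core reads 'b' from.
struct Unpadded {
    std::atomic<uint64_t> a;
    std::atomic<uint64_t> b;
};

// 128-byte alignment gives each member its own line; the extra margin
// beyond 64 bytes is commonly used to also defeat the adjacent-line
// prefetcher on Intel cores.
struct Padded {
    alignas(128) std::atomic<uint64_t> a;
    alignas(128) std::atomic<uint64_t> b;
};

int main()
{
    Unpadded u;
    Padded p;
    std::cout << "unpadded distance: "
              << reinterpret_cast<char*>(&u.b) - reinterpret_cast<char*>(&u.a)
              << " bytes\n"
              << "padded distance:   "
              << reinterpret_cast<char*>(&p.b) - reinterpret_cast<char*>(&p.a)
              << " bytes\n";
    return 0;
}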
Compile with the following command, replacing the defines as needed:
gcc -std=c++11 -DCONSUMER_CPU=3 -DPRODUCER_CPU=0 -DSIZE=131072 -DLOOPS=8000 timing.cxx -lstdc++ -lpthread -O2 -o timing
Where:
- CONSUMER_CPU is the number of the cpu to run the consumer thread on.
- PRODUCER_CPU is the number of the cpu to run the producer thread on.
- SIZE is the size of the inner loop (matters for cache).
- LOOPS is the number of times the inner loop is repeated, so SIZE * LOOPS is the total number of iterations.
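To build the empty-producer variant, add -DEMPTY_PRODUCER to the same command, which selects the branch that writes to unshared instead of shared:

gcc -std=c++11 -DEMPTY_PRODUCER -DCONSUMER_CPU=3 -DPRODUCER_CPU=0 -DSIZE=131072 -DLOOPS=8000 timing.cxx -lstdc++ -lpthread -O2 -o timing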
Here are the generated loops:
Consumer thread:
400cc8: ba 80 24 60 00 mov $0x602480,%edx
400ccd: 0f 1f 00 nopl (%rax)
400cd0: 8b 05 2a 17 20 00 mov 0x20172a(%rip),%eax # 602400 <shared>
400cd6: 48 83 c2 08 add $0x8,%rdx
400cda: 48 03 42 f8 add -0x8(%rdx),%rax
400cde: 48 01 c1 add %rax,%rcx
400ce1: 48 81 fa 80 24 70 00 cmp $0x702480,%rdx
400ce8: 75 e6 jne 400cd0 <_ZL7consumePv+0x20>
400cea: 83 ee 01 sub $0x1,%esi
400ced: 75 d9 jne 400cc8 <_ZL7consumePv+0x18>
Producer thread, with empty loop (no writing to shared):
400c90: c7 05 e6 16 20 00 00 movl $0x0,0x2016e6(%rip) # 602380 <unshared>
400c97: 00 00 00
400c9a: 0f b6 05 5f 16 20 00 movzbl 0x20165f(%rip),%eax # 602300 <stop_producer>
400ca1: 84 c0 test %al,%al
400ca3: 74 eb je 400c90 <_ZL7producePv>
Producer thread, writing to shared:
400c90: c7 05 66 17 20 00 00 movl $0x0,0x201766(%rip) # 602400 <shared>
400c97: 00 00 00
400c9a: 0f b6 05 5f 16 20 00 movzbl 0x20165f(%rip),%eax # 602300 <stop_producer>
400ca1: 84 c0 test %al,%al
400ca3: 74 eb je 400c90 <_ZL7producePv>
The program counts the number of CPU cycles consumed, on the consumer's core, to complete the whole loop. We compare the first producer, which does nothing but burn CPU cycles, to the second producer, which disrupts the consumer by repeatedly writing to shared.
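Since every configuration performs SIZE * LOOPS ≈ 1G loads of shared in total, the raw counts below can also be read as roughly cycles per load. A trivial normalization helper (my own addition, not part of the benchmark; paste the program's output into elapsed):

#include <cstdint>
#include <iostream>

int main()
{
    const uint64_t total_ops = 8000ull * 131072ull; // LOOPS * SIZE from the compile line, ~1G
    const uint64_t elapsed   = 2110000000ull;       // paste the "Elapsed cycles" output here
    // E.g. the 2.11G baseline below comes out to roughly 2 cycles per load.
    std::cout << "cycles per load: "
              << static_cast<double>(elapsed) / total_ops << std::endl;
    return 0;
}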
My system has an i5-4210U; that is, 2 cores with 2 threads per core. They are exposed by the kernel as Core#1 → cpu0, cpu2 and Core#2 → cpu1, cpu3.
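To verify that mapping on another machine, lscpu -e reports it; here is a small C++ equivalent (my own sketch, assuming the usual Linux sysfs layout is present):

#include <fstream>
#include <iostream>
#include <string>

int main()
{
    // Each cpuN directory lists the hyperthread siblings sharing its core.
    for (int cpu = 0; ; ++cpu) {
        std::ifstream f("/sys/devices/system/cpu/cpu" + std::to_string(cpu)
                        + "/topology/thread_siblings_list");
        if (!f) break;
        std::string siblings;
        std::getline(f, siblings);
        std::cout << "cpu" << cpu << " siblings: " << siblings << std::endl;
    }
    return 0;
}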
Result without starting the producer at all:
CONSUMER PRODUCER cycles for SIZE=1M cycles for SIZE=128k
3 n/a 2.11G 1.80G
Results with the empty producer, for 1G operations in total (either 1000*1M or 8000*128k):
CONSUMER PRODUCER cycles for SIZE=1M cycles for SIZE=128k
3 3 3.20G 3.26G # mono
3 2 2.10G 1.80G # other core
3 1 4.18G 3.24G # same core, HT
As expected, since both threads are cpu hogs and each gets a fair share of the CPU, the cycle-burning producer slows the consumer down by about half. That's just cpu contention.
With the producer on cpu#2, as there is no interaction, the consumer runs with no impact from the producer on the other core.
With the producer on cpu#1, we see hyperthreading at work: both threads compete for the execution resources of the same physical core.
Results with the disruptive producer:
CONSUMER PRODUCER cycles for SIZE=1M cycles for SIZE=128k
3 3 4.26G 3.24G # mono
3 2 22.1G 19.2G # other core
3 1 36.9G 37.1G # same core, HT
When we schedule both threads on the same hyperthread of the same core, there is no impact. Expected again, as the producer's writes stay local to that core, incurring no synchronization cost.
I cannot really explain why I get much worse performance for hyperthreading than for two cores. Advice welcome.