
I wrote some code to pass a 64-bit value from a writer thread to reader threads through a global variable (locking is omitted for simplicity):

uint64_t g_x;

void writer_thread(uint64_t new_value)
{
    g_x = new_value;           /* publish the new 64-bit value */
}

uint64_t reader_thread(void)
{
    uint64_t curr_value = g_x; /* read the current 64-bit value */
    return curr_value;
}
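(As an aside, a sketch of how the same exchange could be made race-free with C11 atomics; the function names mirror the ones above but this is an illustrative variant, not the original code. On x86-64 an aligned relaxed atomic 64-bit access compiles to a plain MOV, so it costs nothing extra.)

```c
#include <stdatomic.h>
#include <stdint.h>

/* Illustrative sketch: _Atomic makes the concurrent 64-bit store/load
 * well-defined under the C11 memory model instead of a data race. */
_Atomic uint64_t g_x;

void writer_thread(uint64_t new_value)
{
    atomic_store_explicit(&g_x, new_value, memory_order_release);
}

uint64_t reader_thread(void)
{
    return atomic_load_explicit(&g_x, memory_order_acquire);
}
```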

Now I want to split the writer and reader threads into separate processes, without adding latency, extra system calls, etc. on the fast path.

How can I do this in Linux? Is it possible? How can I get the g_x variable into the address space of both processes?

If I use shmget/shmat, mmap (or similar), can I achieve the same performance?

Thanks in advance.

avatli
    Yes, shared memory has no overhead after you have set it up. – Jester Apr 03 '19 at 14:16
  • how do you want to notify the other process? Remember that sharing memory across processes running on different cores of the same CPU (or on multiple CPUs, if present) has a lot of latency, as memory has to be kept in sync across the cores. Think about the several levels of caches and other hardware-dependent issues. – Klaus Apr 03 '19 at 14:22
  • @Klaus, I did not mention it for the sake of simplicity, but I'm using a seqlock and there are a few variables; the question is about the performance overhead. – avatli Apr 03 '19 at 14:26
  • @Jester, how can I make sure this is so? I don't want to do trial and error. Do you know of any document, paper, etc.? – avatli Apr 03 '19 at 14:30
  • @AliVolkanATLI: I can't catch your point. Using memory across different cores increases latency / drops performance. So the answer to your question is simply "No" if different cores are involved. Maybe it is not relevant because the locks take much more time, but that is not your question. If the latency of memory access is relevant, you have to deal with it. – Klaus Apr 03 '19 at 14:32
  • @Klaus, in both cases, multi-thread or multi-process, the writer and readers are pinned to different CPUs. – avatli Apr 03 '19 at 14:37
  • It's unclear what you want documented. Shared memory just means the same page is mapped into multiple process address spaces. It does not incur any additional overhead, especially since the CPU has no notion of processes. It just sees a memory mapping and that's that. – Jester Apr 03 '19 at 14:50
  • @Klaus, after the shared region has been mapped, why would you expect any difference in performance between accesses from threads in different processes vs. accesses from threads in the same process? Threads in the same process can also run on different CPUs. – Solomon Slow Apr 03 '19 at 16:16

0 Answers