4

I'm using C++11 and the built-in threading class std::thread. Using std::atomic or std::mutex makes it easy to synchronise data, but I'm wondering if it is really necessary for "non sensitive" tasks - while maintaining a bug-free application. Let's say there's a class like

```cpp
class FPS
{
  private:
    int rate;
  public:
    void change(const int i)
    { rate = i; }
    int read() const
    { return rate; }
};
```

storing the frame rate of a camera. In the application, there is one thread for data acquisition (frame grabbing, etc.) that reads the frame rate, and another thread handling a GUI that displays it. The display is "non crucial" in this case, meaning that it is allowed to lag behind the actual rate at times. I can of course simply use an atomic to make it safe, but I'm still wondering whether that is actually a must to guarantee bug-free behaviour of the program, assuming the application runs on a multi-core CPU.
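For concreteness, the safe variant I have in mind simply wraps the member in `std::atomic<int>` (a sketch; the default memory ordering is sequentially consistent):

```cpp
#include <atomic>

class FPS
{
  private:
    std::atomic<int> rate{0};
  public:
    void change(const int i)
    { rate.store(i); }      // atomic store, seq_cst by default
    int read() const
    { return rate.load(); } // atomic load, never torn
};
```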

Kerrek SB
  • 464,522
  • 92
  • 875
  • 1,084
Clemens
  • 313
  • 4
  • 15
  • 2
    "My program works fine, it only has a few minor, benign race conditions..." – Kerrek SB Apr 09 '15 at 14:46
  • 1
    It is in fact possible to reason about systems where data can be stale. You can design bug-free algorithms in the presence of data races. But logic quickly breaks down if single variable updates are no longer atomic. For such designs, C++ offers `std::atomic` with `memory_order_relaxed`. – MSalters Apr 09 '15 at 17:02
  • 1
    The display may not be lagging, it could report a completely different value than both the original and desired values. Undefined Behaviour is Undefined Behaviour. Theoretically anything can happen (including formatting your hard drive, or causing your monitor to burst into flames). – Andre Kostur Apr 09 '15 at 19:17
  • @MSalters Maybe I got you wrong, but to my knowledge the relaxed order **is** atomic, but it doesn't synchronize any side effects. – Clemens Apr 10 '15 at 05:35
  • 1
    @Clemens: That's indeed my point - algorithms that merely require atomic updates can be implemented in C++ that way. – MSalters Apr 10 '15 at 07:32

1 Answer

8

The C++ threading model is incredibly permissive about what code containing a data race may do at run time. Your particular implementation of C++ may not be nearly as insane as the standard allows it to be.

The problem with relying on that is that if the behaviour is not documented and understood, the next release of your compiler could make different assumptions, remain fully C++ compliant, and break your code.

As an example, if you communicate through an unsynchronized int, the compiler may notice that the data cannot be modified within the current thread. If it can prove that, it is free to keep that int in a register and ignore any updates to it from another thread.
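A hypothetical sketch of that optimization (the function and variable names here are made up for illustration):

```cpp
#include <atomic>

// With a plain int, a compiler that proves `done` is never written in
// this thread may hoist the load out of the loop and spin forever.
bool poll_plain(const int& done)
{
    while (done == 0) { }   // may compile to: if (done == 0) for (;;) {}
    return true;
}

// With std::atomic<int>, every iteration must perform a real load that
// can observe a store made by another thread.
bool poll_atomic(const std::atomic<int>& done)
{
    while (done.load() == 0) { }
    return true;
}
```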

What's more, one piece of code could read the value from the register while another reads it from memory, and the two could see distinct values. What's more, a single read of the variable in your source could be turned into two reads by the compiler, and those two reads could disagree: we are deep in the territory of undefined behavior.

So no, that isn't safe in general. Even if your tests don't detect any problems, threading problems that do not show up in your tests are extremely common: tests are almost never sufficient to demonstrate that your threading code is safe.

It may be safe on your particular compiler and compiler version and compiler flags.

http://en.cppreference.com/w/cpp/atomic/memory_order covers how you can do less than full atomics. Note that not all CPUs distinguish between these orderings, but there are architectures that treat every one of these cases differently. I find all of the memory order rules a bit baffling, as I'm not used to working on systems where they all exist, but if you really need performance you can consider them.
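For a counter like the one in the question, where no other data is published through the value, the weakest ordering is typically enough. A sketch of the question's class using `memory_order_relaxed` (atomicity is kept, but no ordering of surrounding writes is promised):

```cpp
#include <atomic>

class FPS
{
  private:
    std::atomic<int> rate{0};
  public:
    void change(const int i)
    { rate.store(i, std::memory_order_relaxed); }      // atomic update, no fence
    int read() const
    { return rate.load(std::memory_order_relaxed); }   // never tears, may be stale
};
```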

From my reading of a "performance cheat sheet", atomic operations are currently expensive, but not extremely expensive (cheaper than, say, following a pointer to a memory address not in your cache).

Yakk - Adam Nevraumont
  • 262,606
  • 27
  • 330
  • 524
  • However, declaring your rate as a `volatile int` will prevent the compiler from caching the value in a register. – Ken P Apr 09 '15 at 17:28
  • 1
    @KenP The meaning of `volatile` is so volatile between compilers that I would advise against using it. – Yakk - Adam Nevraumont Apr 09 '15 at 17:35
  • 4
    But not caching it in a register does not mean that the write will be visible across threads. Most recently discussed at http://stackoverflow.com/questions/4557979/when-to-use-volatile-with-multi-threading – Nevin Apr 09 '15 at 19:23