
I want to know what good practice is for using objects stored in shared memory. The options I have in mind are:

  1. Add volatile to every member function of the objects stored in shared memory.
  2. Copy the entire data from/to shared memory at every iteration.
  3. Access shared memory without volatile.

Let me explain the problem that I have:

I have two processes running on Linux on an FPGA. They communicate data via shared memory. Since they lock each other with a binary semaphore, only one process does its job at a time. The compiler is g++ 3.4.x. My current code is something like this:

struct MyTime
{
  int32 dayNumber;
  int32 milliSecOfDay;
  void convert(double* answer);
};
struct MyData
{
  double var1;
  MyTime time;
};
volatile MyData* ptr;
ptr = (volatile MyData*)shmat(shmid, NULL, 0);

double answer;
ptr->time.convert(&answer);  //< ERROR(*)

*: error: passing `const volatile TimeTTJ2000' as `this' argument of `bool TimeTTJ2000::get_Tu_UT1(double&, const int32&, const int32&) const' discards qualifiers

(The above code is just made up for explanation. The error message is from my real code, in which the size of MyData is much larger.)

To remove that error, it seems to me that I would have to define another member function like

void MyTime::convert(double* answer) volatile;

But it seems to me ridiculous that I have to add 'volatile' to all the functions in the libraries that are not necessarily mine.
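Just to make option 1 concrete, this is roughly what it would mean for every such type (a sketch only; the real library types are much larger, and int32 here stands in for our real 32-bit typedef):

typedef int int32;   // stand-in for the real 32-bit typedef

struct MyTime
{
  int32 dayNumber;
  int32 milliSecOfDay;
  void convert(double* answer);           // existing overload
  void convert(double* answer) volatile;  // extra overload so calls through
                                          // 'volatile MyData*' compile
};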

To avoid having 'volatile' everywhere, I think I can copy the entire data from shared memory to a local copy right after one process is unlocked, and write it back to shared memory right before the process is locked again. That way I am not bothered by volatile, but is this still a wise thing to do?
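A sketch of what I mean (sem_wait/sem_post and the handle 'sem' here just stand in for whatever the binary semaphore actually is, and memcpy needs <cstring>):

MyData local;                                         // plain local copy, no volatile

sem_wait(sem);                                        // wait until it is this process's turn
memcpy(&local, (const MyData*)ptr, sizeof(MyData));   // shared -> local (the cast drops volatile)

double answer;
local.time.convert(&answer);                          // library code only ever sees a normal object
// ... rest of the iteration works on 'local' ...

memcpy((MyData*)ptr, &local, sizeof(MyData));         // local -> shared, just before handing over
sem_post(sem);                                        // let the other process run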

Or can I access shared memory data without using volatile in the first place? That would make my life easier. (I have little experience with shared memory and volatile, and I'm not very certain when volatile is needed. Of course I know the basics, like the fact that volatile suppresses certain optimizations.)

user22097
  • I strongly suggest you use `boost::interprocess` for shared memory rather than going directly to a low level platform-specific shared memory API. – Paul R Sep 05 '13 at 06:36
  • I use an older compiler: `microblaze-uclinux-g++ (GCC) 3.4.1 ( PetaLinux 0.20 Build -rc1 050607 )`. The latest boost::interprocess requires GCC >= 4.1. Do you still recommend that I get an older version of boost::interprocess and use it in my program? – user22097 Sep 05 '13 at 22:43
  • 1
  • OK - if you have no choice but to use this very old compiler then you may have to stick to your original plan and go directly to the shared memory API - you might still want to look at the boost::interprocess source code though and see how they handle shared memory etc on Linux/POSIX - it's been thoroughly tested by now so their methods should be pretty reliable. – Paul R Sep 06 '13 at 06:49

1 Answer


But it seems to me ridiculous that I have to add 'volatile' to all the functions in the libraries that are not necessarily mine.

That is how the C++ standard says it should be done. You can cast away the const/volatile qualifiers, but by doing that you may introduce UB.

Or can I access shared memory data without using volatile in the first place?

Yes, you do not need volatile to access shared memory. And since you are locking the access with a semaphore, you do not need to copy data.
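For example, a minimal sketch of that, assuming a System V semaphore set semid is used as the binary lock (adapt the lock/unlock helpers to whatever primitive you actually use; error checking omitted):

#include <sys/shm.h>
#include <sys/sem.h>

static void lock(int semid)
{
  struct sembuf op = { 0, -1, 0 };   // P (wait) on semaphore 0
  semop(semid, &op, 1);
}

static void unlock(int semid)
{
  struct sembuf op = { 0, +1, 0 };   // V (signal) on semaphore 0
  semop(semid, &op, 1);
}

MyData* ptr = (MyData*)shmat(shmid, NULL, 0);   // note: no volatile

lock(semid);                       // the other process cannot touch the data now
double answer;
ptr->time.convert(&answer);        // compiles: 'this' is not volatile-qualified
unlock(semid);

The semop calls are opaque function calls (system calls), so the compiler cannot keep shared-memory values cached across them, and the kernel provides the necessary memory synchronization. That is why the plain pointer is enough, as long as every access happens inside the locked region.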

You would need volatile in the case where your FPGA hardware writes to some memory that is not this shared memory (for example, memory-mapped device registers).

BЈовић
  • Some of my data is unidirectional. That is, Process1 only writes a data member to a location in the shared memory, whereas Process2 only reads the same location. To my understanding (without much confidence), the read operations in Process2 could be stripped away by the compiler, and that is where volatile ensures proper read operations. Am I wrong? Do you still say accessing shared memory without volatile guarantees proper operation? – user22097 Sep 05 '13 at 22:51
  • @user22097 You are wrong. Accessing shared memory works fine without volatile. However, if you want one process to wait for the other to finish writing, it is better to create a semaphore. – BЈовић Sep 06 '13 at 06:07
  • 1
  • It's good to know I was wrong. But I wonder what the difference is between shared memory, which does not require volatile, and other types of memory that do. Could you give me some explanation, a reference, or keywords so I can study more? Thanks. – user22097 Sep 06 '13 at 14:15
  • 1
  • @user22097 [this](http://www.kcomputing.com/volatile.html) may explain it better. It is used to access registers and memory written by external HW devices. It has nothing to do with threads and IPC. See also [wiki](http://en.wikipedia.org/wiki/Volatile_variable): "the volatile keyword is only meant for use for hardware access" – BЈовић Sep 06 '13 at 15:08
  • @BЈовић I think accessing shared memory is considered hardware access, as shared memory is in main memory in the end (assuming swap is disabled). The article you linked mentions that without specifying volatile, accessing a non-volatile variable might not force a re-load from main memory. I wonder what the standard way to access shared memory without volatile is. I ran into UB when I didn't use volatile ([post](https://stackoverflow.com/questions/51168908/fail-to-read-through-shared-memory)). – HCSF Aug 06 '18 at 07:01