
I have a homework project that requires the creation of a STATIC library to provide mutually exclusive access to a couple of named pipes.

These pipes are used for communication between various clients using the library and a server.

Now, suppose I want to use pthread mutexes; how can I achieve that? How can the processes know which shared memory area the mutex is stored in? And who should request this memory area? The server can't, because the library itself is required to provide mutual exclusion.

Thanks to asveikau, I came up with this:

#include <fcntl.h>      /* O_CREAT */
#include <semaphore.h>
#include <stdio.h>
#include <unistd.h>

#define MUTEX 1   /* initial count of 1 makes the semaphore behave as a mutex */

int main() {

    /* With O_CREAT, sem_open takes a mode and an initial value; a named
       semaphore must not be initialised again with sem_init. */
    sem_t* mutex = sem_open("mutex", O_CREAT, 0644, MUTEX);

    fork(), fork(), fork();

    sem_wait(mutex);

    int i;
    for(i = 0; i < 10; i++)
        printf("process %d %d\n", getpid(), i), fflush(stdout);

    sem_post(mutex);
}

which, judging from the output, really seems to solve my problem.

Thank you to everyone.

Simone
  • How about posting the text of the homework and what you tried. – cnicutar Sep 06 '11 at 19:59
  • The text of the homework is a 10-page document written in Italian. – Simone Sep 06 '11 at 20:01
  • Very closely related to [How to use a file as a mutex in Linux and C](http://stackoverflow.com/questions/7324864/how-to-use-a-file-as-a-mutex-in-linux-and-c/7325044#7325044). It is not clear that static libraries as opposed to shared libraries are relevant to the problem. – Jonathan Leffler Sep 06 '11 at 20:01
  • I think it is clear what I need: I need synchronization between clients using the same library, but this library is required to be static. – Simone Sep 06 '11 at 20:01
  • The project requires the use of a static library! – Simone Sep 06 '11 at 20:03
  • Note that pthread mutexes are for mutual exclusion amongst the threads of a single process - not for cross-process mutual exclusion. – Jonathan Leffler Sep 06 '11 at 20:04
  • The choice between static and dynamic (shared) library is immaterial. The code can be the same; both will work if the underlying code works. – Jonathan Leffler Sep 06 '11 at 20:05
  • You can use named semaphores (`sem_open`) as a cross-process mutex. Some platforms don't support that though. – asveikau Sep 06 '11 at 20:06
  • @Banthar I thought about flock! In fact, I asked about that in another question, but everyone told me it is not possible to achieve synchronization through flock. – Simone Sep 06 '11 at 20:07
  • @Jonathan: pthread mutexes can be process-shared (see `pthread_mutexattr_setpshared`) and even *robust* (automatically unlocking, and informing the next thread to obtain the lock, when the process that held the lock dies unexpectedly). This makes them a very powerful inter-process synchronization tool! See the sketch after these comments. – R.. GitHub STOP HELPING ICE Sep 07 '11 at 01:08
  • @R..: one lives; one learns. The residual problem is finding a system that supports those functions, but Linux is one such system (MacOS X is not). Thanks for the information. – Jonathan Leffler Sep 07 '11 at 02:11
  • According to the man page of [sem_overview](https://linux.die.net/man/7/sem_overview), a named semaphore is persistent in the system unless you call sem_unlink(...). Perhaps you should add sem_unlink("mutex") at the end of your program. – Jing Qiu May 18 '18 at 22:34
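
A minimal sketch of the process-shared (and robust) pthread mutex approach R.. describes above. It assumes Linux/glibc; the shared-memory name "/mylib_mutex" and the function name are placeholders rather than anything required by the assignment, and error handling and the one-time-initialisation problem are only noted in comments.

#include <fcntl.h>
#include <pthread.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Map a named shared-memory object and treat its contents as a mutex.
   Every process that calls this with the same name gets the same mutex. */
static pthread_mutex_t *get_shared_mutex(void) {
    int fd = shm_open("/mylib_mutex", O_CREAT | O_RDWR, 0644);
    ftruncate(fd, sizeof(pthread_mutex_t));

    pthread_mutex_t *m = mmap(NULL, sizeof *m, PROT_READ | PROT_WRITE,
                              MAP_SHARED, fd, 0);

    /* Initialise the mutex as process-shared and robust. In a real
       library, only one process may run this initialisation, e.g. the
       one whose shm_open(O_CREAT | O_EXCL) call succeeds. */
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
    pthread_mutexattr_setrobust(&attr, PTHREAD_MUTEX_ROBUST);
    pthread_mutex_init(m, &attr);
    pthread_mutexattr_destroy(&attr);

    return m;
}

Processes then lock and unlock with pthread_mutex_lock(m) and pthread_mutex_unlock(m); the well-known shared-memory name is how they all find the same mutex. Compile with -pthread (and -lrt for shm_open on older glibc).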

1 Answer


I put this down as a comment, but I think it's worth an answer.

As others state, pthread mutexes are not cross-process. What you need is a "named mutex". You can use sem_open to create a cross-process semaphore, and give it an initial count of 1. In that case sem_wait becomes "mutex lock" and sem_post becomes "mutex unlock".
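
For instance, the static library could hide the named semaphore behind a couple of helper functions, so every client process ends up opening the same semaphore by name. A minimal sketch; the semaphore name "/pipe_lock" and the function names are placeholders, not part of the question:

#include <fcntl.h>
#include <semaphore.h>

static sem_t *pipe_mutex;

/* Call once per process that uses the library. sem_open with O_CREAT and
   an initial count of 1 either creates the semaphore or opens the one an
   earlier process already created. */
int lib_lock_init(void) {
    pipe_mutex = sem_open("/pipe_lock", O_CREAT, 0644, 1);
    return pipe_mutex == SEM_FAILED ? -1 : 0;
}

void lib_lock(void)   { sem_wait(pipe_mutex); }   /* "mutex lock"   */
void lib_unlock(void) { sem_post(pipe_mutex); }   /* "mutex unlock" */

Clients would call lib_lock_init() once and then bracket their accesses to the pipes with lib_lock()/lib_unlock().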

Note that sem_open, while part of POSIX, is not universally supported. I believe it works on Linux and Mac OS X. Probably Solaris if you care about that (these days you probably don't). I know on OpenBSD it always fails with ENOSYS. YMMV.

asveikau
  • OK, we're on the right track. But my problem is: how can different processes know which memory area the semaphore is stored in? – Simone Sep 06 '11 at 20:12