
I'm implementing a server program with clients. The communication works through shared memory, and I'm using semaphores to control access to the common resource. If a client is new to the server, the server generates an id for it. The client saves its id and sends this id in further requests. I'm using the following code for communicating between server and a client. This solution works only for one server and one client.

server:

#include <semaphore.h>

sem_t* server = sem_open(SEM_1, O_CREAT | O_EXCL, 0600, 1);
sem_t* client = sem_open(SEM_2, O_CREAT | O_EXCL, 0600, 0);
//other stuff
while (!want_quit)
{
  sem_wait(client);
  //get id of client from shared memory or generate id for client
  //get the command from the client via shared memory
  //process command
  //write result back to shared memory
  sem_post(server);
}

client:

#include <semaphore.h>

sem_t* s1 = sem_open(SEM_1, 0);
sem_t* s2 = sem_open(SEM_2, 0);
do
{
   //wait for the server
   sem_wait(s1);
   //get result from last command from shared memory
   //send new request to server by writing command into shared memory
   sem_post(s2);
} while (command from shm != CLOSE);

The server should handle more than one client. I thought I could solve this with a third semaphore, but I'm running into deadlocks, or a client processes the result of another client.

My solution with a third semaphore would look like the following:

server:

sem_wait(clients); sem_wait(client); sem_post(server);

client:

sem_wait(s1); sem_post(clients); sem_post(server);

How can I solve this challenge?

Briefkasten

1 Answer


Your usage of semaphores is not quite right. Semaphores are designed to protect one or more shared resources, usually with the count of the semaphore representing the number of resources available. For the shared memory segment, you only have a single resource (the memory block), so you should use a single semaphore with a count of 1 to protect it.

This single semaphore coordinates all the clients and the server. A client acquires the semaphore, writes its command, and then releases the semaphore. The server acquires this semaphore, reads the command and does whatever processing it needs, writes the result back into shared memory, and then releases the semaphore.

However, as you've discovered, this doesn't coordinate each of the clients. There's nothing to prevent a client from reading the response of another client. So you could use another semaphore here, which can be thought of as protecting the "communication channel" with the server. Again, this is a single resource, so it should have a semaphore with a count of 1.

So your full design would use 2 semaphores, and look something like:

  1. Start with a shared memory semaphore, with a count of 0, and a channel semaphore with a count of 1.
  2. Server waits for the shared memory semaphore.
  3. A client acquires the channel semaphore, decrementing to 0. This blocks all other clients.
  4. The same client then writes to the shared memory segment and increments that semaphore, unblocking the server.
  5. Client and server communicate as needed, acquiring and releasing the memory semaphore however you want.
  6. When the client is done, it releases the channel semaphore, unblocking another client to communicate with the server.

Note that the server never acquires the communication channel semaphore. That's purely to coordinate the 2 or more clients.

I'd also like to point out that this solution is pretty messy. There are lots of moving parts, and lots of places for potential deadlock. This is a huge part of the reason that people use things like pipes, sockets, and message queues for IPC. You don't need to worry about locks, because the coordination is baked into the design of the communication channel (reads/writes block by default), and each client has a separate communication channel with the server. If you're worried about performance, you should check out this SO answer that shows that the various IPC mechanisms are all about the same speed on Linux. You may see different results on another type of system.

bnaecker