
I want to implement a one-producer, multiple-consumer model with shared memory on Unix.
Producer: puts a data frame (~char[1024]) into a shared memory segment
Consumers: memcpy the data into their own private memory and do some processing

Some relevant info:

  1. It is okay for a consumer to miss some data frames
  2. Consumers are independent, e.g. it's okay if one consumer only gets frames 1, 2, 4 and another gets 2, 3, 5
  3. About 10 consumers will be running at the same time
  4. The producer can generate data faster than the consumers can process it
  5. A slow/zombie consumer should not slow down the whole system
  6. A consumer will skip the memcpy if it sees the same data again

I have set up the shared memory and used a pthread read-write lock, but it seems slower than a TCP model.
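Roughly the idea is something like the sketch below (simplified; the struct layout, the `seq` counter and `process()` are just illustrative, and the rwlock is assumed to be initialized elsewhere with `PTHREAD_PROCESS_SHARED`):

```c
#include <pthread.h>
#include <string.h>
#include <unistd.h>

/* Illustrative shared segment layout */
struct shared_frame {
    pthread_rwlock_t lock;       /* process-shared rwlock */
    unsigned long    seq;        /* bumped by the producer for every new frame */
    char             data[1024];
};

void process(const char *frame); /* placeholder for the real per-frame work */

void consume(struct shared_frame *shm)
{
    unsigned long last_seq = 0;
    char local[1024];

    for (;;) {
        pthread_rwlock_rdlock(&shm->lock);
        if (shm->seq != last_seq) {
            /* new frame: copy it out, then process outside the lock */
            last_seq = shm->seq;
            memcpy(local, shm->data, sizeof local);
            pthread_rwlock_unlock(&shm->lock);
            process(local);
        } else {
            /* same data as last time: skip the memcpy (point 6) */
            pthread_rwlock_unlock(&shm->lock);
            usleep(1000);
        }
    }
}
```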

My question: what synchronization mechanism is best suited for this kind of model?

Wei Shi
  • What do you mean by "miss some data frame?" And how exactly is one consumer getting 1,2,4 and another getting 2,3,5 okay? Do you not mind 2 being processed twice? – aib Mar 12 '11 at 22:56
  • I don't think this is a synchronization problem. (Well, it is, but the solutions are few and simple.) The bigger problem here is of scheduling. – aib Mar 12 '11 at 22:57
  • I'm not sure if you could use .NET libraries, but you'd need something similar to [ReaderWriterLockSlim](http://msdn.microsoft.com/en-us/library/system.threading.readerwriterlockslim.aspx) and [Related question](http://stackoverflow.com/questions/5188475/is-readerwriterlockslim-the-right-coice/5188627#5188627) – Sanjeevakumar Hiremath Mar 12 '11 at 23:02

1 Answer


Are you sure the problem is in the synchronization model used?

I am thinking about something else: maybe the producer "keeps the token" too long. For instance, the producer should build the 1024 bytes in private memory and use the shared memory only for writing out the new data.

Be sure the critical section is as small as possible.
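Just a sketch of what I mean (the names are made up, and I'm assuming a shared layout with a lock, a sequence counter and the data buffer): all the slow work happens in private memory, and the lock is held only for one memcpy and a counter bump.

```c
#include <pthread.h>
#include <string.h>

/* Same illustrative layout as in the question: lock, sequence counter, data. */
struct shared_frame {
    pthread_rwlock_t lock;
    unsigned long    seq;
    char             data[1024];
};

void generate_frame(char *buf);   /* hypothetical: the slow frame generation */

void produce_one(struct shared_frame *shm)
{
    char private_buf[1024];

    /* do all the slow work in private memory, outside any lock */
    generate_frame(private_buf);

    /* critical section: nothing but one memcpy and a counter bump */
    pthread_rwlock_wrlock(&shm->lock);
    memcpy(shm->data, private_buf, sizeof shm->data);
    shm->seq++;
    pthread_rwlock_unlock(&shm->lock);
}
```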

BenjaminB