
If you make a pipe, you can fork a bunch of children (workers, say) and have each read from the pipe. If the parent process writes data to the pipe, it's unspecified which worker will get it, and if each worker reads, say, 1000 bytes at a time while the parent writes 225 bytes and then 430, it's unspecified which worker will get how many bytes. But with some underlying assumptions, might it actually work to have multiple readers on a pipe?

1) fixed size messages. Workers only read messages of one fixed size, and the server only writes messages of that size.

2) the workers are used as a "pool" to do the job of one worker, but in parallel for blocking operations. So not every worker needs (or should) receive every message; the messages should be split according to whichever workers are available to wait for them.

I made a thing that creates a pipe per worker and just writes to one pipe or another at random, waving a rubber chicken in hopes that that particular worker doesn't happen to be stuck in a long operation. But couldn't I have them all read from one pipe, so that any free worker would block on reading it and thus get woken up on demand by writes to that pipe?
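For concreteness, here is a minimal sketch of the shared-pipe idea under assumption (1), with hypothetical names (`run_pool`, `MSG_SIZE`): every record is exactly `MSG_SIZE` bytes with `MSG_SIZE <= PIPE_BUF`, so each `write()` deposits a whole record atomically, and whichever idle worker the kernel wakes up takes exactly one record. Workers acknowledge each job on a second pipe so the parent can tell when the pool is done.

```c
#include <limits.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

#define MSG_SIZE 64   /* one fixed record size; must be <= PIPE_BUF */

/* Returns how many jobs were acknowledged by the pool (hypothetical
   helper name, not part of any real API). */
int run_pool(int nworkers, int njobs) {
    int job[2], ack[2];
    if (pipe(job) == -1 || pipe(ack) == -1) return -1;

    for (int i = 0; i < nworkers; i++) {
        if (fork() == 0) {            /* worker process */
            close(job[1]);            /* workers only read jobs */
            close(ack[0]);            /* ...and only write acks */
            char buf[MSG_SIZE];
            /* Block until a whole record arrives, then ack it. */
            while (read(job[0], buf, MSG_SIZE) == MSG_SIZE)
                write(ack[1], "k", 1);
            _exit(0);                 /* read() hit EOF: no more jobs */
        }
    }
    close(job[0]);                    /* parent only writes jobs */
    close(ack[1]);                    /* ...and only reads acks */

    for (int i = 0; i < njobs; i++) {
        char msg[MSG_SIZE] = {0};
        snprintf(msg, sizeof msg, "job %d", i);
        write(job[1], msg, MSG_SIZE); /* atomic: MSG_SIZE <= PIPE_BUF */
    }
    close(job[1]);                    /* EOF wakes every idle worker */

    int done = 0;
    char c;
    while (done < njobs && read(ack[0], &c, 1) == 1)
        done++;
    close(ack[0]);
    while (wait(NULL) > 0) ;          /* reap the workers */
    return done;
}
```

This leans on POSIX's guarantee that writes of at most `PIPE_BUF` bytes are atomic; whether a `read()` of that size can never be split between two readers is murkier, which is essentially what the answer below warns about.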

cyisfor

1 Answer


write() returns the number of bytes actually written. Imagine your pipe has a 1 MB buffer provided by the system, and you have already filled it to 10 bytes short of 1 MB. The next time you try to write 1000 bytes, write() will only write 10 of them. Now what? The first reader who tries will get those 10 bytes, but it has no way to make sure no other reader "steals" the remaining 990 bytes when you eventually write them.
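The usual guard on the writer side is to loop until the whole buffer is out, as in this sketch of a (hypothetical) `write_all` helper. Note that even this only fixes framing for a single reader: with several readers on the pipe, another process can still consume bytes between the two underlying write() calls, splitting the message.

```c
#include <errno.h>
#include <unistd.h>

/* Hypothetical helper: keep calling write() until all n bytes are
   written, retrying on short writes and EINTR. Returns n on success
   or -1 on a real error. */
ssize_t write_all(int fd, const void *buf, size_t n) {
    const char *p = buf;
    size_t left = n;
    while (left > 0) {
        ssize_t w = write(fd, p, left);
        if (w == -1) {
            if (errno == EINTR) continue;  /* interrupted: retry */
            return -1;                     /* real error */
        }
        p += w;                            /* short write: advance, retry */
        left -= (size_t)w;
    }
    return (ssize_t)n;
}
```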

You should instead consider a message-oriented API, such as SysV or POSIX Message Queues: System V IPC vs POSIX IPC

John Zwinck