If you make a pipe, you can fork a bunch of children (workers, say) and have each of them read from it. When the parent process writes data to the pipe, it's unspecified which worker will get it, and if each worker reads, say, 1000 bytes at a time while the parent writes 225 bytes and then 430, it's also unspecified how many bytes each worker ends up with. But I think that with a couple of assumptions it might actually work to have multiple readers on one pipe:
1) Fixed-size messages. Workers only ever read messages of one fixed size, and the server only ever writes messages of that size (see the sketch after this list).
2) The workers act as a "pool" doing the job of a single worker, just in parallel so blocking operations can overlap. Not every worker needs (or should) receive every message; the messages should be split among the workers according to whoever is free to wait for the next one.
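
What makes assumption 1 safe, as far as I can tell, is that POSIX guarantees writes of at most PIPE_BUF bytes are atomic, so the pipe only ever holds whole messages and a blocking read of one message's worth of bytes returns exactly one message (or 0 at EOF). A minimal sketch, with a made-up struct job_msg standing in for the real message format:

    #include <limits.h>   /* PIPE_BUF */

    /* Made-up fixed-size message: every write() and every read() moves
       exactly sizeof(struct job_msg) bytes. */
    struct job_msg {
        int  id;            /* which job this is          */
        char payload[60];   /* fixed-size argument block  */
    };

    /* Writes of at most PIPE_BUF bytes are atomic, so the pipe always
       holds a whole number of messages and no reader sees a torn one. */
    _Static_assert(sizeof(struct job_msg) <= PIPE_BUF,
                   "message must fit in one atomic pipe write");

Given that, a read(fd, &msg, sizeof msg) should either block, return sizeof msg, or return 0 once every write end is closed.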
I made a version that creates a pipe per worker and just writes to one pipe at random, waving a rubber chicken in the hope that the chosen worker doesn't happen to be the one stuck in a long operation. But couldn't I have them all read from one pipe, so that any worker that's free blocks on it and gets woken up on demand whenever the parent writes to it?
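
Something like the sketch below is what I'm imagining (just a sketch; NUM_WORKERS, worker_loop, do_job, and the job_msg layout are all placeholders): one pipe, N forked workers all blocked in read() on the read end, and the parent write()s fixed-size messages that the kernel hands out one at a time to whichever worker happens to be blocked.

    #include <limits.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    #define NUM_WORKERS 4

    /* Same fixed-size-message idea as above. */
    struct job_msg { int id; char payload[60]; };
    _Static_assert(sizeof(struct job_msg) <= PIPE_BUF, "msg too big");

    /* Stand-in for the slow, blocking operation a worker performs. */
    static void do_job(const struct job_msg *m)
    {
        printf("pid %ld got job %d\n", (long)getpid(), m->id);
        sleep(1);
    }

    /* Worker: block in read() until a whole message arrives, handle it,
       loop.  A worker that's busy in do_job() isn't blocked in read(),
       so the kernel gives the next message to one that is. */
    static void worker_loop(int rfd)
    {
        struct job_msg msg;
        ssize_t n;
        while ((n = read(rfd, &msg, sizeof msg)) == (ssize_t)sizeof msg)
            do_job(&msg);
        _exit(n == 0 ? 0 : 1);    /* 0 = all write ends closed (EOF) */
    }

    int main(void)
    {
        int fds[2];
        if (pipe(fds) == -1) { perror("pipe"); return 1; }

        for (int i = 0; i < NUM_WORKERS; i++) {
            pid_t pid = fork();
            if (pid == -1) { perror("fork"); return 1; }
            if (pid == 0) {           /* child keeps only the read end */
                close(fds[1]);
                worker_loop(fds[0]);
            }
        }
        close(fds[0]);                /* parent keeps only the write end */

        for (int i = 0; i < 12; i++) {
            struct job_msg msg = { .id = i, .payload = "hello" };
            if (write(fds[1], &msg, sizeof msg) != (ssize_t)sizeof msg) {
                perror("write");
                break;
            }
        }

        close(fds[1]);                /* idle workers see read() == 0 and exit */
        while (wait(NULL) > 0)
            ;
        return 0;
    }

The one thing I'd still want to convince myself of is that a read of exactly sizeof(struct job_msg) can't come back short when the pipe only ever holds whole messages; as far as I can tell it can't (barring signals), but that's the assumption the whole scheme rests on.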