
eventfd is thread-safe according to the man page's ATTRIBUTES section:

ATTRIBUTES
For an explanation of the terms used in this section, see
attributes(7).

   ┌──────────┬───────────────┬─────────┐
   │Interface │ Attribute     │ Value   │
   ├──────────┼───────────────┼─────────┤
   │eventfd() │ Thread safety │ MT-Safe │
   └──────────┴───────────────┴─────────┘

I want to wrap eventfd with boost::asio::posix::stream_descriptor so that I can use it with boost::asio::io_service.

According to the Boost stream_descriptor reference, stream_descriptor isn't thread-safe:

Thread Safety
Distinct objects: Safe.
Shared objects: Unsafe.

So if I understand correctly, it's not safe to use boost::asio's async_read_some / async_write_some from multiple threads on an eventfd wrapped in a stream_descriptor.

Which is kind of a "downgrade", because a native eventfd allows it.

Is my understanding correct?

hudac

1 Answer


Indeed.


Some precisions:

  • The Thread safety attribute applies to the eventfd() call itself, not to the resulting fd
  • Regardless, fds are thread-safe and you can issue syscalls on them freely
  • It's not a "downgrade", of course, because you can still use the fd the same way as before (nobody forces you to use a non-thread-safe object)

Nothing is stopping you from creating two instances tied to the same fd. Just use release() to avoid a (double) close.

A similar/related answer here: How to avoid data race with `asio::ip::tcp::iostream`?

sehe
  • "It's not a "downgrade" of course because you can still use the fd in the same way as before (nobody forces you to use a non-threadsafe object)" - you mean, using the native `eventfd` + syscalls `read()`, `write()` ? – hudac Jan 17 '18 at 10:27
  • "Nothing is stopping you from creating two instances tied to the same fd" - I would like my implementation to support N threads – hudac Jan 17 '18 at 10:28
  • What are you using the eventfd for? It seems to me you choose to do scheduling outside of Asio. (Why are you trying to fit it in? If you want to, why not move all the responsibility to Asio) – sehe Jan 17 '18 at 10:51
  • I think I'm missing something :/ . I'm trying to use `eventfd` - so I'll have an asio thread that does `async_read_some()` on this `eventfd` and performs some functionality when it receives these events, while other threads may send events on the same `eventfd`. These other threads don't share the same `io_service`; they might not even be asio threads. By "asio thread" I mean a thread that invoked `io_service::run()` – hudac Jan 17 '18 at 11:48
  • Then logically I wouldn't expect many threads to run an io_service. Just create an instance of the asio stream descriptor wrapper per thread. Really, I think I'd separate IO and execution if this were "many" threads – sehe Jan 17 '18 at 16:38
  • What do you mean by "separate IO and execution"? I also read it in the link you attached in your answer, and didn't understand – hudac Jan 17 '18 at 17:05
  • It's usually not useful to do async IO on multiple threads. For exceptionally low-latency applications you can benefit from doing IO on as many threads as there are logical CPUs; more means the threads will have to fight for the resources anyway. Usually (think 10k servers) you can handle IO and events on 1 thread that enqueues work on a thread pool that might be bigger (e.g. because the tasks contain blocking operations and wait on interprocess synchronization, so it makes sense to have more logical threads of execution than there are CPUs). – sehe Jan 17 '18 at 18:44
  • The work tasks post their results back or directly delegate IO tasks back to the IO thread(s). In this scenario you'd have a very limited number of threads (typically one) that even deals with the events (and other async operations), and the other threads do the work. That's why it's called separation of IO and execution. – sehe Jan 17 '18 at 18:45
  • I have used delayed writes to deal with lots of concurrent IO - mark objects dirty all day long, then write them in timed bursts. SQLite works great with this pattern; it has multi-op transactions. – moodboom Jan 17 '18 at 21:07
  • @moodboom Thanks for your illustration. I think it's a little advanced for OP here, but it goes to show that separation of concerns is common-place and if you know where the latency/bottlenecks actually come from, doesn't need to cost a thing (keeping complexity down is always a win) – sehe Jan 17 '18 at 21:10
  • @sehe yep, scratching out data onto a platter is expensive! :-) – moodboom Jan 17 '18 at 21:11