
I need the following architecture:

scheduling-thread(s):
  - push scheduling event to "schedule" queue, with `data` and a `deadline`

scheduler-thread:
  - forever loop
    - process scheduling events from the "schedule" queue
    - push event to a "deadlines-met" queue when deadline is met

customer-thread(s):
  - listen to "deadlines-met" queue

That is, the scheduler-thread receives events from the scheduling-threads via the "schedule" queue, and pushes each one to the "deadlines-met" queue when its deadline is met.

Clients listening on the "deadlines-met" queue will receive the events at the desired time.
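A minimal sketch of this setup, assuming Python 3's `queue` module and monotonic-clock deadlines (the queue and function names are illustrative); the body of the scheduler is exactly the open question:

```python
import queue
import time

# Shared queues, named after the architecture above
schedule_q = queue.Queue()       # carries (deadline, data) tuples
deadlines_met_q = queue.Queue()  # carries data whose deadline has passed

def scheduling_thread(data, delay):
    # Push a scheduling event with an absolute deadline.
    schedule_q.put((time.monotonic() + delay, data))

def customer_thread():
    # Listen for events whose deadline has been met.
    while True:
        data = deadlines_met_q.get()
        print("deadline met for:", data)

def scheduler_thread():
    # The open question: consume schedule_q *and* honour the deadlines
    # without blocking on one job while neglecting the other.
    ...
```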

I am worried that the implementation of scheduler-thread can be complicated, since it needs to do two things:

  • listen to the "schedule" queue, and prepare deadlines
  • push events at the right moment to the "deadlines-met" queue

And both cannot be done at the same time: if I am waiting for a deadline to expire, I cannot listen for new scheduling events, and if I am listening, I cannot wait for a deadline to expire.

How could I implement this scheduler-thread? The easy alternative (the sched module) would block my thread while waiting for deadlines to expire, so I could not process new scheduling events.

blueFast
  • It looks like the `scheduler-thread` needs to pop events from its queue and pass each one to another thread (from a pool?), which waits for the deadline before pushing it onto the `deadlines-met` queue and then quits, returning the thread to the pool (a sketch of this thread-per-event idea follows these comments). – quamrana Feb 17 '16 at 13:49
  • @quamrana: you mean one thread per scheduling event, to wait for the deadline? That would work but, isn't that expensive? – blueFast Feb 17 '16 at 14:21
  • Expensive in what sense? If you have thousands of events, then that takes lots of threads, but if they are asleep until the deadline then no runtime is consumed. – quamrana Feb 17 '16 at 15:55
  • @quamrana: sure, I was thinking more in terms of thread creation overhead and memory consumption – blueFast Feb 17 '16 at 16:14
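A sketch of the thread-per-event idea from the comments above, assuming Python 3 and the "deadlines-met" queue from the question; `threading.Timer` starts one short-lived thread per scheduled event:

```python
import queue
import threading

deadlines_met_q = queue.Queue()  # the queue customers listen on

def schedule(data, delay):
    # One thread per event: it sleeps for `delay` seconds, pushes the
    # data onto the "deadlines-met" queue, then exits.
    timer = threading.Timer(delay, deadlines_met_q.put, args=(data,))
    timer.daemon = True
    timer.start()
    return timer  # keep the handle if the event may need cancel()
```

As the comments note, this costs one (mostly sleeping) thread per pending event, so thread-creation overhead and memory grow with the number of outstanding deadlines.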

2 Answers


The other way to do this is with a priority queue.

I've not done this in Python, but the idea is to keep the deadlines sorted and wait only for the shortest one. With a condition object you can wait() with a timeout equal to the shortest deadline; when another event is posted, a notify() cancels the sleep, the thread inserts the new event into the sorted list, and it again waits for the shortest.
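A sketch of that idea, assuming Python 3; `heapq` keeps the deadlines sorted and `Condition.wait()` with a timeout doubles as the interruptible sleep (the class and method names are my own, illustrative choices):

```python
import heapq
import itertools
import threading
import time

class Scheduler(threading.Thread):
    def __init__(self, deadlines_met_q):
        super().__init__(daemon=True)
        self.deadlines_met_q = deadlines_met_q
        self._heap = []                    # (deadline, tie-breaker, data)
        self._counter = itertools.count()  # tie-breaker so data is never compared
        self._cond = threading.Condition()

    def schedule(self, data, delay):
        # Called by scheduling threads: add the event and wake the
        # scheduler so it can recompute the shortest wait.
        with self._cond:
            heapq.heappush(self._heap,
                           (time.monotonic() + delay, next(self._counter), data))
            self._cond.notify()

    def run(self):
        while True:
            with self._cond:
                if not self._heap:
                    self._cond.wait()      # nothing pending: sleep until notified
                    continue
                deadline, _, data = self._heap[0]
                timeout = deadline - time.monotonic()
                if timeout > 0:
                    # Wakes early if schedule() notifies, or when the
                    # timeout (the nearest deadline) expires.
                    self._cond.wait(timeout)
                    continue
                heapq.heappop(self._heap)
            self.deadlines_met_q.put(data)  # deadline reached
```

This also matches the comment below: pushing a new event and notifying the condition are wrapped together in one method on the scheduler object, so callers never touch the heap or the lock directly.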

quamrana
  • That means that pushing to the queue would not be enough: the condition object must be notified too when new data is in the queue, in order to wake up the thread. Both actions could be encapsulated in a push() method in the scheduler class. Sounds easy to implement. I'll try this, thanks! – blueFast Feb 18 '16 at 09:28

make the queue object in your "main" program

start both threads with threading.Thread from the "main" program

check the queues from each thread

You could block on a read from the queues, or you could sleep and check every second with Queue.empty()
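A minimal sketch of that layout, assuming Python 3 (the names are illustrative); the worker either blocks on `get()` or polls with a timeout:

```python
import queue
import threading

schedule_q = queue.Queue()  # created in the "main" program

def scheduler():
    while True:
        try:
            # Block for up to a second; alternatively call get() with no
            # timeout to block indefinitely, or check Queue.empty() and sleep.
            item = schedule_q.get(timeout=1.0)
        except queue.Empty:
            continue
        print("got scheduling event:", item)

def producer():
    for i in range(3):
        schedule_q.put(("event", i))

# Start both threads from the "main" program with threading.Thread.
threading.Thread(target=scheduler, daemon=True).start()
threading.Thread(target=producer).start()
```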

See this example How to use threading in Python?

Vorsprung
  • This says nothing about the problem I mentioned with the scheduler-thread: it must do two tasks concurrently, which is not possible. – blueFast Feb 17 '16 at 14:22
  • @delavnog not sure I understand you. If you make 3 threads, then they run in parallel on 3 concurrent tasks. The Queue data structure is thread-safe and allows IPC between threads. That's how it works. If using "sched" (I am not familiar with this) causes a blocking condition, don't use it – Vorsprung Feb 17 '16 at 15:31
  • An *atomic* processing of the scheduling event involves: listen to the queue *and* sleep until the deadline. Both are blocking, and cannot be performed at the same time. So either I am not listening to scheduling events (so they will eventually be processed too late), or I am not sleeping until the next deadline, thus skipping it. – blueFast Feb 17 '16 at 15:41
  • Listen to queue, pull off event, make a completely new thread to process event (which can sleep or whatever) and the queue just carries on after that. Or decide how many concurrent events are likely and make a thread pool of that size and use them as needed. @quamrana already suggested this – Vorsprung Feb 17 '16 at 15:47
  • In fact, if you could find the thread pool size you could just launch several "scheduler-thread" – Vorsprung Feb 17 '16 at 15:50
  • Yes, that's an implementation that I am considering, but it is a bit too complicated for my needs. I am considering sleeping for short intervals (equal to the queue timeout), which allows me to have a single scheduler thread that listens for scheduling events and pushes met deadlines (a sketch of this approach follows). The trade-off is occasional late processing of some events, but since the sleep time would be short, that could be acceptable. – blueFast Feb 17 '16 at 15:55
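A sketch of that single-thread compromise, assuming the (deadline, data) tuples and queue names from the question: the scheduler blocks on the "schedule" queue for at most a short timeout, then flushes any deadlines that have passed, so an event can be at most roughly one poll interval late.

```python
import heapq
import itertools
import queue
import time

def scheduler_thread(schedule_q, deadlines_met_q, poll_interval=0.1):
    pending = []                 # heap of (deadline, tie-breaker, data)
    counter = itertools.count()
    while True:
        try:
            # Wait briefly for new scheduling events; if one arrives, get()
            # returns immediately, otherwise we fall through after
            # poll_interval seconds.
            deadline, data = schedule_q.get(timeout=poll_interval)
            heapq.heappush(pending, (deadline, next(counter), data))
        except queue.Empty:
            pass
        now = time.monotonic()
        while pending and pending[0][0] <= now:
            _, _, data = heapq.heappop(pending)
            deadlines_met_q.put(data)
```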