These days I'm studying the kernel's networking internals, especially the RPS code. There are a lot of functions involved, but I'm focusing on the ones that do SMP queue processing, such as `enqueue_to_backlog` and `process_backlog`.
I wonder about the synchronization between two cores (or on a single core) through these two functions, `enqueue_to_backlog` and `process_backlog`.
In those functions, a core (A) holds the spin lock of another core (B) in order to queue packets into B's `input_pkt_queue` and to schedule B's NAPI. Core B likewise holds the same spin lock when it splices `input_pkt_queue` onto its own `process_queue` and clears its own NAPI schedule state. I understand that the spin lock must be held to prevent the two cores from accessing the same queue at the same time while it is being processed.
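For reference, the pattern I'm asking about looks roughly like this on the enqueue side (a simplified sketch from my reading of `net/core/dev.c`; the exact code differs between kernel versions):

```c
/* Simplified sketch of enqueue_to_backlog(): core A queues a packet
 * onto core B's backlog (not the exact kernel source). */
sd = &per_cpu(softnet_data, cpu);       /* core B's per-CPU softnet_data */

local_irq_save(flags);                  /* <-- why are IRQs disabled here? */
rps_lock(sd);                           /* spin_lock on B's input_pkt_queue */

__skb_queue_tail(&sd->input_pkt_queue, skb);   /* queue packet for core B */
if (!__test_and_set_bit(NAPI_STATE_SCHED, &sd->backlog.state))
        ____napi_schedule(sd, &sd->backlog);   /* schedule B's backlog NAPI */

rps_unlock(sd);
local_irq_restore(flags);
```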
But I can't understand why the spin lock is taken with `local_irq_disable` (or `local_irq_save`). I think there is no access to core B's queues or `rps_lock` from interrupt context (the top half) when an interrupt preempts the current context (softirq, bottom half). Of course, the napi struct can be accessed by the top half to schedule NAPI, but IRQs stay disabled there until the packet has been queued. So I wonder why the spin lock is taken with IRQs disabled.
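The dequeue side shows the same pattern (again a simplified sketch from my reading of `net/core/dev.c`, not the exact source):

```c
/* Simplified sketch of process_backlog() running on core B. */
local_irq_disable();                    /* <-- IRQs off around the lock again */
rps_lock(sd);                           /* same spin lock as the enqueue side */

if (skb_queue_empty(&sd->input_pkt_queue))
        napi->state = 0;                /* unschedule the backlog NAPI */
else
        skb_queue_splice_tail_init(&sd->input_pkt_queue,
                                   &sd->process_queue);  /* hand off packets */

rps_unlock(sd);
local_irq_enable();
```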
I think it is impossible for another bottom half, such as a tasklet, to preempt the current context (NAPI, softirq). Is that true? And does `local_irq_disable` disable IRQs on all cores, or literally only on the current core? I have actually read a book on kernel development, but I don't think I understand preemption well enough.
Would you explain why the RPS code uses `spin_lock` together with `local_irq_disable`?