| author | Sebastian Andrzej Siewior <bigeasy@linutronix.de> | 2014-01-31 13:20:31 (GMT) |
|---|---|---|
| committer | Scott Wood <scottwood@freescale.com> | 2014-04-10 00:19:49 (GMT) |
| commit | 4693b2db3dc1a48d4959e512f334c1772775440b (patch) | |
| tree | e5c28534891fff83007ee2848abef57c4c9e2cda /kernel/irq_work.c | |
| parent | 8c57bf0162a60e73faf923f89db7b8f2b4723b2b (diff) | |
| download | linux-fsl-qoriq-4693b2db3dc1a48d4959e512f334c1772775440b.tar.xz | |
irq_work: allow certain work in hard irq context
irq_work is processed in softirq context on -RT because we want to avoid
long latencies which might arise from processing lots of perf events.
NOHZ_FULL mode requires its callback to be invoked from real hardirq
context (commit 76c24fb ("nohz: New APIs to re-evaluate the tick on full
dynticks CPUs")). If it is invoked from thread context instead, checks
such as is_idle_task(current) return wrong results, because current is
then the worker thread rather than the task that was interrupted.
This patch introduces a second per-CPU list (hirq_work_list) which is
used when irq_work_run() is invoked from hardirq context; in that case
only work items marked with IRQ_WORK_HARD_IRQ are processed.
This patch also removes arch_irq_work_raise() from sparc & powerpc, as
is already done for x86. At least for powerpc it is somewhat
superfluous, because it is called from the timer interrupt, which
should invoke update_process_times().
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Diffstat (limited to 'kernel/irq_work.c')
-rw-r--r-- | kernel/irq_work.c | 22 |
1 file changed, 19 insertions(+), 3 deletions(-)
```diff
diff --git a/kernel/irq_work.c b/kernel/irq_work.c
index f6e4377..35d21f9 100644
--- a/kernel/irq_work.c
+++ b/kernel/irq_work.c
@@ -20,6 +20,9 @@
 static DEFINE_PER_CPU(struct llist_head, irq_work_list);
+#ifdef CONFIG_PREEMPT_RT_FULL
+static DEFINE_PER_CPU(struct llist_head, hirq_work_list);
+#endif
 static DEFINE_PER_CPU(int, irq_work_raised);
 
 /*
@@ -48,7 +51,11 @@ static bool irq_work_claim(struct irq_work *work)
 	return true;
 }
 
+#ifdef CONFIG_PREEMPT_RT_FULL
+void arch_irq_work_raise(void)
+#else
 void __weak arch_irq_work_raise(void)
+#endif
 {
 	/*
 	 * Lame architectures will get the timer tick callback
@@ -70,8 +77,12 @@ void irq_work_queue(struct irq_work *work)
 	/* Queue the entry and raise the IPI if needed. */
 	preempt_disable();
 
-	llist_add(&work->llnode, &__get_cpu_var(irq_work_list));
-
+#ifdef CONFIG_PREEMPT_RT_FULL
+	if (work->flags & IRQ_WORK_HARD_IRQ)
+		llist_add(&work->llnode, &__get_cpu_var(hirq_work_list));
+	else
+#endif
+	llist_add(&work->llnode, &__get_cpu_var(irq_work_list));
 	/*
 	 * If the work is not "lazy" or the tick is stopped, raise the irq
 	 * work interrupt (if supported by the arch), otherwise, just wait
@@ -115,7 +126,12 @@ static void __irq_work_run(void)
 	__this_cpu_write(irq_work_raised, 0);
 	barrier();
 
-	this_list = &__get_cpu_var(irq_work_list);
+#ifdef CONFIG_PREEMPT_RT_FULL
+	if (in_irq())
+		this_list = &__get_cpu_var(hirq_work_list);
+	else
+#endif
+	this_list = &__get_cpu_var(irq_work_list);
 	if (llist_empty(this_list))
 		return;
```