author		Santosh Shilimkar <santosh.shilimkar@ti.com>	2010-04-30 05:51:20 (GMT)
committer	Russell King <rmk+kernel@arm.linux.org.uk>	2010-05-01 10:32:57 (GMT)
commit		13ea9cc82138691856d7cd855dff9aef1479adb9 (patch)
tree		87904b0da34ba87d587edb65c62849421d6ea02d /arch
parent		124efc27a7090d4aaab68b28f7e7a5137f4ecec9 (diff)
download	linux-13ea9cc82138691856d7cd855dff9aef1479adb9.tar.xz
ARM: 6066/1: Fix "BUG: scheduling while atomic: swapper/0/0x00000002"
This patch fixes the preempt leak in the cpuidle path invoked from
cpu-hotplug. The fix is suggested by Russell King and is based on the
x86 idea of calling init_idle() on the idle task when it is re-used,
which also resets the preempt count amongst other things.

dump:
BUG: scheduling while atomic: swapper/0/0x00000002
Modules linked in:
Backtrace:
[<c0024f90>] (dump_backtrace+0x0/0x110) from [<c0173bc4>] (dump_stack+0x18/0x1c)
 r7:c02149e4 r6:c033df00 r5:c7836000 r4:00000000
[<c0173bac>] (dump_stack+0x0/0x1c) from [<c003b4f0>] (__schedule_bug+0x60/0x70)
[<c003b490>] (__schedule_bug+0x0/0x70) from [<c0174214>] (schedule+0x98/0x7b8)
 r5:c7836000 r4:c7836000
[<c017417c>] (schedule+0x0/0x7b8) from [<c00228c4>] (cpu_idle+0xb4/0xd4)
[<c0022810>] (cpu_idle+0x0/0xd4) from [<c0171dd8>] (secondary_start_kernel+0xe0/0xf0)
 r5:c7836000 r4:c0205f40
[<c0171cf8>] (secondary_start_kernel+0x0/0xf0) from [<c002d57c>] (prm_rmw_mod_reg_bits+0x88/0xa4)
 r7:c02149e4 r6:00000001 r5:00000001 r4:c7836000
Backtrace aborted due to bad frame pointer <c7837fbc>

Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Diffstat (limited to 'arch')
-rw-r--r--	arch/arm/kernel/smp.c	6
1 file changed, 6 insertions, 0 deletions
diff --git a/arch/arm/kernel/smp.c b/arch/arm/kernel/smp.c
index 577543f..a01194e 100644
--- a/arch/arm/kernel/smp.c
+++ b/arch/arm/kernel/smp.c
@@ -86,6 +86,12 @@ int __cpuinit __cpu_up(unsigned int cpu)
 			return PTR_ERR(idle);
 		}
 		ci->idle = idle;
+	} else {
+		/*
+		 * Since this idle thread is being re-used, call
+		 * init_idle() to reinitialize the thread structure.
+		 */
+		init_idle(idle, cpu);
 	}
 
 	/*
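
For readers unfamiliar with the hotplug path, the standalone C sketch below models the failure this patch addresses: a cached per-CPU idle task carries a stale preempt count when it is re-used after a hot-unplug, unless it is reinitialized the way init_idle() is here. The names idle_cache, model_fork_idle, model_init_idle and model_cpu_up are hypothetical stand-ins for ci->idle, fork_idle(), init_idle() and __cpu_up(); this is an illustration of the control flow, not kernel code.

/*
 * Standalone sketch (not kernel code) of the preempt-count leak fixed by
 * the patch above.  All names here are hypothetical stand-ins.
 */
#include <stdio.h>
#include <stdlib.h>

#define NR_CPUS 2

struct idle_task {
	int preempt_count;	/* nonzero means "atomic context" */
};

static struct idle_task *idle_cache[NR_CPUS];	/* plays the role of ci->idle */

/* Plays the role of fork_idle(): create a fresh idle task for a CPU. */
static struct idle_task *model_fork_idle(int cpu)
{
	printf("cpu%d: forked a new idle task\n", cpu);
	return calloc(1, sizeof(struct idle_task));
}

/* Plays the role of init_idle(): reset state, including the preempt count. */
static void model_init_idle(struct idle_task *idle, int cpu)
{
	idle->preempt_count = 0;
	printf("cpu%d: reinitialized cached idle task\n", cpu);
}

/* Plays the role of __cpu_up(): re-use the cached idle task if one exists. */
static void model_cpu_up(int cpu, int apply_fix)
{
	struct idle_task *idle = idle_cache[cpu];

	if (!idle) {
		idle = model_fork_idle(cpu);
		idle_cache[cpu] = idle;
	} else if (apply_fix) {
		model_init_idle(idle, cpu);
	}

	if (idle->preempt_count)
		printf("cpu%d: BUG: scheduling while atomic (preempt_count=%d)\n",
		       cpu, idle->preempt_count);
	else
		printf("cpu%d: idle task runs with a clean preempt count\n", cpu);
}

int main(void)
{
	model_cpu_up(1, 0);			/* first online: fresh idle task */

	idle_cache[1]->preempt_count = 2;	/* hot-unplug leaves a stale count behind */
	model_cpu_up(1, 0);			/* re-use without the fix: the leak shows up */

	idle_cache[1]->preempt_count = 2;	/* same stale state again */
	model_cpu_up(1, 1);			/* re-use with the fix: the count is reset */
	return 0;
}

In the real kernel the stale count comes from the offlined CPU's idle thread, and init_idle() is what clears it along with other scheduler state; the sketch only mirrors that re-use-versus-reinitialize decision in __cpu_up().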