path: root/mm
2015-02-13  mm/memcontrol: Don't call schedule_work_on in preemption disabled context  (Yang Shi)

The following trace is triggered when running the LTP oom test cases:

BUG: sleeping function called from invalid context at kernel/rtmutex.c:659
in_atomic(): 1, irqs_disabled(): 0, pid: 17188, name: oom03
Preemption disabled at: [<ffffffff8112ba70>] mem_cgroup_reclaim+0x90/0xe0
CPU: 2 PID: 17188 Comm: oom03 Not tainted 3.10.10-rt3 #2
Hardware name: Intel Corporation Calpella platform/MATXM-CORE-411-B, BIOS 4.6.3 08/18/2010
 ffff88007684d730 ffff880070df9b58 ffffffff8169918d ffff880070df9b70
 ffffffff8106db31 ffff88007688b4a0 ffff880070df9b88 ffffffff8169d9c0
 ffff88007688b4a0 ffff880070df9bc8 ffffffff81059da1 0000000170df9bb0
Call Trace:
 [<ffffffff8169918d>] dump_stack+0x19/0x1b
 [<ffffffff8106db31>] __might_sleep+0xf1/0x170
 [<ffffffff8169d9c0>] rt_spin_lock+0x20/0x50
 [<ffffffff81059da1>] queue_work_on+0x61/0x100
 [<ffffffff8112b361>] drain_all_stock+0xe1/0x1c0
 [<ffffffff8112ba70>] mem_cgroup_reclaim+0x90/0xe0
 [<ffffffff8112beda>] __mem_cgroup_try_charge+0x41a/0xc40
 [<ffffffff810f1c91>] ? release_pages+0x1b1/0x1f0
 [<ffffffff8106f200>] ? sched_exec+0x40/0xb0
 [<ffffffff8112cc87>] mem_cgroup_charge_common+0x37/0x70
 [<ffffffff8112e2c6>] mem_cgroup_newpage_charge+0x26/0x30
 [<ffffffff8110af68>] handle_pte_fault+0x618/0x840
 [<ffffffff8103ecf6>] ? unpin_current_cpu+0x16/0x70
 [<ffffffff81070f94>] ? migrate_enable+0xd4/0x200
 [<ffffffff8110cde5>] handle_mm_fault+0x145/0x1e0
 [<ffffffff810301e1>] __do_page_fault+0x1a1/0x4c0
 [<ffffffff8169c9eb>] ? preempt_schedule_irq+0x4b/0x70
 [<ffffffff8169e3b7>] ? retint_kernel+0x37/0x40
 [<ffffffff8103053e>] do_page_fault+0xe/0x10
 [<ffffffff8169e4c2>] page_fault+0x22/0x30

So, to prevent schedule_work_on() from being called in a preempt-disabled
context, replace the get/put_cpu() pair with get/put_cpu_light().

Cc: stable-rt@vger.kernel.org
Signed-off-by: Yang Shi <yang.shi@windriver.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

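A minimal sketch of the resulting pattern in drain_all_stock(), assuming the RT tree's get_cpu_light()/put_cpu_light() helpers (which pin the task to its CPU via migrate_disable() instead of disabling preemption); the cache and flag checks of the real function are omitted:

    /* Illustrative only: the real drain_all_stock() does more checks. */
    static void drain_all_stock(struct mem_cgroup *root_memcg)
    {
        int cpu, curcpu;

        /*
         * get_cpu_light() pins us to this CPU but, unlike get_cpu(),
         * leaves preemption enabled, so queue_work_on() may take its
         * spinlock (a sleeping lock on RT) without triggering the
         * "sleeping function called from invalid context" splat.
         */
        curcpu = get_cpu_light();
        for_each_online_cpu(cpu) {
            struct memcg_stock_pcp *stock = &per_cpu(memcg_stock, cpu);

            if (cpu == curcpu)
                drain_local_stock(&stock->work);
            else
                schedule_work_on(cpu, &stock->work);
        }
        put_cpu_light();
    }
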
2015-02-13  mm: page_alloc: Use local_lock_on() instead of plain spinlock  (Thomas Gleixner)

The plain spinlock, while sufficient, does not update the local_lock
internals. Use a proper local_lock function instead to ease debugging.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable-rt@vger.kernel.org

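A sketch of the change, assuming the RT tree's pa_lock local lock and its cross-CPU local_lock_on()/local_unlock_on() operations; drain_remote_pages() is a hypothetical wrapper name:

    /* Drain every CPU's pcp lists, e.g. from hotplug or OOM context. */
    static void drain_remote_pages(void)
    {
        int cpu;

        for_each_online_cpu(cpu) {
            /*
             * Before: spin_lock(&per_cpu(pa_lock, cpu).lock);
             * That takes the underlying lock but leaves the
             * local_lock owner/nesting bookkeeping stale, which
             * confuses lock debugging.
             */
            local_lock_on(pa_lock, cpu);
            drain_pages(cpu);       /* free that CPU's pcp pages */
            local_unlock_on(pa_lock, cpu);
        }
    }
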
2015-02-13  slub: delay ctor until the object is requested  (Sebastian Andrzej Siewior)

It seems that allocating a large number of objects causes latency on ARM,
since that code cannot be preempted.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

2015-02-13  slub: Enable irqs for __GFP_WAIT  (Thomas Gleixner)

SYSTEM_RUNNING might be too late for enabling interrupts: allocations
with __GFP_WAIT can happen before the system reaches that state. So use
the __GFP_WAIT flag itself as the indicator.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

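A sketch of the idea in SLUB's slab-allocation path, loosely following allocate_slab(); the page-allocation call is simplified:

    static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
    {
        struct page *page;
        /*
         * Checking system_state == SYSTEM_RUNNING is too late: a
         * __GFP_WAIT caller may sleep, so it can tolerate enabled
         * interrupts even during early boot.
         */
        bool enableirqs = (flags & __GFP_WAIT) != 0;

        if (enableirqs)
            local_irq_enable();

        page = alloc_pages_node(node, flags, oo_order(s->oo)); /* simplified */

        if (enableirqs)
            local_irq_disable();
        return page;
    }
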
2015-02-13  mm: Enable SLUB for RT  (Thomas Gleixner)

Make SLUB RT aware and remove the restriction in Kconfig.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

2015-02-13  mm: bounce: Use local_irq_save_nort  (Thomas Gleixner)

kmap_atomic() is preemptible on RT.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

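A sketch of the pattern in the bounce-buffer copy path, closely following mm/bounce.c and assuming the RT tree's local_irq_save_nort()/local_irq_restore_nort() variants (a real irq-disable on !RT, a flags-save only on RT):

    static void bounce_copy_vec(struct bio_vec *to, unsigned char *vfrom)
    {
        unsigned long flags;
        unsigned char *vto;

        /*
         * On !RT this is local_irq_save(); on RT it merely saves
         * flags, because kmap_atomic() is preemptible there and the
         * irq-off section is not needed.
         */
        local_irq_save_nort(flags);
        vto = kmap_atomic(to->bv_page);
        memcpy(vto + to->bv_offset, vfrom, to->bv_len);
        kunmap_atomic(vto);
        local_irq_restore_nort(flags);
    }
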
2015-02-13  mm: shrink the page frame to !-rt size  (Peter Zijlstra)

The below is a boot-tested hack to shrink the page frame size back to
normal. It should be a net win, since there should be many fewer
PTE pages than page frames.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

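A sketch of one way such a hack can work, assuming it resembles the later upstream split-ptlock allocation: on RT, spinlock_t is a full rtmutex, so embedding it would bloat every struct page; instead, the page-table lock is moved out of line and allocated only for actual page-table pages (names and layout are illustrative):

    /* In struct page: a pointer instead of an embedded lock on RT. */
    #ifdef CONFIG_PREEMPT_RT_FULL
        spinlock_t *ptl;    /* out of line: sizeof(spinlock_t) is large */
    #else
        spinlock_t ptl;
    #endif

    /* Allocated only when a page actually becomes a page-table page. */
    static inline bool pgtable_page_ctor_rt(struct page *page)
    {
    #ifdef CONFIG_PREEMPT_RT_FULL
        page->ptl = kmalloc(sizeof(spinlock_t), GFP_KERNEL);
        if (!page->ptl)
            return false;
        spin_lock_init(page->ptl);
    #else
        spin_lock_init(&page->ptl);
    #endif
        return true;
    }
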
2015-02-13  mm: make vmstat -rt aware  (Ingo Molnar)

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

2015-02-13  mm: convert swap to percpu locked  (Ingo Molnar)

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

2015-02-13  mm-page-alloc-fix.patch  (Thomas Gleixner)

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

2015-02-13  mm: page_alloc reduce lock sections further  (Peter Zijlstra)

Split out the pages which are to be freed into a separate list and
call free_pages_bulk() outside of the percpu page allocator locks.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

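A sketch of the pattern, assuming the RT tree's pa_lock and a free_pages_bulk() helper as named above; drain_local_pcp() and isolate_pcp_pages() are illustrative names:

    static void drain_local_pcp(struct zone *zone, struct per_cpu_pages *pcp)
    {
        LIST_HEAD(dst);
        unsigned long flags;
        int count;

        /* Hold the per-CPU lock only long enough to detach the pages. */
        local_lock_irqsave(pa_lock, flags);
        count = pcp->count;
        isolate_pcp_pages(count, pcp, &dst);  /* splice pcp lists onto dst */
        pcp->count = 0;
        local_unlock_irqrestore(pa_lock, flags);

        /* The expensive part runs preemptibly, taking only zone->lock. */
        free_pages_bulk(zone, count, &dst);
    }
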
2015-02-13  mm: page_alloc: rt-friendly per-cpu pages  (Ingo Molnar)

rt-friendly per-cpu pages: convert the irqs-off per-cpu locking method
into a preemptible, explicit-per-cpu-locks method.

Contains fixes from:
Peter Zijlstra <a.p.zijlstra@chello.nl>
Thomas Gleixner <tglx@linutronix.de>

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

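A sketch of the conversion, assuming the RT tree's DEFINE_LOCAL_IRQ_LOCK() API; __free_hot_cold_page() stands in for the pcp-list manipulation:

    /* One lock instance per CPU, protecting that CPU's pcp lists. */
    static DEFINE_LOCAL_IRQ_LOCK(pa_lock);

    void free_hot_cold_page(struct page *page, int cold)
    {
        unsigned long flags;

        /*
         * Before: local_irq_save(flags);
         * After:  a named per-CPU lock. On !RT this still maps to
         * local_irq_save(); on RT it takes a per-CPU sleeping lock,
         * so the section is preemptible and lockdep sees the
         * dependency.
         */
        local_lock_irqsave(pa_lock, flags);
        __free_hot_cold_page(page, cold);   /* pcp list manipulation */
        local_unlock_irqrestore(pa_lock, flags);
    }
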
2015-02-13  mm: Replace cgroup_page bit spinlock  (Thomas Gleixner)

Bit spinlocks do not work on RT. Replace them.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

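A sketch of the likely shape of the replacement, assuming the page_cgroup locking of that era (the PCG_LOCK flag bit) and an added real spinlock for RT; field and config names are illustrative:

    struct page_cgroup {
        unsigned long flags;
        struct mem_cgroup *mem_cgroup;
    #ifdef CONFIG_PREEMPT_RT_BASE
        spinlock_t pcg_lock;    /* replaces the PCG_LOCK bit spinlock */
    #endif
    };

    static inline void lock_page_cgroup(struct page_cgroup *pc)
    {
    #ifdef CONFIG_PREEMPT_RT_BASE
        /* A real (sleeping) lock: preemptible, lockdep-visible. */
        spin_lock(&pc->pcg_lock);
    #else
        /* Spins with preemption disabled: fine on !RT only. */
        bit_spin_lock(PCG_LOCK, &pc->flags);
    #endif
    }
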
2015-02-13  kconfig-disable-a-few-options-rt.patch  (Thomas Gleixner)

Disable stuff which is known to have issues on RT.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

2015-02-13  mm-page-alloc-use-list-last-entry.patch  (Peter Zijlstra)

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

2015-02-13  mm: Remove preempt count from pagefault disable/enable  (Thomas Gleixner)

Now that all users are cleaned up, we can remove the preemption count.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

2015-02-13  mm: raw_pagefault_disable  (Peter Zijlstra)

Adding migrate_disable() to pagefault_disable() to preserve the
per-CPU semantics for kmap_atomic() might not have been the best of
choices. But short of adding preempt_disable()/migrate_disable() calls
all over the kmap code, it still seems the best way.

It does, however, yield the breakage below, and it also wrecks !-rt
builds, since !-rt relies on pagefault_disable() not preempting. So fix
all of that up by adding raw_pagefault_disable().

<NMI>
 [<ffffffff81076d5c>] warn_slowpath_common+0x85/0x9d
 [<ffffffff81076e17>] warn_slowpath_fmt+0x46/0x48
 [<ffffffff814f7fca>] ? _raw_spin_lock+0x6c/0x73
 [<ffffffff810cac87>] ? watchdog_overflow_callback+0x9b/0xd0
 [<ffffffff810caca3>] watchdog_overflow_callback+0xb7/0xd0
 [<ffffffff810f51bb>] __perf_event_overflow+0x11c/0x1fe
 [<ffffffff810f298f>] ? perf_event_update_userpage+0x149/0x151
 [<ffffffff810f2846>] ? perf_event_task_disable+0x7c/0x7c
 [<ffffffff810f5b7c>] perf_event_overflow+0x14/0x16
 [<ffffffff81046e02>] x86_pmu_handle_irq+0xcb/0x108
 [<ffffffff814f9a6b>] perf_event_nmi_handler+0x46/0x91
 [<ffffffff814fb2ba>] notifier_call_chain+0x79/0xa6
 [<ffffffff814fb34d>] __atomic_notifier_call_chain+0x66/0x98
 [<ffffffff814fb2e7>] ? notifier_call_chain+0xa6/0xa6
 [<ffffffff814fb393>] atomic_notifier_call_chain+0x14/0x16
 [<ffffffff814fb3c3>] notify_die+0x2e/0x30
 [<ffffffff814f8f75>] do_nmi+0x7e/0x22b
 [<ffffffff814f8bca>] nmi+0x1a/0x2c
 [<ffffffff814fb130>] ? sub_preempt_count+0x4b/0xaa
<<EOE>> <IRQ>
 [<ffffffff812d44cc>] delay_tsc+0xac/0xd1
 [<ffffffff812d4399>] __delay+0xf/0x11
 [<ffffffff812d95d9>] do_raw_spin_lock+0xd2/0x13c
 [<ffffffff814f813e>] _raw_spin_lock_irqsave+0x6b/0x85
 [<ffffffff8106772a>] ? task_rq_lock+0x35/0x8d
 [<ffffffff8106772a>] task_rq_lock+0x35/0x8d
 [<ffffffff8106fe2f>] migrate_disable+0x65/0x12c
 [<ffffffff81114e69>] pagefault_disable+0xe/0x1f
 [<ffffffff81039c73>] dump_trace+0x21f/0x2e2
 [<ffffffff8103ad79>] show_trace_log_lvl+0x54/0x5d
 [<ffffffff8103ad97>] show_trace+0x15/0x17
 [<ffffffff814f4f5f>] dump_stack+0x77/0x80
 [<ffffffff812d94b0>] spin_bug+0x9c/0xa3
 [<ffffffff81067745>] ? task_rq_lock+0x50/0x8d
 [<ffffffff812d954e>] do_raw_spin_lock+0x47/0x13c
 [<ffffffff814f7fbe>] _raw_spin_lock+0x60/0x73
 [<ffffffff81067745>] ? task_rq_lock+0x50/0x8d
 [<ffffffff81067745>] task_rq_lock+0x50/0x8d
 [<ffffffff8106fe2f>] migrate_disable+0x65/0x12c
 [<ffffffff81114e69>] pagefault_disable+0xe/0x1f
 [<ffffffff81039c73>] dump_trace+0x21f/0x2e2
 [<ffffffff8104369b>] save_stack_trace+0x2f/0x4c
 [<ffffffff810a7848>] save_trace+0x3f/0xaf
 [<ffffffff810aa2bd>] mark_lock+0x228/0x530
 [<ffffffff810aac27>] __lock_acquire+0x662/0x1812
 [<ffffffff8103dad4>] ? native_sched_clock+0x37/0x6d
 [<ffffffff810a790e>] ? trace_hardirqs_off_caller+0x1f/0x99
 [<ffffffff810693f6>] ? sched_rt_period_timer+0xbd/0x218
 [<ffffffff810ac403>] lock_acquire+0x145/0x18a
 [<ffffffff810693f6>] ? sched_rt_period_timer+0xbd/0x218
 [<ffffffff814f7f9e>] _raw_spin_lock+0x40/0x73
 [<ffffffff810693f6>] ? sched_rt_period_timer+0xbd/0x218
 [<ffffffff810693f6>] sched_rt_period_timer+0xbd/0x218
 [<ffffffff8109aa39>] __run_hrtimer+0x1e4/0x347
 [<ffffffff81069339>] ? can_migrate_task.clone.82+0x14a/0x14a
 [<ffffffff8109b97c>] hrtimer_interrupt+0xee/0x1d6
 [<ffffffff814fb23d>] ? add_preempt_count+0xae/0xb2
 [<ffffffff814ffb38>] smp_apic_timer_interrupt+0x85/0x98
 [<ffffffff814fef13>] apic_timer_interrupt+0x13/0x20

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/n/tip-31keae8mkjiv8esq4rl76cib@git.kernel.org

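A sketch of the split described above, assuming the 3.x-era inc_preempt_count() naming; kmap-internal callers (such as the stack dumper in the trace) then use the raw variant:

    /*
     * For code that may run from NMI or deep inside the scheduler:
     * bump the fault-disable count only, never take task_rq_lock()
     * via migrate_disable().
     */
    static inline void raw_pagefault_disable(void)
    {
        inc_preempt_count();    /* !-rt semantics: also stops preemption */
        barrier();
    }

    static inline void pagefault_disable(void)
    {
        migrate_disable();      /* keep per-CPU kmap slots stable on RT */
        raw_pagefault_disable();
    }
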
2015-02-13  mm: Prepare decoupling the page fault disabling logic  (Ingo Molnar)

Add a pagefault_disabled variable to task_struct to allow decoupling
the pagefault-disabled logic from the preempt count.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

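A sketch of the preparation step, following the description above; the per-task counter shadows the preempt count until the coupling is removed:

    /* task_struct grows: int pagefault_disabled; (fault-disable depth) */

    static inline void pagefault_disable(void)
    {
        inc_preempt_count();
        /*
         * Track the state per task as well, so the fault handler can
         * later test current->pagefault_disabled instead of
         * in_atomic(), allowing the preempt-count coupling to go.
         */
        current->pagefault_disabled++;
        barrier();
    }
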
2015-02-13  Reset to 3.12.37  (Scott Wood)

2014-05-14  mm, rt: kmap_atomic scheduling  (Peter Zijlstra)

In fact, with migrate_disable() existing, one could play games with
kmap_atomic(). You could save/restore the kmap_atomic slots on context
switch (if any are in use, of course); this should be especially easy
now that we have a kmap_atomic stack.

Something like the below. It wants all the preempt_disable() stuff
replaced with pagefault_disable() && migrate_disable(), of course, but
then you can flip kmaps around like below.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
[dvhart@linux.intel.com: build fix]
Link: http://lkml.kernel.org/r/1311842631.5890.208.camel@twins
[tglx@linutronix.de: Get rid of the per-CPU variable and store the idx
and the pte content right away in the task struct. Shortens the context
switch code.]

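A sketch of the context-switch half, following tglx's note that the idx and pte contents live in task_struct; TLB flushing of the previous task's slots is omitted, and helper names follow the x86 highmem code:

    /* task_struct additions (per tglx's variant of the patch):
     *   int   kmap_idx;              -- number of live kmap slots
     *   pte_t kmap_pte[KM_TYPE_NR];  -- saved slot contents
     */

    /* Called from the context-switch path when highmem kmaps exist. */
    static void switch_kmaps(struct task_struct *prev, struct task_struct *next)
    {
        int i;

        /* Re-install @next's saved kmap_atomic mappings on this CPU. */
        for (i = 0; i < next->kmap_idx; i++) {
            int idx = i + KM_TYPE_NR * smp_processor_id();

            if (!pte_none(next->kmap_pte[i]))
                set_pte(kmap_pte - idx, next->kmap_pte[i]);
        }
    }
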
2014-05-14  mm-vmalloc.patch  (Thomas Gleixner)

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

2014-05-14  mm: Protect activate_mm() by preempt_[disable&enable]_rt()  (Yong Zhang)

Use preempt_*_rt instead of local_irq_*_rt, otherwise there will be a
warning on ARM like the one below:

WARNING: at build/linux/kernel/smp.c:459 smp_call_function_many+0x98/0x264()
Modules linked in:
[<c0013bb4>] (unwind_backtrace+0x0/0xe4) from [<c001be94>] (warn_slowpath_common+0x4c/0x64)
[<c001be94>] (warn_slowpath_common+0x4c/0x64) from [<c001bec4>] (warn_slowpath_null+0x18/0x1c)
[<c001bec4>] (warn_slowpath_null+0x18/0x1c) from [<c0053ff8>] (smp_call_function_many+0x98/0x264)
[<c0053ff8>] (smp_call_function_many+0x98/0x264) from [<c0054364>] (smp_call_function+0x44/0x6c)
[<c0054364>] (smp_call_function+0x44/0x6c) from [<c0017d50>] (__new_context+0xbc/0x124)
[<c0017d50>] (__new_context+0xbc/0x124) from [<c009e49c>] (flush_old_exec+0x460/0x5e4)
[<c009e49c>] (flush_old_exec+0x460/0x5e4) from [<c00d61ac>] (load_elf_binary+0x2e0/0x11ac)
[<c00d61ac>] (load_elf_binary+0x2e0/0x11ac) from [<c009d060>] (search_binary_handler+0x94/0x2a4)
[<c009d060>] (search_binary_handler+0x94/0x2a4) from [<c009e8fc>] (do_execve+0x254/0x364)
[<c009e8fc>] (do_execve+0x254/0x364) from [<c0010e84>] (sys_execve+0x34/0x54)
[<c0010e84>] (sys_execve+0x34/0x54) from [<c000da00>] (ret_fast_syscall+0x0/0x30)
---[ end trace 0000000000000002 ]---

The reason is that ARM needs irqs enabled when doing activate_mm().
According to mm-protect-activate-switch-mm.patch,
preempt_[disable|enable]_rt() is actually sufficient.

Inspired-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Yong Zhang <yong.zhang0@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Link: http://lkml.kernel.org/r/1337061236-1766-1-git-send-email-yong.zhang0@gmail.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

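A sketch of the protected region in fs/exec.c's exec_mmap(), which the trace above goes through, assuming preempt_disable_rt()/preempt_enable_rt() compile away on non-RT kernels (function body abbreviated):

    /* Simplified: install the new mm for the exec'ing task. */
    static int exec_mmap(struct mm_struct *mm)
    {
        struct task_struct *tsk = current;
        struct mm_struct *old_mm = tsk->mm;

        task_lock(tsk);
        /*
         * preempt_disable_rt() rather than an irq-disabling variant:
         * activate_mm() on ARM reaches smp_call_function_many() via
         * __new_context(), which requires irqs to stay enabled.
         */
        preempt_disable_rt();
        tsk->mm = mm;
        tsk->active_mm = mm;
        activate_mm(old_mm, mm);
        preempt_enable_rt();
        task_unlock(tsk);
        return 0;
    }
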
2014-05-14  mm/memcontrol: Don't call schedule_work_on in preemption disabled context  (Yang Shi)

The following trace is triggered when running the LTP oom test cases:

BUG: sleeping function called from invalid context at kernel/rtmutex.c:659
in_atomic(): 1, irqs_disabled(): 0, pid: 17188, name: oom03
Preemption disabled at: [<ffffffff8112ba70>] mem_cgroup_reclaim+0x90/0xe0
CPU: 2 PID: 17188 Comm: oom03 Not tainted 3.10.10-rt3 #2
Hardware name: Intel Corporation Calpella platform/MATXM-CORE-411-B, BIOS 4.6.3 08/18/2010
 ffff88007684d730 ffff880070df9b58 ffffffff8169918d ffff880070df9b70
 ffffffff8106db31 ffff88007688b4a0 ffff880070df9b88 ffffffff8169d9c0
 ffff88007688b4a0 ffff880070df9bc8 ffffffff81059da1 0000000170df9bb0
Call Trace:
 [<ffffffff8169918d>] dump_stack+0x19/0x1b
 [<ffffffff8106db31>] __might_sleep+0xf1/0x170
 [<ffffffff8169d9c0>] rt_spin_lock+0x20/0x50
 [<ffffffff81059da1>] queue_work_on+0x61/0x100
 [<ffffffff8112b361>] drain_all_stock+0xe1/0x1c0
 [<ffffffff8112ba70>] mem_cgroup_reclaim+0x90/0xe0
 [<ffffffff8112beda>] __mem_cgroup_try_charge+0x41a/0xc40
 [<ffffffff810f1c91>] ? release_pages+0x1b1/0x1f0
 [<ffffffff8106f200>] ? sched_exec+0x40/0xb0
 [<ffffffff8112cc87>] mem_cgroup_charge_common+0x37/0x70
 [<ffffffff8112e2c6>] mem_cgroup_newpage_charge+0x26/0x30
 [<ffffffff8110af68>] handle_pte_fault+0x618/0x840
 [<ffffffff8103ecf6>] ? unpin_current_cpu+0x16/0x70
 [<ffffffff81070f94>] ? migrate_enable+0xd4/0x200
 [<ffffffff8110cde5>] handle_mm_fault+0x145/0x1e0
 [<ffffffff810301e1>] __do_page_fault+0x1a1/0x4c0
 [<ffffffff8169c9eb>] ? preempt_schedule_irq+0x4b/0x70
 [<ffffffff8169e3b7>] ? retint_kernel+0x37/0x40
 [<ffffffff8103053e>] do_page_fault+0xe/0x10
 [<ffffffff8169e4c2>] page_fault+0x22/0x30

So, to prevent schedule_work_on() from being called in a preempt-disabled
context, replace the get/put_cpu() pair with get/put_cpu_light().

Cc: stable-rt@vger.kernel.org
Signed-off-by: Yang Shi <yang.shi@windriver.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

2014-05-14  mm: page_alloc: Use local_lock_on() instead of plain spinlock  (Thomas Gleixner)

The plain spinlock, while sufficient, does not update the local_lock
internals. Use a proper local_lock function instead to ease debugging.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable-rt@vger.kernel.org

2014-05-14  slub: delay ctor until the object is requested  (Sebastian Andrzej Siewior)

It seems that allocating a large number of objects causes latency on ARM,
since that code cannot be preempted.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

2014-05-14  slub: Enable irqs for __GFP_WAIT  (Thomas Gleixner)

SYSTEM_RUNNING might be too late for enabling interrupts: allocations
with __GFP_WAIT can happen before the system reaches that state. So use
the __GFP_WAIT flag itself as the indicator.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

2014-05-14  mm: Enable SLUB for RT  (Thomas Gleixner)

Make SLUB RT aware and remove the restriction in Kconfig.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

2014-05-14  mm: bounce: Use local_irq_save_nort  (Thomas Gleixner)

kmap_atomic() is preemptible on RT.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

2014-05-14  mm: shrink the page frame to !-rt size  (Peter Zijlstra)

The below is a boot-tested hack to shrink the page frame size back to
normal. It should be a net win, since there should be many fewer
PTE pages than page frames.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

2014-05-14  mm: make vmstat -rt aware  (Ingo Molnar)

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

2014-05-14  mm: convert swap to percpu locked  (Ingo Molnar)

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

2014-05-14  mm-page-alloc-fix.patch  (Thomas Gleixner)

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

2014-05-14  mm: page_alloc reduce lock sections further  (Peter Zijlstra)

Split out the pages which are to be freed into a separate list and
call free_pages_bulk() outside of the percpu page allocator locks.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

2014-05-14  mm: page_alloc: rt-friendly per-cpu pages  (Ingo Molnar)

rt-friendly per-cpu pages: convert the irqs-off per-cpu locking method
into a preemptible, explicit-per-cpu-locks method.

Contains fixes from:
Peter Zijlstra <a.p.zijlstra@chello.nl>
Thomas Gleixner <tglx@linutronix.de>

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

2014-05-14  mm: Replace cgroup_page bit spinlock  (Thomas Gleixner)

Bit spinlocks do not work on RT. Replace them.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

2014-05-14  kconfig-disable-a-few-options-rt.patch  (Thomas Gleixner)

Disable stuff which is known to have issues on RT.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

2014-05-14  mm-page-alloc-use-list-last-entry.patch  (Peter Zijlstra)

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

2014-05-14  mm: Remove preempt count from pagefault disable/enable  (Thomas Gleixner)

Now that all users are cleaned up, we can remove the preemption count.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

2014-05-14  filemap-fix-up.patch  (Thomas Gleixner)

Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/n/tip-m6yuzd6ul717hlnl2gj6p3ou@git.kernel.org

2014-05-14  mm: raw_pagefault_disable  (Peter Zijlstra)

Adding migrate_disable() to pagefault_disable() to preserve the
per-CPU semantics for kmap_atomic() might not have been the best of
choices. But short of adding preempt_disable()/migrate_disable() calls
all over the kmap code, it still seems the best way.

It does, however, yield the breakage below, and it also wrecks !-rt
builds, since !-rt relies on pagefault_disable() not preempting. So fix
all of that up by adding raw_pagefault_disable().

<NMI>
 [<ffffffff81076d5c>] warn_slowpath_common+0x85/0x9d
 [<ffffffff81076e17>] warn_slowpath_fmt+0x46/0x48
 [<ffffffff814f7fca>] ? _raw_spin_lock+0x6c/0x73
 [<ffffffff810cac87>] ? watchdog_overflow_callback+0x9b/0xd0
 [<ffffffff810caca3>] watchdog_overflow_callback+0xb7/0xd0
 [<ffffffff810f51bb>] __perf_event_overflow+0x11c/0x1fe
 [<ffffffff810f298f>] ? perf_event_update_userpage+0x149/0x151
 [<ffffffff810f2846>] ? perf_event_task_disable+0x7c/0x7c
 [<ffffffff810f5b7c>] perf_event_overflow+0x14/0x16
 [<ffffffff81046e02>] x86_pmu_handle_irq+0xcb/0x108
 [<ffffffff814f9a6b>] perf_event_nmi_handler+0x46/0x91
 [<ffffffff814fb2ba>] notifier_call_chain+0x79/0xa6
 [<ffffffff814fb34d>] __atomic_notifier_call_chain+0x66/0x98
 [<ffffffff814fb2e7>] ? notifier_call_chain+0xa6/0xa6
 [<ffffffff814fb393>] atomic_notifier_call_chain+0x14/0x16
 [<ffffffff814fb3c3>] notify_die+0x2e/0x30
 [<ffffffff814f8f75>] do_nmi+0x7e/0x22b
 [<ffffffff814f8bca>] nmi+0x1a/0x2c
 [<ffffffff814fb130>] ? sub_preempt_count+0x4b/0xaa
<<EOE>> <IRQ>
 [<ffffffff812d44cc>] delay_tsc+0xac/0xd1
 [<ffffffff812d4399>] __delay+0xf/0x11
 [<ffffffff812d95d9>] do_raw_spin_lock+0xd2/0x13c
 [<ffffffff814f813e>] _raw_spin_lock_irqsave+0x6b/0x85
 [<ffffffff8106772a>] ? task_rq_lock+0x35/0x8d
 [<ffffffff8106772a>] task_rq_lock+0x35/0x8d
 [<ffffffff8106fe2f>] migrate_disable+0x65/0x12c
 [<ffffffff81114e69>] pagefault_disable+0xe/0x1f
 [<ffffffff81039c73>] dump_trace+0x21f/0x2e2
 [<ffffffff8103ad79>] show_trace_log_lvl+0x54/0x5d
 [<ffffffff8103ad97>] show_trace+0x15/0x17
 [<ffffffff814f4f5f>] dump_stack+0x77/0x80
 [<ffffffff812d94b0>] spin_bug+0x9c/0xa3
 [<ffffffff81067745>] ? task_rq_lock+0x50/0x8d
 [<ffffffff812d954e>] do_raw_spin_lock+0x47/0x13c
 [<ffffffff814f7fbe>] _raw_spin_lock+0x60/0x73
 [<ffffffff81067745>] ? task_rq_lock+0x50/0x8d
 [<ffffffff81067745>] task_rq_lock+0x50/0x8d
 [<ffffffff8106fe2f>] migrate_disable+0x65/0x12c
 [<ffffffff81114e69>] pagefault_disable+0xe/0x1f
 [<ffffffff81039c73>] dump_trace+0x21f/0x2e2
 [<ffffffff8104369b>] save_stack_trace+0x2f/0x4c
 [<ffffffff810a7848>] save_trace+0x3f/0xaf
 [<ffffffff810aa2bd>] mark_lock+0x228/0x530
 [<ffffffff810aac27>] __lock_acquire+0x662/0x1812
 [<ffffffff8103dad4>] ? native_sched_clock+0x37/0x6d
 [<ffffffff810a790e>] ? trace_hardirqs_off_caller+0x1f/0x99
 [<ffffffff810693f6>] ? sched_rt_period_timer+0xbd/0x218
 [<ffffffff810ac403>] lock_acquire+0x145/0x18a
 [<ffffffff810693f6>] ? sched_rt_period_timer+0xbd/0x218
 [<ffffffff814f7f9e>] _raw_spin_lock+0x40/0x73
 [<ffffffff810693f6>] ? sched_rt_period_timer+0xbd/0x218
 [<ffffffff810693f6>] sched_rt_period_timer+0xbd/0x218
 [<ffffffff8109aa39>] __run_hrtimer+0x1e4/0x347
 [<ffffffff81069339>] ? can_migrate_task.clone.82+0x14a/0x14a
 [<ffffffff8109b97c>] hrtimer_interrupt+0xee/0x1d6
 [<ffffffff814fb23d>] ? add_preempt_count+0xae/0xb2
 [<ffffffff814ffb38>] smp_apic_timer_interrupt+0x85/0x98
 [<ffffffff814fef13>] apic_timer_interrupt+0x13/0x20

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/n/tip-31keae8mkjiv8esq4rl76cib@git.kernel.org

2014-05-14  mm: Prepare decoupling the page fault disabling logic  (Ingo Molnar)

Add a pagefault_disabled variable to task_struct to allow decoupling
the pagefault-disabled logic from the preempt count.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

2014-05-14  Reset to 3.12.19  (Scott Wood)

2014-04-10  mm, rt: kmap_atomic scheduling  (Peter Zijlstra)

In fact, with migrate_disable() existing, one could play games with
kmap_atomic(). You could save/restore the kmap_atomic slots on context
switch (if any are in use, of course); this should be especially easy
now that we have a kmap_atomic stack.

Something like the below. It wants all the preempt_disable() stuff
replaced with pagefault_disable() && migrate_disable(), of course, but
then you can flip kmaps around like below.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
[dvhart@linux.intel.com: build fix]
Link: http://lkml.kernel.org/r/1311842631.5890.208.camel@twins
[tglx@linutronix.de: Get rid of the per-CPU variable and store the idx
and the pte content right away in the task struct. Shortens the context
switch code.]

2014-04-10  mm-vmalloc.patch  (Thomas Gleixner)

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

2014-04-10  mm: Protect activate_mm() by preempt_[disable&enable]_rt()  (Yong Zhang)

Use preempt_*_rt instead of local_irq_*_rt, otherwise there will be a
warning on ARM like the one below:

WARNING: at build/linux/kernel/smp.c:459 smp_call_function_many+0x98/0x264()
Modules linked in:
[<c0013bb4>] (unwind_backtrace+0x0/0xe4) from [<c001be94>] (warn_slowpath_common+0x4c/0x64)
[<c001be94>] (warn_slowpath_common+0x4c/0x64) from [<c001bec4>] (warn_slowpath_null+0x18/0x1c)
[<c001bec4>] (warn_slowpath_null+0x18/0x1c) from [<c0053ff8>] (smp_call_function_many+0x98/0x264)
[<c0053ff8>] (smp_call_function_many+0x98/0x264) from [<c0054364>] (smp_call_function+0x44/0x6c)
[<c0054364>] (smp_call_function+0x44/0x6c) from [<c0017d50>] (__new_context+0xbc/0x124)
[<c0017d50>] (__new_context+0xbc/0x124) from [<c009e49c>] (flush_old_exec+0x460/0x5e4)
[<c009e49c>] (flush_old_exec+0x460/0x5e4) from [<c00d61ac>] (load_elf_binary+0x2e0/0x11ac)
[<c00d61ac>] (load_elf_binary+0x2e0/0x11ac) from [<c009d060>] (search_binary_handler+0x94/0x2a4)
[<c009d060>] (search_binary_handler+0x94/0x2a4) from [<c009e8fc>] (do_execve+0x254/0x364)
[<c009e8fc>] (do_execve+0x254/0x364) from [<c0010e84>] (sys_execve+0x34/0x54)
[<c0010e84>] (sys_execve+0x34/0x54) from [<c000da00>] (ret_fast_syscall+0x0/0x30)
---[ end trace 0000000000000002 ]---

The reason is that ARM needs irqs enabled when doing activate_mm().
According to mm-protect-activate-switch-mm.patch,
preempt_[disable|enable]_rt() is actually sufficient.

Inspired-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Yong Zhang <yong.zhang0@gmail.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Link: http://lkml.kernel.org/r/1337061236-1766-1-git-send-email-yong.zhang0@gmail.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

2014-04-10  mm/memcontrol: Don't call schedule_work_on in preemption disabled context  (Yang Shi)

The following trace is triggered when running the LTP oom test cases:

BUG: sleeping function called from invalid context at kernel/rtmutex.c:659
in_atomic(): 1, irqs_disabled(): 0, pid: 17188, name: oom03
Preemption disabled at: [<ffffffff8112ba70>] mem_cgroup_reclaim+0x90/0xe0
CPU: 2 PID: 17188 Comm: oom03 Not tainted 3.10.10-rt3 #2
Hardware name: Intel Corporation Calpella platform/MATXM-CORE-411-B, BIOS 4.6.3 08/18/2010
 ffff88007684d730 ffff880070df9b58 ffffffff8169918d ffff880070df9b70
 ffffffff8106db31 ffff88007688b4a0 ffff880070df9b88 ffffffff8169d9c0
 ffff88007688b4a0 ffff880070df9bc8 ffffffff81059da1 0000000170df9bb0
Call Trace:
 [<ffffffff8169918d>] dump_stack+0x19/0x1b
 [<ffffffff8106db31>] __might_sleep+0xf1/0x170
 [<ffffffff8169d9c0>] rt_spin_lock+0x20/0x50
 [<ffffffff81059da1>] queue_work_on+0x61/0x100
 [<ffffffff8112b361>] drain_all_stock+0xe1/0x1c0
 [<ffffffff8112ba70>] mem_cgroup_reclaim+0x90/0xe0
 [<ffffffff8112beda>] __mem_cgroup_try_charge+0x41a/0xc40
 [<ffffffff810f1c91>] ? release_pages+0x1b1/0x1f0
 [<ffffffff8106f200>] ? sched_exec+0x40/0xb0
 [<ffffffff8112cc87>] mem_cgroup_charge_common+0x37/0x70
 [<ffffffff8112e2c6>] mem_cgroup_newpage_charge+0x26/0x30
 [<ffffffff8110af68>] handle_pte_fault+0x618/0x840
 [<ffffffff8103ecf6>] ? unpin_current_cpu+0x16/0x70
 [<ffffffff81070f94>] ? migrate_enable+0xd4/0x200
 [<ffffffff8110cde5>] handle_mm_fault+0x145/0x1e0
 [<ffffffff810301e1>] __do_page_fault+0x1a1/0x4c0
 [<ffffffff8169c9eb>] ? preempt_schedule_irq+0x4b/0x70
 [<ffffffff8169e3b7>] ? retint_kernel+0x37/0x40
 [<ffffffff8103053e>] do_page_fault+0xe/0x10
 [<ffffffff8169e4c2>] page_fault+0x22/0x30

So, to prevent schedule_work_on() from being called in a preempt-disabled
context, replace the get/put_cpu() pair with get/put_cpu_light().

Cc: stable-rt@vger.kernel.org
Signed-off-by: Yang Shi <yang.shi@windriver.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

2014-04-10  mm: page_alloc: Use local_lock_on() instead of plain spinlock  (Thomas Gleixner)

The plain spinlock, while sufficient, does not update the local_lock
internals. Use a proper local_lock function instead to ease debugging.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable-rt@vger.kernel.org

2014-04-10  slub: delay ctor until the object is requested  (Sebastian Andrzej Siewior)

It seems that allocating a large number of objects causes latency on ARM,
since that code cannot be preempted.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

2014-04-10  slub: Enable irqs for __GFP_WAIT  (Thomas Gleixner)

SYSTEM_RUNNING might be too late for enabling interrupts: allocations
with __GFP_WAIT can happen before the system reaches that state. So use
the __GFP_WAIT flag itself as the indicator.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

2014-04-10  mm: Enable SLUB for RT  (Thomas Gleixner)

Make SLUB RT aware and remove the restriction in Kconfig.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>