path: root/kernel/trace/trace_functions_graph.c
author    Steven Rostedt <srostedt@redhat.com>  2013-01-16 03:11:19 (GMT)
committer Steven Rostedt <rostedt@goodmis.org>  2013-01-21 18:22:34 (GMT)
commit    0f1ac8fd254b6c3e77950a1c4ee67be5dc88f7e0 (patch)
tree      694bd2973e8146c6a598adaa086f3af7a5515bf1 /kernel/trace/trace_functions_graph.c
parent    84c6cf0db6a00601eb43cfc08244a398ffb0894c (diff)
download  linux-fsl-qoriq-0f1ac8fd254b6c3e77950a1c4ee67be5dc88f7e0.tar.xz
tracing/lockdep: Disable lockdep first in entering NMI
When function tracing is used with either debug locks enabled or preempt tracing enabled, add_preempt_count() is traced. This is an issue with lockdep and function tracing: as function tracing can disable interrupts, and lockdep records that change, lockdep may not be able to handle this recursion if it happens from an NMI context.

The first thing that an NMI does is:

 #define nmi_enter()						\
	do {							\
		ftrace_nmi_enter();				\
		BUG_ON(in_nmi());				\
		add_preempt_count(NMI_OFFSET + HARDIRQ_OFFSET);	\
		lockdep_off();					\
		rcu_nmi_enter();				\
		trace_hardirq_enter();				\
	} while (0)

When add_preempt_count() is traced, and the tracing callback disables interrupts, it will jump into the lockdep code. There are some places in lockdep that cannot handle this re-entrance, which causes lockdep to fail.

As lockdep_off() (and lockdep_on()) is a simple:

 void lockdep_off(void)
 {
	current->lockdep_recursion++;
 }

and is never traced, it can be called first in nmi_enter() and lockdep_on() last in nmi_exit().

Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
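For context, a rough sketch of the reordered macros implied by this change, based only on the description above. The nmi_enter() body comes from the message; the nmi_exit() body is reconstructed from the hardirq.h of that era as an assumption and may not match this tree exactly:

 /* Sketch: lockdep_off() becomes the very first action on NMI entry,
  * so any traced call inside nmi_enter() already sees lockdep disabled. */
 #define nmi_enter()						\
	do {							\
		lockdep_off();					\
		ftrace_nmi_enter();				\
		BUG_ON(in_nmi());				\
		add_preempt_count(NMI_OFFSET + HARDIRQ_OFFSET);	\
		rcu_nmi_enter();				\
		trace_hardirq_enter();				\
	} while (0)

 /* ...and lockdep_on() becomes the very last action on NMI exit
  * (surrounding calls here are an assumption, not taken from the patch). */
 #define nmi_exit()						\
	do {							\
		trace_hardirq_exit();				\
		rcu_nmi_exit();					\
		BUG_ON(!in_nmi());				\
		sub_preempt_count(NMI_OFFSET + HARDIRQ_OFFSET);	\
		ftrace_nmi_exit();				\
		lockdep_on();					\
	} while (0)

With this ordering, any function traced while add_preempt_count() runs re-enters lockdep with current->lockdep_recursion already raised, so lockdep simply ignores the recursive call instead of failing.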
Diffstat (limited to 'kernel/trace/trace_functions_graph.c')
0 files changed, 0 insertions, 0 deletions