author | Steven Rostedt (Red Hat) <rostedt@goodmis.org> | 2014-12-10 22:31:07 (GMT)
committer | Steven Rostedt <rostedt@goodmis.org> | 2014-12-13 04:16:38 (GMT)
commit | aee4e5f3d3abb7a2239dd02f6d8fb173413fd02f (patch)
tree | dd78b839b1d7f65be77fcb5522fcdb5ba531f576 /include
parent | f823b37ba6703f7cc65c8a2350872a67f080cafe (diff)
download | linux-aee4e5f3d3abb7a2239dd02f6d8fb173413fd02f.tar.xz
tracing/sched: Check preempt_count() for current when reading task->state
When recording the state of a task for the sched_switch tracepoint, a check of
task_preempt_count() is performed to see if PREEMPT_ACTIVE is set. This is
because, technically, a task being preempted is really in the TASK_RUNNING
state, and that is what should be recorded when tracing a sched_switch,
even if the task put itself into another state (it has not scheduled out
in that state yet).
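For reference, the helper in include/trace/events/sched.h looked roughly like
this before the fix (reconstructed from the hunk at the bottom of this page;
the lines outside the hunk are approximate, not verbatim):

static inline long __trace_sched_switch_state(struct task_struct *p)
{
	long state = p->state;

#ifdef CONFIG_PREEMPT
	/*
	 * For all intents and purposes a preempted task is a running task.
	 */
	if (task_preempt_count(p) & PREEMPT_ACTIVE)
		state = TASK_RUNNING | TASK_STATE_MAX;
#endif

	return state;
}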
But with the change to use per-CPU preempt counts,
task_thread_info(p)->preempt_count is no longer used;
task_preempt_count(p) is used instead.
The problem is that this does not use the current preempt count but a stale
one from a previous sched_switch: task_preempt_count(p) reads
saved_preempt_count, not preempt_count(). But for tracing sched_switch,
when p is current, we really want preempt_count().
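For context, on x86 the two accessors differ roughly as follows after the
per-CPU conversion (a paraphrased sketch of asm/preempt.h from that era, not
a verbatim copy; the PREEMPT_NEED_RESCHED masking is left out):

DECLARE_PER_CPU(int, __preempt_count);

/* The live count of the CPU this task is running on right now. */
static __always_inline int preempt_count(void)
{
	return raw_cpu_read_4(__preempt_count);
}

/*
 * The count saved into the task's thread_info at its last context
 * switch -- a snapshot, so an old PREEMPT_ACTIVE can still be set here.
 */
#define task_preempt_count(p) \
	(task_thread_info(p)->saved_preempt_count)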
I hit this bug when I was tracing sleep and the call from do_nanosleep()
scheduled out in the "RUNNING" state.
sleep-4290 [000] 537272.259992: sched_switch: sleep:4290 [120] R ==> swapper/0:0 [120]
sleep-4290 [000] 537272.260015: kernel_stack: <stack trace>
=> __schedule (ffffffff8150864a)
=> schedule (ffffffff815089f8)
=> do_nanosleep (ffffffff8150b76c)
=> hrtimer_nanosleep (ffffffff8108d66b)
=> SyS_nanosleep (ffffffff8108d750)
=> return_to_handler (ffffffff8150e8e5)
=> tracesys_phase2 (ffffffff8150c844)
After a bit of hair pulling, I found that the state was really
TASK_INTERRUPTIBLE, but the saved_preempt_count had an old PREEMPT_ACTIVE
set, which caused the sched_switch tracepoint to show it as RUNNING.
Link: http://lkml.kernel.org/r/20141210174428.3cb7542a@gandalf.local.home
Acked-by: Ingo Molnar <mingo@kernel.org>
Cc: stable@vger.kernel.org # 3.13+
Cc: Peter Zijlstra <peterz@infradead.org>
Fixes: 01028747559a ("sched: Create more preempt_count accessors")
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Diffstat (limited to 'include')
-rw-r--r-- | include/trace/events/sched.h | 2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/trace/events/sched.h b/include/trace/events/sched.h
index 0a68d5a..a7d67bc 100644
--- a/include/trace/events/sched.h
+++ b/include/trace/events/sched.h
@@ -100,7 +100,7 @@ static inline long __trace_sched_switch_state(struct task_struct *p)
 	/*
 	 * For all intents and purposes a preempted task is a running task.
 	 */
-	if (task_preempt_count(p) & PREEMPT_ACTIVE)
+	if (preempt_count() & PREEMPT_ACTIVE)
 		state = TASK_RUNNING | TASK_STATE_MAX;
 #endif