Age  Commit message  Author
2014-05-14  timer-fd: Prevent live lock  (Thomas Gleixner)
If hrtimer_try_to_cancel() requires a retry, then depending on the priority setting the retry loop might prevent timer callback completion on RT. Prevent that by waiting for completion on RT; no change for a non-RT kernel.

Reported-by: Sankara Muthukrishnan <sankara.m@gmail.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable-rt@vger.kernel.org
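A sketch of the resulting cancel path, assuming the hrtimer_wait_for_timer() helper added by the "hrtimers: prepare full preemption" entry below (function and field names illustrative, not the actual timerfd diff):

    static void timerfd_cancel_sync(struct timerfd_ctx *ctx)
    {
        for (;;) {
            if (hrtimer_try_to_cancel(&ctx->tmr) >= 0)
                break;          /* cancelled, or callback was not running */
    #ifdef CONFIG_PREEMPT_RT_FULL
            /* RT: sleep until the running callback completes instead of
             * spinning, which could starve the thread running it. */
            hrtimer_wait_for_timer(&ctx->tmr);
    #else
            cpu_relax();        /* !RT: a brief spin is harmless */
    #endif
        }
    }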
2014-05-14  hrtimer: fixup hrtimer callback changes for preempt-rt  (Thomas Gleixner)
In preempt-rt we cannot call the callbacks which take sleeping locks from the timer interrupt context. Bring back the softirq split for now, until we have fixed the signal delivery problem for real.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
2014-05-14  hrtimers: prepare full preemption  (Ingo Molnar)
Make cancellation of a running callback in softirq context safe against preemption.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2014-05-14  timers: Avoid the switch timers base set to NULL trick on RT  (Thomas Gleixner)
On RT that code is preemptible, so we cannot assign NULL to the timer's base: a task preempting the writer would spin forever in lock_timer_base().

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
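For reference, the mainline pattern the changelog refers to looks roughly like this (simplified sketch):

    /* A NULL base means "timer is being migrated"; readers spin until
     * it becomes valid again. */
    static struct tvec_base *lock_timer_base(struct timer_list *timer,
                                             unsigned long *flags)
    {
        struct tvec_base *base;

        for (;;) {
            base = timer->base;
            if (base) {
                spin_lock_irqsave(&base->lock, *flags);
                if (base == timer->base)
                    return base;    /* still valid, got it */
                spin_unlock_irqrestore(&base->lock, *flags);
            }
            /* On RT this loop runs preemptibly: if the task that set
             * base to NULL is preempted, we would spin here forever. */
            cpu_relax();
        }
    }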
2014-05-14  timer: delay waking softirqs from the jiffy tick  (Peter Zijlstra)
People were complaining about broken balancing with the recent -rt series. A look at /proc/sched_debug yielded:

cpu#0, 2393.874 MHz
  .nr_running   : 0
  .load         : 0
  .cpu_load[0]  : 177522
  .cpu_load[1]  : 177522
  .cpu_load[2]  : 177522
  .cpu_load[3]  : 177522
  .cpu_load[4]  : 177522
cpu#1, 2393.874 MHz
  .nr_running   : 4
  .load         : 4096
  .cpu_load[0]  : 181618
  .cpu_load[1]  : 180850
  .cpu_load[2]  : 180274
  .cpu_load[3]  : 179938
  .cpu_load[4]  : 179758

Which indicated the cpu_load computation was hosed; the 177522 value indicates that there is one RT task runnable. Initially I thought the old problem of calculating the cpu_load from a softirq had re-surfaced, however looking at the code shows it's being done from scheduler_tick(). [ We really should fix this RT/cfs interaction some day... ] A few trace_printk()s later:

sirq-timer/1-19 [001] 174.289744: 19: 50:S ==> [001] 0:140:R <idle>
<idle>-0       [001] 174.290724: enqueue_task_rt: adding task: 19/sirq-timer/1 with load: 177522
<idle>-0       [001] 174.290725: 0:140:R + [001] 19: 50:S sirq-timer/1
<idle>-0       [001] 174.290730: scheduler_tick: current load: 177522
<idle>-0       [001] 174.290732: scheduler_tick: current: 0/swapper
<idle>-0       [001] 174.290736: 0:140:R ==> [001] 19: 50:R sirq-timer/1
sirq-timer/1-19 [001] 174.290741: dequeue_task_rt: removing task: 19/sirq-timer/1 with load: 177522
sirq-timer/1-19 [001] 174.290743: 19: 50:S ==> [001] 0:140:R <idle>

We see that we always raise the timer softirq before doing the load calculation. Avoid this by re-ordering the scheduler_tick() call in update_process_times() to occur before we deal with timers. This lowers the load back to sanity and restores regular load-balancing behaviour.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
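A sketch of the reordering in update_process_times() (simplified; the real function also runs RCU callbacks and more):

    void update_process_times(int user_tick)
    {
        struct task_struct *p = current;

        account_process_tick(p, user_tick);
        scheduler_tick();       /* sample cpu_load *before* ...          */
        run_local_timers();     /* ... the timer softirq thread is woken */
        run_posix_cpu_timers(p);
    }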
2014-05-14  timers: preempt-rt support  (Ingo Molnar)
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2014-05-14  timers: prepare for full preemption improve  (Zhao Hongjiang)
wake_up() should do nothing in the !RT case, so we should use wakeup_timer_waiters(); also fix a spelling mistake.

Cc: stable-rt@vger.kernel.org
Signed-off-by: Zhao Hongjiang <zhaohongjiang@huawei.com>
[bigeasy: s/CONFIG_PREEMPT_RT_BASE/CONFIG_PREEMPT_RT_FULL/]
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
2014-05-14  timers: prepare for full preemption  (Ingo Molnar)
When softirqs can be preempted we need to make sure that cancelling the timer from the active thread cannot deadlock against a running timer callback. Add a waitqueue to resolve that.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
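A hedged sketch of the waitqueue approach (names approximate the RT tree; compare the wakeup_timer_waiters() mentioned in the entry above):

    struct tvec_base {
        spinlock_t          lock;
        struct timer_list   *running_timer;
    #ifdef CONFIG_PREEMPT_RT_FULL
        wait_queue_head_t   wait_for_running_timer;
    #endif
        /* ... */
    };

    /* del_timer_sync() path: sleep instead of spinning on the callback. */
    static void wait_for_running_timer(struct timer_list *timer)
    {
        struct tvec_base *base = timer->base;

        if (base->running_timer == timer)
            wait_event(base->wait_for_running_timer,
                       base->running_timer != timer);
    }

    /* Timer softirq: wake waiters once the callback has finished. */
    static inline void wakeup_timer_waiters(struct tvec_base *base)
    {
        wake_up(&base->wait_for_running_timer);
    }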
2014-05-14  relay: fix timer madness  (Ingo Molnar)
Remove timer calls (!!!) from deep within the tracing infrastructure. This was totally bogus code that can cause lockups and worse. Poll the buffer every 2 jiffies for now.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2014-05-14  ipc/mqueue: Add a critical section to avoid a deadlock  (KOBAYASHI Yoshitake)
(Repost for v3.0-rt1 with corrected destination addresses)

I have tested the following patch on v3.0-rt1 with PREEMPT_RT_FULL. In a POSIX message queue, if a sender process uses SCHED_FIFO and has a higher priority than a receiver process, the sender gets stuck at ipc/mqueue.c:452:

452 while (ewp->state == STATE_PENDING)
453         cpu_relax();

Description of the problem:

(receiver process)
1. The receiver changes the sender's state to STATE_PENDING (mqueue.c:846).
2. It wakes up the sender process and "switches to the sender" (mqueue.c:847).
   Note: this context switch only happens in a PREEMPT_RT_FULL kernel.
(sender process)
3. The sender checks its own state in the loop above (mqueue.c:452-453).
*. The receiver will never run again and so cannot change the sender's state to STATE_READY, because the sender has the higher priority.

Signed-off-by: Yoshitake Kobayashi <yoshitake.kobayashi@toshiba.co.jp>
Cc: viro@zeniv.linux.org.uk
Cc: dchinner@redhat.com
Cc: npiggin@kernel.dk
Cc: hch@lst.de
Cc: arnd@arndb.de
Link: http://lkml.kernel.org/r/4E2A38A0.1090601@toshiba.co.jp
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
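Based on the description above, the fix presumably makes the wakeup and the state update one non-preemptible unit on RT, using the preempt_disable_rt() helper sketched under the next entry; a hypothetical sketch:

    /* Receiver side: the woken, higher-priority sender must not run
     * before its state has been flipped to STATE_READY. */
    preempt_disable_rt();
    wake_up_process(sender->task);
    smp_wmb();                      /* order the wakeup vs. the state write */
    sender->state = STATE_READY;    /* lets the sender's spin loop exit     */
    preempt_enable_rt();            /* only now may the sender preempt us   */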
2014-05-14  ipc: Make the ipc code -rt aware  (Ingo Molnar)
RT serializes the code with the (rt)spinlock but keeps preemption enabled. Some parts of the code need to be atomic nevertheless. Protect them with preempt_disable/enable_rt() pairs.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
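The preempt_disable/enable_rt() helpers are part of the RT tree's family of conditional variants, roughly:

    #ifdef CONFIG_PREEMPT_RT_FULL
    # define preempt_disable_rt()       preempt_disable()
    # define preempt_enable_rt()        preempt_enable()
    # define preempt_disable_nort()     barrier()
    # define preempt_enable_nort()      barrier()
    #else
    # define preempt_disable_rt()       barrier()
    # define preempt_enable_rt()        barrier()
    # define preempt_disable_nort()     preempt_disable()
    # define preempt_enable_nort()      preempt_enable()
    #endif

The _rt variants compile away on a non-RT kernel; the _nort variants compile away on RT.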
2014-05-14  panic: skip get_random_bytes for RT_FULL in init_oops_id  (Thomas Gleixner)
2014-05-14  radix-tree-rt-aware.patch  (Thomas Gleixner)
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2014-05-14  mm/memcontrol: Don't call schedule_work_on in preemption disabled context  (Yang Shi)
The following trace is triggered when running ltp oom test cases:

BUG: sleeping function called from invalid context at kernel/rtmutex.c:659
in_atomic(): 1, irqs_disabled(): 0, pid: 17188, name: oom03
Preemption disabled at: [<ffffffff8112ba70>] mem_cgroup_reclaim+0x90/0xe0
CPU: 2 PID: 17188 Comm: oom03 Not tainted 3.10.10-rt3 #2
Hardware name: Intel Corporation Calpella platform/MATXM-CORE-411-B, BIOS 4.6.3 08/18/2010
 ffff88007684d730 ffff880070df9b58 ffffffff8169918d ffff880070df9b70
 ffffffff8106db31 ffff88007688b4a0 ffff880070df9b88 ffffffff8169d9c0
 ffff88007688b4a0 ffff880070df9bc8 ffffffff81059da1 0000000170df9bb0
Call Trace:
 [<ffffffff8169918d>] dump_stack+0x19/0x1b
 [<ffffffff8106db31>] __might_sleep+0xf1/0x170
 [<ffffffff8169d9c0>] rt_spin_lock+0x20/0x50
 [<ffffffff81059da1>] queue_work_on+0x61/0x100
 [<ffffffff8112b361>] drain_all_stock+0xe1/0x1c0
 [<ffffffff8112ba70>] mem_cgroup_reclaim+0x90/0xe0
 [<ffffffff8112beda>] __mem_cgroup_try_charge+0x41a/0xc40
 [<ffffffff810f1c91>] ? release_pages+0x1b1/0x1f0
 [<ffffffff8106f200>] ? sched_exec+0x40/0xb0
 [<ffffffff8112cc87>] mem_cgroup_charge_common+0x37/0x70
 [<ffffffff8112e2c6>] mem_cgroup_newpage_charge+0x26/0x30
 [<ffffffff8110af68>] handle_pte_fault+0x618/0x840
 [<ffffffff8103ecf6>] ? unpin_current_cpu+0x16/0x70
 [<ffffffff81070f94>] ? migrate_enable+0xd4/0x200
 [<ffffffff8110cde5>] handle_mm_fault+0x145/0x1e0
 [<ffffffff810301e1>] __do_page_fault+0x1a1/0x4c0
 [<ffffffff8169c9eb>] ? preempt_schedule_irq+0x4b/0x70
 [<ffffffff8169e3b7>] ? retint_kernel+0x37/0x40
 [<ffffffff8103053e>] do_page_fault+0xe/0x10
 [<ffffffff8169e4c2>] page_fault+0x22/0x30

So, to prevent schedule_work_on() from being called in preempt-disabled context, replace the get/put_cpu() pair with get/put_cpu_light().

Cc: stable-rt@vger.kernel.org
Signed-off-by: Yang Shi <yang.shi@windriver.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
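get_cpu_light() only disables migration, not preemption, so later sleeping locks remain legal; a sketch of the helpers (matching the RT tree's shape) with an approximate call-site change:

    #ifdef CONFIG_PREEMPT_RT_FULL
    # define get_cpu_light()    ({ migrate_disable(); smp_processor_id(); })
    # define put_cpu_light()    migrate_enable()
    #else
    # define get_cpu_light()    get_cpu()
    # define put_cpu_light()    put_cpu()
    #endif

    /* drain_all_stock(), roughly: */
    int curcpu, cpu;

    curcpu = get_cpu_light();   /* was get_cpu(): preemption stays enabled */
    for_each_online_cpu(cpu) {
        /* ... schedule_work_on(cpu, ...) may take a sleeping lock ... */
    }
    put_cpu_light();            /* was put_cpu() */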
2014-05-14  mm: page_alloc: Use local_lock_on() instead of plain spinlock  (Thomas Gleixner)
The plain spinlock, while sufficient, does not update the local_lock internals. Use a proper local_lock function instead to ease debugging.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable-rt@vger.kernel.org
2014-05-14  slub: delay ctor until the object is requested  (Sebastian Andrzej Siewior)
It seems that allocating many objects causes latency on ARM, since that code cannot be preempted.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
2014-05-14  slub: Enable irqs for __GFP_WAIT  (Thomas Gleixner)
SYSTEM_RUNNING might be too late for enabling interrupts. Allocations with __GFP_WAIT can happen before that, so use that flag as the indicator instead.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
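A hedged sketch of the shape of the check (not the actual diff): base the "may we enable interrupts around this allocation" decision on the allocation flags instead of on system_state:

    /* __GFP_WAIT allocations are allowed to sleep, so they may also run
     * with interrupts enabled -- even before SYSTEM_RUNNING is reached. */
    static inline bool slub_may_enable_irqs(gfp_t flags)
    {
        return (flags & __GFP_WAIT) != 0;
    }

    /* usage in the slab-page allocation path, roughly:
     *   if (slub_may_enable_irqs(flags)) local_irq_enable(); ... */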
2014-05-14  mm: Enable SLUB for RT  (Thomas Gleixner)
Make SLUB RT aware and remove the restriction in Kconfig.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2014-05-14  mm: Allow only slub on RT  (Ingo Molnar)
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2014-05-14  mm: bounce: Use local_irq_save_nort  (Thomas Gleixner)
kmap_atomic() is preemptible on RT.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2014-05-14  ARM: Initialize ptl->lock for vector page  (Frank Rowand)
Without this patch, ARM cannot use SPLIT_PTLOCK_CPUS if PREEMPT_RT_FULL=y, because vectors_user_mapping() creates a VM_ALWAYSDUMP mapping of the vector page (address 0xffff0000), but no ptl->lock has been allocated for the page. An attempt to coredump that page will result in a kernel NULL pointer dereference when follow_page() attempts to lock the page. The call tree to the NULL pointer dereference is:

do_notify_resume()
  get_signal_to_deliver()
    do_coredump()
      elf_core_dump()
        get_dump_page()
          __get_user_pages()
            follow_page()
              pte_offset_map_lock() <----- a #define
                ...
                rt_spin_lock()

The underlying problem is exposed by mm-shrink-the-page-frame-to-rt-size.patch.

Signed-off-by: Frank Rowand <frank.rowand@am.sony.com>
Cc: Frank <Frank_Rowand@sonyusa.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/4E87C535.2030907@am.sony.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2014-05-14  mm: shrink the page frame to !-rt size  (Peter Zijlstra)
The below is a boot-tested hack to shrink the page frame size back to normal. It should be a net win, since there should be many fewer PTE pages than page frames.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
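The hack presumably un-embeds the page-table lock on RT, where spinlock_t is rtmutex-sized and would bloat every struct page; a sketch of the shape (illustrative, not the actual diff):

    struct page {
        /* ... */
    #if USE_SPLIT_PTLOCKS
    # ifndef CONFIG_PREEMPT_RT_FULL
        spinlock_t ptl;         /* !RT: small enough to embed */
    # else
        spinlock_t *ptl;        /* RT: allocated separately, so only
                                 * PTE pages pay for the rtmutex */
    # endif
    #endif
    };

This also explains the previous entry: with ptl allocated on demand, pages that never go through the normal PTE-page setup (like ARM's vector page) need an explicit initialization.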
2014-05-14  mm: make vmstat -rt aware  (Ingo Molnar)
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2014-05-14  mm: convert swap to percpu locked  (Ingo Molnar)
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2014-05-14  mm-page-alloc-fix.patch  (Thomas Gleixner)
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2014-05-14  mm: page_alloc reduce lock sections further  (Peter Zijlstra)
Split out the pages which are to be freed into a separate list and call free_pages_bulk() outside of the percpu page allocator locks.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
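A sketch of the pattern (names approximate; pa_lock is the per-cpu local lock sketched under the next entry):

    static void drain_pcp_pages(struct zone *zone, struct per_cpu_pages *pcp)
    {
        LIST_HEAD(dst);
        unsigned long flags;
        int count;

        local_lock_irqsave(pa_lock, flags);
        count = pcp->count;
        /* Detach the pages under the lock; one migratetype shown. */
        list_splice_init(&pcp->lists[MIGRATE_UNMOVABLE], &dst);
        pcp->count = 0;
        local_unlock_irqrestore(pa_lock, flags);

        /* The expensive part runs without the per-cpu lock held. */
        free_pages_bulk(zone, count, &dst);
    }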
2014-05-14  mm: page_alloc: rt-friendly per-cpu pages  (Ingo Molnar)
rt-friendly per-cpu pages: convert the irqs-off per-cpu locking method into a preemptible, explicit-per-cpu-locks method.

Contains fixes from:
Peter Zijlstra <a.p.zijlstra@chello.nl>
Thomas Gleixner <tglx@linutronix.de>

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
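The conversion pattern, sketched with the RT tree's local-lock API:

    static DEFINE_LOCAL_IRQ_LOCK(pa_lock);

    /* Before: implicit per-cpu exclusion by disabling interrupts. */
    local_irq_save(flags);
    /* ... operate on this CPU's pcp lists ... */
    local_irq_restore(flags);

    /* After: on !RT this compiles to the irq-off version above, while
     * RT takes a per-cpu spinlock and stays preemptible. */
    local_lock_irqsave(pa_lock, flags);
    /* ... operate on this CPU's pcp lists ... */
    local_unlock_irqrestore(pa_lock, flags);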
2014-05-14  cpu-rt-variants.patch  (Thomas Gleixner)
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2014-05-14  use local spin_locks in local_lock  (Nicholas Mc Guire)
Drop the recursive call to migrate_disable/enable for the local_*lock* API, reported by Steven Rostedt. local_lock will call migrate_disable via get_local_var - the call tree is:

get_locked_var
 `-> local_lock(lvar)
      `-> __local_lock(&get_local_var(lvar));
           `--> #define get_local_var(var) (*({ migrate_disable(); &__get_cpu_var(var); }))

thus there should be no need to call migrate_disable/enable recursively in spin_try/lock/unlock. This patch adds a spin_trylock_local and replaces the migration disabling calls with the local calls.

This patch is incomplete as it does not yet cover the _irq/_irqsave variants with local locks. This patch requires the API cleanup in kernel/softirq.c or it would break softirq_lock/unlock with respect to migration.

Signed-off-by: Nicholas Mc Guire <der.herr@hofr.at>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
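A sketch of the resulting shape (approximating the RT tree's locallock internals):

    /* get_local_var() has already migrate-disabled, so the inner lock
     * must not do it again; the _local variants skip that step. */
    #ifdef CONFIG_PREEMPT_RT_FULL
    # define spin_lock_local(lock)      rt_spin_lock(lock)
    # define spin_trylock_local(lock)   rt_spin_trylock(lock)
    # define spin_unlock_local(lock)    rt_spin_unlock(lock)
    #endif

    static inline void __local_lock(struct local_irq_lock *lv)
    {
        if (lv->owner != current) {
            spin_lock_local(&lv->lock); /* no nested migrate_disable() */
            LL_WARN(lv->owner);
            lv->owner = current;
        }
        lv->nestcnt++;
    }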
2014-05-14  rt-local-irq-lock.patch  (Thomas Gleixner)
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2014-05-14  local-var.patch  (Thomas Gleixner)
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2014-05-14  USB: Fix the mouse problem when copying large amounts of data  (Wu Zhangjin)
When copying large amounts of data between USB storage devices and the hard disk, the USB mouse will not work; this patch fixes it.

[NOTE: This problem has been found on the Loongson family machines; not sure whether it is reproducible on other platforms.]

Signed-off-by: Hu Hongbing <huhb@lemote.com>
Signed-off-by: Wu Zhangjin <wuzhangjin@gmail.com>
2014-05-14  net: gianfar: do not try to cleanup TX packets if they are not done  (Sebastian Andrzej Siewior)
What I observe is that the TX queue is not empty and does not make any progress. gfar_clean_tx_ring() does not clean up the packet because it is not completed yet. The root cause is that the DMA engine has not started yet (it was preempted before doing so) and that dumb loop loops until the packet is gone. This has been broken since c233cf4 ("gianfar: Fix tx napi polling"). What remains are spurious interrupts if CPU0 cleans up TX packets and CPU1 returns with IRQ_NONE.

Cc: stable-rt@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
2014-05-14  net: gianfar: do not disable interrupts  (Sebastian Andrzej Siewior)
Each per-queue lock is taken with spin_lock_irqsave() except in the case where all of them are taken for some kind of serialisation. As an optimisation, local_irq_save() is used there so that lock_tx_qs() and lock_rx_qs() can use just the spin_lock() variant instead. On RT local_irq_save() behaves differently, so we use the _nort() variant. Lockdep screams easily at "ethtool -K eth0 rx off tx off". What remains is a missing lockdep annotation that makes lockdep think lock_tx_qs() may cause a deadlock.

Cc: stable-rt@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
2014-05-14  drivers: net: gianfar: Make RT aware  (Thomas Gleixner)
The adjust_link() function disables interrupts before taking the queue locks. On RT those locks are converted to "sleeping" locks, and therefore the local_irq_save/restore calls must be converted to local_irq_save/restore_nort.

Reported-by: Xianghua Xiao <xiaoxianghua@gmail.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Xianghua Xiao <xiaoxianghua@gmail.com>
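The _nort() variants used here and in several entries below are roughly:

    #ifdef CONFIG_PREEMPT_RT_FULL
    /* RT: leave interrupts enabled; the spinlocks taken afterwards are
     * sleeping locks and provide the exclusion themselves. */
    # define local_irq_save_nort(flags)     local_save_flags(flags)
    # define local_irq_restore_nort(flags)  (void)(flags)
    #else
    /* !RT: behave exactly like the plain variants. */
    # define local_irq_save_nort(flags)     local_irq_save(flags)
    # define local_irq_restore_nort(flags)  local_irq_restore(flags)
    #endif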
2014-05-14  drivers/net: vortex fix locking issues  (Steven Rostedt)
Argh, cut and paste wasn't enough... Use this patch instead. It needs an irq disable. But, believe it or not, on SMP this is actually better. If the irq is shared (as it is in Mark's case), we don't stop the irq of other devices from being handled on another CPU (unfortunately for Mark, he pinned all interrupts to one CPU).

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

 drivers/net/ethernet/3com/3c59x.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

Signed-off-by: Ingo Molnar <mingo@elte.hu>
2014-05-14  drivers/net: fix livelock issues  (Thomas Gleixner)
Preempt-RT runs into a live lock issue with the NETDEV_TX_LOCKED micro-optimization. The reason is that the softirq thread is rescheduling itself on that return value. Depending on priorities, it starts to monopolize the CPU and to livelock on UP systems. Remove it.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2014-05-14  genirq: do not invoke the affinity callback via a workqueue  (Sebastian Andrzej Siewior)
Joe Korty reported that __irq_set_affinity_locked() schedules a workqueue while holding a raw lock, which results in a might_sleep() warning. This patch moves the invocation into a process context so that we only wakeup() a process while holding the lock.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
2014-05-14  genirq-force-threading.patch  (Thomas Gleixner)
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2014-05-14  genirq: disable irqpoll on -rt  (Ingo Molnar)
Creates long latencies for no value.

Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2014-05-14  genirq: Disable DEBUG_SHIRQ for rt  (Thomas Gleixner)
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2014-05-14  list_bl.h: make list head locking RT safe  (Paul Gortmaker)
As per the changes in include/linux/jbd_common.h for avoiding the bit_spin_locks on RT ("fs: jbd/jbd2: Make state lock and journal head lock rt safe"), we do the same thing here. We use the non-atomic __set_bit and __clear_bit inside the scope of the lock to preserve the ability of the existing LIST_DEBUG code to use the zeroth bit in the sanity checks.

As a bit spinlock, we had no lockdep visibility into the usage of the list head locking. Now, if we were to implement it as a standard non-raw spinlock, we would see:

BUG: sleeping function called from invalid context at kernel/rtmutex.c:658
in_atomic(): 1, irqs_disabled(): 0, pid: 122, name: udevd
5 locks held by udevd/122:
 #0: (&sb->s_type->i_mutex_key#7/1){+.+.+.}, at: [<ffffffff811967e8>] lock_rename+0xe8/0xf0
 #1: (rename_lock){+.+...}, at: [<ffffffff811a277c>] d_move+0x2c/0x60
 #2: (&dentry->d_lock){+.+...}, at: [<ffffffff811a0763>] dentry_lock_for_move+0xf3/0x130
 #3: (&dentry->d_lock/2){+.+...}, at: [<ffffffff811a0734>] dentry_lock_for_move+0xc4/0x130
 #4: (&dentry->d_lock/3){+.+...}, at: [<ffffffff811a0747>] dentry_lock_for_move+0xd7/0x130
Pid: 122, comm: udevd Not tainted 3.4.47-rt62 #7
Call Trace:
 [<ffffffff810b9624>] __might_sleep+0x134/0x1f0
 [<ffffffff817a24d4>] rt_spin_lock+0x24/0x60
 [<ffffffff811a0c4c>] __d_shrink+0x5c/0xa0
 [<ffffffff811a1b2d>] __d_drop+0x1d/0x40
 [<ffffffff811a24be>] __d_move+0x8e/0x320
 [<ffffffff811a278e>] d_move+0x3e/0x60
 [<ffffffff81199598>] vfs_rename+0x198/0x4c0
 [<ffffffff8119b093>] sys_renameat+0x213/0x240
 [<ffffffff817a2de5>] ? _raw_spin_unlock+0x35/0x60
 [<ffffffff8107781c>] ? do_page_fault+0x1ec/0x4b0
 [<ffffffff817a32ca>] ? retint_swapgs+0xe/0x13
 [<ffffffff813eb0e6>] ? trace_hardirqs_on_thunk+0x3a/0x3f
 [<ffffffff8119b0db>] sys_rename+0x1b/0x20
 [<ffffffff817a3b96>] system_call_fastpath+0x1a/0x1f

Since we are only taking the lock during short-lived list operations, let's assume for now that it being raw won't be a significant latency concern.

Cc: stable-rt@vger.kernel.org
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
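A sketch of the change (simplified from the description above):

    struct hlist_bl_head {
        struct hlist_bl_node *first;
    #ifdef CONFIG_PREEMPT_RT_BASE
        raw_spinlock_t lock;    /* raw: held only across short list ops */
    #endif
    };

    static inline void hlist_bl_lock(struct hlist_bl_head *b)
    {
    #ifndef CONFIG_PREEMPT_RT_BASE
        bit_spin_lock(0, (unsigned long *)b);
    #else
        raw_spin_lock(&b->lock);
        /* Non-atomic is fine under the lock; this keeps LIST_DEBUG's
         * zeroth-bit sanity checks working. */
        __set_bit(0, (unsigned long *)b);
    #endif
    }

    static inline void hlist_bl_unlock(struct hlist_bl_head *b)
    {
    #ifndef CONFIG_PREEMPT_RT_BASE
        __bit_spin_unlock(0, (unsigned long *)b);
    #else
        __clear_bit(0, (unsigned long *)b);
        raw_spin_unlock(&b->lock);
    #endif
    }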
2014-05-14  fs: jbd/jbd2: Make state lock and journal head lock rt safe  (Thomas Gleixner)
bit_spin_locks break under RT.

Based on a previous patch from Steven Rostedt <rostedt@goodmis.org>

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

 include/linux/buffer_head.h | 10 ++++++++++
 include/linux/jbd_common.h  | 24 ++++++++++++++++++++++++
 2 files changed, 34 insertions(+)
2014-05-14  buffer_head: Replace bh_uptodate_lock for -rt  (Thomas Gleixner)
Wrap the bit_spin_lock calls into a separate inline and add the RT replacements with a real spinlock.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
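A sketch of the wrapper pair (hedged; b_uptodate_lock stands for the spinlock added to struct buffer_head on RT):

    static inline unsigned long bh_uptodate_lock_irqsave(struct buffer_head *bh)
    {
        unsigned long flags;

    #ifndef CONFIG_PREEMPT_RT_BASE
        local_irq_save(flags);
        bit_spin_lock(BH_Uptodate_Lock, &bh->b_state);
    #else
        spin_lock_irqsave(&bh->b_uptodate_lock, flags);
    #endif
        return flags;
    }

    static inline void bh_uptodate_unlock_irqrestore(struct buffer_head *bh,
                                                     unsigned long flags)
    {
    #ifndef CONFIG_PREEMPT_RT_BASE
        bit_spin_unlock(BH_Uptodate_Lock, &bh->b_state);
        local_irq_restore(flags);
    #else
        spin_unlock_irqrestore(&bh->b_uptodate_lock, flags);
    #endif
    }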
2014-05-14  mm: Replace cgroup_page bit spinlock  (Thomas Gleixner)
Bit spinlocks are not working on RT. Replace them.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2014-05-14  net-wireless-warn-nort.patch  (Thomas Gleixner)
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2014-05-14  signal-fix-up-rcu-wreckage.patch  (Thomas Gleixner)
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2014-05-14  mm: scatterlist dont disable irqs on RT  (Thomas Gleixner)
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2014-05-14  usb: use _nort in giveback  (Sebastian Andrzej Siewior)
Since commit 94dfd7ed ("USB: HCD: support giveback of URB in tasklet context") I see

|BUG: sleeping function called from invalid context at kernel/rtmutex.c:673
|in_atomic(): 0, irqs_disabled(): 1, pid: 109, name: irq/11-uhci_hcd
|no locks held by irq/11-uhci_hcd/109.
|irq event stamp: 440
|hardirqs last enabled at (439): [<ffffffff816a7555>] _raw_spin_unlock_irqrestore+0x75/0x90
|hardirqs last disabled at (440): [<ffffffff81514906>] __usb_hcd_giveback_urb+0x46/0xc0
|softirqs last enabled at (0): [<ffffffff81081821>] copy_process.part.52+0x511/0x1510
|softirqs last disabled at (0): [< (null)>] (null)
|CPU: 3 PID: 109 Comm: irq/11-uhci_hcd Not tainted 3.12.0-rt0-rc1+ #13
|Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
| 0000000000000000 ffff8800db9ffbe0 ffffffff8169f064 0000000000000000
| ffff8800db9ffbf8 ffffffff810b2122 ffff88020f03e888 ffff8800db9ffc18
| ffffffff816a6944 ffffffff810b5748 ffff88020f03c000 ffff8800db9ffc50
|Call Trace:
| [<ffffffff8169f064>] dump_stack+0x4e/0x8f
| [<ffffffff810b2122>] __might_sleep+0x112/0x190
| [<ffffffff816a6944>] rt_spin_lock+0x24/0x60
| [<ffffffff8158435b>] hid_ctrl+0x3b/0x190
| [<ffffffff8151490f>] __usb_hcd_giveback_urb+0x4f/0xc0
| [<ffffffff81514aaf>] usb_hcd_giveback_urb+0x3f/0x140
| [<ffffffff815346af>] uhci_giveback_urb+0xaf/0x280
| [<ffffffff8153666a>] uhci_scan_schedule+0x47a/0xb10
| [<ffffffff81537336>] uhci_irq+0xa6/0x1a0
| [<ffffffff81513c48>] usb_hcd_irq+0x28/0x40
| [<ffffffff810c8ba3>] irq_forced_thread_fn+0x23/0x70
| [<ffffffff810c918f>] irq_thread+0x10f/0x150
| [<ffffffff810a6fad>] kthread+0xcd/0xe0
| [<ffffffff816a842c>] ret_from_fork+0x7c/0xb0

On -RT we run threaded, so there is no need to disable interrupts.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
2014-05-14  usb: Use local_irq_*_nort() variants  (Steven Rostedt)
[ tglx: Now that irqf_disabled is dead we should kill that ]

Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>