|
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
kmap_atomic() is preemptible on RT.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
Without this patch, ARM cannot use SPLIT_PTLOCK_CPUS if
PREEMPT_RT_FULL=y because vectors_user_mapping() creates a
VM_ALWAYSDUMP mapping of the vector page (address 0xffff0000), but no
ptl->lock has been allocated for the page. An attempt to coredump
that page will result in a kernel NULL pointer dereference when
follow_page() attempts to lock the page.
The call tree to the NULL pointer dereference is:
do_notify_resume()
  get_signal_to_deliver()
    do_coredump()
      elf_core_dump()
        get_dump_page()
          __get_user_pages()
            follow_page()
              pte_offset_map_lock() <----- a #define
                ...
                  rt_spin_lock()
The underlying problem is exposed by mm-shrink-the-page-frame-to-rt-size.patch.
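For reference, pte_offset_map_lock() is roughly the following macro (a
simplified sketch of the mainline definition, not the exact RT code):

  #define pte_offset_map_lock(mm, pmd, address, ptlp)     \
  ({                                                      \
          spinlock_t *__ptl = pte_lockptr(mm, pmd);       \
          pte_t *__pte = pte_offset_map(pmd, address);    \
          *(ptlp) = __ptl;                                \
          spin_lock(__ptl);                               \
          __pte;                                          \
  })

With SPLIT_PTLOCK_CPUS the lock returned by pte_lockptr() lives in the
PTE page's struct page, so with no ptl set up for the vector page the
spin_lock() (rt_spin_lock() on RT) dereferences NULL.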
Signed-off-by: Frank Rowand <frank.rowand@am.sony.com>
Cc: Frank <Frank_Rowand@sonyusa.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/4E87C535.2030907@am.sony.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
The below is a boot-tested hack to shrink the page frame size back to
normal.
Should be a net win since there should be many fewer PTE-pages than
page-frames.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
Split out the pages which are to be freed into a separate list and
call free_pages_bulk() outside of the percpu page allocator locks.
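The pattern is roughly the following (a sketch; pa_lock is the per-cpu
lock from the per-cpu pages conversion below, and isolate_pcp_pages()
stands in for the helper the patch introduces):

  LIST_HEAD(dst);

  /* move the victim pages onto a private list under the lock */
  local_lock_irqsave(pa_lock, flags);
  isolated = isolate_pcp_pages(to_free, pcp, &dst);
  local_unlock_irqrestore(pa_lock, flags);

  /* the actual freeing runs with the lock dropped */
  free_pages_bulk(zone, isolated, &dst);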
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
rt-friendly per-cpu pages: convert the irqs-off per-cpu locking
method into a preemptible, explicit-per-cpu-locks method.
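The conversion follows this pattern (a sketch; pa_lock is the explicit
per-cpu lock the patch introduces):

  static DEFINE_LOCAL_IRQ_LOCK(pa_lock);

  /* was: local_irq_save(flags); */
  local_lock_irqsave(pa_lock, flags);
  /* operate on this CPU's per-cpu pageset */
  local_unlock_irqrestore(pa_lock, flags);
  /* was: local_irq_restore(flags); */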
Contains fixes from:
Peter Zijlstra <a.p.zijlstra@chello.nl>
Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
Drop the recursive call to migrate_disable/enable for the local_*lock* API,
reported by Steven Rostedt.
local_lock will call migrate_disable via get_local_var - call tree is:
get_locked_var
 `-> local_lock(lvar)
       `-> __local_lock(&get_local_var(lvar));
                         `--> # define get_local_var(var) (*({
                                  migrate_disable();
                                  &__get_cpu_var(var); }))
thus there should be no need to call migrate_disable/enable recursively in
spin_try/lock/unlock. This patch adds a spin_trylock_local and replaces
the migration disabling calls with the local calls.
This patch is incomplete as it does not yet cover the _irq/_irqsave variants
with local locks. This patch requires the API cleanup in kernel/softirq.c or
it would break softirq_lock/unlock with respect to migration.
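The local variants are plain RT lock operations without the extra
migrate_disable/enable, roughly:

  /* RT: lock ops that skip migrate_disable()/enable(), because
   * get_local_var()/put_local_var() already did that. */
  #define spin_lock_local(lock)         rt_spin_lock(lock)
  #define spin_trylock_local(lock)      rt_spin_trylock(lock)
  #define spin_unlock_local(lock)       rt_spin_unlock(lock)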
Signed-off-by: Nicholas Mc Guire <der.herr@hofr.at>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
|
|
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
When copying large amounts of data between the USB storage devices and
the hard disk, the USB mouse will not work; this patch fixes it.
[NOTE: This problem has been found on the Loongson family machines, not
sure whether it is reproducible on other platforms]
Signed-off-by: Hu Hongbing <huhb@lemote.com>
Signed-off-by: Wu Zhangjin <wuzhangjin@gmail.com>
|
|
What I observe is that the TX queue is not empty and does not make any
progress. gfar_clean_tx_ring() does not clean up the packet because it
is not completed yet.
The root cause is that the DMA engine did not start yet (it was
preempted before doing so) and that dumb loop loops until the packet
is gone.
This is broken since c233cf4 ("gianfar: Fix tx napi polling").
What remains are spurious interrupts if CPU0 cleans up TX packets and
CPU1 returns with IRQ_NONE.
Cc: stable-rt@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
|
|
Each per-queue lock is taken with spin_lock_irqsave(), except in the case
where all of them are taken for some kind of serialisation. As an
optimisation local_irq_save() is used so that lock_tx_qs() and
lock_rx_qs() can use just the spin_lock() variant instead.
On RT local_irq_save() behaves differently so we use the nort()
variant.
Lockdep screams easily on "ethtool -K eth0 rx off tx off".
What remains is a missing lockdep annotation that makes lockdep think
lock_tx_qs() may cause a deadlock.
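The change boils down to this pattern (sketch):

  /* was: local_irq_save(flags); */
  local_irq_save_nort(flags);     /* does not disable IRQs on RT */
  lock_tx_qs(priv);               /* plain spin_lock() per queue */
  lock_rx_qs(priv);

  /* ... serialized section ... */

  unlock_rx_qs(priv);
  unlock_tx_qs(priv);
  local_irq_restore_nort(flags);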
Cc: stable-rt@vger.kernel.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
|
|
adjust_link() disables interrupts before taking the queue
locks. On RT those locks are converted to "sleeping" locks and
therefore the local_irq_save/restore calls must be converted to
local_irq_save/restore_nort.
Reported-by: Xianghua Xiao <xiaoxianghua@gmail.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Xianghua Xiao <xiaoxianghua@gmail.com>
|
|
Argh, cut and paste wasn't enough...
Use this patch instead. It needs an irq disable. But, believe it or not,
on SMP this is actually better. If the irq is shared (as it is in Mark's
case), we don't stop the irq of other devices from being handled on
another CPU (unfortunately for Mark, he pinned all interrupts to one CPU).
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
drivers/net/ethernet/3com/3c59x.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
Signed-off-by: Ingo Molnar <mingo@elte.hu>
|
|
Preempt-RT runs into a live lock issue with the NETDEV_TX_LOCKED micro
optimization. The reason is that the softirq thread is rescheduling
itself on that return value. Depending on priorities it starts to
monopolize the CPU and livelock on UP systems.
Remove it.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
Joe Korty reported that __irq_set_affinity_locked() schedules a
workqueue while holding a raw lock, which results in a might_sleep()
warning.
This patch moves the invocation into a process context so that we only
do a wakeup() while holding the lock.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
|
|
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
Creates long latencies for no value.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
As per changes in include/linux/jbd_common.h for avoiding the
bit_spin_locks on RT ("fs: jbd/jbd2: Make state lock and journal
head lock rt safe") we do the same thing here.
We use the non atomic __set_bit and __clear_bit inside the scope of
the lock to preserve the ability of the existing LIST_DEBUG code to
use the zero'th bit in the sanity checks.
As a bit spinlock, we had no lockdep visibility into the usage
of the list head locking. Now, if we were to implement it as a
standard non-raw spinlock, we would see:
BUG: sleeping function called from invalid context at kernel/rtmutex.c:658
in_atomic(): 1, irqs_disabled(): 0, pid: 122, name: udevd
5 locks held by udevd/122:
#0: (&sb->s_type->i_mutex_key#7/1){+.+.+.}, at: [<ffffffff811967e8>] lock_rename+0xe8/0xf0
#1: (rename_lock){+.+...}, at: [<ffffffff811a277c>] d_move+0x2c/0x60
#2: (&dentry->d_lock){+.+...}, at: [<ffffffff811a0763>] dentry_lock_for_move+0xf3/0x130
#3: (&dentry->d_lock/2){+.+...}, at: [<ffffffff811a0734>] dentry_lock_for_move+0xc4/0x130
#4: (&dentry->d_lock/3){+.+...}, at: [<ffffffff811a0747>] dentry_lock_for_move+0xd7/0x130
Pid: 122, comm: udevd Not tainted 3.4.47-rt62 #7
Call Trace:
[<ffffffff810b9624>] __might_sleep+0x134/0x1f0
[<ffffffff817a24d4>] rt_spin_lock+0x24/0x60
[<ffffffff811a0c4c>] __d_shrink+0x5c/0xa0
[<ffffffff811a1b2d>] __d_drop+0x1d/0x40
[<ffffffff811a24be>] __d_move+0x8e/0x320
[<ffffffff811a278e>] d_move+0x3e/0x60
[<ffffffff81199598>] vfs_rename+0x198/0x4c0
[<ffffffff8119b093>] sys_renameat+0x213/0x240
[<ffffffff817a2de5>] ? _raw_spin_unlock+0x35/0x60
[<ffffffff8107781c>] ? do_page_fault+0x1ec/0x4b0
[<ffffffff817a32ca>] ? retint_swapgs+0xe/0x13
[<ffffffff813eb0e6>] ? trace_hardirqs_on_thunk+0x3a/0x3f
[<ffffffff8119b0db>] sys_rename+0x1b/0x20
[<ffffffff817a3b96>] system_call_fastpath+0x1a/0x1f
Since we are only taking the lock during short lived list operations,
let's assume for now that it being raw won't be a significant latency
concern.
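The resulting lock helper looks roughly like this (a sketch of the
list_bl.h change; on RT the head grows a raw_spinlock_t 'lock'):

  static inline void hlist_bl_lock(struct hlist_bl_head *b)
  {
  #ifndef CONFIG_PREEMPT_RT_BASE
          bit_spin_lock(0, (unsigned long *)b);
  #else
          raw_spin_lock(&b->lock);
  #if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_SPINLOCK)
          /* keep bit 0 set so the LIST_DEBUG checks still work */
          __set_bit(0, (unsigned long *)b);
  #endif
  #endif
  }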
Cc: stable-rt@vger.kernel.org
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
|
|
bit_spin_locks break under RT.
Based on a previous patch from Steven Rostedt <rostedt@goodmis.org>
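For example, the buffer_head state lock wrapper becomes roughly (a
sketch; on RT the buffer_head grows a real spinlock b_state_lock):

  static inline void jbd_lock_bh_state(struct buffer_head *bh)
  {
  #ifndef CONFIG_PREEMPT_RT_BASE
          bit_spin_lock(BH_State, &bh->b_state);
  #else
          spin_lock(&bh->b_state_lock);
  #endif
  }

  static inline void jbd_unlock_bh_state(struct buffer_head *bh)
  {
  #ifndef CONFIG_PREEMPT_RT_BASE
          bit_spin_unlock(BH_State, &bh->b_state);
  #else
          spin_unlock(&bh->b_state_lock);
  #endif
  }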
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
--
include/linux/buffer_head.h | 10 ++++++++++
include/linux/jbd_common.h | 24 ++++++++++++++++++++++++
2 files changed, 34 insertions(+)
|
|
Wrap the bit_spin_lock calls into a separate inline and add the RT
replacements with a real spinlock.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
Bit spinlocks are not working on RT. Replace them.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
Since commit 94dfd7ed ("USB: HCD: support giveback of URB in tasklet
context") I see
|BUG: sleeping function called from invalid context at kernel/rtmutex.c:673
|in_atomic(): 0, irqs_disabled(): 1, pid: 109, name: irq/11-uhci_hcd
|no locks held by irq/11-uhci_hcd/109.
|irq event stamp: 440
|hardirqs last enabled at (439): [<ffffffff816a7555>] _raw_spin_unlock_irqrestore+0x75/0x90
|hardirqs last disabled at (440): [<ffffffff81514906>] __usb_hcd_giveback_urb+0x46/0xc0
|softirqs last enabled at (0): [<ffffffff81081821>] copy_process.part.52+0x511/0x1510
|softirqs last disabled at (0): [< (null)>] (null)
|CPU: 3 PID: 109 Comm: irq/11-uhci_hcd Not tainted 3.12.0-rt0-rc1+ #13
|Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
| 0000000000000000 ffff8800db9ffbe0 ffffffff8169f064 0000000000000000
| ffff8800db9ffbf8 ffffffff810b2122 ffff88020f03e888 ffff8800db9ffc18
| ffffffff816a6944 ffffffff810b5748 ffff88020f03c000 ffff8800db9ffc50
|Call Trace:
| [<ffffffff8169f064>] dump_stack+0x4e/0x8f
| [<ffffffff810b2122>] __might_sleep+0x112/0x190
| [<ffffffff816a6944>] rt_spin_lock+0x24/0x60
| [<ffffffff8158435b>] hid_ctrl+0x3b/0x190
| [<ffffffff8151490f>] __usb_hcd_giveback_urb+0x4f/0xc0
| [<ffffffff81514aaf>] usb_hcd_giveback_urb+0x3f/0x140
| [<ffffffff815346af>] uhci_giveback_urb+0xaf/0x280
| [<ffffffff8153666a>] uhci_scan_schedule+0x47a/0xb10
| [<ffffffff81537336>] uhci_irq+0xa6/0x1a0
| [<ffffffff81513c48>] usb_hcd_irq+0x28/0x40
| [<ffffffff810c8ba3>] irq_forced_thread_fn+0x23/0x70
| [<ffffffff810c918f>] irq_thread+0x10f/0x150
| [<ffffffff810a6fad>] kthread+0xcd/0xe0
| [<ffffffff816a842c>] ret_from_fork+0x7c/0xb0
On -RT we run threaded, so there is no need to disable interrupts.
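The fix is the usual _nort substitution in __usb_hcd_giveback_urb()
(sketch):

  /* was: local_irq_save(flags); */
  local_irq_save_nort(flags);
  urb->complete(urb);
  local_irq_restore_nort(flags);
  /* was: local_irq_restore(flags); */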
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
|
|
[ tglx: Now that irqf_disabled is dead we should kill that ]
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
Frederic Weisbecker reported this warning:
[ 45.228562] BUG: sleeping function called from invalid context at kernel/rtmutex.c:683
[ 45.228571] in_atomic(): 0, irqs_disabled(): 1, pid: 4290, name: ntpdate
[ 45.228576] INFO: lockdep is turned off.
[ 45.228580] irq event stamp: 0
[ 45.228583] hardirqs last enabled at (0): [<(null)>] (null)
[ 45.228589] hardirqs last disabled at (0): [<ffffffff8025449d>] copy_process+0x68d/0x1500
[ 45.228602] softirqs last enabled at (0): [<ffffffff8025449d>] copy_process+0x68d/0x1500
[ 45.228609] softirqs last disabled at (0): [<(null)>] (null)
[ 45.228617] Pid: 4290, comm: ntpdate Tainted: G W 2.6.29-rc4-rt1-tip #1
[ 45.228622] Call Trace:
[ 45.228632] [<ffffffff8027dfb0>] ? print_irqtrace_events+0xd0/0xe0
[ 45.228639] [<ffffffff8024cd73>] __might_sleep+0x113/0x130
[ 45.228646] [<ffffffff8077c811>] rt_spin_lock+0xa1/0xb0
[ 45.228653] [<ffffffff80296a3d>] res_counter_charge+0x5d/0x130
[ 45.228660] [<ffffffff802fb67f>] __mem_cgroup_try_charge+0x7f/0x180
[ 45.228667] [<ffffffff802fc407>] mem_cgroup_charge_common+0x57/0x90
[ 45.228674] [<ffffffff80212096>] ? ftrace_call+0x5/0x2b
[ 45.228680] [<ffffffff802fc49d>] mem_cgroup_newpage_charge+0x5d/0x60
[ 45.228688] [<ffffffff802d94ce>] __do_fault+0x29e/0x4c0
[ 45.228694] [<ffffffff8077c843>] ? rt_spin_unlock+0x23/0x80
[ 45.228700] [<ffffffff802db8b5>] handle_mm_fault+0x205/0x890
[ 45.228707] [<ffffffff80212096>] ? ftrace_call+0x5/0x2b
[ 45.228714] [<ffffffff8023495e>] do_page_fault+0x11e/0x2a0
[ 45.228720] [<ffffffff8077e5a5>] page_fault+0x25/0x30
[ 45.228727] [<ffffffff8043e1ed>] ? __clear_user+0x3d/0x70
[ 45.228733] [<ffffffff8043e1d1>] ? __clear_user+0x21/0x70
The reason is the raw IRQ-flags use in kernel/res_counter.c.
The irq flags tricks there seem a bit pointless: they cannot protect the
c->parent linkage because local_irq_save() is only per CPU.
So replace it with _nort(). This code needs a second look.
Reported-by: Frederic Weisbecker <fweisbec@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
Use the local_irq_*_nort variants to reduce latencies in RT. The code
is serialized by the locks. No need to disable interrupts.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
Use the _nort() primitives.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
Fixes an in_atomic stack-dump when the Mellanox module is loaded into
the RT kernel.
Michael S. Tsirkin <mst@dev.mellanox.co.il> sayeth:
"Basically, if you just make spin_lock_irqsave (and spin_lock_irq) not disable
interrupts for non-raw spinlocks, I think all of infiniband will be fine without
changes."
Signed-off-by: Sven-Thorsten Dietrich <sven@thebigcorporation.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
Use the local_irq_*_nort variants.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
Use the local_irq_*_nort variants.
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
RT needs a few preempt_disable/enable points which are not necessary
otherwise. Implement variants to avoid #ifdeffery.
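The variants collapse to the obvious definitions (sketch):

  #ifdef CONFIG_PREEMPT_RT_FULL
  # define preempt_disable_rt()         preempt_disable()
  # define preempt_enable_rt()          preempt_enable()
  # define preempt_disable_nort()       barrier()
  # define preempt_enable_nort()        barrier()
  #else
  # define preempt_disable_rt()         barrier()
  # define preempt_enable_rt()          barrier()
  # define preempt_disable_nort()       preempt_disable()
  # define preempt_enable_nort()        preempt_enable()
  #endif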
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
Add local_irq_*_(no)rt variants which are mainly used to break
interrupt disabled sections on PREEMPT_RT or to explicitly disable
interrupts on PREEMPT_RT.
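A sketch of the variants:

  #ifdef CONFIG_PREEMPT_RT_FULL
  # define local_irq_disable_nort()        barrier()
  # define local_irq_enable_nort()         barrier()
  # define local_irq_save_nort(flags)      local_save_flags(flags)
  # define local_irq_restore_nort(flags)   (void)(flags)
  # define local_irq_disable_rt()          local_irq_disable()
  # define local_irq_enable_rt()           local_irq_enable()
  #else
  # define local_irq_disable_nort()        local_irq_disable()
  # define local_irq_enable_nort()         local_irq_enable()
  # define local_irq_save_nort(flags)      local_irq_save(flags)
  # define local_irq_restore_nort(flags)   local_irq_restore(flags)
  # define local_irq_disable_rt()          barrier()
  # define local_irq_enable_rt()           barrier()
  #endif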
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
Disable stuff which is known to have issues on RT
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
Gives me an option to screw printk and actually see what the machine
says.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1314967289.1301.11.camel@twins
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/n/tip-ykb97nsfmobq44xketrxs977@git.kernel.org
|
|
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
|
|
If the user specified a threshold at module load time, use it.
Cc: stable-rt@vger.kernel.org
Acked-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Mike Galbraith <bitbucket@online.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
|
|
There's no reason to use stop machine to search for hardware latency.
Simply disabling interrupts while running the loop is enough to catch
anything that comes in despite interrupts being off, which is exactly
what stop machine provided.
Instead of using stop machine, just have the thread disable interrupts
while it checks for hardware latency.
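The sampling thread then looks roughly like this (a sketch; the sleep
interval is simplified):

  static int kthread_fn(void *unused)
  {
          while (!kthread_should_stop()) {
                  local_irq_disable();
                  get_sample();   /* tight timestamp-reading loop */
                  local_irq_enable();

                  schedule_timeout_interruptible(HZ);  /* next window */
          }
          return 0;
  }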
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
|
|
As ktime_get() calls into the timing code which does a read_seq(), it
may be affected by other CPUs that touch that lock. To remove this
dependency, use the trace_clock_local() which is already exported
for module use. If CONFIG_TRACING is enabled, use that as the clock,
otherwise use ktime_get().
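A sketch of the clock selection:

  #ifdef CONFIG_TRACING
  #define time_type     u64
  #define time_get()    trace_clock_local()
  #else
  #define time_type     ktime_t
  #define time_get()    ktime_get()
  #endif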
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
|
|
The hwlat_detector reads two timestamps in a row, then reports any
gap between those calls. The problem is, it misses everything between
the second reading of the time stamp and the first reading of the time
stamp in the next loop. That's where most of the time is spent, which
means chances are likely that it will miss all hardware latencies. This
defeats the purpose.
By also comparing the first time stamp against the previous loop's
second time stamp (the outer loop), we are more likely to find a
latency.
Setting the threshold to 1, here's what the report now looks like:
1347415723.0232202770 0 2
1347415725.0234202822 0 2
1347415727.0236202875 0 2
1347415729.0238202928 0 2
1347415731.0240202980 0 2
1347415734.0243203061 0 2
1347415736.0245203113 0 2
1347415738.0247203166 2 0
1347415740.0249203219 0 3
1347415742.0251203272 0 3
1347415743.0252203299 0 3
1347415745.0254203351 0 2
1347415747.0256203404 0 2
1347415749.0258203457 0 2
1347415751.0260203510 0 2
1347415754.0263203589 0 2
1347415756.0265203642 0 2
1347415758.0267203695 0 2
1347415760.0269203748 0 2
1347415762.0271203801 0 2
1347415764.0273203853 2 0
There's some hardware latency that takes 2 microseconds to run.
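The detection loop then looks roughly like this (a sketch; time_get(),
time_sub() and time_to_us() abstract the clock as in the previous
entry, and 'total' is the accumulated sample time maintained as in the
original loop):

  do {
          t1 = time_get();        /* start of the inner window */
          t2 = time_get();        /* end of the inner window */

          if (last_t2) {
                  /* outer window: previous t2 until this t1 */
                  outer_diff = time_to_us(time_sub(t1, last_t2));
                  if (outer_diff > outer_sample)
                          outer_sample = outer_diff;
          }
          last_t2 = t2;

          /* inner window, as before */
          diff = time_to_us(time_sub(t2, t1));
          if (diff > sample)
                  sample = diff;
  } while (total <= data.sample_width);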
Signed-off-by: Steven Rostedt <srostedt@redhat.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
|