|
commit d5483feda85a8f39ee2e940e279547c686aac30c upstream.
Fix failures to create namespaces due to the vmem_altmap not advertising
enough free space to store the memmap.
WARNING: CPU: 15 PID: 8022 at arch/x86/mm/init_64.c:656 arch_add_memory+0xde/0xf0
[..]
Call Trace:
dump_stack+0x63/0x83
__warn+0xcb/0xf0
warn_slowpath_null+0x1d/0x20
arch_add_memory+0xde/0xf0
devm_memremap_pages+0x244/0x440
pmem_attach_disk+0x37e/0x490 [nd_pmem]
nd_pmem_probe+0x7e/0xa0 [nd_pmem]
nvdimm_bus_probe+0x71/0x120 [libnvdimm]
driver_probe_device+0x2bb/0x460
bind_store+0x114/0x160
drv_attr_store+0x25/0x30
In commit 658922e57b84 "libnvdimm, pfn: fix memmap reservation sizing"
we arranged for the capacity to be allocated, but failed to also update
the 'npfns' parameter. This leads to cases where there is enough
capacity reserved to hold all the allocated sections, but
vmemmap_populate_hugepages() still encounters -ENOMEM from
altmap_alloc_block_buf().
This fix is a stop-gap until we can teach the core memory hotplug
implementation to permit sub-section hotplug.
Fixes: 658922e57b84 ("libnvdimm, pfn: fix memmap reservation sizing")
Reported-by: Anisha Allada <anisha.allada@intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit b2518c78ce76896f0f8f7940bf02104b227e1709 upstream.
The following BUG was observed when nd_pmem_notify() was called
for a BTT device. The use of a pmem_device pointer is not valid
with BTT.
BUG: unable to handle kernel NULL pointer dereference at 0000000000000030
IP: nd_pmem_notify+0x30/0xf0 [nd_pmem]
Call Trace:
nd_device_notify+0x40/0x50
child_notify+0x10/0x20
device_for_each_child+0x50/0x90
nd_region_notify+0x20/0x30
nd_device_notify+0x40/0x50
nvdimm_region_notify+0x27/0x30
acpi_nfit_scrub+0x341/0x590 [nfit]
process_one_work+0x197/0x450
worker_thread+0x4e/0x4a0
kthread+0x109/0x140
Fix nd_pmem_notify() by setting nd_region and badblocks pointers
properly for BTT.
Cc: Vishal Verma <vishal.l.verma@intel.com>
Fixes: 719994660c24 ("libnvdimm: async notification support")
Signed-off-by: Toshi Kani <toshi.kani@hpe.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit bc042fdfbb92b5b13421316b4548e2d6e98eed37 upstream.
In the case where a dimm does not have any associated flush hints the
ndrd->flush_wpq array may be uninitialized leading to crashes with the
following signature:
BUG: unable to handle kernel NULL pointer dereference at 0000000000000010
IP: region_visible+0x10f/0x160 [libnvdimm]
Call Trace:
internal_create_group+0xbe/0x2f0
sysfs_create_groups+0x40/0x80
device_add+0x2d8/0x650
nd_async_device_register+0x12/0x40 [libnvdimm]
async_run_entry_fn+0x39/0x170
process_one_work+0x212/0x6c0
? process_one_work+0x197/0x6c0
worker_thread+0x4e/0x4a0
kthread+0x10c/0x140
? process_one_work+0x6c0/0x6c0
? kthread_create_on_node+0x60/0x60
ret_from_fork+0x31/0x40
Reviewed-by: Jeff Moyer <jmoyer@redhat.com>
Fixes: f284a4f23752 ("libnvdimm: introduce nvdimm_flush() and nvdimm_has_flush()")
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 6de65fcfdb51835789b245203d1bfc8d14cb1e06 upstream.
msg_written_handler() may set ssif_info->multi_data to NULL
when using ipmitool to write FRU data.
Before setting ssif_info->multi_data to NULL, add a new local
pointer "data_to_send" and store the correct i2c data pointer in it,
fixing a NULL pointer kernel panic and an incorrect ssif_info->multi_pos.
Signed-off-by: Joeseph Chang <joechang@codeaurora.org>
Signed-off-by: Corey Minyard <cminyard@mvista.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit dcb9cfaa5ea9aa0ec08aeb92582ccfe3e4c719a9 upstream.
Make sure to check the tty-device pointer before looking up the sibling
platform device to avoid dereferencing a NULL-pointer when the tty is
one end of a Unix98 pty.
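For illustration, the guard amounts to something like the following sketch (not the exact patch; 'hu' stands for the hci_uart state and the return value is illustrative):

struct tty_struct *tty = hu->tty;

/* A Unix98 pty has no backing struct device, so tty->dev is NULL and
 * there is no sibling platform device to look up. */
if (!tty->dev)
	return -ENODEV;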
Fixes: 74cdad37cd24 ("Bluetooth: hci_intel: Add runtime PM support")
Fixes: 1ab1f239bf17 ("Bluetooth: hci_intel: Add support for platform driver")
Cc: Loic Poulain <loic.poulain@intel.com>
Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 95065a61e9bf25fb85295127fba893200c2bbbd8 upstream.
Make sure to check the tty-device pointer before looking up the sibling
platform device to avoid dereferencing a NULL-pointer when the tty is
one end of a Unix98 pty.
Fixes: 0395ffc1ee05 ("Bluetooth: hci_bcm: Add PM for BCM devices")
Cc: Frederic Danis <frederic.danis@linux.intel.com>
Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 77dae6134440420bac334581a3ccee94cee1c054 upstream.
While using emacs, cat, or other commands in konsole with recent
kernels, I have often seen CTRL-C freeze konsole. After konsole
freezes I can't type anything and have to open a new one, which is
very annoying.
See bug report:
https://bugs.kde.org/show_bug.cgi?id=175283
The platform in that bug report is Solaris, but the pty in Linux now
shows the same problem, or at least the same behavior, as Solaris :)
The problem can be triggered with high probability by the steps below:
Note: In my test, BigFile is a text file whose size is bigger than 1G
1: open konsole
2: cat BigFile
3: CTRL-C
After some digging, I found that the reason is commit 1d1d14da12e7
("pty: Fix buffer flush deadlock"), which changed the behavior of
pty_flush_buffer:
Thread A                              Thread B
--------                              --------
1: n_tty_poll returns POLLIN
                                      2: CTRL-C triggers pty_flush_buffer
                                           tty_buffer_flush
                                             n_tty_flush_buffer
3: attempt to check count of chars:
     ioctl(fd, TIOCINQ, &available)
     available is equal to 0
4: read(fd, buffer, available)
     returns 0
5: konsole closes fd
Yes, I know we could use the same patch included in the bug report as
a workaround for the Linux platform too. But I think the data in the
ldisc belongs to the application on the other side; we shouldn't clear
it when we want to flush the write buffer of this side in
pty_flush_buffer. So I think it is better to disable the ldisc flush in
pty_flush_buffer, because its new behavior brings no benefit except
that it messes up the consistency between POLLIN and TIOCINQ or FIONREAD.
Also, I find that no flush_buffer function in any other tty driver has
the same behavior as the current pty_flush_buffer.
Fixes: 1d1d14da12e7 ("pty: Fix buffer flush deadlock")
Signed-off-by: Wang YanQing <udknight@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 77e6fe7fd2b7cba0bf2f2dc8cde51d7b9a35bf74 upstream.
Make sure to actually suspend the device before returning after a failed
(or deferred) probe.
Note that autosuspend must be disabled before runtime pm is disabled in
order to balance the usage count due to a negative autosuspend delay as
well as to make the final put suspend the device synchronously.
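As a rough sketch of that ordering (the tail of the error path only; not the driver's literal code, and the variable names are illustrative):

	pm_runtime_dont_use_autosuspend(&pdev->dev);	/* balance the negative autosuspend delay */
	pm_runtime_put_sync(&pdev->dev);		/* final put suspends the device synchronously */
	pm_runtime_disable(&pdev->dev);
	return ret;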
Fixes: 388bc2622680 ("omap-serial: Fix the error handling in the omap_serial probe")
Cc: Shubhrajyoti D <shubhrajyoti@ti.com>
Signed-off-by: Johan Hovold <johan@kernel.org>
Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 099bd73dc17ed77aa8c98323e043613b6e8f54fc upstream.
An unbalanced and misplaced synchronous put was used to suspend the
device on driver unbind, something which with a likewise misplaced
pm_runtime_disable leads to external aborts when an open port is being
removed.
Unhandled fault: external abort on non-linefetch (0x1028) at 0xfa024010
...
[<c046e760>] (serial_omap_set_mctrl) from [<c046a064>] (uart_update_mctrl+0x50/0x60)
[<c046a064>] (uart_update_mctrl) from [<c046a400>] (uart_shutdown+0xbc/0x138)
[<c046a400>] (uart_shutdown) from [<c046bd2c>] (uart_hangup+0x94/0x190)
[<c046bd2c>] (uart_hangup) from [<c045b760>] (__tty_hangup+0x404/0x41c)
[<c045b760>] (__tty_hangup) from [<c045b794>] (tty_vhangup+0x1c/0x20)
[<c045b794>] (tty_vhangup) from [<c046ccc8>] (uart_remove_one_port+0xec/0x260)
[<c046ccc8>] (uart_remove_one_port) from [<c046ef4c>] (serial_omap_remove+0x40/0x60)
[<c046ef4c>] (serial_omap_remove) from [<c04845e8>] (platform_drv_remove+0x34/0x4c)
Fix this up by resuming the device before deregistering the port and by
suspending and disabling runtime pm only after the port has been
removed.
Also make sure to disable autosuspend before disabling runtime pm so
that the usage count is balanced and device actually suspended before
returning.
Note that due to a negative autosuspend delay being set in probe, the
unbalanced put would actually suspend the device on first driver unbind,
while rebinding and again unbinding would result in a negative
power.usage_count.
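A minimal sketch of the remove ordering described above, assuming the driver's usual uart_omap_port/serial_omap_reg names (not the literal patch):

static int serial_omap_remove(struct platform_device *dev)
{
	struct uart_omap_port *up = platform_get_drvdata(dev);

	pm_runtime_get_sync(up->dev);			/* resume before deregistering the port */
	uart_remove_one_port(&serial_omap_reg, &up->port);
	pm_runtime_dont_use_autosuspend(up->dev);	/* balance the usage count */
	pm_runtime_put_sync(up->dev);
	pm_runtime_disable(up->dev);
	return 0;
}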
Fixes: 7e9c8e7dbf3b ("serial: omap: make sure to suspend device before remove")
Cc: Felipe Balbi <balbi@kernel.org>
Cc: Santosh Shilimkar <santosh.shilimkar@ti.com>
Signed-off-by: Johan Hovold <johan@kernel.org>
Acked-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 768d64f491a530062ddad50e016fb27125f8bd7c upstream.
The driver should provide its own struct device for all DMA-mapping calls instead
of extracting the device pointer from the DMA engine channel. Although this is
harmless from the driver operation perspective on the ARM architecture, it is
always good to use the DMA mapping API in a proper way. This patch fixes the
following DMA API debug warning:
WARNING: CPU: 0 PID: 0 at lib/dma-debug.c:1241 check_sync+0x520/0x9f4
samsung-uart 12c20000.serial: DMA-API: device driver tries to sync DMA memory it has not allocated [device address=0x000000006df0f580] [size=64 bytes]
Modules linked in:
CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.11.0-rc1-00137-g07ca963 #51
Hardware name: SAMSUNG EXYNOS (Flattened Device Tree)
[<c011aaa4>] (unwind_backtrace) from [<c01127c0>] (show_stack+0x20/0x24)
[<c01127c0>] (show_stack) from [<c06ba5d8>] (dump_stack+0x84/0xa0)
[<c06ba5d8>] (dump_stack) from [<c0139528>] (__warn+0x14c/0x180)
[<c0139528>] (__warn) from [<c01395a4>] (warn_slowpath_fmt+0x48/0x50)
[<c01395a4>] (warn_slowpath_fmt) from [<c0729058>] (check_sync+0x520/0x9f4)
[<c0729058>] (check_sync) from [<c072967c>] (debug_dma_sync_single_for_device+0x88/0xc8)
[<c072967c>] (debug_dma_sync_single_for_device) from [<c0803c10>] (s3c24xx_serial_start_tx_dma+0x100/0x2f8)
[<c0803c10>] (s3c24xx_serial_start_tx_dma) from [<c0804338>] (s3c24xx_serial_tx_chars+0x198/0x33c)
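To illustrate the change (a sketch only; the dma/port field names follow the driver's style and may not match exactly), the sync calls move from the device dug out of the dmaengine channel to the UART's own struct device:

/* before: device extracted from the DMA engine channel */
dma_sync_single_for_device(dma->tx_chan->device->dev,
			   dma->tx_transfer_addr, dma->tx_size, DMA_TO_DEVICE);

/* after: the serial driver's own struct device */
dma_sync_single_for_device(ourport->port.dev,
			   dma->tx_transfer_addr, dma->tx_size, DMA_TO_DEVICE);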
Reported-by: Seung-Woo Kim <sw0312.kim@samsung.com>
Fixes: 62c37eedb74c8 ("serial: samsung: add dma reqest/release functions")
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Reviewed-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Reviewed-by: Krzysztof Kozlowski <krzk@kernel.org>
Reviewed-by: Shuah Khan <shuahkh@osg.samsung.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit ed01e50acdd3e4a640cf9ebd28a7e810c3ceca97 upstream.
If device_add() fails, clean up the cdev. Otherwise, we leak a kobj_map()
with a stale device number.
As Jason points out, there is a small possibility that userspace has
opened and mapped the device in the time between cdev_add() and the
device_add() failure. We need a new kill_dax_dev() helper to invalidate
any established mappings.
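Roughly, the error-handling pattern being described looks like this sketch (not the actual dax code; kill_dax_dev() is the helper named above and the field names are illustrative):

rc = cdev_add(&dax_dev->cdev, devt, 1);
if (rc)
	return rc;

rc = device_add(dev);
if (rc) {
	kill_dax_dev(dax_dev);		/* invalidate mappings set up in the window */
	cdev_del(&dax_dev->cdev);	/* avoid leaking the kobj_map / stale devt */
	put_device(dev);
	return rc;
}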
Fixes: ba09c01d2fa8 ("dax: convert to the cdev api")
Reported-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit b6eac931b9bb2bce4db7032c35b41e5e34ec22a5 upstream.
The driver progress routines can call cond_resched() when
a timeslice is exhausted and irqs are enabled.
If the ULP had been holding a spin lock without disabling irqs and
the post send directly called the progress routine, the cond_resched()
could yield allowing another thread from the same ULP to deadlock
on that same lock.
Correct by replacing the current hfi1_do_send() calldown with a unique
one for post send and adding an argument to hfi1_do_send() to indicate
that the send engine is running in a thread. If the routine is not
running in a thread, avoid calling cond_resched().
Fixes: Commit 831464ce4b74 ("IB/hfi1: Don't call cond_resched in atomic mode when sending packets")
Reviewed-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit fb7a91746af18b2ebf596778b38a709cdbc488d3 upstream.
A warning message during SRIOV multicast cleanup should have actually been
a debug level message. The condition generating the warning does no harm
and can fill the message log.
In some cases, during testing, some tests were so intense as to swamp the
message log with these warning messages, causing a stall in the console
message log output task. This stall caused an NMI to be sent to all CPUs
(so that they all dumped their stacks into the message log).
Aside from the message flood causing an NMI, the tests all passed.
Once the message flood which caused the NMI is removed (by reducing the
warning message to debug level), the NMI no longer occurs.
Sample message log (console log) output illustrating the flood and
resultant NMI (snippets with comments and modified with ... instead
of hex digits, to satisfy checkpatch.pl):
<mlx4_ib> _mlx4_ib_mcg_port_cleanup: ... WARNING: group refcount 1!!!...
*** About 4000 almost identical lines in less than one second ***
<mlx4_ib> _mlx4_ib_mcg_port_cleanup: ... WARNING: group refcount 1!!!...
INFO: rcu_sched detected stalls on CPUs/tasks: { 17} (...)
*** { 17} above indicates that CPU 17 was the one that stalled ***
sending NMI to all CPUs:
...
NMI backtrace for cpu 17
CPU: 17 PID: 45909 Comm: kworker/17:2
Hardware name: HP ProLiant DL360p Gen8, BIOS P71 09/08/2013
Workqueue: events fb_flashcursor
task: ffff880478...... ti: ffff88064e...... task.ti: ffff88064e......
RIP: 0010:[ffffffff81......] [ffffffff81......] io_serial_in+0x15/0x20
RSP: 0018:ffff88064e257cb0 EFLAGS: 00000002
RAX: 0000000000...... RBX: ffffffff81...... RCX: 0000000000......
RDX: 0000000000...... RSI: 0000000000...... RDI: ffffffff81......
RBP: ffff88064e...... R08: ffffffff81...... R09: 0000000000......
R10: 0000000000...... R11: ffff88064e...... R12: 0000000000......
R13: 0000000000...... R14: ffffffff81...... R15: 0000000000......
FS: 0000000000......(0000) GS:ffff8804af......(0000) knlGS:000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080......
CR2: 00007f2a2f...... CR3: 0000000001...... CR4: 0000000000......
DR0: 0000000000...... DR1: 0000000000...... DR2: 0000000000......
DR3: 0000000000...... DR6: 00000000ff...... DR7: 0000000000......
Stack:
ffff88064e...... ffffffff81...... ffffffff81...... 0000000000......
ffffffff81...... ffff88064e...... ffffffff81...... ffffffff81......
ffffffff81...... ffff88064e...... ffffffff81...... 0000000000......
Call Trace:
[<ffffffff813d099b>] wait_for_xmitr+0x3b/0xa0
[<ffffffff813d0b5c>] serial8250_console_putchar+0x1c/0x30
[<ffffffff813d0b40>] ? serial8250_console_write+0x140/0x140
[<ffffffff813cb5fa>] uart_console_write+0x3a/0x80
[<ffffffff813d0aae>] serial8250_console_write+0xae/0x140
[<ffffffff8107c4d1>] call_console_drivers.constprop.15+0x91/0xf0
[<ffffffff8107d6cf>] console_unlock+0x3bf/0x400
[<ffffffff813503cd>] fb_flashcursor+0x5d/0x140
[<ffffffff81355c30>] ? bit_clear+0x120/0x120
[<ffffffff8109d5fb>] process_one_work+0x17b/0x470
[<ffffffff8109e3cb>] worker_thread+0x11b/0x400
[<ffffffff8109e2b0>] ? rescuer_thread+0x400/0x400
[<ffffffff810a5aef>] kthread+0xcf/0xe0
[<ffffffff810a5a20>] ? kthread_create_on_node+0x140/0x140
[<ffffffff81645858>] ret_from_fork+0x58/0x90
[<ffffffff810a5a20>] ? kthread_create_on_node+0x140/0x140
Code: 48 89 e5 d3 e6 48 63 f6 48 03 77 10 8b 06 5d c3 66 0f 1f 44 00 00 66 66 66 6
As indicated in the stack trace above, the console output task got swamped.
Fixes: b9c5d6a64358 ("IB/mlx4: Add multicast group (MCG) paravirtualization for SR-IOV")
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Doug Ledford <dledford@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 99e68909d5aba1861897fe7afc3306c3c81b6de0 upstream.
In mlx4_ib_add, procedure mlx4_ib_alloc_eqs is called to allocate EQs.
However, in the mlx4_ib_add error flow, procedure mlx4_ib_free_eqs is not
called to free the allocated EQs.
Fixes: e605b743f33d ("IB/mlx4: Increase the number of vectors (EQs) available for ULPs")
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Doug Ledford <dledford@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 771a52584096c45e4565e8aabb596eece9d73d61 upstream.
When udev renames the netdev devices, the ipoib debugfs entries do not
get renamed. As a result, if a subsequent probe of an ipoib device reuses
the name, creating a debugfs entry for the new device fails.
Also, move ipoib_create_debug_files and ipoib_delete_debug_files into the
ipoib event handling in order to avoid any race condition between them.
Fixes: 1732b0ef3b3a ([IPoIB] add path record information in debugfs)
Signed-off-by: Vijay Kumar <vijay.ac.kumar@oracle.com>
Signed-off-by: Shamir Rabinovitch <shamir.rabinovitch@oracle.com>
Reviewed-by: Mark Bloch <markb@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 8561eae60ff9417a50fa1fb2b83ae950dc5c1e21 upstream.
The Infiniband spec defines "A multicast address is defined by a
MGID and a MLID" (section 10.5). Currently the MLID value is not
validated.
Add check to verify that the MLID value is in the correct address
range.
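A sketch of such a range check (multicast LIDs run from 0xC000 to 0xFFFE; the exact placement in the attach/detach paths is not shown):

if (lid < be16_to_cpu(IB_MULTICAST_LID_BASE) ||
    lid == be16_to_cpu(IB_LID_PERMISSIVE))
	return -EINVAL;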
Fixes: 0c33aeedb2cf ("[IB] Add checks to multicast attach and detach")
Reviewed-by: Ira Weiny <ira.weiny@intel.com>
Reviewed-by: Dasaratharaman Chandramouli <dasaratharaman.chandramouli@intel.com>
Signed-off-by: Michael J. Ruhl <michael.j.ruhl@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Reviewed-by: Leon Romanovsky <leonro@mellanox.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit b312be3d87e4c80872cbea869e569175c5eb0f9a upstream.
The kernel commit cited below restructured ib device management
so that the device kobject is initialized in ib_alloc_device.
As part of the restructuring, the kobject is now initialized in
procedure ib_alloc_device, and is later added to the device hierarchy
in the ib_register_device call stack, in procedure
ib_device_register_sysfs (which calls device_add).
However, in the ib_device_register_sysfs error flow, if an error
occurs following the call to device_add, the cleanup procedure
device_unregister is called. This call results in the device object
being deleted -- which results in various use-after-free crashes.
The correct cleanup call is device_del -- which undoes device_add
without deleting the device object.
The device object will then (correctly) be deleted in the
ib_register_device caller's error cleanup flow, when the caller invokes
ib_dealloc_device.
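The distinction can be sketched as follows (illustrative error flow, not the exact ib_core code; add_port_attributes() is a hypothetical follow-on step):

ret = device_add(dev);
if (ret)
	return ret;

ret = add_port_attributes(dev);
if (ret) {
	/* device_unregister() would also drop the final reference and free
	 * the object, causing use-after-free when the caller later runs
	 * ib_dealloc_device(); device_del() only undoes device_add(). */
	device_del(dev);
	return ret;
}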
Fixes: 55aeed06544f6 ("IB/core: Make ib_alloc_device init the kobject")
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Doug Ledford <dledford@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 0cfef2b7410b64d7a430947e0b533314c4f97153 upstream.
If the mmap_sem is contended then the vfio type1 IOMMU backend will
defer locked page accounting updates to a workqueue task. This has a
few problems and depending on which side the user tries to play, they
might be over-penalized for unmaps that haven't yet been accounted or
race the workqueue to enter more mappings than they're allowed. The
original intent of this workqueue mechanism seems to be focused on
reducing latency through the ioctl, but we cannot do so at the cost
of correctness. Remove this workqueue mechanism and update the
callers to allow for failure. We can also now recheck the limit under
write lock to make sure we don't exceed it.
vfio_pin_pages_remote() also now necessarily includes an unwind path
which we can jump to directly if the consecutive page pinning finds
that we're exceeding the user's memory limits. This avoids the
current lazy approach which does accounting and mapping up to the
fault, only to return an error on the next iteration to unwind the
entire vfio_dma.
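The synchronous accounting can be sketched as follows (illustrative only; the real code also deals with capability checks and picking the right task):

down_write(&mm->mmap_sem);
if (mm->locked_vm + npages > rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT)
	ret = -ENOMEM;			/* fail instead of deferring to a workqueue */
else
	mm->locked_vm += npages;
up_write(&mm->mmap_sem);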
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Kirti Wankhede <kwankhede@nvidia.com>
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 948f581a53b704b984aa20df009f0a2b4cf7f907 upstream.
dm-thin does not free the discard_parent bio after all chained sub
bios have finished. The following kmemleak report could be observed after
a pool with the discard_passdown option processes discard bios on
linux v4.11-rc7. To fix this, we drop the discard_parent bio reference
when its endio (passdown_endio) is called.
unreferenced object 0xffff8803d6b29700 (size 256):
comm "kworker/u8:0", pid 30349, jiffies 4379504020 (age 143002.776s)
hex dump (first 32 bytes):
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
01 00 00 00 00 00 00 f0 00 00 00 00 00 00 00 00 ................
backtrace:
[<ffffffff81a5efd9>] kmemleak_alloc+0x49/0xa0
[<ffffffff8114ec34>] kmem_cache_alloc+0xb4/0x100
[<ffffffff8110eec0>] mempool_alloc_slab+0x10/0x20
[<ffffffff8110efa5>] mempool_alloc+0x55/0x150
[<ffffffff81374939>] bio_alloc_bioset+0xb9/0x260
[<ffffffffa018fd20>] process_prepared_discard_passdown_pt1+0x40/0x1c0 [dm_thin_pool]
[<ffffffffa018b409>] break_up_discard_bio+0x1a9/0x200 [dm_thin_pool]
[<ffffffffa018b484>] process_discard_cell_passdown+0x24/0x40 [dm_thin_pool]
[<ffffffffa018b24d>] process_discard_bio+0xdd/0xf0 [dm_thin_pool]
[<ffffffffa018ecf6>] do_worker+0xa76/0xd50 [dm_thin_pool]
[<ffffffff81086239>] process_one_work+0x139/0x370
[<ffffffff810867b1>] worker_thread+0x61/0x450
[<ffffffff8108b316>] kthread+0xd6/0xf0
[<ffffffff81a6cd1f>] ret_from_fork+0x3f/0x70
[<ffffffffffffffff>] 0xffffffffffffffff
Signed-off-by: Dennis Yang <dennisyang@qnap.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 23a601248958fa4142d49294352fe8d1fdf3e509 upstream.
Otherwise the request-based DM blk-mq request_queue will be put into
service without being properly exported via sysfs.
Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Cc: Christoph Hellwig <hch@lst.de>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 117aceb030307dcd431fdcff87ce988d3016c34a upstream.
When committing era metadata to disk, the latest spacemap metadata root is
not always saved in the superblock. Due to this, the metadata sometimes gets
corrupted when the device is reopened. The correct order of update should be:
pre-commit (which shadows the spacemap root), save the spacemap root (the
newly shadowed block) to the in-core superblock, and then the final commit.
Signed-off-by: Somasundaram Krishnasamy <somasundaram.krishnasamy@oracle.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 6263b51eb3190d30351360fd168959af7e3a49a9 upstream.
The CCP has the ability to perform several operations simultaneously,
but only one interrupt. When implemented as a PCI device and using
MSI-X/MSI interrupts, use a tasklet model to service interrupts. By
disabling and enabling interrupts from the CCP, coupled with the
queuing that tasklets provide, we can ensure that all events
(occurring on the device) are recognized and serviced.
This change fixes a problem wherein 2 or more busy queues can cause
notification bits to change state while a (CCP) interrupt is being
serviced, but after the queue state has been evaluated. This results
in the event being 'lost' and the queue hanging, waiting to be
serviced. Since the status bits are never fully de-asserted, the
CCP never generates another interrupt (all bits zero -> one or more
bits one), and no further CCP operations will be executed.
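The interrupt/tasklet split can be sketched like this (the helper and field names are illustrative, not necessarily the driver's actual symbols):

static irqreturn_t ccp_irq_handler(int irq, void *data)
{
	struct ccp_device *ccp = data;

	ccp_disable_queue_interrupts(ccp);	/* mask further events */
	tasklet_schedule(&ccp->irq_tasklet);
	return IRQ_HANDLED;
}

static void ccp_irq_bh(unsigned long data)
{
	struct ccp_device *ccp = (struct ccp_device *)data;

	ccp_service_queues(ccp);		/* re-read and handle all queue status bits */
	ccp_enable_queue_interrupts(ccp);	/* re-arm only after everything is serviced */
}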
Signed-off-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 7b537b24e76a1e8e6d7ea91483a45d5b1426809b upstream.
The CCP has the ability to perform several operations simultaneously,
but only one interrupt. When implemented as a PCI device and using
MSI-X/MSI interrupts, use a tasklet model to service interrupts. By
disabling and enabling interrupts from the CCP, coupled with the
queuing that tasklets provide, we can ensure that all events
(occurring on the device) are recognized and serviced.
This change fixes a problem wherein 2 or more busy queues can cause
notification bits to change state while a (CCP) interrupt is being
serviced, but after the queue state has been evaluated. This results
in the event being 'lost' and the queue hanging, waiting to be
serviced. Since the status bits are never fully de-asserted, the
CCP never generates another interrupt (all bits zero -> one or more
bits one), and no further CCP operations will be executed.
Signed-off-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 116591fe3eef11c6f06b662c9176385f13891183 upstream.
Ensure that we disable interrupts first when shutting down
the driver.
Signed-off-by: Gary R Hook <ghook@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 56467cb11cf8ae4db9003f54b3d3425b5f07a10a upstream.
Each CCP queue can produce interrupts for 4 conditions:
operation complete, queue empty, error, and queue stopped.
This driver only works with completion and error events.
Signed-off-by: Gary R Hook <gary.hook@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit f5cccf49428447dfbc9edb7a04bb8fc316269781 upstream.
While running a bind/unbind stress test with the dwc3 usb driver on rk3399,
the following crash was observed.
Unable to handle kernel NULL pointer dereference at virtual address 00000218
pgd = ffffffc00165f000
[00000218] *pgd=000000000174f003, *pud=000000000174f003,
*pmd=0000000001750003, *pte=00e8000001751713
Internal error: Oops: 96000005 [#1] PREEMPT SMP
Modules linked in: uinput uvcvideo videobuf2_vmalloc cmac
ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat nf_nat_ipv4 nf_nat rfcomm
xt_mark fuse bridge stp llc zram btusb btrtl btbcm btintel bluetooth
ip6table_filter mwifiex_pcie mwifiex cfg80211 cdc_ether usbnet r8152 mii joydev
snd_seq_midi snd_seq_midi_event snd_rawmidi snd_seq snd_seq_device ppp_async
ppp_generic slhc tun
CPU: 1 PID: 29814 Comm: kworker/1:1 Not tainted 4.4.52 #507
Hardware name: Google Kevin (DT)
Workqueue: pm pm_runtime_work
task: ffffffc0ac540000 ti: ffffffc0af4d4000 task.ti: ffffffc0af4d4000
PC is at autosuspend_check+0x74/0x174
LR is at autosuspend_check+0x70/0x174
...
Call trace:
[<ffffffc00080dcc0>] autosuspend_check+0x74/0x174
[<ffffffc000810500>] usb_runtime_idle+0x20/0x40
[<ffffffc000785ae0>] __rpm_callback+0x48/0x7c
[<ffffffc000786af0>] rpm_idle+0x1e8/0x498
[<ffffffc000787cdc>] pm_runtime_work+0x88/0xcc
[<ffffffc000249bb8>] process_one_work+0x390/0x6b8
[<ffffffc00024abcc>] worker_thread+0x480/0x610
[<ffffffc000251a80>] kthread+0x164/0x178
[<ffffffc0002045d0>] ret_from_fork+0x10/0x40
Source:
(gdb) l *0xffffffc00080dcc0
0xffffffc00080dcc0 is in autosuspend_check
(drivers/usb/core/driver.c:1778).
1773 /* We don't need to check interfaces that are
1774 * disabled for runtime PM. Either they are unbound
1775 * or else their drivers don't support autosuspend
1776 * and so they are permanently active.
1777 */
1778 if (intf->dev.power.disable_depth)
1779 continue;
1780 if (atomic_read(&intf->dev.power.usage_count) > 0)
1781 return -EBUSY;
1782 w |= intf->needs_remote_wakeup;
Code analysis shows that intf is set to NULL in usb_disable_device() prior
to setting actconfig to NULL. At the same time, usb_runtime_idle() does not
lock the usb device, and neither does any of the functions in the
traceback. This means that there is no protection against a race condition
where usb_disable_device() is removing dev->actconfig->interface[] pointers
while those are being accessed from autosuspend_check().
To solve the problem, synchronize and validate device state between
autosuspend_check() and usb_disconnect().
Acked-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 245b2eecee2aac6fdc77dcafaa73c33f9644c3c7 upstream.
While stress testing a usb controller using a bind/unbind loop, the
following error loop was observed.
usb 7-1.2: new low-speed USB device number 3 using xhci-hcd
usb 7-1.2: hub failed to enable device, error -108
usb 7-1-port2: cannot disable (err = -22)
usb 7-1-port2: couldn't allocate usb_device
usb 7-1-port2: cannot disable (err = -22)
hub 7-1:1.0: hub_ext_port_status failed (err = -22)
hub 7-1:1.0: hub_ext_port_status failed (err = -22)
hub 7-1:1.0: activate --> -22
hub 7-1:1.0: hub_ext_port_status failed (err = -22)
hub 7-1:1.0: hub_ext_port_status failed (err = -22)
hub 7-1:1.0: activate --> -22
hub 7-1:1.0: hub_ext_port_status failed (err = -22)
hub 7-1:1.0: hub_ext_port_status failed (err = -22)
hub 7-1:1.0: activate --> -22
hub 7-1:1.0: hub_ext_port_status failed (err = -22)
hub 7-1:1.0: hub_ext_port_status failed (err = -22)
hub 7-1:1.0: activate --> -22
hub 7-1:1.0: hub_ext_port_status failed (err = -22)
hub 7-1:1.0: hub_ext_port_status failed (err = -22)
hub 7-1:1.0: activate --> -22
hub 7-1:1.0: hub_ext_port_status failed (err = -22)
hub 7-1:1.0: hub_ext_port_status failed (err = -22)
hub 7-1:1.0: activate --> -22
hub 7-1:1.0: hub_ext_port_status failed (err = -22)
hub 7-1:1.0: hub_ext_port_status failed (err = -22)
hub 7-1:1.0: activate --> -22
hub 7-1:1.0: hub_ext_port_status failed (err = -22)
hub 7-1:1.0: hub_ext_port_status failed (err = -22)
hub 7-1:1.0: activate --> -22
hub 7-1:1.0: hub_ext_port_status failed (err = -22)
hub 7-1:1.0: hub_ext_port_status failed (err = -22)
** 57 printk messages dropped ** hub 7-1:1.0: activate --> -22
** 82 printk messages dropped ** hub 7-1:1.0: hub_ext_port_status failed (err = -22)
This continues forever. After adding tracebacks into the code,
the call sequence leading to this is found to be as follows.
[<ffffffc0007fc8e0>] hub_activate+0x368/0x7b8
[<ffffffc0007fceb4>] hub_resume+0x2c/0x3c
[<ffffffc00080b3b8>] usb_resume_interface.isra.6+0x128/0x158
[<ffffffc00080b5d0>] usb_suspend_both+0x1e8/0x288
[<ffffffc00080c9c4>] usb_runtime_suspend+0x3c/0x98
[<ffffffc0007820a0>] __rpm_callback+0x48/0x7c
[<ffffffc00078217c>] rpm_callback+0xa8/0xd4
[<ffffffc000786234>] rpm_suspend+0x84/0x758
[<ffffffc000786ca4>] rpm_idle+0x2c8/0x498
[<ffffffc000786ed4>] __pm_runtime_idle+0x60/0xac
[<ffffffc00080eba8>] usb_autopm_put_interface+0x6c/0x7c
[<ffffffc000803798>] hub_event+0x10ac/0x12ac
[<ffffffc000249bb8>] process_one_work+0x390/0x6b8
[<ffffffc00024abcc>] worker_thread+0x480/0x610
[<ffffffc000251a80>] kthread+0x164/0x178
[<ffffffc0002045d0>] ret_from_fork+0x10/0x40
kick_hub_wq() is called from hub_activate() even after failures to
communicate with the hub. This results in an endless sequence of
hub event -> hub activate -> wq trigger -> hub event -> ...
Provide two solutions for the problem.
- Only trigger the hub event queue if communication with the hub
is successful.
- After a suspend failure, only resume already suspended interfaces
if the communication with the device is still possible.
Each of the changes fixes the observed problem. Use both to improve
robustness.
Acked-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 3d6159640da9c9175d1ca42f151fc1a14caded59 upstream.
The DWC3 driver uses of_usb_get_phy_mode(), which is
implemented in drivers/usb/phy/of.c and in a bare minimal
configuration might not be pulled into the kernel binary.
In the case of ARC or ARM this can easily be reproduced with
"allnodefconfig" +CONFIG_USB=m +CONFIG_USB_DWC3=m.
The build then ends up with:
---------------------->8------------------
Kernel: arch/arm/boot/Image is ready
Kernel: arch/arm/boot/zImage is ready
Building modules, stage 2.
MODPOST 5 modules
ERROR: "of_usb_get_phy_mode" [drivers/usb/dwc3/dwc3.ko] undefined!
make[1]: *** [__modpost] Error 1
make: *** [modules] Error 2
---------------------->8------------------
Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Masahiro Yamada <yamada.masahiro@socionext.com>
Cc: Geert Uytterhoeven <geert+renesas@glider.be>
Cc: Nicolas Pitre <nicolas.pitre@linaro.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Felipe Balbi <balbi@kernel.org>
Cc: Felix Fietkau <nbd@nbd.name>
Cc: Jeremy Kerr <jk@ozlabs.org>
Cc: linux-snps-arc@lists.infradead.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 6e253d0fbc665b36192b8ed3cecdbb65b413a1eb upstream.
With commit bc49d1d17dcf ("usb: gadget: don't couple configfs to legacy
gadgets"), it is possible to build a modular kernel with both built-in
configfs support and modular legacy gadget drivers.
But when building a kernel without modules, it is also necessary to be
able to build with configfs but without any legacy gadget driver. This
was a possible configuration when USB_CONFIGFS was part of the choice
options, but not anymore.
Marking the choice for legacy gadget drivers as optional restores this.
Fixes: bc49d1d17dcf ("usb: gadget: don't couple configfs to legacy gadgets")
Signed-off-by: Romain Izard <romain.izard.pro@gmail.com>
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 2c930e3d0aed1505e86e0928d323df5027817740 upstream.
Add missing continue in switch.
Addresses-Coverity-ID: 1248733
Signed-off-by: Gustavo A. R. Silva <garsilva@embeddedor.com>
Acked-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 8ec04a491825e08068e92bed0bba7821893b6433 upstream.
The timer expiry routine `jr3_pci_poll_dev()` checks for expiry by
checking whether the absolute value of `jiffies` (stored in local
variable `now`) is greater than the expected expiry time in jiffy units.
This will fail when `jiffies` wraps around. Also, it seems to make
sense to handle the expiry one jiffy earlier than the current test. Use
`time_after_eq()` to check for expiry.
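The wrap-safe check is essentially this sketch ('expiry' stands for the computed expiry time in jiffies):

unsigned long now = jiffies;

/* "now > expiry" misfires once jiffies wraps; time_after_eq() from
 * <linux/jiffies.h> is wrap-safe and also treats the expiry jiffy
 * itself as already expired. */
if (time_after_eq(now, expiry)) {
	/* handle the expired poll interval */
}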
Signed-off-by: Ian Abbott <abbotti@mev.co.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 45292be0b3db0b7f8286683b376e2d9f949d11f9 upstream.
For some reason, the driver does not consider allocation of the
subdevice private data to be a fatal error when attaching the COMEDI
device. It tests the subdevice private data pointer for validity at
certain points, but omits some crucial tests. In particular,
`jr3_pci_auto_attach()` calls `jr3_pci_alloc_spriv()` to allocate and
initialize the subdevice private data, but the same function
subsequently dereferences the pointer to access the `next_time_min` and
`next_time_max` members without checking it first. The other missing
test is in the timer expiry routine `jr3_pci_poll_dev()`, but it will
crash before it gets that far.
Fix the bug by returning `-ENOMEM` from `jr3_pci_auto_attach()` as soon
as one of the calls to `jr3_pci_alloc_spriv()` returns `NULL`. The
COMEDI core will subsequently call `jr3_pci_detach()` to clean up.
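In sketch form (not the literal patch), the attach path becomes:

spriv = jr3_pci_alloc_spriv(dev, s);
if (!spriv)
	return -ENOMEM;		/* the core will call jr3_pci_detach() to clean up */
s->private = spriv;

/* only now is it safe to touch spriv->next_time_min / next_time_max */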
Signed-off-by: Ian Abbott <abbotti@mev.co.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit b58f45c8fc301fe83ee28cad3e64686c19e78f1c upstream.
Make sure to deregister the USB driver before releasing the tty driver
to avoid use-after-free in the USB disconnect callback where the tty
devices are deregistered.
Fixes: 61e121047645 ("staging: gdm7240: adding LTE USB driver")
Cc: Won Kang <wkang77@gmail.com>
Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 12ecd24ef93277e4e5feaf27b0b18f2d3828bc5e upstream.
Since 4.9 mandated that USB buffers be heap allocated, the driver
now fails.
Since there is a wide range of buffer sizes, use kmemdup to create the
allocated buffer.
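The general pattern is something like this sketch (illustrative names; the point is that the transfer buffer comes from kmemdup(), not the caller's stack):

u8 *dma_buf = kmemdup(data, len, GFP_KERNEL);
int ret;

if (!dma_buf)
	return -ENOMEM;

ret = usb_control_msg(udev, usb_sndctrlpipe(udev, 0), request,
		      USB_TYPE_VENDOR | USB_DIR_OUT, value, index,
		      dma_buf, len, USB_CTRL_SET_TIMEOUT);
kfree(dma_buf);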
Signed-off-by: Malcolm Priestley <tvboxspy@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 05c0cf88bec588a7cb34de569acd871ceef26760 upstream.
Since 4.9 mandated that USB buffers be heap allocated, the driver
fails.
Create a buffer for USB transfers.
Signed-off-by: Malcolm Priestley <tvboxspy@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 19445816996d1a89682c37685fe95959631d9f32 upstream.
This reverts commit 833415a3e781 ("cdc-wdm: fix "out-of-sync" due to
missing notifications")
There have been several reports of wdm_read returning unexpected EIO
errors with QMI devices using the qmi_wwan driver. The reporters
confirm that reverting prevents these errors. I have been unable to
reproduce the bug myself, and have no explanation to offer either. But
reverting is the safe choice here, given that the commit was an
attempt to work around a firmware problem. Living with a firmware
problem is still better than adding driver bugs.
Reported-by: Kasper Holtze <kasper@holtze.dk>
Reported-by: Aleksander Morgado <aleksander@aleksander.es>
Reported-by: Daniele Palmas <dnlplm@gmail.com>
Fixes: 833415a3e781 ("cdc-wdm: fix "out-of-sync" due to missing notifications")
Signed-off-by: Bjørn Mork <bjorn@mork.no>
Acked-by: Oliver Neukum <oneukum@suse.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
USB: Proper handling of Race Condition when two USB class drivers try to
call init_usb_class simultaneously
commit 2f86a96be0ccb1302b7eee7855dbee5ce4dc5dfb upstream.
There is a race condition when two USB class drivers try to call
init_usb_class at the same time, which leads to a crash.
Code path: probe -> usb_register_dev -> init_usb_class
To solve this, mutex locking has been added in init_usb_class() and
destroy_usb_class().
As pointed out by Alan, the "if (usb_class)" test has been removed from
destroy_usb_class() because usb_class can never be NULL there.
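In outline (a sketch; the mutex name is illustrative), the serialization looks like:

static DEFINE_MUTEX(init_usb_class_mutex);

static int init_usb_class(void)
{
	int result = 0;

	mutex_lock(&init_usb_class_mutex);
	/* existing refcount / class_create() logic runs under the lock */
	mutex_unlock(&init_usb_class_mutex);
	return result;
}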
Signed-off-by: Ajay Kaher <ajay.kaher@samsung.com>
Acked-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 31c5d1922b90ddc1da6a6ddecef7cd31f17aa32b upstream.
This development kit has an FT4232 on it with a custom USB VID/PID.
The FT4232 provides four UARTs, but only two are used. UART 0
is used by the FlashPro5 programmer and UART 2 is connected to the
SmartFusion2 CortexM3 SoC UART port.
Note that the USB VID is registered to Actel according to the Linux USB
VID database, but Actel has since been acquired by Microsemi.
Signed-off-by: Marek Vasut <marex@denx.de>
Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 6fc091fb0459ade939a795bfdcaf645385b951d4 upstream.
Print correct command ring address using 'val_64'.
Signed-off-by: Peter Chen <peter.chen@nxp.com>
Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 69307ccb9ad7ccb653e332de68effdeaaab6907d upstream.
As per [1] issue #4,
"The periodic EP scheduler always tries to schedule the EPs
that have large intervals (interval equal to or greater than
128 microframes) into different microframes. So it maintains
an internal counter and increments for each large interval
EP added. When the counter is greater than 128, the scheduler
rejects the new EP. So when the hub re-enumerated 128 times,
it triggers this condition."
This results in a Bandwidth error when devices with periodic
endpoints (ISO/INT) that have bInterval > 7 are plugged and
unplugged several times on a TUSB73x0 XHCI host.
Work around this issue by limiting the bInterval to 7
(i.e. the interval to 6) for High-speed or faster periodic endpoints.
[1] - http://www.ti.com/lit/er/sllz076/sllz076.pdf
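The clamp amounts to roughly this sketch, placed where the endpoint interval is computed (the check for the affected TUSB73x0 host is omitted here):

/* High-speed and faster periodic endpoints: cap at 2^6 = 64 microframes */
if (udev->speed >= USB_SPEED_HIGH && interval > 6)
	interval = 6;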
Signed-off-by: Roger Quadros <rogerq@ti.com>
Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 197b806ae5db60c6f609d74da04ddb62ea5e1b00 upstream.
While testing modification of per se_node_acl queue_depth forcing
session reinstatement via lio_target_nacl_cmdsn_depth_store() ->
core_tpg_set_initiator_node_queue_depth(), a hung task bug triggered
when changing cmdsn_depth invoked session reinstatement while an iscsi
login was already waiting for session reinstatement to complete.
This can happen when an outstanding se_cmd descriptor is taking a
long time to complete, and session reinstatement from iscsi login
or cmdsn_depth change occurs concurrently.
To address this bug, explicitly set session_fall_back_to_erl0 = 1
when forcing session reinstatement, so session reinstatement is
not attempted if an active session is already being shutdown.
This patch has been tested with two scenarios. The first is when
an iscsi login is blocked waiting for iscsi session reinstatement
to complete, followed by a queue_depth change via configfs; the
second is when a queue_depth change via configfs is blocked, followed
by an iscsi login driven session reinstatement.
Note this patch depends on commit d36ad77f702 to handle multiple
sessions per se_node_acl when changing cmdsn_depth, and for
pre-v4.5 kernels it will need to be included for stable as well.
Reported-by: Gary Guo <ghg@datera.io>
Tested-by: Gary Guo <ghg@datera.io>
Cc: Gary Guo <ghg@datera.io>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 59ac9c078141b8fd0186c0b18660a1b2c24e724e upstream.
This patch fixes zero-length READ and WRITE handling in target/FILEIO,
which was broken a long time back by:
Since:
commit d81cb44726f050d7cf1be4afd9cb45d153b52066
Author: Paolo Bonzini <pbonzini@redhat.com>
Date: Mon Sep 17 16:36:11 2012 -0700
target: go through normal processing for all zero-length commands
which moved zero-length READ and WRITE completion out of target-core,
to doing submission into backend driver code.
To address this, go ahead and invoke target_complete_cmd() for any
non negative return value in fd_do_rw().
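Sketched at the completion site (illustrative; 'ret' is fd_do_rw()'s return value):

if (ret < 0)
	return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;

/* zero-length I/O now also completes the command */
target_complete_cmd(cmd, SAM_STAT_GOOD);
return 0;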
Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Reviewed-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: Andy Grover <agrover@redhat.com>
Cc: David Disseldorp <ddiss@suse.de>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit a71a5dc7f833943998e97ca8fa6a4c708a0ed1a9 upstream.
Following the bugfix for handling non SAM_STAT_GOOD COMPARE_AND_WRITE
status during COMMIT phase in commit 9b2792c3da1, the same bug exists
for the READ phase as well.
This would manifest first as a lost SCSI response, and eventual
hung task during fabric driver logout or re-login, as existing
shutdown logic waited for the COMPARE_AND_WRITE se_cmd->cmd_kref
to reach zero.
To address this bug, compare_and_write_callback() has been changed
to set post_ret = 1 and return TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE
as necessary to signal failure status.
Reported-by: Bill Borsari <wgb@datera.io>
Cc: Bill Borsari <wgb@datera.io>
Tested-by: Gary Guo <ghg@datera.io>
Cc: Gary Guo <ghg@datera.io>
Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 3089c1df10e2931b1d72d2ffa7d86431084c86b3 upstream.
The vm fault handler relies on the fact that the VMA owns a reference
to the BO. However, once mmap_sem is released, other tasks are free to
destroy the VMA, which can lead to the BO being freed. Fix two code
paths where that can happen, both related to vm fault retries.
Found via a lock debugging warning which flagged &bo->wu_mutex as
locked while being destroyed.
Fixes: cbe12e74ee4e ("drm/ttm: Allow vm fault retries")
Signed-off-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit e7ee74b56f23ba447d3124f2eccc32033cca501d upstream.
This event is used by the Firmware to limit the RX BA win size
for a specific link.
The event handler updates the new size in the mac's sta->sta struct.
BA sessions opened for that link will use the new restricted
win_size. This limitation remains until a new update is received or
until the link is closed.
Signed-off-by: Maxim Altshul <maxim.altshul@ti.com>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 42c7372a111630dab200c2f959424f5ec3bf79a4 upstream.
When starting a new BA session, we must pass the win_size to the FW.
To do this we take max_rx_aggregation_subframes (BA RX win size)
which is stored in ieee80211_sta structure (e.g per link and not per HW)
We will use the value stored per link when passing the win_size to
firmware through the ACX_BA_SESSION_RX_SETUP command.
Signed-off-by: Maxim Altshul <maxim.altshul@ti.com>
Signed-off-by: Kalle Valo <kvalo@codeaurora.org>
Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 84d582d236dc1f9085e741affc72e9ba061a67c2 upstream.
Recent discussion (http://marc.info/?l=xen-devel&m=149192184523741)
established that commit 72a9b186292d ("xen: Remove event channel
notification through Xen PCI platform device") (and thus commit
da72ff5bfcb0 ("partially revert "xen: Remove event channel
notification through Xen PCI platform device"")) are unnecessary and,
in fact, prevent HVM guests from booting on Xen releases prior to 4.0
Therefore we revert both of those commits.
The summary of that discussion is below:
Here is the brief summary of the current situation:
Before the offending commit (72a9b186292):
1) INTx does not work because of the reset_watches path.
2) The reset_watches path is only taken if you have Xen > 4.0
3) The Linux kernel by default will use vector injection if the hypervisor
supports it. So even though INTx does not work, nobody running the kernel
with Xen > 4.0 would notice, unless they explicitly disabled this feature
either in the kernel or in Xen (and this can only be disabled by
modifying the code; there is no user-supported way to do it).
After the offending commit (+ partial revert):
1) INTx is no longer supported for HVM (only for PV guests).
2) Any HVM guest kernel will not boot on Xen < 4.0, which does
not have vector injection support, since the only other mode
supported is INTx.
So based on this summary, I think before commit (72a9b186292) we were
in a much better position from a user point of view.
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reviewed-by: Juergen Gross <jgross@suse.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: x86@kernel.org
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Bjorn Helgaas <bhelgaas@google.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien.grall@arm.com>
Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: Ross Lagerwall <ross.lagerwall@citrix.com>
Cc: xen-devel@lists.xenproject.org
Cc: linux-kernel@vger.kernel.org
Cc: linux-pci@vger.kernel.org
Cc: Anthony Liguori <aliguori@amazon.com>
Cc: KarimAllah Ahmed <karahmed@amazon.de>
Signed-off-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
[ Upstream commit 922c60e89d52730050c6ccca218bff40cc8bcd8e ]
If an error is encountered in mdio_mux_init(), the error path will call
mdiobus_free(). Since mdiobus_register() has been called prior to
mdio_mux_init(), the bus->state will not be MDIOBUS_UNREGISTERED. This
causes a BUG_ON() in mdiobus_free(). To correct this issue, add an
error path for mdio_mux_init() which calls mdiobus_unregister() prior to
mdiobus_free().
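The resulting error flow is roughly (a sketch; the mdio_mux_init() arguments are elided):

rc = mdiobus_register(bus);
if (rc)
	goto err_free;

rc = mdio_mux_init(/* ... */);
if (rc)
	goto err_unregister;

return 0;

err_unregister:
	mdiobus_unregister(bus);	/* return the bus to the unregistered state */
err_free:
	mdiobus_free(bus);		/* would BUG_ON() if the bus were still registered */
	return rc;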
Signed-off-by: Jon Mason <jon.mason@broadcom.com>
Fixes: 98bc865a1ec8 ("net: mdio-mux: Add MDIO mux driver for iProc SoCs")
Acked-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
[ Upstream commit ac45bd93a5035c2f39c9862b8b6ed692db0fdc87 ]
We have the number of longs, but we need to calculate the number of
bytes required.
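In sketch form (names illustrative): BITS_TO_LONGS() yields a count of longs, so the allocation size must be scaled by sizeof(long) to get bytes:

bmap = kzalloc(BITS_TO_LONGS(nbits) * sizeof(unsigned long), GFP_KERNEL);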
Fixes: c0c050c58d84 ("bnxt_en: New Broadcom ethernet driver.")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Acked-by: Michael Chan <michael.chan@broadcom.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
[ Upstream commit 4c54dc0277d0d55a9248c43aebd31858f926a056 ]
This patch adds support for Telit ME910 PID 0x1100.
Signed-off-by: Daniele Palmas <dnlplm@gmail.com>
Acked-by: Bjørn Mork <bjorn@mork.no>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|