path: root/drivers/iommu/dma-iommu.c
2016-08-10  iommu/dma: Respect IOMMU aperture when allocating  (Robin Murphy)

Where a device driver has set a 64-bit DMA mask to indicate the absence of addressing limitations, we still need to ensure that we don't allocate IOVAs beyond the actual input size of the IOMMU. The reported aperture is the most reliable way we have of inferring that input address size, so use that to enforce a hard upper limit where available.

Fixes: 0db2e5d18f76 ("iommu: Implement common IOMMU ops for DMA mapping")
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>

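A stand-alone C sketch of the clamping described above; the struct iommu_aperture type and its field names are hypothetical stand-ins for illustration, not the kernel's actual structures:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical stand-in for the IOMMU's reported input address range. */
    struct iommu_aperture {
        uint64_t end;   /* highest input address the hardware can translate */
    };

    /* Even a 64-bit DMA mask must not allow IOVAs beyond what the IOMMU
     * can actually accept, so clamp the allocation limit to the aperture. */
    static uint64_t iova_upper_limit(uint64_t dev_dma_mask,
                                     const struct iommu_aperture *ap)
    {
        if (ap && ap->end && ap->end < dev_dma_mask)
            return ap->end;
        return dev_dma_mask;
    }

    int main(void)
    {
        struct iommu_aperture ap = { .end = (1ULL << 48) - 1 }; /* 48-bit input */

        /* A device claiming no limits (all-ones mask) still gets clamped. */
        printf("limit = %#llx\n",
               (unsigned long long)iova_upper_limit(~0ULL, &ap));
        return 0;
    }
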
2016-08-09  iommu/dma: Don't put uninitialised IOVA domains  (Robin Murphy)

Due to the limitations of having to wait until we see a device's DMA restrictions before we know how we want an IOVA domain initialised, there is a window for error if a DMA ops domain is allocated but later freed without ever being used. In that case, init_iova_domain() was never called, so calling put_iova_domain() from iommu_put_dma_cookie() ends up trying to take an uninitialised lock and crashing.

Make things robust by skipping the call unless the IOVA domain actually has been initialised, as we probably should have done from the start.

Fixes: 0db2e5d18f76 ("iommu: Implement common IOMMU ops for DMA mapping")
Cc: stable@vger.kernel.org
Reported-by: Nate Watterson <nwatters@codeaurora.org>
Reviewed-by: Nate Watterson <nwatters@codeaurora.org>
Tested-by: Nate Watterson <nwatters@codeaurora.org>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Tested-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>

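The shape of the fix, modelled as stand-alone C; using the granule field as the "has been initialised" marker is an assumption made for illustration, and the structs are toy versions of the kernel's:

    #include <stdio.h>

    /* Minimal stand-ins for the kernel structures (hypothetical). */
    struct iova_domain {
        unsigned long granule;  /* non-zero once init_iova_domain() has run */
    };

    struct iommu_dma_cookie {
        struct iova_domain iovad;
    };

    static void put_iova_domain(struct iova_domain *iovad)
    {
        /* Would take the domain's lock and free its tree; crashes in the
         * real code if the domain was never initialised. */
        printf("tearing down IOVA domain (granule=%lu)\n", iovad->granule);
    }

    static void iommu_put_dma_cookie(struct iommu_dma_cookie *cookie)
    {
        /* The fix: only tear down a domain that was actually set up. */
        if (cookie->iovad.granule)
            put_iova_domain(&cookie->iovad);
    }

    int main(void)
    {
        struct iommu_dma_cookie never_used = { .iovad = { .granule = 0 } };
        struct iommu_dma_cookie used = { .iovad = { .granule = 4096 } };

        iommu_put_dma_cookie(&never_used); /* safely skipped */
        iommu_put_dma_cookie(&used);       /* torn down as before */
        return 0;
    }
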
2016-08-04  dma-mapping: use unsigned long for dma_attrs  (Krzysztof Kozlowski)

The dma-mapping core and the implementations do not change the DMA attributes passed by pointer. Thus the pointer can point to const data. However the attributes do not have to be a bitfield. Instead unsigned long will do fine:

1. This is just simpler. Both in terms of reading the code and setting attributes. Instead of initializing local attributes on the stack and passing a pointer to it to dma_set_attr(), just set the bits.
2. It brings safety and checking for const correctness because the attributes are passed by value.

Semantic patches for this change (at least most of them):

    virtual patch
    virtual context

    @r@
    identifier f, attrs;
    @@

    f(...,
    - struct dma_attrs *attrs
    + unsigned long attrs
    , ...)
    {
    ...
    }

    @@
    identifier r.f;
    @@

    f(...,
    - NULL
    + 0
    )

and

    // Options: --all-includes

    virtual patch
    virtual context

    @r@
    identifier f, attrs;
    type t;
    @@

    t f(..., struct dma_attrs *attrs);

    @@
    identifier r.f;
    @@

    f(...,
    - NULL
    + 0
    )

Link: http://lkml.kernel.org/r/1468399300-5399-2-git-send-email-k.kozlowski@samsung.com
Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Acked-by: Vineet Gupta <vgupta@synopsys.com>
Acked-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Hans-Christian Noren Egtvedt <egtvedt@samfundet.no>
Acked-by: Mark Salter <msalter@redhat.com> [c6x]
Acked-by: Jesper Nilsson <jesper.nilsson@axis.com> [cris]
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch> [drm]
Reviewed-by: Bart Van Assche <bart.vanassche@sandisk.com>
Acked-by: Joerg Roedel <jroedel@suse.de> [iommu]
Acked-by: Fabien Dessenne <fabien.dessenne@st.com> [bdisp]
Reviewed-by: Marek Szyprowski <m.szyprowski@samsung.com> [vb2-core]
Acked-by: David Vrabel <david.vrabel@citrix.com> [xen]
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> [xen swiotlb]
Acked-by: Joerg Roedel <jroedel@suse.de> [iommu]
Acked-by: Richard Kuo <rkuo@codeaurora.org> [hexagon]
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> [m68k]
Acked-by: Gerald Schaefer <gerald.schaefer@de.ibm.com> [s390]
Acked-by: Bjorn Andersson <bjorn.andersson@linaro.org>
Acked-by: Hans-Christian Noren Egtvedt <egtvedt@samfundet.no> [avr32]
Acked-by: Vineet Gupta <vgupta@synopsys.com> [arc]
Acked-by: Robin Murphy <robin.murphy@arm.com> [arm64 and dma-iommu]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

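A before/after sketch of a caller, using the interfaces the message names (dma_set_attr(), dma_alloc_attrs()); the two functions belong to different kernel versions and are shown side by side only for contrast, with the surrounding driver context invented:

    #include <linux/dma-mapping.h>  /* the old API also needed linux/dma-attrs.h */

    /* Before: attributes built up in a struct on the stack and passed
     * by pointer. */
    static void *alloc_wc_old(struct device *dev, size_t size, dma_addr_t *handle)
    {
        DEFINE_DMA_ATTRS(attrs);

        dma_set_attr(DMA_ATTR_WRITE_COMBINE, &attrs);
        return dma_alloc_attrs(dev, size, handle, GFP_KERNEL, &attrs);
    }

    /* After: attributes are just bits in an unsigned long, set directly
     * and passed by value. */
    static void *alloc_wc_new(struct device *dev, size_t size, dma_addr_t *handle)
    {
        return dma_alloc_attrs(dev, size, handle, GFP_KERNEL,
                               DMA_ATTR_WRITE_COMBINE);
    }
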
2016-05-09  iommu/dma: Finish optimising higher-order allocations  (Robin Murphy)

Now that we know exactly which page sizes our caller wants to use in the given domain, we can restrict higher-order allocation attempts to just those sizes, if any, and avoid wasting any time or effort on other sizes which offer no benefit. In the same vein, this also lets us accommodate a minimum order greater than 0 for special cases.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Tested-by: Yong Wu <yong.wu@mediatek.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>

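A runnable sketch of restricting attempts to useful orders only; order_mask here is a hypothetical bitmap derived from the domain's supported page sizes, and __builtin_clz stands in for the kernel's __fls():

    #include <stdint.h>
    #include <stdio.h>

    /* Highest set bit; __builtin_clz stands in for the kernel's __fls(). */
    static unsigned int fls_u32(uint32_t x)
    {
        return 31 - (unsigned int)__builtin_clz(x);
    }

    /* Walk a buffer allocation, trying only orders whose page sizes the
     * domain supports. Bit n of 'order_mask' set means an order-n chunk
     * is useful; bit 0 is assumed always set so the loop terminates. */
    static void pick_orders(uint32_t order_mask, unsigned int count)
    {
        while (count) {
            /* Never attempt an order larger than what remains. */
            uint32_t mask = order_mask & ((2U << fls_u32(count)) - 1);
            unsigned int order = fls_u32(mask);

            printf("allocate order %u (%u pages), %u still needed\n",
                   order, 1U << order, count);
            count -= 1U << order;
        }
    }

    int main(void)
    {
        /* e.g. a domain supporting 4K and 2M pages: orders 0 and 9. */
        pick_orders((1U << 9) | (1U << 0), 1024 + 3);
        return 0;
    }
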
2016-05-09  iommu: Allow selecting page sizes per domain  (Robin Murphy)

Many IOMMUs support multiple page table formats, meaning that any given domain may only support a subset of the hardware page sizes presented in iommu_ops->pgsize_bitmap. There are also certain use-cases where the creator of a domain may want to control which page sizes are used, for example to force the use of hugepage mappings to reduce pagetable walk depth.

To this end, add a per-domain pgsize_bitmap to represent the subset of page sizes actually in use, to make it possible for domains with different requirements to coexist.

Signed-off-by: Will Deacon <will.deacon@arm.com>
[rm: hijacked and rebased original patch with new commit message]
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>

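A toy illustration of the per-domain subset; the *_like structs are stand-ins, with only the pgsize_bitmap fields mirroring the idea in the message:

    #include <stdio.h>

    /* Stand-in structures (hypothetical field layout). */
    struct iommu_ops_like {
        unsigned long pgsize_bitmap;   /* everything the hardware can do */
    };

    struct iommu_domain_like {
        unsigned long pgsize_bitmap;   /* subset this domain actually uses */
    };

    int main(void)
    {
        struct iommu_ops_like ops = {
            /* hardware supports 4K, 2M and 1G pages */
            .pgsize_bitmap = (1UL << 12) | (1UL << 21) | (1UL << 30),
        };
        struct iommu_domain_like dom;

        /* Default: inherit the full hardware set... */
        dom.pgsize_bitmap = ops.pgsize_bitmap;

        /* ...but a creator wanting shallow page tables could restrict
         * this domain to hugepages only, per the use-case above. */
        dom.pgsize_bitmap &= ~(1UL << 12);

        printf("domain pgsize_bitmap = %#lx\n", dom.pgsize_bitmap);
        return 0;
    }
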
2016-05-09  iommu/dma: Implement scatterlist segment merging  (Robin Murphy)

Stop wasting IOVA space by over-aligning scatterlist segments for a theoretical worst-case segment boundary mask, and instead take the real limits into account to merge consecutive segments wherever appropriate, so our callers can benefit from getting back nicely simplified lists.

This also represents the last piece of functionality wanted by users of the current arch/arm implementation, thus brings us a small step closer to converting that over to the common code.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>

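A stand-alone model of the merge decision; the seg struct and the limit values are invented, but the two constraints (maximum segment size and segment boundary mask) are the "real limits" the message refers to:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* A simplified scatterlist segment (hypothetical). */
    struct seg {
        uint64_t dma_addr;
        unsigned int len;
    };

    /* Can 'next_len' bytes be glued onto 'cur'? The merged segment must
     * stay within the device's maximum segment size and must not cross
     * a (boundary_mask + 1)-sized boundary. */
    static bool can_merge(const struct seg *cur, unsigned int next_len,
                          unsigned int max_seg_size, uint64_t boundary_mask)
    {
        uint64_t end = cur->dma_addr + cur->len;

        if (cur->len + next_len > max_seg_size)
            return false;
        if ((cur->dma_addr & ~boundary_mask) !=
            ((end + next_len - 1) & ~boundary_mask))
            return false;
        return true;
    }

    int main(void)
    {
        struct seg cur = { .dma_addr = 0x10000, .len = 0x1000 };

        /* Within limits: mergeable. */
        printf("%d\n", can_merge(&cur, 0x1000, 0x10000, 0xffff));
        /* Would cross a 64K boundary: not mergeable. */
        printf("%d\n", can_merge(&cur, 0xf800, 0x20000, 0xffff));
        return 0;
    }
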
2016-04-05  iommu/dma: Restore scatterlist offsets correctly  (Robin Murphy)

With the change to stashing just the IOVA-page-aligned remainder of the CPU-page offset rather than the whole thing, the failure path in __invalidate_sg() also needs tweaking to account for that in the case of differing page sizes where the two offsets may not be equivalent. Similarly in __finalise_sg(), lest the architecture-specific wrappers later get the wrong address for cache maintenance on sync or unmap.

Fixes: 164afb1d85b8 ("iommu/dma: Use correct offset in map_sg")
Reported-by: Magnus Damm <damm+renesas@opensource.se>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Cc: stable@vger.kernel.org # v4.4+
Signed-off-by: Joerg Roedel <jroedel@suse.de>

2016-01-07  iommu/dma: Use correct offset in map_sg  (Robin Murphy)

When mapping a non-page-aligned scatterlist entry, we copy the original offset to the output DMA address before aligning it to hand off to iommu_map_sg(), then later adding the IOVA page address portion to get the final mapped address. However, when the IOVA page size is smaller than the CPU page size, it is the offset within the IOVA page we want, not that within the CPU page, which can easily be larger than an IOVA page and thus result in an incorrect final address.

Fix the bug by taking only the IOVA-aligned part of the offset as the basis of the DMA address, not the whole thing.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>

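The arithmetic of the bug and the fix, runnable with example sizes (64K CPU pages, 4K IOVA granule; both values are illustrative):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* CPU pages larger than the IOMMU granule: the mismatch the
         * message describes. */
        const uint64_t cpu_page = 0x10000;   /* 64K */
        const uint64_t iova_page = 0x1000;   /* 4K  */

        uint64_t offset = 0x2345;            /* offset within the CPU page */

        /* Buggy: carrying the whole CPU-page offset into the DMA address
         * can exceed an IOVA page (0x2345 > 0xFFF here). */
        uint64_t bad = offset;

        /* Fixed: only the offset within the IOVA page belongs in the DMA
         * address; the rest is covered by mapping further IOVA pages. */
        uint64_t good = offset & (iova_page - 1);

        printf("cpu_page=%#llx iova_page=%#llx\n",
               (unsigned long long)cpu_page, (unsigned long long)iova_page);
        printf("offset=%#llx -> bad=%#llx good=%#llx\n",
               (unsigned long long)offset, (unsigned long long)bad,
               (unsigned long long)good);
        return 0;
    }
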
2015-12-28  iommu/dma: Avoid unlikely high-order allocations  (Robin Murphy)

Doug reports that the equivalent page allocator on 32-bit ARM exhibits particularly pathological behaviour under memory pressure when fragmentation is high, where allocating a 4MB buffer takes tens of seconds and the number of calls to alloc_pages() is over 9000! [1]

We can drastically improve that situation without losing the other benefits of high-order allocations when they would succeed, by assuming memory pressure is relatively constant over the course of an allocation, and not retrying allocations at orders we know to have failed before. This way, the best-case behaviour remains unchanged, and in the worst case we should see at most a dozen or so (MAX_ORDER - 1) failed attempts before falling back to single pages for the remainder of the buffer.

[1]: http://lists.infradead.org/pipermail/linux-arm-kernel/2015-December/394660.html

Reported-by: Douglas Anderson <dianders@chromium.org>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>

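A runnable model of the strategy: try_alloc() is a stand-in for alloc_pages() under fragmentation, and the point is that the order only ever moves downward, so each failing order costs one attempt in total rather than one per chunk:

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical stand-in for alloc_pages(): under heavy fragmentation,
     * nothing above this order is available. */
    static const unsigned int max_free_order = 2;

    static bool try_alloc(unsigned int order)
    {
        return order <= max_free_order;   /* order 0 always succeeds */
    }

    int main(void)
    {
        unsigned int count = 1024;    /* pages wanted: a 4MB buffer of 4K pages */
        unsigned int order = 10;      /* highest order to attempt first */
        unsigned int attempts = 0;

        while (count) {
            /* Don't over-allocate: cap the order by what's still needed. */
            while ((1U << order) > count)
                order--;

            /* Step down past failing orders and never step back up, so
             * each order fails at most once over the whole buffer. */
            while (order && !try_alloc(order)) {
                attempts++;
                order--;
            }
            attempts++;
            count -= 1U << order;
        }
        printf("allocated with %u calls to try_alloc()\n", attempts);
        return 0;
    }
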
2015-12-28  iommu/dma: Add some missing #includes  (Robin Murphy)

dma-iommu.c was naughtily relying on an implicit transitive #include of linux/vmalloc.h, which is apparently not present on some architectures. Add that, plus a couple more headers for other functions which are used similarly.

Reported-by: kbuild test robot <lkp@intel.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>

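The gist of the change; only linux/vmalloc.h is named above, and the "couple more headers" are left unspecified here rather than guessed at:

    /* Previously reached only via an implicit transitive include on some
     * architectures; now spelled out explicitly. */
    #include <linux/vmalloc.h>
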
2015-10-15  iommu: Implement common IOMMU ops for DMA mapping  (Robin Murphy)

Taking inspiration from the existing arch/arm code, break out some generic functions to interface the DMA-API to the IOMMU-API. This will do the bulk of the heavy lifting for IOMMU-backed dma-mapping.

Since associating an IOVA allocator with an IOMMU domain is a fairly common need, rather than introduce yet another private structure just to do this for ourselves, extend the top-level struct iommu_domain with the notion. A simple opaque cookie allows reuse by other IOMMU API users with their various different incompatible allocator types.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>

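A stand-alone model of the opaque-cookie idea: the generic domain carries only a void pointer, and each IOMMU-API user hangs its own allocator state off it without the core knowing the type. The names loosely echo the kernel's (the real interfaces are iommu_get_dma_cookie()/iommu_put_dma_cookie() and the iova_cookie field), but everything below is a toy:

    #include <stdio.h>
    #include <stdlib.h>

    /* Stand-in for the top-level generic domain. */
    struct iommu_domain_like {
        void *iova_cookie;
    };

    /* This DMA layer's private cookie; an IOVA allocator would live here. */
    struct dma_cookie {
        unsigned long next_iova;
    };

    static int get_dma_cookie(struct iommu_domain_like *domain)
    {
        if (domain->iova_cookie)
            return -1;                   /* already claimed by some user */
        domain->iova_cookie = calloc(1, sizeof(struct dma_cookie));
        return domain->iova_cookie ? 0 : -1;
    }

    static void put_dma_cookie(struct iommu_domain_like *domain)
    {
        free(domain->iova_cookie);
        domain->iova_cookie = NULL;
    }

    int main(void)
    {
        struct iommu_domain_like dom = { 0 };

        if (get_dma_cookie(&dom) == 0) {
            struct dma_cookie *c = dom.iova_cookie;

            c->next_iova = 0x1000;
            printf("cookie attached, next_iova=%#lx\n", c->next_iova);
            put_dma_cookie(&dom);
        }
        return 0;
    }
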