path: root/arch/arc/mm
Age         Commit message                                            Author
2017-06-24  mm: larger stack guard gap, between vmas  (Hugh Dickins)

commit 1be7107fbe18eed3e319a6c3e83c78254b693acb upstream.

Stack guard page is a useful feature to reduce the risk of the stack smashing into a different mapping. We have been using a single-page gap, which is sufficient to prevent having the stack adjacent to a different mapping. But this seems to be insufficient in light of the stack usage in userspace, e.g. glibc uses alloca() as large as 64kB in many commonly used functions. Others use constructs like gid_t buffer[NGROUPS_MAX], which is 256kB, or stack strings with MAX_ARG_STRLEN.

This becomes especially dangerous for suid binaries with the default unlimited stack size, because those applications can be tricked into consuming a large portion of the stack, and a single glibc call could jump over the guard page. These attacks are not theoretical, unfortunately.

Make those attacks less probable by increasing the stack guard gap to 1MB (on systems with 4k pages; but make it depend on the page size because systems with larger base pages might cap stack allocations in PAGE_SIZE units), which should cover larger alloca() and VLA stack allocations. It is obviously not a full fix because the problem is somewhat inherent, but it should reduce the attack space a lot.

One could argue that the gap size should be configurable from userspace, but that can be done later if somebody finds that the new 1MB is wrong for some special-case applications. For now, add a kernel command line option (stack_guard_gap) to specify the stack gap size (in page units).

Implementation-wise, first delete all the old code for the stack guard page: although we could get away with accounting one extra page in a stack vma, accounting a larger gap can break userspace - case in point, a program run with "ulimit -S -v 20000" failed when the 1MB gap was counted for RLIMIT_AS; similar problems could come with RLIMIT_MLOCK and strict non-overcommit mode.

Instead of keeping the gap inside the stack vma, maintain the stack guard gap as a gap between vmas: use vm_start_gap() in place of vm_start (or vm_end_gap() in place of vm_end if VM_GROWSUP) in just those few places which need to respect the gap - mainly arch_get_unmapped_area(), and the vma tree's subtree_gap support for that.

Original-patch-by: Oleg Nesterov <oleg@redhat.com>
Original-patch-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Hugh Dickins <hughd@google.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Tested-by: Helge Deller <deller@gmx.de> # parisc
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
[wt: backport to 4.11: adjust context]
[wt: backport to 4.9: adjust context; kernel doc was not in admin-guide]
Signed-off-by: Willy Tarreau <w@1wt.eu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
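In sketch form, the new helper reads roughly like this (simplified from the patch; stack_guard_gap is the byte-scaled tunable set from the command line option above):

    /* sketch: a downward-growing stack vma reports its start lowered
     * by the guard gap, clamped so the subtraction can't wrap */
    static inline unsigned long vm_start_gap(struct vm_area_struct *vma)
    {
            unsigned long vm_start = vma->vm_start;

            if (vma->vm_flags & VM_GROWSDOWN) {
                    vm_start -= stack_guard_gap;
                    if (vm_start > vma->vm_start)   /* wrapped past 0 */
                            vm_start = 0;
            }
            return vm_start;
    }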
2017-01-09  ARC: mm: arc700: Don't assume 2 colours for aliasing VIPT dcache  (Vineet Gupta)

commit 08fe007968b2b45e831daf74899f79a54d73f773 upstream.

An ARC700 customer reported Linux boot crashes when upgrading to a bigger L1 dcache (64K from 32K). Turns out they had an aliasing VIPT config and the current code only assumed 2 colours, while theirs had 4. So default to 4 colours and complain if there are fewer. Ideally this needs to be a Kconfig option, but heck, that's too much of a hassle for a single user.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2016-11-28  ARC: mm: IOC: Don't enable IOC by default  (Vineet Gupta)

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2016-11-03  arc: Implement arch-specific dma_map_ops.mmap  (Alexey Brodkin)

We used to use the generic implementation of dma_map_ops.mmap, dma_common_mmap(), but that only worked for simpler cached mappings where vaddr = paddr. If a driver requests an uncached DMA buffer, the kernel maps it to a virtual address so that the MMU gets involved and the page's uncached status is taken into account. In that case, use of dma_common_mmap() led to a vaddr-to-vaddr mapping for user-space, which is obviously wrong. For more details please refer to the verbose explanation here [1].

So here we implement our own version of mmap() which always deals with dma_addr and maps the underlying memory to user-space properly (note that a DMA buffer mapped to user-space is always uncached because there's no way to properly manage the cache from user-space).

[1] https://lkml.org/lkml/2016/10/26/973

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: <stable@vger.kernel.org> #4.5+
Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
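A sketch of such an mmap op, close in spirit to what the patch describes (plat_dma_to_phys() is the platform dma<->phys hook reintroduced earlier in this log):

    static int arc_dma_mmap(struct device *dev, struct vm_area_struct *vma,
                            void *cpu_addr, dma_addr_t dma_addr, size_t size,
                            unsigned long attrs)
    {
            unsigned long user_count = vma_pages(vma);
            unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
            unsigned long pfn = __phys_to_pfn(plat_dma_to_phys(dev, dma_addr));
            unsigned long off = vma->vm_pgoff;
            int ret = -ENXIO;

            /* user-space mappings of DMA buffers are always uncached */
            vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);

            if (off < count && user_count <= (count - off))
                    ret = remap_pfn_range(vma, vma->vm_start, pfn + off,
                                          user_count << PAGE_SHIFT,
                                          vma->vm_page_prot);
            return ret;
    }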
2016-10-28  ARC: mm: retire ARC_DBG_TLB_MISS_COUNT...  (Vineet Gupta)

... given that we have perf counters able to do the same thing non-intrusively.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2016-10-28  ARC: boot log: remove awkward space comma from MMU line  (Vineet Gupta)

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2016-10-28  ARCv2: boot log: print IOC exists as well as enabled status  (Vineet Gupta)

Previously we would not print the case when IOC existed but was not enabled. And while at it, shave one line off boot printing by consolidating the peripheral address space and IO-Coherency info, which in a way applies to both.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2016-10-24  ARCv2: IOC: use @ioc_enable not @ioc_exists where intended  (Vineet Gupta)

If the user disables IOC from the debugger at startup (by clearing @ioc_enable), @ioc_exists is cleared too. This means boot prints don't capture the fact that IOC was present but disabled, which could be misleading.

So invert how we use @ioc_enable and @ioc_exists and make it more canonical. @ioc_exists represents whether the hardware is present or not, and stays the same whether enabled or not. @ioc_enable is still user-driven, but will be auto-disabled if IOC hardware is not present, i.e. if @ioc_exists=0. This is the opposite of what we were doing before, but much clearer.

This means @ioc_enable is now the "exported" toggle in the rest of the code, such as the dma mapping API.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
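In sketch form (variable names per the commit text; the setup call is an illustrative name, not the actual function):

    /* @ioc_exists: probed hardware capability, never changes
     * @ioc_enable: user/debugger toggle, forced off without hardware */
    if (!ioc_exists)
            ioc_enable = 0;

    /* the rest of the code (e.g. dma ops) keys off @ioc_enable only */
    if (ioc_enable)
            arc_ioc_setup();        /* illustrative name */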
2016-09-30  ARCv2: Support dynamic peripheral address space in HS38 rel 3.0 cores  (Vineet Gupta)

HS release 3.0 provides for even more flexibility in specifying the volatile address space for mapping peripherals. With HS 2.1 @start was made flexible / programmable - with HS 3.0 even @end can be set up (vs. fixed to 0xFFFF_FFFF before). So add code to reflect that, and while at it remove an unused struct definition.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2016-08-19  ARC: export kmap  (Vineet Gupta)

| MODPOST 7 modules
| ERROR: "kmap" [fs/ext2/ext2.ko] undefined!
| ../scripts/Makefile.modpost:91: recipe for target '__modpost' failed

Cc: <stable@vger.kernel.org>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
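The fix is presumably the usual one-liner for such modpost failures, exporting the symbol so modular users like ext2 can link against it:

    EXPORT_SYMBOL(kmap);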
2016-08-10  ARC: Elide redundant setup of DMA callbacks  (Vineet Gupta)

For resources shared by all cores, such as SLC and IOC, only the master core needs to do any setup / enabling / disabling etc.

Cc: <stable@vger.kernel.org>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2016-08-04  dma-mapping: use unsigned long for dma_attrs  (Krzysztof Kozlowski)

The dma-mapping core and the implementations do not change the DMA attributes passed by pointer. Thus the pointer can point to const data. However the attributes do not have to be a bitfield. Instead an unsigned long will do fine:

1. This is just simpler, both in terms of reading the code and setting attributes. Instead of initializing local attributes on the stack and passing a pointer to them to dma_set_attr(), just set the bits.

2. It brings safety and checking for const correctness because the attributes are passed by value.

Semantic patches for this change (at least most of them):

    virtual patch
    virtual context

    @r@
    identifier f, attrs;
    @@
    f(...,
    - struct dma_attrs *attrs
    + unsigned long attrs
    , ...)
    {
    ...
    }

    @@
    identifier r.f;
    @@
    f(...,
    - NULL
    + 0
    )

and

    // Options: --all-includes
    virtual patch
    virtual context

    @r@
    identifier f, attrs;
    type t;
    @@
    t f(..., struct dma_attrs *attrs);

    @@
    identifier r.f;
    @@
    f(...,
    - NULL
    + 0
    )

Link: http://lkml.kernel.org/r/1468399300-5399-2-git-send-email-k.kozlowski@samsung.com
Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Acked-by: Vineet Gupta <vgupta@synopsys.com>
Acked-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Hans-Christian Noren Egtvedt <egtvedt@samfundet.no>
Acked-by: Mark Salter <msalter@redhat.com> [c6x]
Acked-by: Jesper Nilsson <jesper.nilsson@axis.com> [cris]
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch> [drm]
Reviewed-by: Bart Van Assche <bart.vanassche@sandisk.com>
Acked-by: Joerg Roedel <jroedel@suse.de> [iommu]
Acked-by: Fabien Dessenne <fabien.dessenne@st.com> [bdisp]
Reviewed-by: Marek Szyprowski <m.szyprowski@samsung.com> [vb2-core]
Acked-by: David Vrabel <david.vrabel@citrix.com> [xen]
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> [xen swiotlb]
Acked-by: Richard Kuo <rkuo@codeaurora.org> [hexagon]
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> [m68k]
Acked-by: Gerald Schaefer <gerald.schaefer@de.ibm.com> [s390]
Acked-by: Bjorn Andersson <bjorn.andersson@linaro.org>
Acked-by: Hans-Christian Noren Egtvedt <egtvedt@samfundet.no> [avr32]
Acked-by: Vineet Gupta <vgupta@synopsys.com> [arc]
Acked-by: Robin Murphy <robin.murphy@arm.com> [arm64 and dma-iommu]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
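The net effect at call sites, in sketch form (the 4.8-era dma_alloc_attrs() takes the attrs as its last argument):

    /* before: a bitfield behind a pointer */
    DEFINE_DMA_ATTRS(attrs);
    dma_set_attr(DMA_ATTR_WRITE_COMBINE, &attrs);
    buf = dma_alloc_attrs(dev, size, &dma, GFP_KERNEL, &attrs);

    /* after: just pass the bits by value */
    buf = dma_alloc_attrs(dev, size, &dma, GFP_KERNEL,
                          DMA_ATTR_WRITE_COMBINE);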
2016-08-02  treewide: replace obsolete _refok by __ref  (Fabian Frederick)

There was only one use of __initdata_refok and __exit_refok. __init_refok was used 46 times against 82 for __ref.

Those definitions are obsolete since commit 312b1485fb50 ("Introduce new section reference annotations tags: __ref, __refdata, __refconst").

This patch removes the following compatibility definitions and replaces them treewide.

    /* compatibility defines */
    #define __init_refok     __ref
    #define __initdata_refok __refdata
    #define __exit_refok     __ref

I can also provide separate patches if necessary. (One patch per tree and check in 1 month or 2 to remove old definitions.)

[akpm@linux-foundation.org: coding-style fixes]
Link: http://lkml.kernel.org/r/1466796271-3043-1-git-send-email-fabf@skynet.be
Signed-off-by: Fabian Frederick <fabf@skynet.be>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Sam Ravnborg <sam@ravnborg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
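The treewide change is mechanical, e.g. (function name is illustrative):

    /* before */
    static void __init_refok setup_foo(void) { /* ... */ }
    /* after */
    static void __ref setup_foo(void) { /* ... */ }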
2016-07-29  Merge tag 'arc-4.8-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vgupta/arc  (Linus Torvalds)

Pull ARC updates from Vineet Gupta:
 "Things have been calm here - nothing much except for a few fixes"

* tag 'arc-4.8-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/vgupta/arc:
  ARC: mm: don't loose PTE_SPECIAL in pte_modify()
  ARC: dma: fix address translation in arc_dma_free
  ARC: typo fix in mm/ioremap.c
  ARC: fix linux-next build breakage
2016-07-26  mm: do not pass mm_struct into handle_mm_fault  (Kirill A. Shutemov)

We always have vma->vm_mm around.

Link: http://lkml.kernel.org/r/1466021202-61880-8-git-send-email-kirill.shutemov@linux.intel.com
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
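The resulting call-site change, sketched:

    /* before */
    fault = handle_mm_fault(mm, vma, address, flags);
    /* after: the mm argument is redundant, derived from vma->vm_mm */
    fault = handle_mm_fault(vma, address, flags);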
2016-07-20  ARC: dma: fix address translation in arc_dma_free  (Vladimir Kondratiev)

The page should be calculated using the physical address. If the platform uses a non-trivial dma-to-phys memory translation, dma_handle should be converted to a physical address before calculating the page. Failing to do so results in a struct page * pointing to wrong or non-existent memory.

Fixes: f2e3d55397ff ("ARC: dma: reintroduce platform specific dma<->phys")
Cc: stable@vger.kernel.org #4.6+
Signed-off-by: Vladimir Kondratiev <vladimir.kondratiev@intel.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
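A sketch of the fix inside arc_dma_free(), per the description above:

    /* before: dma_handle used directly to derive the page */
    struct page *page = virt_to_page(dma_handle);

    /* after: translate dma_handle to a physical address first */
    phys_addr_t paddr = plat_dma_to_phys(dev, dma_handle);
    struct page *page = virt_to_page(paddr);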
2016-07-19  ARC: typo fix in mm/ioremap.c  (Alexey Brodkin)

Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2016-05-30  Fix typos  (Andrea Gelmini)

Signed-off-by: Andrea Gelmini <andrea.gelmini@gelma.net>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2016-05-09  ARC: [plat-eznps] Use dedicated user stack top  (Noam Camus)

NPS uses a special mapping right below TASK_SIZE. Hence we need to lower STACK_TOP so that the user stack won't overlap the NPS special mapping.

Signed-off-by: Noam Camus <noamc@ezchip.com>
Acked-by: Vineet Gupta <vgupta@synopsys.com>
2016-05-09  ARC: Make vmalloc size configurable  (Noam Camus)

On ARC, the lower 2G of address space is translated and used for:
 - user vaddr space (regions 0 to 5)
 - unused kernel-user gutter (region 6)
 - kernel vaddr space (region 7)

where each region simply represents 256MB of address space.

The kernel vaddr space of 256MB is used to implement vmalloc and modules. So far this was enough, but not on the EZChip system with 4K CPUs (given that the per-cpu mechanism uses vmalloc for allocating chunks). So allow VMALLOC_SIZE to be configurable by expanding down into the unused kernel-user gutter region, whose default 256M was excessive anyway.

Also use _BITUL() to fix a build error, since PGDIR_SIZE cannot use "1UL" as it is called from assembly code in mm/tlbex.S.

Signed-off-by: Noam Camus <noamc@ezchip.com>
[vgupta: rewrote changelog, debugged bootup crash due to int vs. hex]
Acked-by: Vineet Gupta <vgupta@synopsys.com>
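The _BITUL() point, sketched (the shift value is illustrative):

    #include <linux/const.h>

    #define PGDIR_SHIFT     21                      /* illustrative */
    #define PGDIR_SIZE      _BITUL(PGDIR_SHIFT)     /* assembles in tlbex.S;
                                                       "1UL << 21" would not */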
2016-05-05  ARC: support HIGHMEM even without PAE40  (Vineet Gupta)

Initial HIGHMEM support on ARC was introduced for PAE40, where the low memory (0x8000_0000 based) and high memory (0x1_0000_0000) were physically contiguous. So CONFIG_FLATMEM sufficed (despite a peripheral hole in the middle, which wasted a bit of struct page memory, but things worked).

However w/o PAE, highmem was not possible and we could only reach ~1.75GB of DDR. Now there is a use case to access ~4GB of DDR w/o PAE40. The idea is to have low memory at the canonical 0x8000_0000 and highmem at 0, so the entire 4GB address space is available for physical addressing. This needs additional platform/interconnect mapping to convert the non-contiguous physical addresses into linear bus addresses.

From the Linux point of view, the non-contiguous divide means FLATMEM no longer works and DISCONTIGMEM is needed to track the pfns in the 2 regions. This scheme would also work for PAE40, only better, in that we don't waste struct page memory for the peripheral hole.

The DT description will be something like

    memory {
        ...
        reg = <0x80000000 0x200000000   /* 512MB: lowmem */
               0x00000000 0x10000000>;  /* 256MB: highmem */
    };

Signed-off-by: Noam Camus <noamc@ezchip.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2016-04-27  ARC: add support for reserved memory defined by device tree  (Alexey Brodkin)

Enable reserved memory initialization from device tree.

Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
Cc: Grant Likely <grant.likely@linaro.org>
Cc: Marek Szyprowski <m.szyprowski@samsung.com>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2016-04-04  mm, fs: get rid of PAGE_CACHE_* and page_cache_{get,release} macros  (Kirill A. Shutemov)

PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} macros were introduced a *long* time ago with the promise that one day it would be possible to implement the page cache with bigger chunks than PAGE_SIZE. This promise never materialized, and unlikely will.

We have many places where PAGE_CACHE_SIZE is assumed to be equal to PAGE_SIZE. And it's a constant source of confusion on whether PAGE_CACHE_* or PAGE_* constants should be used in a particular case, especially on the border between fs and mm. Global switching to PAGE_CACHE_SIZE != PAGE_SIZE would cause too much breakage to be doable.

Let's stop pretending that pages in the page cache are special. They are not.

The changes are pretty straightforward:
 - <foo> << (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
 - <foo> >> (PAGE_CACHE_SHIFT - PAGE_SHIFT) -> <foo>;
 - PAGE_CACHE_{SIZE,SHIFT,MASK,ALIGN} -> PAGE_{SIZE,SHIFT,MASK,ALIGN};
 - page_cache_get() -> get_page();
 - page_cache_release() -> put_page();

This patch contains automated changes generated with coccinelle using the script below. For some reason, coccinelle doesn't patch header files; I've called spatch for them manually. The only adjustment after coccinelle is a revert of the changes to the PAGE_CACHE_ALIGN definition: we are going to drop it later.

There are a few places in the code where coccinelle didn't reach. I'll fix them manually in a separate patch. Comments and documentation will also be addressed with a separate patch.

    virtual patch

    @@
    expression E;
    @@
    - E << (PAGE_CACHE_SHIFT - PAGE_SHIFT)
    + E

    @@
    expression E;
    @@
    - E >> (PAGE_CACHE_SHIFT - PAGE_SHIFT)
    + E

    @@
    @@
    - PAGE_CACHE_SHIFT
    + PAGE_SHIFT

    @@
    @@
    - PAGE_CACHE_SIZE
    + PAGE_SIZE

    @@
    @@
    - PAGE_CACHE_MASK
    + PAGE_MASK

    @@
    expression E;
    @@
    - PAGE_CACHE_ALIGN(E)
    + PAGE_ALIGN(E)

    @@
    expression E;
    @@
    - page_cache_get(E)
    + get_page(E)

    @@
    expression E;
    @@
    - page_cache_release(E)
    + put_page(E)

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
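Typical before/after at a call site, sketched:

    /* before */
    page_cache_get(page);
    offset = pos & (PAGE_CACHE_SIZE - 1);

    /* after */
    get_page(page);
    offset = pos & (PAGE_SIZE - 1);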
2016-03-19  ARCv2: ioremap: Support dynamic peripheral address space  (Vineet Gupta)

The peripheral address space is an architectural address window which is uncached and typically used to wire up peripherals. For ARC700 cores (ARCompact ISA based) this was fixed to the 1GB region 0xC000_0000 - 0xFFFF_FFFF.

For ARCv2 based HS38 cores the start address is flexible and can be 0xC000_0000, 0xD000_0000, 0xE000_0000 or 0xF000_0000, by programming the AUX_NON_VOLATILE_LIMIT reg (typically done in the bootloader).

Further, in case of PAE, the physical address can extend beyond 4GB, so we need to confine this check, otherwise all pages beyond 4GB will be treated as uncached.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2016-03-19  ARC: dma: reintroduce platform specific dma<->phys  (Vineet Gupta)

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2016-03-19  ARC: dma: ioremap: use phys_addr_t consistently in code paths  (Vineet Gupta)

To support dma in physical memory beyond 4GB with PAE40.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2016-03-19  ARC: dma: pass_phys() not sg_virt() to cache ops  (Vineet Gupta)

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2016-03-19  ARC: dma: non-coherent pages need V-P mapping if in HIGHMEM  (Vineet Gupta)

Previously a non-coherent page (hardware IOC, or simply driver needs) could be handled by the CPU with the paddr alone (a kvaddr used to be needed only for coherent mappings, to enforce uncached semantics via an MMU mapping). Now however such a page might still require a V-P mapping if it is in physical address space > 32 bits due to PAE40, which the CPU can't access directly with a paddr.

So decouple the decision of kvaddr allocation from the type of alloc request (coh/non-coh).

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2016-03-19  ARC: dma: Use struct page based page allocator helpers  (Vineet Gupta)

vs. the ones which return void *, so that we can handle pages > 4GB in subsequent patches.

Also plug a potential page leak in case ioremap fails.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2016-03-11  ARC: Fix misspellings in comments.  (Adam Buchbinder)

Signed-off-by: Adam Buchbinder <adam.buchbinder@gmail.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2016-01-21  arc: convert to dma_map_ops  (Christoph Hellwig)

[vgupta@synopsys.com: ARC: dma mapping fixes #2]
Signed-off-by: Christoph Hellwig <hch@lst.de>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Joerg Roedel <jroedel@suse.de>
Cc: Sebastian Ott <sebott@linux.vnet.ibm.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
Cc: Carlos Palminha <CARLOS.PALMINHA@synopsys.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2016-01-16  mm: differentiate page_mapped() from page_mapcount() for compound pages  (Kirill A. Shutemov)

Let's define page_mapped() to be true for compound pages if any sub-page of the compound page is mapped (with PMD or PTE). On the other hand, page_mapcount() returns the mapcount for that particular small page.

This will make cases like page_get_anon_vma() behave correctly once we allow huge pages to be mapped with PTE. Most users outside core-mm should use page_mapcount() instead of page_mapped().

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Tested-by: Sasha Levin <sasha.levin@oracle.com>
Tested-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Acked-by: Jerome Marchand <jmarchan@redhat.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Rik van Riel <riel@redhat.com>
Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Steve Capper <steve.capper@linaro.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.cz>
Cc: Christoph Lameter <cl@linux.com>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2015-12-21  ARC: mm: HIGHMEM: Fix section mismatch splat  (Vineet Gupta)

| WARNING: vmlinux.o(.text+0xd6c2): Section mismatch in reference from the function alloc_kmap_pgtable() to the function
| .init.text:__alloc_bootmem_low()

The function alloc_kmap_pgtable() references the function __init __alloc_bootmem_low(). This is often because alloc_kmap_pgtable lacks a __init annotation or the annotation of __alloc_bootmem_low is wrong.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-12-17  ARC: [plat-sim] unbork non default CONFIG_LINUX_LINK_BASE  (Vineet Gupta)

HIGHMEM support bumped the default memory size for the nsim platform to 1G. Thus total memory ended at the very edge of the start of the peripherals address space. With the linux link base shifted, memory started bleeding into peripheral space, which caused an early boot bad_page spew!

Fixes: 29e332261d2 ("ARC: mm: HIGHMEM: populate high memory from DT")
Reported-by: Anton Kolesov <akolesov@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-11-16  ARC: comments update  (Vineet Gupta)

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-11-14  ARC: use ASL assembler mnemonic  (Vineet Gupta)

ARCompact and ARCv2 only have ASL, while binutils used to support LSL as an alias mnemonic. Newer binutils (upstream) don't want to do that, so replace it.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-10-29  ARC: mm: PAE40 support  (Vineet Gupta)

This is the first working implementation of 40-bit physical address extension on ARCv2.

Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-10-28  ARC: mm: PAE40: tlbex.S: Explicitify the size of pte_t  (Vineet Gupta)

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-10-28  ARC: mm: PAE40: switch to using phys_addr_t for physical addresses  (Vineet Gupta)

That way a single flip of phys_addr_t to 64-bit ensures all places dealing with physical addresses get correct data.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-10-28  ARC: mm: HIGHMEM: populate high memory from DT  (Vineet Gupta)

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-10-28  ARC: mm: HIGHMEM: kmap API implementation  (Vineet Gupta)

Implement the kmap* API for ARC. This enables:
 - permanent kernel maps (pkmaps): kmap() API
 - fixmap: kmap_atomic()

We use a very simple/uniform approach for both (unlike some of the other arches), so fixmap doesn't use the customary compile time address stuff. The important semantic is sleep'ability (pkmap) vs. not (fixmap), which the API guarantees.

Note that this patch only enables highmem for the subsequent PAE40 support, as there is no real highmem for ARC in the pure 32-bit paradigm, as explained below.

ARC has a 2:2 address split of the 32-bit address space, with the lower half being translated (virtual) while the upper half is untranslated (0x8000_0000 to 0xFFFF_FFFF). The kernel itself is linked at the base of the untranslated space (i.e. 0x8000_0000 onwards), which is mapped to, say, DDR 0x0 by external Bus Glue logic (outside the core). So the kernel can potentially access 1.75G worth of memory directly w/o need for highmem (the top 256M is taken by uncached peripheral space, from 0xF000_0000 to 0xFFFF_FFFF).

In PAE40, hardware can address memory beyond 4G (0x1_0000_0000) while the logical/virtual addresses remain 32 bits. Thus highmem is required for the kernel proper to be able to access these pages for its own purposes (user space is agnostic to this anyway).

Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
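Usage-wise the semantic split looks like this (a generic sketch of the API contract, not ARC-specific code):

    /* pkmap: may sleep, so usable only in process context */
    void *va = kmap(page);
    memcpy(va, src, PAGE_SIZE);
    kunmap(page);

    /* fixmap: atomic, no sleeping between map and unmap */
    void *vb = kmap_atomic(page);
    memcpy(vb, src, PAGE_SIZE);
    kunmap_atomic(vb);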
2015-10-28  ARC: mm: preps ahead of HIGHMEM support #2  (Vineet Gupta)

Explicit'ify that all memory added so far is low memory. Nothing semantical.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-10-28  ARC: mm: preps ahead of HIGHMEM support  (Vineet Gupta)

Before we plug in highmem support, some of the code needs to be ready for it (see the sketch after this entry):
 - copy_user_highpage() needs to be using the kmap_atomic API
 - mk_pte() can't assume page_address()
 - do_page_fault() can't assume VMALLOC_END is the end of the kernel vaddr space

Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
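A sketch of the first item, assuming the simple non-aliasing case (the real ARC version also has to deal with cache aliasing):

    void copy_user_highpage(struct page *to, struct page *from,
                            unsigned long u_vaddr, struct vm_area_struct *vma)
    {
            /* kmap_atomic() works even when the pages live in highmem */
            void *kfrom = kmap_atomic(from);
            void *kto = kmap_atomic(to);

            copy_page(kto, kfrom);

            kunmap_atomic(kto);
            kunmap_atomic(kfrom);
    }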
2015-10-28  ARC: mm: Improve Duplicate PD Fault handler  (Vineet Gupta)

- Move the verbosity knob from .data to .bss by using inverted logic
- No need to read out the PD1 descriptor
- Clip the non-pfn bits of PD0 to avoid clipping inside the loop

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-10-28  ARC: Ensure DT mem base is same as what kernel is built with  (Vineet Gupta)

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-10-17  ARC: boot log: decode more mmu config items  (Vineet Gupta)

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-10-17  ARC: boot log: move helper macros to header for reuse  (Vineet Gupta)

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-10-17  ARC: mm: compute TLB size as needed from ways * sets  (Vineet Gupta)

This frees up some bits to hold more high-level info such as PAE being present, w/o increasing the size of the already bloated cpuinfo struct.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-10-17  ARCv2: mm: THP: flush_pmd_tlb_range make SMP safe  (Vineet Gupta)

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
2015-10-17  ARCv2: mm: THP: Implement flush_pmd_tlb_range() optimization  (Vineet Gupta)

Implement the TLB flush routine to evict a specific Super TLB entry, vs. moving to a new ASID on every such flush.

Signed-off-by: Vineet Gupta <vgupta@synopsys.com>