path: root/arch/arm64/mm
2015-03-20  arm64: Honor __GFP_ZERO in dma allocations  (Suzuki K. Poulose)
Current implementation doesn't zero out the pages allocated. Honor the __GFP_ZERO flag and zero out if set. Cc: <stable@vger.kernel.org> # v3.14+ Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
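A minimal sketch of the kind of change described, with an illustrative helper name rather than the exact arm64 dma-mapping code:

    /* Illustrative sketch: zero freshly allocated DMA pages when the
     * caller passed __GFP_ZERO; helper name is an assumption. */
    static void *__dma_alloc_pages_zeroed(struct device *dev, size_t size,
                                          dma_addr_t *dma_handle, gfp_t flags)
    {
            struct page *page = alloc_pages(flags, get_order(size));
            void *addr;

            if (!page)
                    return NULL;

            addr = page_address(page);
            if (flags & __GFP_ZERO)         /* honor __GFP_ZERO */
                    memset(addr, 0, size);

            *dma_handle = phys_to_dma(dev, page_to_phys(page));
            return addr;
    }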
2015-03-06  arm64: Don't use is_module_addr in setting page attributes  (Laura Abbott)
The set_memory_* functions currently only support module addresses. The addresses are validated using is_module_addr. That function is special though and relies on internal state in the module subsystem to work properly. At the time of module initialization and calling set_memory_*, it's too early for is_module_addr to work properly so it always returns false. Rather than be subject to the whims of the module state, just bounds check against the module virtual address range. Signed-off-by: Laura Abbott <lauraa@codeaurora.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
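The shape of the replacement check, as a hedged sketch (the helper name is illustrative; MODULES_VADDR and MODULES_END come from asm/memory.h):

    /* Bounds check against the module VA range instead of is_module_addr(),
     * which depends on module subsystem state that is not yet initialised. */
    static bool addr_in_module_range(unsigned long addr)
    {
            return addr >= MODULES_VADDR && addr < MODULES_END;
    }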
2015-02-27  arm64: Increase the swiotlb buffer size 64MB  (Catalin Marinas)
With commit 3690951fc6d4 (arm64: Use swiotlb late initialisation), the swiotlb buffer size is limited to MAX_ORDER_NR_PAGES. However, there are platforms with 32-bit only devices that require bounce buffering via swiotlb. This patch changes the swiotlb initialisation to an early 64MB memblock allocation. In order to get the swiotlb buffer correctly allocated (via memblock_virt_alloc_low_nopanic), this patch also defines ARCH_LOW_ADDRESS_LIMIT to the maximum physical address capable of 32-bit DMA. Reported-by: Kefeng Wang <wangkefeng.wang@huawei.com> Tested-by: Kefeng Wang <wangkefeng.wang@huawei.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
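A sketch of the described early allocation, assuming illustrative function names around the real memblock_virt_alloc_low_nopanic() and swiotlb_init_with_tbl() calls:

    /* Early, low-memory 64MB allocation for the swiotlb bounce buffer. */
    static void __init arm64_swiotlb_early_init(void)
    {
            size_t swiotlb_size = 64UL << 20;       /* 64MB */
            void *vstart;

            vstart = memblock_virt_alloc_low_nopanic(swiotlb_size, PAGE_SIZE);
            if (vstart)
                    swiotlb_init_with_tbl(vstart, swiotlb_size >> IO_TLB_SHIFT, 1);
    }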
2015-02-12  Merge branch 'akpm' (patches from Andrew)  (Linus Torvalds)
Merge second set of updates from Andrew Morton: "More of MM"
* emailed patches from Andrew Morton <akpm@linux-foundation.org>: (83 commits)
mm/nommu.c: fix arithmetic overflow in __vm_enough_memory()
mm/mmap.c: fix arithmetic overflow in __vm_enough_memory()
vmstat: Reduce time interval to stat update on idle cpu
mm/page_owner.c: remove unnecessary stack_trace field
Documentation/filesystems/proc.txt: describe /proc/<pid>/map_files
mm: incorporate read-only pages into transparent huge pages
vmstat: do not use deferrable delayed work for vmstat_update
mm: more aggressive page stealing for UNMOVABLE allocations
mm: always steal split buddies in fallback allocations
mm: when stealing freepages, also take pages created by splitting buddy page
mincore: apply page table walker on do_mincore()
mm: /proc/pid/clear_refs: avoid split_huge_page()
mm: pagewalk: fix misbehavior of walk_page_range for vma(VM_PFNMAP)
mempolicy: apply page table walker on queue_pages_range()
arch/powerpc/mm/subpage-prot.c: use walk->vma and walk_page_vma()
memcg: cleanup preparation for page table walk
numa_maps: remove numa_maps->vma
numa_maps: fix typo in gather_hugetbl_stats
pagemap: use walk->vma instead of calling find_vma()
clear_refs: remove clear_refs_private->vma and introduce clear_refs_test_walk()
...
2015-02-12  Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux  (Linus Torvalds)
Pull arm64 updates from Catalin Marinas: "arm64 updates for 3.20:
- reimplementation of the virtual remapping of UEFI Runtime Services in a way that is stable across kexec
- emulation of the "setend" instruction for 32-bit tasks (user endianness switching trapped in the kernel, SCTLR_EL1.E0E bit set accordingly)
- compat_sys_call_table implemented in C (from asm) and made it a constant array together with sys_call_table
- export CPU cache information via /sys (like other architectures)
- DMA API implementation clean-up in preparation for IOMMU support
- macros clean-up for KVM
- dropped some unnecessary cache+tlb maintenance
- CONFIG_ARM64_CPU_SUSPEND clean-up
- defconfig update (CPU_IDLE)
The EFI changes going via the arm64 tree have been acked by Matt Fleming. There is also a patch adding sys_*stat64 prototypes to include/linux/syscalls.h, acked by Andrew Morton"
* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (47 commits)
arm64: compat: Remove incorrect comment in compat_siginfo
arm64: Fix section mismatch on alloc_init_p[mu]d()
arm64: Avoid breakage caused by .altmacro in fpsimd save/restore macros
arm64: mm: use *_sect to check for section maps
arm64: drop unnecessary cache+tlb maintenance
arm64:mm: free the useless initial page table
arm64: Enable CPU_IDLE in defconfig
arm64: kernel: remove ARM64_CPU_SUSPEND config option
arm64: make sys_call_table const
arm64: Remove asm/syscalls.h
arm64: Implement the compat_sys_call_table in C
syscalls: Declare sys_*stat64 prototypes if __ARCH_WANT_(COMPAT_)STAT64
compat: Declare compat_sys_sigpending and compat_sys_sigprocmask prototypes
arm64: uapi: expose our struct ucontext to the uapi headers
smp, ARM64: Kill SMP single function call interrupt
arm64: Emulate SETEND for AArch32 tasks
arm64: Consolidate hotplug notifier for instruction emulation
arm64: Track system support for mixed endian EL0
arm64: implement generic IOMMU configuration
arm64: Combine coherent and non-coherent swiotlb dma_ops
...
2015-02-12  mm/hugetlb: reduce arch dependent code around follow_huge_*  (Naoya Horiguchi)
Currently we have many duplicates in definitions around follow_huge_addr(), follow_huge_pmd(), and follow_huge_pud(), so this patch tries to remove them. The basic idea is to put the default implementation for these functions in mm/hugetlb.c as weak symbols (regardless of CONFIG_ARCH_WANT_GENERAL_HUGETLB), and to implement arch-specific code only when the arch needs it. For follow_huge_addr(), only powerpc and ia64 have their own implementation, and in all other architectures this function just returns ERR_PTR(-EINVAL). So this patch sets returning ERR_PTR(-EINVAL) as the default. As for follow_huge_(pmd|pud)(), if (pmd|pud)_huge() is implemented to always return 0 in your architecture (like in ia64 or sparc), it's never called (the callsite is optimized away) no matter how it is implemented. So in such architectures, we don't need an arch-specific implementation. In some architectures (like mips, s390 and tile), their current arch-specific follow_huge_(pmd|pud)() are effectively identical to the common code, so this patch lets these architectures use the common code. One exception is metag, where pmd_huge() could return non-zero but it expects follow_huge_pmd() to always return NULL. This means that we need an arch-specific implementation which returns NULL. This behavior looks strange to me (because non-zero pmd_huge() implies that the architecture supports PMD-based hugepages, so follow_huge_pmd() can/should return some relevant value), but that's beyond this cleanup patch, so let's keep it. Justification of non-trivial changes:
- in s390, follow_huge_pmd() checks !MACHINE_HAS_HPAGE first, and this patch removes the check. This is OK because we can assume MACHINE_HAS_HPAGE is true when follow_huge_pmd() can be called (note that pmd_huge() has the same check and always returns 0 for !MACHINE_HAS_HPAGE).
- in s390 and mips, we use HPAGE_MASK instead of PMD_MASK as done in common code. This patch forces these archs to use PMD_MASK, but it's OK because they are identical in both archs. In s390, both HPAGE_SHIFT and PMD_SHIFT are 20. In mips, HPAGE_SHIFT is defined as (PAGE_SHIFT + PAGE_SHIFT - 3) and PMD_SHIFT is defined as (PAGE_SHIFT + PAGE_SHIFT + PTE_ORDER - 3), but PTE_ORDER is always 0, so these are identical.
Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> Acked-by: Hugh Dickins <hughd@google.com> Cc: James Hogan <james.hogan@imgtec.com> Cc: David Rientjes <rientjes@google.com> Cc: Mel Gorman <mel@csn.ul.ie> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Michal Hocko <mhocko@suse.cz> Cc: Rik van Riel <riel@redhat.com> Cc: Andrea Arcangeli <aarcange@redhat.com> Cc: Luiz Capitulino <lcapitulino@redhat.com> Cc: Nishanth Aravamudan <nacc@linux.vnet.ibm.com> Cc: Lee Schermerhorn <lee.schermerhorn@hp.com> Cc: Steve Capper <steve.capper@linaro.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
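Per the description above, the default that lands in mm/hugetlb.c is a weak symbol that architectures may override:

    struct page * __weak
    follow_huge_addr(struct mm_struct *mm, unsigned long address, int write)
    {
            return ERR_PTR(-EINVAL);
    }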
2015-01-29  arm64: Fix section mismatch on alloc_init_p[mu]d()  (Catalin Marinas)
Commit 523d6e9fae93 (arm64:mm: free the useless initial page table) introduced a BUG_ON checking the allocation type, but it was referring to the early_alloc() function in the __init section. This patch changes the check to slab_is_available() and also relaxes the BUG to a WARN_ON_ONCE. Reported-by: Will Deacon <will.deacon@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-01-28  arm64: mm: use *_sect to check for section maps  (Mark Rutland)
The {pgd,pud,pmd}_bad family of macros have slightly fuzzy cross-architecture semantics, and seem to imply a populated entry that is not a next-level table, rather than a particular type of entry (e.g. a section map). In arm64 code, for those cases where we care about whether an entry is a section mapping, we can instead use the {pud,pmd}_sect macros to explicitly check for this case. This helps to document precisely what we care about, making the code easier to read, and allows for future relaxation of the *_bad macros to check for other "bad" entries. To that end this patch updates the table dumping and initial table setup to check for section mappings with {pud,pmd}_sect, and adds/restores BUG_ON(*_bad(*p)) checks after we've handled the *_sect and *_none cases so as to catch remaining "bad" cases. In the fault handling code, show_pte is left with *_bad checks as it only cares about whether it can walk the next level table, and this path is used for both kernel and userspace fault handling. The former case will be followed by a die() where we'll report the address that triggered the fault, which can be useful context for debugging. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Steve Capper <steve.capper@linaro.org> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Kees Cook <keescook@chromium.org> Cc: Laura Abbott <lauraa@codeaurora.org> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
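A sketch of the explicit classification the walkers move to (using the real pmd_none/pmd_sect/pmd_bad helpers; the surrounding control flow is illustrative):

    if (pmd_none(pmd)) {
            /* nothing mapped at this entry */
    } else if (pmd_sect(pmd)) {
            /* section (block) mapping: handle it here */
    } else {
            BUG_ON(pmd_bad(pmd));   /* anything else must be a table entry */
            /* walk the next-level (pte) table */
    }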
2015-01-28  arm64: drop unnecessary cache+tlb maintenance  (Mark Rutland)
In paging_init, we call flush_cache_all, but this is backed by Set/Way operations which may not achieve anything in the presence of cache line migration and/or system caches. If the caches are already in an inconsistent state at this point, there is nothing we can do (short of flushing the entire physical address space by VA) to empty architected and system caches. As such, flush_cache_all only serves to mask other potential bugs. Hence, this patch removes the boot-time call to flush_cache_all. Immediately after the cache maintenance we flush the TLBs, but this is also unnecessary. Before enabling the MMU, the TLBs are invalidated, and thus are initially clean. When changing the contents of active tables (e.g. in fixup_executable() for DEBUG_RODATA) we perform the required TLB maintenance following the update, and therefore no additional maintenance is required to ensure the new table entries are in effect. Since activating the MMU we will not have modified system register fields permitted to be cached in a TLB, and therefore do not need maintenance for any cached system register fields. Hence, the TLB flush is unnecessary. Shortly after the unnecessary TLB flush, we update TTBR0 to point to an empty zero page rather than the idmap, and flush the TLBs. This maintenance is necessary to remove the global idmap entries from the TLBs (as they would conflict with userspace mappings), and is retained. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Marc Zyngier <marc.zyngier@arm.com> Acked-by: Steve Capper <steve.capper@linaro.org> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-01-28  arm64:mm: free the useless initial page table  (zhichang.yuan)
For 64K page systems, after mapping a PMD section, the corresponding initial page table is not needed any more. That page can be freed. Signed-off-by: Zhichang Yuan <zhichang.yuan@linaro.org> [catalin.marinas@arm.com: added BUG_ON() to catch late memblock freeing] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-01-27  arm64: kernel: remove ARM64_CPU_SUSPEND config option  (Lorenzo Pieralisi)
ARM64_CPU_SUSPEND config option was introduced to make code providing context save/restore selectable only on platforms requiring power management capabilities. Currently ARM64_CPU_SUSPEND depends on the PM_SLEEP config option which in turn is set by the SUSPEND config option. The introduction of CPU_IDLE for arm64 requires that code configured by ARM64_CPU_SUSPEND (context save/restore) should be compiled in in order to enable the CPU idle driver to rely on CPU operations carrying out context save/restore. The ARM64_CPUIDLE config option (ARM64 generic idle driver) is therefore forced to select ARM64_CPU_SUSPEND, even if there may be (ie PM_SLEEP) failed dependencies, which is not a clean way of handling the kernel configuration option. For these reasons, this patch removes the ARM64_CPU_SUSPEND config option and makes the context save/restore dependent on CPU_PM, which is selected whenever either SUSPEND or CPU_IDLE are configured, cleaning up dependencies in the process. This way, code previously configured through ARM64_CPU_SUSPEND is compiled in whenever a power management subsystem requires it to be present in the kernel (SUSPEND || CPU_IDLE), which is the behaviour expected on ARM64 kernels. The cpu_suspend and cpu_init_idle CPU operations are added only if CPU_IDLE is selected, since they are CPU_IDLE specific methods and should be grouped and defined accordingly. PSCI CPU operations are updated to reflect the introduced changes. Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Will Deacon <will.deacon@arm.com> Cc: Krzysztof Kozlowski <k.kozlowski@samsung.com> Cc: Daniel Lezcano <daniel.lezcano@linaro.org> Cc: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-01-23  arm64: Combine coherent and non-coherent swiotlb dma_ops  (Catalin Marinas)
Since dev_archdata now has a dma_coherent state, combine the two coherent and non-coherent operations and remove their declaration, together with set_dma_ops, from the arch dma-mapping.h file. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-01-23  arm64: Fix SCTLR_EL1 initialisation  (Suzuki K. Poulose)
We initialise the SCTLR_EL1 value by read-modify-writeback of the desired bits, leaving the other bits (including reserved bits (RESx)) untouched. However, sometimes the boot monitor could leave garbage values in the RESx bits which could have different implications. This patch makes sure that all the bits, including the RESx bits, are set to the proper state, except for the 'endianness' control bits, EE(25) & E0E(24), which are set early in el2_setup. Updated the state of Bit[6] to RES0 in the comment. Signed-off-by: Suzuki K. Poulose <suzuki.poulose@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-01-23  arm64: add ioremap physical address information  (Min-Hua Chen)
In /proc/vmallocinfo, it's good to show the physical address of each ioremap in vmallocinfo. Add physical address information in arm64 ioremap.
0xffffc900047f2000-0xffffc900047f4000    8192 _nv013519rm+0x57/0xa0 [nvidia] phys=f8100000 ioremap
0xffffc900047f4000-0xffffc900047f6000    8192 _nv013519rm+0x57/0xa0 [nvidia] phys=f8008000 ioremap
0xffffc90004800000-0xffffc90004821000  135168 e1000_probe+0x22c/0xb95 [e1000e] phys=f4300000 ioremap
0xffffc900049c0000-0xffffc900049e1000  135168 _nv013521rm+0x4d/0xd0 [nvidia] phys=e0140000 ioremap
Signed-off-by: Min-Hua Chen <orca.chen@gmail.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
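The essence of the change is recording the physical address on the vm area, which is what vmallocinfo prints as "phys="; a sketch of the ioremap path (variable names assumed):

    area = get_vm_area_caller(size, VM_IOREMAP, caller);
    if (!area)
            return NULL;
    area->phys_addr = phys_addr;    /* the added bookkeeping */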
2015-01-23  arm64: mm: dump: add missing includes  (Mark Rutland)
The arm64 dump code is currently relying on some definitions which are pulled in via transitive dependencies. It seems we have implicit dependencies on the following definitions:
* MODULES_VADDR (asm/memory.h)
* MODULES_END (asm/memory.h)
* PAGE_OFFSET (asm/memory.h)
* PTE_* (asm/pgtable-hwdef.h)
* ENOMEM (linux/errno.h)
* device_initcall (linux/init.h)
This patch ensures we explicitly include the relevant headers for the above items, fixing the observed build issue and hopefully preventing future issues as headers are refactored. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Reported-by: Mark Brown <broonie@kernel.org> Acked-by: Steve Capper <steve.capper@linaro.org> Cc: Laura Abbott <lauraa@codeaurora.org> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-01-23  arm64: Fix overlapping VA allocations  (Mark Rutland)
PCI IO space was intended to be 16MiB, at 32MiB below MODULES_VADDR, but commit d1e6dc91b532d3d3 ("arm64: Add architectural support for PCI") extended this to cover the full 32MiB. The final 8KiB of this 32MiB is also allocated for the fixmap, allowing for potential clashes between the two. This change was masked by assumptions in mem_init and the page table dumping code, which assumed the I/O space to be 16MiB long through separate hard-coded definitions. This patch changes the definition of the PCI I/O space allocation to live in asm/memory.h, along with the other VA space allocations. As the fixmap allocation depends on the number of fixmap entries, this is moved below the PCI I/O space allocation. Both the fixmap and PCI I/O space are guarded with 2MB of padding. Sites assuming the I/O space was 16MiB are moved over to use the new PCI_IO_{START,END} definitions, which will keep in sync with the size of the IO space (now restored to 16MiB). As a useful side effect, the use of the new PCI_IO_{START,END} definitions prevents a build issue in the dumping code due to a (now redundant) missing include of io.h for PCI_IOBASE. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Cc: Kees Cook <keescook@chromium.org> Cc: Laura Abbott <lauraa@codeaurora.org> Cc: Liviu Dudau <liviu.dudau@arm.com> Cc: Steve Capper <steve.capper@linaro.org> Cc: Will Deacon <will.deacon@arm.com> [catalin.marinas@arm.com: reorder FIXADDR and PCI_IO address_markers_idx enum] Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
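A sketch of the reworked constants in asm/memory.h, with the 2MB guard regions described above (the exact expressions are assumptions):

    #define PCI_IO_SIZE     SZ_16M
    #define PCI_IO_END      (MODULES_VADDR - SZ_2M)   /* 2MB guard below modules */
    #define PCI_IO_START    (PCI_IO_END - PCI_IO_SIZE)
    #define FIXADDR_TOP     (PCI_IO_START - SZ_2M)    /* 2MB guard below PCI I/O */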
2015-01-23  arm64: dump: Fix implicit inclusion of definition for PCI_IOBASE  (Mark Brown)
Since c9465b4ec37a68425 (arm64: add support to dump the kernel page tables) allmodconfig has failed to build on arm64 as a result of:
../arch/arm64/mm/dump.c:55:20: error: 'PCI_IOBASE' undeclared here (not in a function)
Fix this by explicitly including io.h to ensure that a definition is present. Signed-off-by: Mark Brown <broonie@kernel.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-01-22  arm64/efi: move virtmap init to early initcall  (Ard Biesheuvel)
Now that the create_mapping() code in mm/mmu.c is able to support setting up kernel page tables at initcall time, we can move the whole virtmap creation to arm64_enable_runtime_services() instead of having a distinct stage during early boot. This also allows us to drop the arm64-specific EFI_VIRTMAP flag. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-01-22  arm64: add better page protections to arm64  (Laura Abbott)
Add page protections for arm64 similar to those in arm. This is for security reasons to prevent certain classes of exploits. The current method:
- Map all memory as either RWX or RW. We round to the nearest section to avoid creating page tables before everything is mapped
- Once everything is mapped, if either end of the RWX section should not be X, we split the PMD and remap as necessary
- When initmem is to be freed, we change the permissions back to RW (using stop machine if necessary to flush the TLB)
- If CONFIG_DEBUG_RODATA is set, the read only sections are set read only.
Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Tested-by: Kees Cook <keescook@chromium.org> Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Laura Abbott <lauraa@codeaurora.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-01-16  arm64: respect mem= for EFI  (Mark Rutland)
When booting with EFI, we acquire the EFI memory map after parsing the early params. This unfortunately renders the option useless as we call memblock_enforce_memory_limit (which uses memblock_remove_range behind the scenes) before we've added any memblocks. We end up removing nothing, then adding all of memory later when efi_init calls reserve_regions. Instead, we can log the limit and apply this later when we do the rest of the memblock work in memblock_init, which should work regardless of the presence of EFI. At the same time we may as well move the early parameter into arm64's mm/init.c, close to arm64_memblock_init. Any memory which must be mapped (e.g. for use by EFI runtime services) must be mapped explicitly rather than relying on the linear mapping, which may be truncated as a result of a mem= option passed on the kernel command line. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Cc: Leif Lindholm <leif.lindholm@linaro.org> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
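A sketch of the described flow, assuming names close to the arm64 code: parse mem= early, stash the limit, and enforce it once memblocks actually exist:

    static phys_addr_t memory_limit = (phys_addr_t)ULLONG_MAX;

    static int __init early_mem(char *p)
    {
            if (!p)
                    return 1;
            memory_limit = memparse(p, &p) & PAGE_MASK;
            return 0;
    }
    early_param("mem", early_mem);

    void __init arm64_memblock_init(void)
    {
            memblock_enforce_memory_limit(memory_limit);
            /* ... remaining memblock setup ... */
    }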
2015-01-16  arm64: partially revert "ARM: 8167/1: extend the reserved memory for initrd to be page aligned"  (Catalin Marinas)
This patch partially reverts commit 421520ba98290a73b35b7644e877a48f18e06004 (only the arm64 part). There is no guarantee that the boot-loader places other images like dtb in a different page than initrd start/end, especially when the kernel is built with 64KB pages. When this happens, such pages must not be freed. The free_reserved_area() already takes care of rounding up "start" and rounding down "end" to avoid freeing partially used pages. Cc: <stable@vger.kernel.org> # 3.17+ Reported-by: Peter Maydell <Peter.Maydell@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2015-01-15  Merge branch 'arm64/common-esr-macros' of git://git.kernel.org/pub/scm/linux/kernel/git/mark/linux  (Catalin Marinas)
ESR_ELx definitions clean-up from Mark Rutland.
* 'arm64/common-esr-macros' of git://git.kernel.org/pub/scm/linux/kernel/git/mark/linux:
arm64: kvm: decode ESR_ELx.EC when reporting exceptions
arm64: kvm: remove ESR_EL2_* macros
arm64: remove ESR_EL1_* macros
arm64: kvm: move to ESR_ELx macros
arm64: decode ESR_ELx.EC when reporting exceptions
arm64: move to ESR_ELx macros
arm64: introduce common ESR_ELx_* definitions
2015-01-15  arm64: move to ESR_ELx macros  (Mark Rutland)
Now that we have common ESR_ELx_* macros, move the core arm64 code over to them. There should be no functional change as a result of this patch. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Peter Maydell <peter.maydell@linaro.org> Cc: Will Deacon <will.deacon@arm.com>
2015-01-13  arm64: remove broken cachepolicy code  (Mark Rutland)
The cachepolicy kernel parameter was intended to aid in the debugging of coherency issues, but it is fundamentally broken for several reasons:
* On SMP platforms, only the boot CPU's tcr_el1 is altered. Secondary CPUs may therefore differ w.r.t. the attributes they apply to MT_NORMAL memory, resulting in a loss of coherency.
* The cache maintenance using flush_dcache_all (based on Set/Way operations) is not guaranteed to empty a given CPU's cache hierarchy while said CPU has caches enabled, it cannot empty the caches of other coherent PEs, nor is it guaranteed to flush data to the PoC even when caches are disabled.
* The TLBs are not invalidated around the modification of MAIR_EL1 and TCR_EL1, as required by the architecture (as both are permitted to be cached in a TLB). This may result in CPUs using attributes other than those expected for some memory accesses, resulting in a loss of coherency.
* Exclusive accesses are not architecturally guaranteed to function as expected on memory marked as Write-Through or Non-Cacheable. Thus changing the attributes of MT_NORMAL away from the (architecturally safe) defaults may cause uses of these instructions (e.g. atomics) to behave erratically.
Given this, the cachepolicy code cannot be used for debugging purposes as it alone is likely to cause coherency issues. This patch removes the broken cachepolicy code. Signed-off-by: Mark Rutland <mark.rutland@arm.com> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2015-01-12  arm64/efi: remove idmap manipulations from UEFI code  (Ard Biesheuvel)
Now that we have moved the call to SetVirtualAddressMap() to the stub, UEFI has no use for the ID map, so we can drop the code that installs ID mappings for UEFI memory regions. Acked-by: Leif Lindholm <leif.lindholm@linaro.org> Acked-by: Will Deacon <will.deacon@arm.com> Tested-by: Leif Lindholm <leif.lindholm@linaro.org> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
2015-01-12  arm64/mm: add create_pgd_mapping() to create private page tables  (Ard Biesheuvel)
For UEFI, we need to install the memory mappings used for Runtime Services in a dedicated set of page tables. Add create_pgd_mapping(), which allows us to allocate and install those page table entries early. Reviewed-by: Will Deacon <will.deacon@arm.com> Tested-by: Leif Lindholm <leif.lindholm@linaro.org> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
2015-01-12  arm64/mm: add explicit struct_mm argument to __create_mapping()  (Ard Biesheuvel)
Currently, swapper_pg_dir and idmap_pg_dir share the init_mm mm_struct instance. To allow the introduction of other pg_dir instances, for instance, for UEFI's mapping of Runtime Services, make the struct_mm instance an explicit argument that gets passed down to the pmd and pte instantiation functions. Note that the consumers (pmd_populate/pgd_populate) of the mm_struct argument don't actually inspect it, but let's fix it for correctness' sake. Acked-by: Steve Capper <steve.capper@linaro.org> Tested-by: Leif Lindholm <leif.lindholm@linaro.org> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
2014-12-11  arm64: mm: dump: don't skip final region  (Mark Rutland)
If the final page table entry we walk is a valid mapping, the page table dumping code will not log the region this entry is part of, as the final note_page call in ptdump_show will trigger an early return. Luckily this isn't seen on contemporary systems as they typically don't have enough RAM to extend the linear mapping right to the end of the address space. In note_page, we log a region when we reach its end (i.e. we hit an entry immediately afterwards which has different prot bits or is invalid). The final entry has no subsequent entry, so we will not log this immediately. We try to cater for this with a subsequent call to note_page in ptdump_show, but this returns early as 0 < LOWEST_ADDR, and hence we will skip a valid mapping if it spans to the final entry we note. Unlike 32-bit ARM, the pgd with the kernel mapping is never shared with user mappings, so we do not need the check to ensure we don't log user page tables. Due to the way addr is constructed in the walk_* functions, it can never be less than LOWEST_ADDR when walking the page tables, so it is not necessary to avoid dereferencing invalid table addresses. The existing checks for st->current_prot and st->marker[1].start_address are sufficient to ensure we will not print and/or dereference garbage when trying to log information. This patch removes the unnecessary check against LOWEST_ADDR, ensuring we log all regions in the kernel page table, including those which span right to the end of the address space. Cc: Kees Cook <keescook@chromium.org> Acked-by: Laura Abbott <lauraa@codeaurora.org> Acked-by: Steve Capper <steve.capper@linaro.org> Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-12-11  arm64: mm: dump: fix shift warning  (Mark Rutland)
When building with 48-bit VAs, it's possible to get the following warning when building the arm64 page table dumping code:
arch/arm64/mm/dump.c: In function ‘walk_pgd’:
arch/arm64/mm/dump.c:266:2: warning: right shift count >= width of type
  pgd_t *pgd = pgd_offset(mm, 0);
  ^
As pgd_offset is a macro and the second argument is not cast to any particular type, the zero will be given integer type by the compiler. As pgd_offset passes the argument to pgd_index, we then try to shift the 32-bit integer by at least 39 bits (for 4k pages). Elsewhere pgd_offset is passed a second argument of unsigned long type, so let's do the same here by passing '0UL' rather than '0'. Cc: Kees Cook <keescook@chromium.org> Acked-by: Laura Abbott <lauraa@codeaurora.org> Acked-by: Steve Capper <steve.capper@arm.com> Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
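The fix itself is one character:

    /* Pass an unsigned long so pgd_index()'s shift operates on a 64-bit
     * value rather than a 32-bit int. */
    pgd_t *pgd = pgd_offset(mm, 0UL);       /* was pgd_offset(mm, 0) */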
2014-12-05  arm64: remove the unnecessary arm64_swiotlb_init()  (Ding Tianhong)
The commit 3690951fc6d42f3a0903987677d0e592c49dd8db (arm64: Use swiotlb late initialisation) switched the DMA mapping code to swiotlb_late_init_with_default_size(), so arm64_swiotlb_init() is not used anymore; remove this function. Signed-off-by: Ding Tianhong <dingtianhong@huawei.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-12-01  arm64: compat: align cacheflush syscall with arch/arm  (Vladimir Murzin)
Update handling of cacheflush syscall with changes made in the arch/arm counterpart:
- return error to userspace when flushing syscall fails
- split user cache-flushing into interruptible chunks
- don't bother rounding to nearest vma
Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com> [will: changed internal return value from -EINTR to 0 to match arch/arm/] Signed-off-by: Will Deacon <will.deacon@arm.com>
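A sketch of the chunked, interruptible flush along the lines described (the function name is illustrative; __flush_cache_user_range is the arm64 helper):

    static int do_user_cache_flush(unsigned long start, unsigned long end)
    {
            do {
                    unsigned long chunk = min(PAGE_SIZE, end - start);
                    long ret;

                    if (fatal_signal_pending(current))
                            return 0;       /* task is dying; report success */

                    ret = __flush_cache_user_range(start, start + chunk);
                    if (ret)
                            return ret;     /* propagate the error to userspace */

                    cond_resched();
                    start += chunk;
            } while (start < end);

            return 0;
    }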
2014-11-26  arm64: add support to dump the kernel page tables  (Laura Abbott)
In a similar manner to arm, it's useful to be able to dump the page tables to verify permissions and memory types. Add a debugfs file to check the page tables. Acked-by: Steve Capper <steve.capper@linaro.org> Tested-by: Steve Capper <steve.capper@linaro.org> Reviewed-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Laura Abbott <lauraa@codeaurora.org> [will: s/BUFFERABLE/NORMAL-NC/] Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-11-25  arm64: Factor out fixmap initialization from ioremap  (Laura Abbott)
The fixmap API was originally added for arm64 for early_ioremap purposes. It can be used for other purposes too so move the initialization from ioremap to somewhere more generic. This makes it obvious where the fixmap is being set up and allows for a cleaner implementation of __set_fixmap. Reviewed-by: Kees Cook <keescook@chromium.org> Acked-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Mark Rutland <mark.rutland@arm.com> Tested-by: Kees Cook <keescook@chromium.org> Signed-off-by: Laura Abbott <lauraa@codeaurora.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-11-25  arm64: add Cortex-A53 cache errata workaround  (Andre Przywara)
The ARM errata 819472, 826319, 827319 and 824069 define the same workaround for these hardware issues in certain Cortex-A53 parts. Use the new alternatives framework and the CPU MIDR detection to patch "cache clean" into "cache clean and invalidate" instructions if an affected CPU is detected at runtime. Signed-off-by: Andre Przywara <andre.przywara@arm.com> [will: add __maybe_unused to squash gcc warning] Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-11-25  arm64: add alternative runtime patching  (Andre Przywara)
With a blatant copy of some x86 bits we introduce the alternative runtime patching "framework" to arm64. This is quite basic for now and we only provide the functions we need at this time. This is connected to the newly introduced feature bits. Signed-off-by: Andre Przywara <andre.przywara@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-11-21  arm64: mm: report unhandled level-0 translation faults correctly  (Will Deacon)
Translation faults that occur due to the input address being outside of the address range mapped by the relevant base register are reported as level 0 faults in ESR.DFSC. If the faulting access cannot be resolved by the kernel (e.g. because it is not mapped by a vma), then we report "input address range fault" on the console. This was fine until we added support for 48-bit VAs, which actually place PGDs at level 0 and can trigger faults for invalid addresses that are within the range of the page tables. This patch changes the string to report "level 0 translation fault", which is far less confusing. Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-11-20  arm64: pgalloc: consistently use PGALLOC_GFP  (Mark Rutland)
We currently allocate different levels of page tables with a variety of differing flags, and the PGALLOC_GFP flags, intended for use when allocating any level of page table, are only used for ptes in pte_alloc_one. On x86, PGALLOC_GFP is used for all page table allocations. Currently the major differences are:
* __GFP_NOTRACK -- Needed to ensure page tables are always accessible in the presence of kmemcheck to prevent recursive faults. Currently kmemcheck cannot be selected for arm64.
* __GFP_REPEAT -- Causes the allocator to try to reclaim pages and retry upon a failure to allocate.
* __GFP_ZERO -- Sometimes passed explicitly, sometimes zalloc variants are used.
While we've not encountered issues so far, it would be preferable to be consistent. This patch ensures all levels of table are allocated in the same manner, with PGALLOC_GFP. Cc: Steve Capper <steve.capper@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Mark Rutland <mark.rutland@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
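After the change, every table level is allocated with one set of flags; a sketch of the arrangement (pmd_alloc_one shown as an example level):

    #define PGALLOC_GFP     (GFP_KERNEL | __GFP_NOTRACK | __GFP_REPEAT | __GFP_ZERO)

    static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long addr)
    {
            return (pmd_t *)__get_free_page(PGALLOC_GFP);
    }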
2014-11-18  arm64/mm: Remove hack in mmap randomize layout  (Yann Droneaud)
Since commit 8a0a9bd4db63 ('random: make get_random_int() more random'), get_random_int() returns a random value for each call, so the comment and hack introduced in mmap_rnd() as part of commit 1d18c47c735e ('arm64: MMU fault handling and page table management') are incorrect. Commit 1d18c47c735e seems to use the same hack introduced by commit a5adc91a4b44 ('powerpc: Ensure random space between stack and mmaps'), later copied in commit 5a0efea09f42 ('sparc64: Sharpen address space randomization calculations.'). But both architectures were cleaned up as part of commit fa8cbaaf5a68 ('powerpc+sparc64/mm: Remove hack in mmap randomize layout') as the hack is no longer needed since commit 8a0a9bd4db63. So the present patch removes the comment and the hack around get_random_int() on AArch64's mmap_rnd(). Cc: David S. Miller <davem@davemloft.net> Cc: Anton Blanchard <anton@samba.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Acked-by: Will Deacon <will.deacon@arm.com> Acked-by: Dan McGee <dpmcgee@gmail.com> Signed-off-by: Yann Droneaud <ydroneaud@opteya.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
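With the hack gone, mmap_rnd() reduces to a plain masked get_random_int(); a sketch under that assumption:

    static unsigned long mmap_rnd(void)
    {
            unsigned long rnd = 0;

            if (current->flags & PF_RANDOMIZE)
                    rnd = (unsigned long)get_random_int() & STACK_RND_MASK;

            return rnd << PAGE_SHIFT;
    }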
2014-11-06  arm64: fix data type for physical address  (Min-Hua Chen)
Use phys_addr_t for physical address in alloc_init_pud. Although phys_addr_t and unsigned long are 64 bit in arm64, it is better to use phys_addr_t to describe physical addresses. Signed-off-by: Min-Hua Chen <orca.chen@gmail.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2014-10-24  arm64: Fix memblock current_limit with 64K pages and 48-bit VA  (Catalin Marinas)
With 48-bit VA space, the 64K page configuration uses 3 levels instead of 2 and PUD_SIZE != PMD_SIZE. Since with 64K pages we only cover PMD_SIZE with the initial swapper_pg_dir populated in head.S, the memblock current_limit needs to be set accordingly in map_mem() to avoid allocating unmapped memory. The memblock current_limit is progressively increased as more blocks are mapped. Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-10-20  arm64: mm: Correct fixmap pagetable types  (Steve Capper)
Compiling with STRICT_MM_TYPECHECKS gives the following warning:
arch/arm64/mm/ioremap.c: In function ‘early_ioremap_init’:
arch/arm64/mm/ioremap.c:152:2: warning: passing argument 3 of ‘pud_populate’ from incompatible pointer type
  pud_populate(&init_mm, pud, bm_pmd);
The data types for bm_pmd and bm_pud are incorrectly set to pte_t. This patch corrects these types. Signed-off-by: Steve Capper <steve.capper@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-10-20  arm64: Align less than PAGE_SIZE pgds naturally  (Catalin Marinas)
When the pgd size is smaller than PAGE_SIZE, pgd_alloc() uses kzalloc() to save space. However, this is not always naturally aligned as required by the architecture. This patch creates a kmem_cache for pgd allocations with the correct alignment. The current kernel configurations with 4K pages + 39-bit VA and 64K pages + 42-bit VA use a full page for the pgd and are not affected. The patch is required for 48-bit VA with 64K pages where the pgd is 512 bytes. Reported-by: Christoffer Dall <christoffer.dall@linaro.org> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
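A sketch of the described cache: align == size gives the sub-page pgd its natural alignment, which kzalloc() does not guarantee (the init-function name is an assumption):

    #define PGD_SIZE        (PTRS_PER_PGD * sizeof(pgd_t))

    static struct kmem_cache *pgd_cache;

    void __init pgd_cache_init(void)
    {
            if (PGD_SIZE < PAGE_SIZE)
                    pgd_cache = kmem_cache_create("pgd_cache", PGD_SIZE, PGD_SIZE,
                                                  SLAB_PANIC, NULL);
    }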
2014-10-10  arm64: mm: enable RCU fast_gup  (Steve Capper)
Activate the RCU fast_gup for ARM64. We also need to force THP splits to broadcast an IPI s.t. we block in the fast_gup page walker. As THP splits are comparatively rare, this should not lead to a noticeable performance degradation. Some pre-requisite functions pud_write and pud_page are also added. [akpm@linux-foundation.org: coding-style fixes] Signed-off-by: Steve Capper <steve.capper@linaro.org> Tested-by: Dann Frazier <dann.frazier@canonical.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Cc: Hugh Dickins <hughd@google.com> Cc: Russell King <rmk@arm.linux.org.uk> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Mel Gorman <mel@csn.ul.ie> Cc: Will Deacon <will.deacon@arm.com> Cc: Christoffer Dall <christoffer.dall@linaro.org> Cc: Andrea Arcangeli <aarcange@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-10-10  arm64: add atomic pool for non-coherent and CMA allocations  (Laura Abbott)
Neither CMA nor noncoherent allocations support atomic allocations. Add a dedicated atomic pool to support this. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Laura Abbott <lauraa@codeaurora.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: David Riley <davidriley@chromium.org> Cc: Olof Johansson <olof@lixom.net> Cc: Ritesh Harjain <ritesh.harjani@gmail.com> Cc: Russell King <linux@arm.linux.org.uk> Cc: Thierry Reding <thierry.reding@gmail.com> Cc: Will Deacon <will.deacon@arm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2014-10-08  Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux  (Linus Torvalds)
Pull arm64 updates from Catalin Marinas:
- eBPF JIT compiler for arm64
- CPU suspend backend for PSCI (firmware interface) with standard idle states defined in DT (generic idle driver to be merged via a different tree)
- Support for CONFIG_DEBUG_SET_MODULE_RONX
- Support for unmapped cpu-release-addr (outside kernel linear mapping)
- set_arch_dma_coherent_ops() implemented and bus notifiers removed
- EFI_STUB improvements when base of DRAM is occupied
- Typos in KGDB macros
- Clean-up to (partially) allow kernel building with LLVM
- Other clean-ups (extern keyword, phys_addr_t usage)
* tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (51 commits)
arm64: Remove unneeded extern keyword
ARM64: make of_device_ids const
arm64: Use phys_addr_t type for physical address
aarch64: filter $x from kallsyms
arm64: Use DMA_ERROR_CODE to denote failed allocation
arm64: Fix typos in KGDB macros
arm64: insn: Add return statements after BUG_ON()
arm64: debug: don't re-enable debug exceptions on return from el1_dbg
Revert "arm64: dmi: Add SMBIOS/DMI support"
arm64: Implement set_arch_dma_coherent_ops() to replace bus notifiers
of: amba: use of_dma_configure for AMBA devices
arm64: dmi: Add SMBIOS/DMI support
arm64: Correct ftrace calls to aarch64_insn_gen_branch_imm()
arm64:mm: initialize max_mapnr using function set_max_mapnr
setup: Move unmask of async interrupts after possible earlycon setup
arm64: LLVMLinux: Fix inline arm64 assembly for use with clang
arm64: pageattr: Correctly adjust unaligned start addresses
net: bpf: arm64: fix module memory leak when JIT image build fails
arm64: add PSCI CPU_SUSPEND based cpu_suspend support
arm64: kernel: introduce cpu_init_idle CPU operation
...
2014-10-02  Merge branches 'fiq' (early part), 'fixes', 'l2c' (early part) and 'misc' into for-next  (Russell King)
2014-10-02  ARM: 8167/1: extend the reserved memory for initrd to be page aligned  (Yalin Wang)
This patch extends the start and end address of initrd to be page aligned, so that we can free all memory including the non-page-aligned head or tail pages of initrd. If the start or end address of initrd is not page aligned, those pages can't be freed by the free_initrd_mem() function. Signed-off-by: Yalin Wang <yalin.wang@sonymobile.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
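A sketch of the widened reservation (variable names as in arch/arm's init code; the exact call site is an assumption):

    /* Widen to whole pages so free_initrd_mem() can release the head and
     * tail pages of the initrd later. */
    phys_addr_t start = round_down(phys_initrd_start, PAGE_SIZE);
    phys_addr_t end   = round_up(phys_initrd_start + phys_initrd_size, PAGE_SIZE);

    memblock_reserve(start, end - start);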
2014-10-02  arm64: Use phys_addr_t type for physical address  (Min-Hua Chen)
Change the type of physical address from unsigned long to phys_addr_t, making valid_phys_addr_range more readable. Signed-off-by: Min-Hua Chen <orca.chen@gmail.com> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
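A sketch of the resulting signature (the body shown is an assumption about the range check, not the exact arm64 code):

    int valid_phys_addr_range(phys_addr_t addr, size_t size)
    {
            return addr >= PHYS_OFFSET && addr + size <= __pa(high_memory);
    }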
2014-10-02  arm64: Use DMA_ERROR_CODE to denote failed allocation  (Sean Paul)
This patch replaces the static assignment of ~0 to dma_handle with DMA_ERROR_CODE to be consistent with other platforms. Signed-off-by: Sean Paul <seanpaul@chromium.org> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2014-09-22  arm64: Implement set_arch_dma_coherent_ops() to replace bus notifiers  (Catalin Marinas)
Commit 6ecba8eb51b7 (arm64: Use bus notifiers to set per-device coherent DMA ops) introduced bus notifiers to set the coherent dma ops based on the 'dma-coherent' DT property. Since the generic of_dma_configure() handles this property for platform and AMBA devices, replace the notifiers with set_arch_dma_coherent_ops(). Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>