path: root/arch/arm64/mm/hugetlbpage.c
author    Steve Capper <steve.capper@linaro.org>  2013-05-28 12:35:51 (GMT)
committer Steve Capper <steve.capper@linaro.org>  2013-06-14 08:52:19 (GMT)
commit    59911ca4325dc7bd95e05c988fef3593b694e62c (patch)
tree      cc4a255ac8b12f57f946900f6c52e850691b2433 /arch/arm64/mm/hugetlbpage.c
parent    072b1b62a6436b71ab951faae4500db2fbed63de (diff)
download  linux-59911ca4325dc7bd95e05c988fef3593b694e62c.tar.xz
ARM64: mm: Move PTE_PROT_NONE bit.
Under ARM64, PTEs can be broadly categorised as follows:

   - Present and valid: Bit #0 is set. The PTE is valid and memory
     access to the region may fault.

   - Present and invalid: Bit #0 is clear and bit #1 is set.
     Represents present memory with PROT_NONE protection. The PTE
     is an invalid entry, and the user fault handler will raise a
     SIGSEGV.

   - Not present (file or swap): Bits #0 and #1 are clear.
     Memory represented has been paged out. The PTE is an invalid
     entry, and the fault handler will try and re-populate the
     memory where necessary.

Huge PTEs are block descriptors that have bit #1 clear. If we wish
to represent PROT_NONE huge PTEs we then run into a problem as
there is no way to distinguish between regular and huge PTEs if we
set bit #1.

To resolve this ambiguity this patch moves PTE_PROT_NONE from
bit #1 to bit #2 and moves PTE_FILE from bit #2 to bit #3. The
number of swap/file bits is reduced by 1 as a consequence, leaving
60 bits for file and swap entries.

Signed-off-by: Steve Capper <steve.capper@linaro.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
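To make the bit shuffle concrete, below is a hedged C sketch of the before/after software bit definitions the commit message describes. The pteval_t typedef, the _AT stand-in, and the *_OLD names are illustrative only, not the kernel's actual code; the real definitions live in arch/arm64/include/asm/pgtable.h and pgtable-hwdef.h, and the patch itself is the authoritative reference.

    /* Illustrative stand-ins so the sketch is self-contained. */
    typedef unsigned long long pteval_t;
    #define _AT(T, x)		((T)(x))

    /* Hardware descriptor bits (unchanged by this patch). */
    #define PTE_VALID		(_AT(pteval_t, 1) << 0)
    #define PTE_TABLE_BIT	(_AT(pteval_t, 1) << 1)	/* clear in block (huge) descriptors */

    /* Before this patch (hypothetical *_OLD names for illustration):
     * PTE_PROT_NONE shared bit #1 with the table/block distinction,
     * so a PROT_NONE huge PTE could not be told apart from a regular one. */
    #define PTE_PROT_NONE_OLD	(_AT(pteval_t, 1) << 1)	/* only when !PTE_VALID */
    #define PTE_FILE_OLD	(_AT(pteval_t, 1) << 2)	/* only when !pte_present() */

    /* After this patch: both software bits move up one position, which
     * shifts the swap/file encoding up and costs one bit (60 remain). */
    #define PTE_PROT_NONE	(_AT(pteval_t, 1) << 2)	/* only when !PTE_VALID */
    #define PTE_FILE		(_AT(pteval_t, 1) << 3)	/* only when !pte_present() */

With bit #1 no longer used as a software marker, a block (huge) descriptor with bit #1 clear can carry PTE_PROT_NONE without becoming ambiguous, which is the distinction the commit message motivates.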
Diffstat (limited to 'arch/arm64/mm/hugetlbpage.c')
0 files changed, 0 insertions, 0 deletions