author    Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>  2016-04-29 13:26:05 (GMT)
committer Michael Ellerman <mpe@ellerman.id.au>               2016-05-01 08:33:09 (GMT)
commit    1a472c9dba6b9646fd36717968f6a531b4441c7d (patch)
tree      3cab56eaa3a25ff717b38f4a712d430b48a78fb3 /arch/powerpc/include/asm/tlbflush.h
parent    676012a66f651a98808459bc8ab75661828ed96f (diff)
download  linux-1a472c9dba6b9646fd36717968f6a531b4441c7d.tar.xz
powerpc/mm/radix: Add tlbflush routines
The core kernel doesn't track the page size of the VA range that we are invalidating, so we end up flushing the TLB for the entire mm here. Later patches will improve this.

We also don't flush the page walk cache separately; instead we use RIC=2 when flushing the TLB, because we do an MMU gather flush after freeing page tables.

MMU_NO_CONTEXT is updated for hash.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
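To make the policy in the message concrete, below is a minimal sketch of a whole-mm radix flush under those assumptions. The helper name tlbie_pid(), the RIC_* constants and the mm->context.id access are illustrative assumptions for this note, not necessarily the routines this patch adds.

/*
 * Sketch only -- illustrates the policy described in the commit
 * message, not the exact routines added by this patch.
 */
#define RIC_FLUSH_TLB	0	/* invalidate TLB entries only */
#define RIC_FLUSH_PWC	1	/* invalidate page walk cache only */
#define RIC_FLUSH_ALL	2	/* invalidate TLB entries and page walk cache */

static void radix_flush_tlb_range_sketch(struct mm_struct *mm,
					 unsigned long start, unsigned long end)
{
	unsigned long pid = mm->context.id;	/* hardware PID for this mm (assumed field) */

	/*
	 * The core kernel does not tell us the page size backing
	 * [start, end), so rather than guessing we invalidate every
	 * translation for this PID.  RIC=2 drops the page walk cache
	 * in the same operation, so no separate PWC flush is issued;
	 * freed page tables are covered by the MMU gather flush.
	 */
	tlbie_pid(pid, RIC_FLUSH_ALL);	/* hypothetical helper issuing tlbie with the given RIC */
}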
Diffstat (limited to 'arch/powerpc/include/asm/tlbflush.h')
-rw-r--r--  arch/powerpc/include/asm/tlbflush.h | 1 +
1 file changed, 1 insertion, 0 deletions
diff --git a/arch/powerpc/include/asm/tlbflush.h b/arch/powerpc/include/asm/tlbflush.h
index 2fc4331..1b38eea 100644
--- a/arch/powerpc/include/asm/tlbflush.h
+++ b/arch/powerpc/include/asm/tlbflush.h
@@ -58,6 +58,7 @@ extern void __flush_tlb_page(struct mm_struct *mm, unsigned long vmaddr,
 #elif defined(CONFIG_PPC_STD_MMU_32)
+#define MMU_NO_CONTEXT (0)
 /*
  * TLB flushing for "classic" hash-MMU 32-bit CPUs, 6xx, 7xx, 7xxx
  */
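For reference, MMU_NO_CONTEXT is the usual "no hardware context allocated yet" sentinel: callers can skip a flush for an mm that was never assigned a context id. A hedged sketch of that pattern (not code from this patch; the helper name is hypothetical):

/* Sketch: skip the flush when the mm never received a hardware context. */
static inline void flush_mm_if_active(struct mm_struct *mm)
{
	if (mm->context.id == MMU_NO_CONTEXT)
		return;			/* nothing was ever mapped under this context */
	flush_tlb_mm(mm);		/* generic whole-mm TLB flush entry point */
}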