path: root/arch/arm/include/asm/memblock.h
author    Russell King <rmk+kernel@arm.linux.org.uk>  2012-01-13 15:00:51 (GMT)
committer Russell King <rmk+kernel@arm.linux.org.uk>  2012-01-13 15:02:35 (GMT)
commit    716a3dc20084da9b3ab17bd125005a5345e23e3b (patch)
tree      f7ba487050d33fc2913fdee81b384f5578ccb105 /arch/arm/include/asm/memblock.h
parent    4de3a8e101150feaefa1139611a50ff37467f33e (diff)
download  linux-716a3dc20084da9b3ab17bd125005a5345e23e3b.tar.xz
ARM: Add arm_memblock_steal() to allocate memory away from the kernel
Several platforms are now using the memblock_alloc + memblock_free +
memblock_remove trick to obtain memory which won't be mapped in the
kernel's page tables.  Most platforms do this (correctly) in the
->reserve callback.  However, OMAP has started to call these functions
outside of this callback, and this is extremely unsafe - memory will
not be unmapped, and could well be given out after memblock is no
longer responsible for its management.

So, provide arm_memblock_steal() to perform this function, and ensure
that it panic()s if it is used inappropriately.  Convert everyone over,
including OMAP.

As a result, OMAP with OMAP4_ERRATA_I688 enabled will panic on boot
with this change.  Mark this option as BROKEN and make it depend on
BROKEN.  OMAP needs to be fixed, or 137d105d50 (ARM: OMAP4: Fix errata
i688 with MPU interconnect barriers.) reverted until such time it can
be fixed correctly.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
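The helper's body is not part of this header-only hunk; as a rough sketch,
it can be built from the exact alloc/free/remove sequence described above.
The guard flag name below (arm_memblock_steal_permitted) is illustrative,
not taken from this patch:

/* Sketch only - the real definition lives outside this header. */
phys_addr_t __init arm_memblock_steal(phys_addr_t size, phys_addr_t align)
{
	phys_addr_t phys;

	/* Refuse loudly (panic()/BUG) once the ->reserve window has closed,
	 * so an unsafe late call - the OMAP case above - is caught at boot
	 * instead of silently handing out memory memblock no longer owns. */
	BUG_ON(!arm_memblock_steal_permitted);

	phys = memblock_alloc(size, align);	/* reserve a suitable region   */
	memblock_free(phys, size);		/* drop the reservation record */
	memblock_remove(phys, size);		/* hide it from the kernel map */

	return phys;
}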
Diffstat (limited to 'arch/arm/include/asm/memblock.h')
-rw-r--r--  arch/arm/include/asm/memblock.h  |  2 ++
1 files changed, 2 insertions, 0 deletions
diff --git a/arch/arm/include/asm/memblock.h b/arch/arm/include/asm/memblock.h
index b8da2e4..00ca5f9 100644
--- a/arch/arm/include/asm/memblock.h
+++ b/arch/arm/include/asm/memblock.h
@@ -6,4 +6,6 @@ struct machine_desc;
 
 extern void arm_memblock_init(struct meminfo *, struct machine_desc *);
 
+phys_addr_t arm_memblock_steal(phys_addr_t size, phys_addr_t align);
+
 #endif
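For reference, a platform would call the new helper from its machine_desc
->reserve hook, while stealing is still permitted.  The board file name,
variable and sizes below are hypothetical:

/* Hypothetical board file, e.g. arch/arm/mach-foo/board-foo.c */
#include <linux/init.h>
#include <asm/sizes.h>
#include <asm/memblock.h>

static phys_addr_t foo_dsp_base;	/* later handed to a coprocessor */

/* Wired up as .reserve in the board's MACHINE_START block, so it runs
 * during the early memblock phase. */
static void __init foo_reserve(void)
{
	/* 8 MiB, 1 MiB aligned; the region is removed from memblock,
	 * so it never appears in the kernel's page tables. */
	foo_dsp_base = arm_memblock_steal(SZ_8M, SZ_1M);
}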