author     Jeff Dike <jdike@addtoit.com>                          2008-02-05 06:31:01 (GMT)
committer  Linus Torvalds <torvalds@woody.linux-foundation.org>   2008-02-05 17:44:29 (GMT)
commit     3963333fe6767f15141ab2dc3b933721c636c212 (patch)
tree       62fbec62adf1796709dfa197e12dd725911e0fc9 /arch/um/include
parent     42a2b54ce8c7b9d4f418995a7950e7e2e15e52ce (diff)
download   linux-3963333fe6767f15141ab2dc3b933721c636c212.tar.xz
uml: cover stubs with a VMA
Give the stubs a VMA. This allows the removal of a truly nasty kludge to make sure that mm->nr_ptes was correct in exit_mmap. The underlying problem was always that the stubs, which have ptes, and thus allocated a page table, weren't covered by a VMA.

This patch fixes that by using install_special_mapping in arch_dup_mmap and activate_context to create the VMA. The stubs have to be moved, since shift_arg_pages seems to assume that the stack is the only VMA present at that point during exec, and uses vma_adjust to fiddle its VMA. However, that extends the stub VMA by the amount removed from the stack VMA. To avoid this problem, the stubs were moved to a different fixed location at the start of the address space.

The init_stub_pte calls were moved from init_new_context to arch_dup_mmap because I was occasionally seeing arch_dup_mmap not being called, causing exit_mmap to die. Rather than figure out what was really happening, I decided it was cleaner to just move the calls so that there's no doubt that both the pte and VMA creation happen, no matter what. arch_exit_mmap is used to clear the stub ptes at exit time.

The STUB_* constants in as-layout.h no longer depend on UM_TASK_SIZE, so that definition is removed, along with the comments complaining about gcc.

Because the stubs are no longer at the top of the address space, some care is needed while flushing TLBs. update_pte_range checks for addresses in the stub range and skips them. flush_thread now issues two unmaps, one for the range before STUB_START and one for the range after STUB_END.

Signed-off-by: Jeff Dike <jdike@linux.intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
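As a rough illustration of the mechanism the message describes (not the patch body itself): a minimal sketch of how install_special_mapping() can cover the two stub pages with a VMA from arch_dup_mmap()/activate_context(). The stub_pages array and the cover_stubs_with_vma() name are assumptions made for illustration; the real patch presumably fills the page array from the stub code and data pages.

    #include <linux/mm.h>        /* install_special_mapping(), VM_* flags */
    #include "as-layout.h"       /* STUB_START, STUB_END */

    /*
     * Illustrative placeholder: the pages backing the stub area
     * (code page, data page, NULL terminator), filled in elsewhere.
     */
    static struct page *stub_pages[3];

    static int cover_stubs_with_vma(struct mm_struct *mm)
    {
    	/*
    	 * Cover STUB_START..STUB_END with a special mapping so the stub
    	 * ptes always sit inside a VMA and exit_mmap() accounts for the
    	 * page table they occupy.
    	 */
    	return install_special_mapping(mm, STUB_START,
    				       STUB_END - STUB_START,
    				       VM_READ | VM_MAYREAD | VM_EXEC |
    				       VM_MAYEXEC | VM_DONTCOPY,
    				       stub_pages);
    }

Because the mapping is installed every time a new mm is set up, there is no path on which the stub ptes exist without a covering VMA, which is exactly what the old nr_ptes kludge was papering over.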
Diffstat (limited to 'arch/um/include')
-rw-r--r--  arch/um/include/as-layout.h        19
-rw-r--r--  arch/um/include/common-offsets.h    3
2 files changed, 4 insertions, 18 deletions
diff --git a/arch/um/include/as-layout.h b/arch/um/include/as-layout.h
index a2008f5..606bb5c 100644
--- a/arch/um/include/as-layout.h
+++ b/arch/um/include/as-layout.h
@@ -29,21 +29,10 @@
#define _AC(X, Y) __AC(X, Y)
#endif
-/*
- * The "- 1"'s are to avoid gcc complaining about integer overflows
- * and unrepresentable decimal constants. With 3-level page tables,
- * TASK_SIZE is 0x80000000, which gets turned into its signed decimal
- * equivalent in asm-offsets.s. gcc then complains about that being
- * unsigned only in C90. To avoid that, UM_TASK_SIZE is defined as
- * TASK_SIZE - 1. To compensate, we need to add the 1 back here.
- * However, adding it back to UM_TASK_SIZE produces more gcc
- * complaints. So, I adjust the thing being subtracted from
- * UM_TASK_SIZE instead. Bah.
- */
-#define STUB_CODE _AC((unsigned long), \
- UM_TASK_SIZE - (2 * UM_KERN_PAGE_SIZE - 1))
-#define STUB_DATA _AC((unsigned long), UM_TASK_SIZE - (UM_KERN_PAGE_SIZE - 1))
-#define STUB_START _AC(, STUB_CODE)
+#define STUB_START _AC(, 0x100000)
+#define STUB_CODE _AC((unsigned long), STUB_START)
+#define STUB_DATA _AC((unsigned long), STUB_CODE + UM_KERN_PAGE_SIZE)
+#define STUB_END _AC((unsigned long), STUB_DATA + UM_KERN_PAGE_SIZE)
#ifndef __ASSEMBLY__
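Assuming UM_KERN_PAGE_SIZE is 4 KiB (UML takes the host page size, so this is an assumption, not part of the patch), the new constants work out to:

    STUB_START = 0x100000                       /* start of the stub area           */
    STUB_CODE  = 0x100000                       /* one page of stub code            */
    STUB_DATA  = 0x100000 + 0x1000 = 0x101000   /* one page of stub data            */
    STUB_END   = 0x101000 + 0x1000 = 0x102000   /* first address past the stub area */

so the whole stub area is two pages at a fixed low address, rather than being computed down from UM_TASK_SIZE at the top of the address space.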
diff --git a/arch/um/include/common-offsets.h b/arch/um/include/common-offsets.h
index 5b67d7c..b54bd35 100644
--- a/arch/um/include/common-offsets.h
+++ b/arch/um/include/common-offsets.h
@@ -39,6 +39,3 @@ DEFINE(UM_HZ, HZ);
DEFINE(UM_USEC_PER_SEC, USEC_PER_SEC);
DEFINE(UM_NSEC_PER_SEC, NSEC_PER_SEC);
DEFINE(UM_NSEC_PER_USEC, NSEC_PER_USEC);
-
-/* See as-layout.h for an explanation of the "- 1". Bah. */
-DEFINE(UM_TASK_SIZE, TASK_SIZE - 1);
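Finally, a minimal sketch of the TLB-flushing care the commit message describes: skip the stub range when walking user ptes, and have the exec-time flush unmap the two ranges that bracket the stub area. The helper names (is_stub_addr, unmap_user_range) and the host_task_size parameter are illustrative assumptions, not UML's actual API; only the shape of the checks is taken from the message.

    #include "as-layout.h"        /* STUB_START, STUB_END */

    /* update_pte_range()-style check: leave the stub ptes alone. */
    static inline int is_stub_addr(unsigned long addr)
    {
    	return addr >= STUB_START && addr < STUB_END;
    }

    /* Hypothetical stand-in for UML's host-side unmap of a user range. */
    static void unmap_user_range(unsigned long start, unsigned long end);

    /*
     * flush_thread()-style teardown: two unmaps that bracket the stub
     * area instead of one unmap of the whole address space, so the stub
     * mapping survives exec.  host_task_size marks the top of the user
     * address space.
     */
    static void unmap_around_stubs(unsigned long host_task_size)
    {
    	unmap_user_range(0, STUB_START);
    	unmap_user_range(STUB_END, host_task_size);
    }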