author    | Hillf Danton <hillf.zj@alibaba-inc.com>        | 2014-12-10 23:44:41 (GMT)
committer | Linus Torvalds <torvalds@linux-foundation.org> | 2014-12-11 01:41:08 (GMT)
commit    | 569f48b85813f053aeab35429ba1657cb7f426db (patch)
tree      | c69c7657aa770c2d7789bcc7f7f5b7e96fb3a9bf /mm
parent    | e4bd6a0248b2a026e07c19995c41a4cb5a49d797 (diff)
download  | linux-569f48b85813f053aeab35429ba1657cb7f426db.tar.xz
mm: hugetlb: fix __unmap_hugepage_range()
First, after flushing the TLB there is no need to rescan the page table from start; the walk can resume where it left off.
Second, advance the address by one huge-page step before bailing out of the loop, so the entry that was just unmapped is not visited again when the scan resumes.
Signed-off-by: Hillf Danton <hillf.zj@alibaba-inc.com>
Reviewed-by: Michal Hocko <mhocko@suse.cz>
Acked-by: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm')
-rw-r--r-- | mm/hugetlb.c | 4
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 9fd7227..30cd968 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2638,8 +2638,9 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
 
 	tlb_start_vma(tlb, vma);
 	mmu_notifier_invalidate_range_start(mm, mmun_start, mmun_end);
+	address = start;
 again:
-	for (address = start; address < end; address += sz) {
+	for (; address < end; address += sz) {
 		ptep = huge_pte_offset(mm, address);
 		if (!ptep)
 			continue;
@@ -2686,6 +2687,7 @@ again:
 		page_remove_rmap(page);
 		force_flush = !__tlb_remove_page(tlb, page);
 		if (force_flush) {
+			address += sz;
 			spin_unlock(ptl);
 			break;
 		}
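
To make the control-flow change easier to follow, here is a small userspace C model of the loop shape after the patch. Everything in it (unmap_one(), flush_batch(), NENTRIES, BATCH_SIZE, batched) is a hypothetical stand-in, not a kernel API; it only mirrors the patched structure of __unmap_hugepage_range(): the cursor is initialised once before the again: label, and stepped past the current entry before breaking out for a forced flush, so the goto again rescan resumes at the next entry instead of at start.

/*
 * Minimal sketch of the resume-after-flush pattern.  unmap_one() stands in
 * for the per-PTE work plus __tlb_remove_page()'s batch filling up, and
 * flush_batch() stands in for tlb_flush_mmu(); both are made-up helpers.
 */
#include <stdbool.h>
#include <stdio.h>

#define NENTRIES   16	/* size of the pretend huge-page range */
#define BATCH_SIZE 4	/* pretend TLB gather capacity */

static int batched;	/* entries queued since the last flush */

static bool unmap_one(int idx)	/* returns true when the batch is full */
{
	printf("unmap entry %d\n", idx);
	return ++batched == BATCH_SIZE;
}

static void flush_batch(void)	/* models the forced TLB flush */
{
	printf("-- flush %d queued entries --\n", batched);
	batched = 0;
}

int main(void)
{
	int address = 0;		/* set once, not in the for-initialiser */
	bool force_flush = false;

again:
	for (; address < NENTRIES; address++) {
		force_flush = unmap_one(address);
		if (force_flush) {
			address++;	/* step past the entry just handled */
			break;
		}
	}
	if (force_flush) {
		force_flush = false;
		flush_batch();		/* models tlb_flush_mmu() */
		if (address < NENTRIES)
			goto again;	/* resume where we left off, not at 0 */
	}
	if (batched)
		flush_batch();		/* flush whatever is left at the end */
	return 0;
}

The two hunks work together: without the address increment before the break, the rescan after the forced flush would revisit the entry that was just unmapped; without hoisting the cursor initialisation out of the for-initialiser, the goto again would restart the walk at the beginning of the range.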