author	Vladimir Davydov <vdavydov@virtuozzo.com>	2016-01-20 23:03:10 (GMT)
committer	Linus Torvalds <torvalds@linux-foundation.org>	2016-01-21 01:09:18 (GMT)
commit	5ccc5abaaf6f9242cc63342c5286990233f392fa (patch)
tree	14406eb21280bafa4a447d4b45ea5bd6caa354af /mm/vmscan.c
parent	d8b38438a0bcb362c396f49d8279ef7b505917f4 (diff)
download	linux-5ccc5abaaf6f9242cc63342c5286990233f392fa.tar.xz
mm: free swap cache aggressively if memcg swap is full
Swap cache pages are freed aggressively if swap is nearly full (>50% currently), because otherwise we are likely to stop scanning anonymous pages when we near the swap limit, even if there are plenty of freeable swap cache pages. We should follow the same trend in the case of a memory cgroup, which has its own swap limit.

Signed-off-by: Vladimir Davydov <vdavydov@virtuozzo.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/vmscan.c')
-rw-r--r--	mm/vmscan.c	2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 3be5f9d..bd620b6 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1214,7 +1214,7 @@ cull_mlocked:
 activate_locked:
 	/* Not a candidate for swapping, so reclaim swap space. */
-	if (PageSwapCache(page) && vm_swap_full())
+	if (PageSwapCache(page) && mem_cgroup_swap_full(page))
 		try_to_free_swap(page);
 	VM_BUG_ON_PAGE(PageActive(page), page);
 	SetPageActive(page);