Age        | Commit message                                                                    | Author
2016-07-28 | mm/zsmalloc: keep comments consistent with code                                   | Ganesh Mahendran
2016-07-28 | mm/zsmalloc: avoid calculate max objects of zspage twice                          | Ganesh Mahendran
2016-07-28 | mm/zsmalloc: use class->objs_per_zspage to get num of max objects                 | Ganesh Mahendran
2016-07-28 | mm/zsmalloc: take obj index back from find_alloced_obj                            | Ganesh Mahendran
2016-07-28 | mm/zsmalloc: use obj_index to keep consistent with others                         | Ganesh Mahendran
2016-07-28 | mm: bail out in shrink_inactive_list()                                            | Minchan Kim
2016-07-28 | mm, vmscan: account for skipped pages as a partial scan                           | Mel Gorman
2016-07-28 | mm: consider whether to decivate based on eligible zones inactive ratio           | Mel Gorman
2016-07-28 | mm: remove reclaim and compaction retry approximations                            | Mel Gorman
2016-07-28 | mm, vmscan: remove highmem_file_pages                                             | Mel Gorman
2016-07-28 | mm: add per-zone lru list stat                                                    | Minchan Kim
2016-07-28 | mm, vmscan: release/reacquire lru_lock on pgdat change                            | Mel Gorman
2016-07-28 | mm, vmscan: remove redundant check in shrink_zones()                              | Mel Gorman
2016-07-28 | mm, vmscan: Update all zone LRU sizes before updating memcg                       | Mel Gorman
2016-07-28 | mm: show node_pages_scanned per node, not zone                                    | Minchan Kim
2016-07-28 | mm, pagevec: release/reacquire lru_lock on pgdat change                           | Mel Gorman
2016-07-28 | mm, page_alloc: fix dirtyable highmem calculation                                 | Minchan Kim
2016-07-28 | mm, vmstat: remove zone and node double accounting by approximating retries       | Mel Gorman
2016-07-28 | mm, vmstat: print node-based stats in zoneinfo file                               | Mel Gorman
2016-07-28 | mm: vmstat: account per-zone stalls and pages skipped during reclaim              | Mel Gorman
2016-07-28 | mm: vmstat: replace __count_zone_vm_events with a zone id equivalent              | Mel Gorman
2016-07-28 | mm: page_alloc: cache the last node whose dirty limit is reached                  | Mel Gorman
2016-07-28 | mm, page_alloc: remove fair zone allocation policy                                | Mel Gorman
2016-07-28 | mm, vmscan: add classzone information to tracepoints                              | Mel Gorman
2016-07-28 | mm, vmscan: Have kswapd reclaim from all zones if reclaiming and buffer_heads...  | Mel Gorman
2016-07-28 | mm, vmscan: avoid passing in `remaining' unnecessarily to prepare_kswapd_sleep()  | Mel Gorman
2016-07-28 | mm, vmscan: avoid passing in classzone_idx unnecessarily to compaction_ready      | Mel Gorman
2016-07-28 | mm, vmscan: avoid passing in classzone_idx unnecessarily to shrink_node           | Mel Gorman
2016-07-28 | mm: convert zone_reclaim to node_reclaim                                          | Mel Gorman
2016-07-28 | mm, page_alloc: wake kswapd based on the highest eligible zone                    | Mel Gorman
2016-07-28 | mm, vmscan: only wakeup kswapd once per node for the requested classzone          | Mel Gorman
2016-07-28 | mm: move vmscan writes and file write accounting to the node                      | Mel Gorman
2016-07-28 | mm: move most file-based accounting to the node                                   | Mel Gorman
2016-07-28 | mm: rename NR_ANON_PAGES to NR_ANON_MAPPED                                        | Mel Gorman
2016-07-28 | mm: move page mapped accounting to the node                                       | Mel Gorman
2016-07-28 | mm, page_alloc: consider dirtyable memory in terms of nodes                       | Mel Gorman
2016-07-28 | mm, workingset: make working set detection node-aware                             | Mel Gorman
2016-07-28 | mm, memcg: move memcg limit enforcement from zones to nodes                       | Mel Gorman
2016-07-28 | mm, vmscan: make shrink_node decisions more node-centric                          | Mel Gorman
2016-07-28 | mm: vmscan: do not reclaim from kswapd if there is any eligible zone              | Mel Gorman
2016-07-28 | mm, vmscan: remove duplicate logic clearing node congestion and dirty state       | Mel Gorman
2016-07-28 | mm, vmscan: by default have direct reclaim only shrink once per node              | Mel Gorman
2016-07-28 | mm, vmscan: simplify the logic deciding whether kswapd sleeps                     | Mel Gorman
2016-07-28 | mm, vmscan: remove balance gap                                                    | Mel Gorman
2016-07-28 | mm, vmscan: make kswapd reclaim in terms of nodes                                 | Mel Gorman
2016-07-28 | mm, vmscan: have kswapd only scan based on the highest requested zone             | Mel Gorman
2016-07-28 | mm, vmscan: begin reclaiming pages on a per-node basis                            | Mel Gorman
2016-07-28 | mm, mmzone: clarify the usage of zone padding                                     | Mel Gorman
2016-07-28 | mm, vmscan: move LRU lists to node                                                | Mel Gorman
2016-07-28 | mm, vmscan: move lru_lock to the node                                             | Mel Gorman