* [PATCH] mm: deduct the number of pages reclaimed by madvise from workingset
@ 2023-05-24  9:12 zhaoyang.huang
  2023-05-24 20:40 ` Suren Baghdasaryan
  2023-05-25 13:54 ` Johannes Weiner
  0 siblings, 2 replies; 6+ messages in thread
From: zhaoyang.huang @ 2023-05-24  9:12 UTC
  To: Andrew Morton, Johannes Weiner, Suren Baghdasaryan, linux-mm,
	linux-kernel, Zhaoyang Huang, ke.wang

From: Zhaoyang Huang <zhaoyang.huang@unisoc.com>

madvise_pageout() deactivates the target pages and forces them off the LRU.
Each of these evictions still bumps the lruvec's non-resident age, so pages
that refault afterwards see a larger refault distance than real memory
pressure alone would produce. That skews the workingset thrashing detection
when madvise_pageout() is used as a routine reclaim mechanism, as Android
does today. Compensate by deducting the number of pages reclaimed this way
from the non-resident age of the lruvec they belonged to.
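
To illustrate with made-up numbers (assuming the usual bookkeeping in
mm/workingset.c, where refault distance is the non-resident age at
refault minus the age snapshotted at eviction):

  age when folio F is evicted under real pressure:  1000
  madvise_pageout() then drops 500 pages:           age -> 1500
  real pressure evicts another 100 pages:           age -> 1600

  F refaults: distance = 1600 - 1000 = 600,
  of which only 100 reflects real memory pressure.

Deducting the madvise-reclaimed pages from the non-resident age keeps
the distance at the pressure-only value.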

Signed-off-by: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
---
 include/linux/swap.h | 2 +-
 mm/madvise.c         | 4 ++--
 mm/vmscan.c          | 16 +++++++++++++---
 3 files changed, 16 insertions(+), 6 deletions(-)
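
For reference, the helper this patch drives looks roughly like this in
kernels of this era (paraphrased from mm/workingset.c; note the
unsigned parameter, so a negative delta works by wraparound, which is
why the callers below negate through a signed cast):

void workingset_age_nonresident(struct lruvec *lruvec, unsigned long nr_pages)
{
	/* propagate the aging up the cgroup hierarchy */
	do {
		atomic_long_add(nr_pages, &lruvec->nonresident_age);
	} while ((lruvec = parent_lruvec(lruvec)));
}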

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 2787b84..0312142 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -428,7 +428,7 @@ extern unsigned long mem_cgroup_shrink_node(struct mem_cgroup *mem,
 extern int vm_swappiness;
 long remove_mapping(struct address_space *mapping, struct folio *folio);
 
-extern unsigned long reclaim_pages(struct list_head *page_list);
+extern unsigned long reclaim_pages(struct mm_struct *mm, struct list_head *page_list);
 #ifdef CONFIG_NUMA
 extern int node_reclaim_mode;
 extern int sysctl_min_unmapped_ratio;
diff --git a/mm/madvise.c b/mm/madvise.c
index b6ea204..61c8d7b 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -420,7 +420,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 huge_unlock:
 		spin_unlock(ptl);
 		if (pageout)
-			reclaim_pages(&page_list);
+			reclaim_pages(mm, &page_list);
 		return 0;
 	}
 
@@ -516,7 +516,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 	arch_leave_lazy_mmu_mode();
 	pte_unmap_unlock(orig_pte, ptl);
 	if (pageout)
-		reclaim_pages(&page_list);
+		reclaim_pages(mm, &page_list);
 	cond_resched();
 
 	return 0;
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 20facec..048c10b 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2741,14 +2741,18 @@ static unsigned int reclaim_folio_list(struct list_head *folio_list,
 	return nr_reclaimed;
 }
 
-unsigned long reclaim_pages(struct list_head *folio_list)
+unsigned long reclaim_pages(struct mm_struct *mm, struct list_head *folio_list)
 {
 	int nid;
 	unsigned int nr_reclaimed = 0;
+	unsigned int nr;
 	LIST_HEAD(node_folio_list);
 	unsigned int noreclaim_flag;
+	struct mem_cgroup *memcg;
 
 	if (list_empty(folio_list))
 		return nr_reclaimed;
 
+	/* reference taken here is dropped before returning */
+	memcg = get_mem_cgroup_from_mm(mm);
 	noreclaim_flag = memalloc_noreclaim_save();
@@ -2764,10 +2768,16 @@ unsigned long reclaim_pages(struct list_head *folio_list)
 		}
 
-		nr_reclaimed += reclaim_folio_list(&node_folio_list, NODE_DATA(nid));
+		nr = reclaim_folio_list(&node_folio_list, NODE_DATA(nid));
+		nr_reclaimed += nr;
+		/* deduct this node's count only; the signed cast keeps -nr negative */
+		workingset_age_nonresident(mem_cgroup_lruvec(memcg, NODE_DATA(nid)), -(long)nr);
 		nid = folio_nid(lru_to_folio(folio_list));
 	} while (!list_empty(folio_list));
 
-	nr_reclaimed += reclaim_folio_list(&node_folio_list, NODE_DATA(nid));
+	nr = reclaim_folio_list(&node_folio_list, NODE_DATA(nid));
+	nr_reclaimed += nr;
+	workingset_age_nonresident(mem_cgroup_lruvec(memcg, NODE_DATA(nid)), -(long)nr);
 
 	memalloc_noreclaim_restore(noreclaim_flag);
+	mem_cgroup_put(memcg);
 
-- 
1.9.1
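
A minimal userspace sketch of how this path is exercised (hypothetical
test program, not part of the patch; MADV_PAGEOUT needs Linux 5.4+, and
swap must be available for the anonymous pages to actually be paged out):

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 64UL << 20;	/* 64M of anonymous memory */
	char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	memset(buf, 0x5a, len);		/* fault everything in */

	/*
	 * Reclaim the range right away; this enters
	 * madvise_cold_or_pageout_pte_range() and from there the
	 * reclaim_pages() function patched above.
	 */
	if (madvise(buf, len, MADV_PAGEOUT))
		perror("madvise(MADV_PAGEOUT)");

	memset(buf, 0xa5, len);		/* refault: workingset math runs */

	munmap(buf, len);
	return 0;
}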


