linux-mm.kvack.org archive mirror
* [PATCH] mm: Make drop_caches keep reclaiming on all nodes
@ 2022-11-15 12:32 Jan Kara
  2022-11-15 16:39 ` Shakeel Butt
From: Jan Kara @ 2022-11-15 12:32 UTC
  To: Andrew Morton; +Cc: linux-mm, Vladimir Davydov, Jan Kara, You Zhou, Pengfei Xu

Currently, drop_caches reclaims node by node, looping on each node
until reclaim can make no further progress there. This can leave quite
a few slab entries (such as filesystem inodes) unreclaimed if, say,
objects on node 1 keep objects on node 0 pinned. So move the "loop
until no progress" retry out to the node-by-node iteration so that
reclaim is retried on all nodes whenever it made progress on any of
them. This fixes cases where drop_caches failed to reclaim lots of
inodes that were otherwise perfectly reclaimable.
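
For context, this path is triggered from userspace by writing 2 or 3
to /proc/sys/vm/drop_caches, which ends up calling drop_slab(). The
termination condition keeps its old shape: keep iterating while a pass
frees a meaningful number of objects, with the required progress rising
exponentially ((freed >> shift++) > 1); only now a pass covers all
online nodes. Below is a standalone userspace sketch of the resulting
control flow, illustrative only: NR_NODES and reclaim_node() are
made-up stand-ins, not kernel code.

/*
 * Standalone sketch of the retry structure after this patch; not
 * kernel code. NR_NODES and reclaim_node() are made-up stand-ins.
 */
#include <stdio.h>

#define NR_NODES 2

/* Pretend per-node slab reclaim; returns the number of objects freed. */
static unsigned long reclaim_node(int nid, int pass)
{
    /* Node 1 holds references that pin 50 objects on node 0. */
    static unsigned long reclaimable[NR_NODES] = { 100, 100 };
    unsigned long freed = reclaimable[nid];

    reclaimable[nid] = 0;
    if (nid == 1 && pass == 0)
        reclaimable[0] = 50;    /* node 0 becomes reclaimable again */
    return freed;
}

int main(void)
{
    unsigned long freed;
    int shift = 0, pass = 0, nid;

    /* The "until no progress" loop now spans all nodes. */
    do {
        freed = 0;
        for (nid = 0; nid < NR_NODES; nid++)
            freed += reclaim_node(nid, pass);
        printf("pass %d: freed %lu objects\n", pass++, freed);
    } while ((freed >> shift++) > 1);
    return 0;
}

With the stand-in numbers above this prints three passes (200, 50 and 0
objects freed): the second pass picks up the node 0 objects that only
became reclaimable after node 1 was shrunk, which is exactly what the
old per-node retry missed.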

Reported-by: You Zhou <you.zhou@intel.com>
Reported-and-tested-by: Pengfei Xu <pengfei.xu@intel.com>
Signed-off-by: Jan Kara <jack@suse.cz>
---
 mm/vmscan.c | 33 ++++++++++++++++++---------------
 1 file changed, 18 insertions(+), 15 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 04d8b88e5216..70d6d035b0fc 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1020,31 +1020,34 @@ static unsigned long shrink_slab(gfp_t gfp_mask, int nid,
 	return freed;
 }
 
-static void drop_slab_node(int nid)
+static unsigned long drop_slab_node(int nid)
 {
-	unsigned long freed;
-	int shift = 0;
+	unsigned long freed = 0;
+	struct mem_cgroup *memcg = NULL;
 
+	memcg = mem_cgroup_iter(NULL, NULL, NULL);
 	do {
-		struct mem_cgroup *memcg = NULL;
-
-		if (fatal_signal_pending(current))
-			return;
+		freed += shrink_slab(GFP_KERNEL, nid, memcg, 0);
+	} while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)) != NULL);
 
-		freed = 0;
-		memcg = mem_cgroup_iter(NULL, NULL, NULL);
-		do {
-			freed += shrink_slab(GFP_KERNEL, nid, memcg, 0);
-		} while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)) != NULL);
-	} while ((freed >> shift++) > 1);
+	return freed;
 }
 
 void drop_slab(void)
 {
 	int nid;
+	int shift = 0;
+	unsigned long freed;
 
-	for_each_online_node(nid)
-		drop_slab_node(nid);
+	do {
+		freed = 0;
+		for_each_online_node(nid) {
+			if (fatal_signal_pending(current))
+				return;
+
+			freed += drop_slab_node(nid);
+		}
+	} while ((freed >> shift++) > 1);
 }
 
 static inline int is_page_cache_freeable(struct folio *folio)
-- 
2.35.3




* Re: [PATCH] mm: Make drop_caches keep reclaiming on all nodes
  2022-11-15 12:32 [PATCH] mm: Make drop_caches keep reclaiming on all nodes Jan Kara
@ 2022-11-15 16:39 ` Shakeel Butt
From: Shakeel Butt @ 2022-11-15 16:39 UTC
  To: Jan Kara; +Cc: Andrew Morton, linux-mm, Vladimir Davydov, You Zhou, Pengfei Xu

On Tue, Nov 15, 2022 at 01:32:55PM +0100, Jan Kara wrote:
> Currently, drop_caches reclaims node by node, looping on each node
> until reclaim can make no further progress there. This can leave quite
> a few slab entries (such as filesystem inodes) unreclaimed if, say,
> objects on node 1 keep objects on node 0 pinned. So move the "loop
> until no progress" retry out to the node-by-node iteration so that
> reclaim is retried on all nodes whenever it made progress on any of
> them. This fixes cases where drop_caches failed to reclaim lots of
> inodes that were otherwise perfectly reclaimable.
> 
> Reported-by: You Zhou <you.zhou@intel.com>
> Reported-and-tested-by: Pengfei Xu <pengfei.xu@intel.com>
> Signed-off-by: Jan Kara <jack@suse.cz>

Reviewed-by: Shakeel Butt <shakeelb@google.com>

