linux-mm.kvack.org archive mirror
From: Chen Ridong <chenridong@huaweicloud.com>
To: Shakeel Butt <shakeel.butt@linux.dev>
Cc: akpm@linux-foundation.org, axelrasmussen@google.com,
	yuanchu@google.com, weixugc@google.com, david@kernel.org,
	lorenzo.stoakes@oracle.com, Liam.Howlett@oracle.com,
	vbabka@suse.cz, rppt@kernel.org, surenb@google.com,
	mhocko@suse.com, corbet@lwn.net, hannes@cmpxchg.org,
	roman.gushchin@linux.dev, muchun.song@linux.dev,
	zhengqi.arch@bytedance.com, linux-mm@kvack.org,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	cgroups@vger.kernel.org, lujialin4@huawei.com,
	zhongjinji@honor.com
Subject: Re: [PATCH -next 1/5] mm/mglru: use mem_cgroup_iter for global reclaim
Date: Mon, 22 Dec 2025 15:27:26 +0800	[thread overview]
Message-ID: <702b6c0b-5e65-4f55-9a2f-4d07c3a84e39@huaweicloud.com> (raw)
In-Reply-To: <gkudpvytcc3aa5yjaigwtkyyyglmvnnqngrexfuqiv2mzxj5cn@e7rezszexd7l>



On 2025/12/22 11:12, Shakeel Butt wrote:
> On Tue, Dec 09, 2025 at 01:25:53AM +0000, Chen Ridong wrote:
>> From: Chen Ridong <chenridong@huawei.com>
>>
>> The memcg LRU was originally introduced for global reclaim to enhance
>> scalability. However, its implementation complexity has led to performance
>> regressions when dealing with a large number of memory cgroups [1].
>>
>> As suggested by Johannes [1], this patch adopts mem_cgroup_iter with
>> cookie-based iteration for global reclaim, aligning with the approach
>> already used in shrink_node_memcgs. This simplification removes the
>> dedicated memcg LRU tracking while maintaining the core functionality.
>>
>> I performed a stress test based on Yu Zhao's methodology [2] on a
>> 1 TB, 4-node NUMA system. The results are summarized below:
>>
>> 	pgsteal:
>> 						memcg LRU    memcg iter
>> 	stddev(pgsteal) / mean(pgsteal)		106.03%		93.20%
>> 	sum(pgsteal) / sum(requested)		98.10%		99.28%
>>
>> 	workingset_refault_anon:
>> 						memcg LRU    memcg iter
>> 	stddev(refault) / mean(refault)		193.97%		134.67%
>> 	sum(refault)				1963229		2027567
>>
>> The new implementation shows a clear fairness improvement, reducing the
>> standard deviation relative to the mean by 12.8 percentage points. The
>> pgsteal ratio is also closer to 100%. Refault counts increased by 3.3%
>> (from 1,963,229 to 2,027,567).
>>
>> The primary benefits of this change are:
>> 1. Simplified codebase by removing custom memcg LRU infrastructure
>> 2. Improved fairness in memory reclaim across multiple cgroups
>> 3. Better performance when creating many memory cgroups
>>
>> [1] https://lore.kernel.org/r/20251126171513.GC135004@cmpxchg.org
>> [2] https://lore.kernel.org/r/20221222041905.2431096-7-yuzhao@google.com
>> Suggested-by: Johannes Weiner <hannes@cmpxchg.org>
>> Signed-off-by: Chen Ridong <chenridong@huawei.com>
>> Acked-by: Johannes Weiner <hannes@cmpxchg.org>
>> ---
>>  mm/vmscan.c | 117 ++++++++++++++++------------------------------------
>>  1 file changed, 36 insertions(+), 81 deletions(-)
>>
>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>> index fddd168a9737..70b0e7e5393c 100644
>> --- a/mm/vmscan.c
>> +++ b/mm/vmscan.c
>> @@ -4895,27 +4895,14 @@ static bool try_to_shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
>>  	return nr_to_scan < 0;
>>  }
>>  
>> -static int shrink_one(struct lruvec *lruvec, struct scan_control *sc)
>> +static void shrink_one(struct lruvec *lruvec, struct scan_control *sc)
>>  {
>> -	bool success;
>>  	unsigned long scanned = sc->nr_scanned;
>>  	unsigned long reclaimed = sc->nr_reclaimed;
>> -	struct mem_cgroup *memcg = lruvec_memcg(lruvec);
>>  	struct pglist_data *pgdat = lruvec_pgdat(lruvec);
>> +	struct mem_cgroup *memcg = lruvec_memcg(lruvec);
>>  
>> -	/* lru_gen_age_node() called mem_cgroup_calculate_protection() */
>> -	if (mem_cgroup_below_min(NULL, memcg))
>> -		return MEMCG_LRU_YOUNG;
>> -
>> -	if (mem_cgroup_below_low(NULL, memcg)) {
>> -		/* see the comment on MEMCG_NR_GENS */
>> -		if (READ_ONCE(lruvec->lrugen.seg) != MEMCG_LRU_TAIL)
>> -			return MEMCG_LRU_TAIL;
>> -
>> -		memcg_memory_event(memcg, MEMCG_LOW);
>> -	}
>> -
>> -	success = try_to_shrink_lruvec(lruvec, sc);
>> +	try_to_shrink_lruvec(lruvec, sc);
>>  
>>  	shrink_slab(sc->gfp_mask, pgdat->node_id, memcg, sc->priority);
>>  
>> @@ -4924,86 +4911,55 @@ static int shrink_one(struct lruvec *lruvec, struct scan_control *sc)
>>  			   sc->nr_reclaimed - reclaimed);
>>  
>>  	flush_reclaim_state(sc);
>> -
>> -	if (success && mem_cgroup_online(memcg))
>> -		return MEMCG_LRU_YOUNG;
>> -
>> -	if (!success && lruvec_is_sizable(lruvec, sc))
>> -		return 0;
>> -
>> -	/* one retry if offlined or too small */
>> -	return READ_ONCE(lruvec->lrugen.seg) != MEMCG_LRU_TAIL ?
>> -	       MEMCG_LRU_TAIL : MEMCG_LRU_YOUNG;
>>  }
>>  
>>  static void shrink_many(struct pglist_data *pgdat, struct scan_control *sc)
>>  {
>> -	int op;
>> -	int gen;
>> -	int bin;
>> -	int first_bin;
>> -	struct lruvec *lruvec;
>> -	struct lru_gen_folio *lrugen;
>> +	struct mem_cgroup *target = sc->target_mem_cgroup;
>> +	struct mem_cgroup_reclaim_cookie reclaim = {
>> +		.pgdat = pgdat,
>> +	};
>> +	struct mem_cgroup_reclaim_cookie *cookie = &reclaim;
> 
> Please keep the naming same as shrink_node_memcgs i.e. use 'partial'
> here.
> 

Thank you, will update.
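
For reference, with the rename applied the hunk would mirror shrink_node_memcgs; a sketch of the revised declarations (not the final patch, all names other than 'partial' taken from the hunk above):

```
	struct mem_cgroup *target = sc->target_mem_cgroup;
	struct mem_cgroup_reclaim_cookie reclaim = {
		.pgdat = pgdat,
	};
	struct mem_cgroup_reclaim_cookie *partial = &reclaim;

	/* kswapd and full walks iterate everything, so no shared cookie */
	if (current_is_kswapd() || sc->memcg_full_walk)
		partial = NULL;

	memcg = mem_cgroup_iter(target, NULL, partial);
```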

>>  	struct mem_cgroup *memcg;
>> -	struct hlist_nulls_node *pos;
>>  
>> -	gen = get_memcg_gen(READ_ONCE(pgdat->memcg_lru.seq));
>> -	bin = first_bin = get_random_u32_below(MEMCG_NR_BINS);
>> -restart:
>> -	op = 0;
>> -	memcg = NULL;
>> -
>> -	rcu_read_lock();
>> +	if (current_is_kswapd() || sc->memcg_full_walk)
>> +		cookie = NULL;
>>  
>> -	hlist_nulls_for_each_entry_rcu(lrugen, pos, &pgdat->memcg_lru.fifo[gen][bin], list) {
>> -		if (op) {
>> -			lru_gen_rotate_memcg(lruvec, op);
>> -			op = 0;
>> -		}
>> +	memcg = mem_cgroup_iter(target, NULL, cookie);
>> +	while (memcg) {
> 
> Please use the do-while loop same as shrink_node_memcgs and then change
> the goto next below to continue similar to shrink_node_memcgs.
> 

Will update.

>> +		struct lruvec *lruvec = mem_cgroup_lruvec(memcg, pgdat);
>>  
>> -		mem_cgroup_put(memcg);
>> -		memcg = NULL;
>> +		cond_resched();
>>  
>> -		if (gen != READ_ONCE(lrugen->gen))
>> -			continue;
>> +		mem_cgroup_calculate_protection(target, memcg);
>>  
>> -		lruvec = container_of(lrugen, struct lruvec, lrugen);
>> -		memcg = lruvec_memcg(lruvec);
>> +		if (mem_cgroup_below_min(target, memcg))
>> +			goto next;
>>  
>> -		if (!mem_cgroup_tryget(memcg)) {
>> -			lru_gen_release_memcg(memcg);
>> -			memcg = NULL;
>> -			continue;
>> +		if (mem_cgroup_below_low(target, memcg)) {
>> +			if (!sc->memcg_low_reclaim) {
>> +				sc->memcg_low_skipped = 1;
>> +				goto next;
>> +			}
>> +			memcg_memory_event(memcg, MEMCG_LOW);
>>  		}
>>  
>> -		rcu_read_unlock();
>> +		shrink_one(lruvec, sc);
>>  
>> -		op = shrink_one(lruvec, sc);
>> -
>> -		rcu_read_lock();
>> -
>> -		if (should_abort_scan(lruvec, sc))
>> +		if (should_abort_scan(lruvec, sc)) {
>> +			if (cookie)
>> +				mem_cgroup_iter_break(target, memcg);
>>  			break;
> 
> This seems buggy as we may break the loop without calling
> mem_cgroup_iter_break(). I think for kswapd the cookie will be NULL and
> if should_abort_scan() returns true, we will break the loop without
> calling mem_cgroup_iter_break() and will leak a reference to memcg.
> 

Thank you for catching that; my mistake.
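
Since mem_cgroup_iter_break() only drops the reference the iterator took on memcg, it should be safe to call it unconditionally on the early exit, whether or not a cookie is in use; a sketch of the fix, to be confirmed in the next revision:

```
		if (should_abort_scan(lruvec, sc)) {
			/* drop the iterator's reference even in the
			 * cookie-less (kswapd / full-walk) case */
			mem_cgroup_iter_break(target, memcg);
			break;
		}
```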

This also raises another point: under kswapd, the traditional LRU iterates through all memcgs, while
MGLRU stops once should_abort_scan() is satisfied (i.e., enough pages have been reclaimed or the
watermarks are met). Shouldn't the two behave consistently?

Perhaps we should add a should_abort_scan(lruvec, sc) check to shrink_node_memcgs for the
traditional LRU as well?
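
If the two paths were to be aligned, the check might sit after the per-memcg reclaim inside the shrink_node_memcgs() loop; purely illustrative, and whether such a bailout is desirable for the traditional LRU is exactly the open question:

```
		shrink_lruvec(lruvec, sc);

		shrink_slab(sc->gfp_mask, pgdat->node_id, memcg,
			    sc->priority);

		/* hypothetical: bail out once enough has been reclaimed */
		if (should_abort_scan(lruvec, sc)) {
			mem_cgroup_iter_break(target_memcg, memcg);
			break;
		}
```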

-- 
Best regards,
Ridong




Thread overview: 25+ messages
2025-12-09  1:25 [PATCH -next 0/5] mm/mglru: remove memcg lru Chen Ridong
2025-12-09  1:25 ` [PATCH -next 1/5] mm/mglru: use mem_cgroup_iter for global reclaim Chen Ridong
2025-12-22  3:12   ` Shakeel Butt
2025-12-22  7:27     ` Chen Ridong [this message]
2025-12-22 21:18       ` Shakeel Butt
2025-12-23  0:45         ` Chen Ridong
2025-12-09  1:25 ` [PATCH -next 2/5] mm/mglru: remove memcg lru Chen Ridong
2025-12-22  3:24   ` Shakeel Butt
2025-12-09  1:25 ` [PATCH -next 3/5] mm/mglru: extend shrink_one for both lrugen and non-lrugen Chen Ridong
2025-12-12  2:55   ` kernel test robot
2025-12-12  9:53     ` Chen Ridong
2025-12-15 21:13   ` Johannes Weiner
2025-12-16  1:14     ` Chen Ridong
2025-12-22 21:36       ` Shakeel Butt
2025-12-23  1:00         ` Chen Ridong
2025-12-22  3:49   ` Shakeel Butt
2025-12-22  7:44     ` Chen Ridong
2025-12-09  1:25 ` [PATCH -next 4/5] mm/mglru: combine shrink_many into shrink_node_memcgs Chen Ridong
2025-12-15 21:17   ` Johannes Weiner
2025-12-16  1:23     ` Chen Ridong
2025-12-22  7:40     ` Chen Ridong
2025-12-09  1:25 ` [PATCH -next 5/5] mm/mglru: factor lrugen state out of shrink_lruvec Chen Ridong
2025-12-12 10:15 ` [PATCH -next 0/5] mm/mglru: remove memcg lru Chen Ridong
2025-12-15 16:18 ` Michal Koutný
2025-12-16  0:45   ` Chen Ridong
