From: Chen Ridong <chenridong@huaweicloud.com>
To: Shakeel Butt <shakeel.butt@linux.dev>
Cc: Johannes Weiner <hannes@cmpxchg.org>,
akpm@linux-foundation.org, axelrasmussen@google.com,
yuanchu@google.com, weixugc@google.com, david@kernel.org,
lorenzo.stoakes@oracle.com, Liam.Howlett@oracle.com,
vbabka@suse.cz, rppt@kernel.org, surenb@google.com,
mhocko@suse.com, corbet@lwn.net, roman.gushchin@linux.dev,
muchun.song@linux.dev, zhengqi.arch@bytedance.com,
linux-mm@kvack.org, linux-doc@vger.kernel.org,
linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
lujialin4@huawei.com, zhongjinji@honor.com
Subject: Re: [PATCH -next 3/5] mm/mglru: extend shrink_one for both lrugen and non-lrugen
Date: Tue, 23 Dec 2025 09:00:47 +0800
Message-ID: <135f565e-b660-4773-8f98-fcbef9772f42@huaweicloud.com>
In-Reply-To: <7kwk3bkvhvflsyxgljnxzvrxco2u2rxjcdwqooeboyrkf2oxjj@2nywxl2sc6g5>
On 2025/12/23 5:36, Shakeel Butt wrote:
> On Tue, Dec 16, 2025 at 09:14:45AM +0800, Chen Ridong wrote:
>>
>>
>> On 2025/12/16 5:13, Johannes Weiner wrote:
>>> On Tue, Dec 09, 2025 at 01:25:55AM +0000, Chen Ridong wrote:
>>>> From: Chen Ridong <chenridong@huawei.com>
>>>>
>>>> Currently, flush_reclaim_state is placed differently between
>>>> shrink_node_memcgs and shrink_many. shrink_many (only used for gen-LRU)
>>>> calls it after each lruvec is shrunk, while shrink_node_memcgs calls it
>>>> only after all lruvecs have been shrunk.
>>>>
>>>> This patch moves flush_reclaim_state into shrink_node_memcgs and calls it
>>>> after each lruvec. This unifies the behavior and is reasonable because:
>>>>
>>>> 1. flush_reclaim_state adds current->reclaim_state->reclaimed to
>>>> sc->nr_reclaimed.
>>>> 2. For non-MGLRU root reclaim, this can help stop the iteration earlier
>>>> when nr_to_reclaim is reached.
>>>> 3. For non-root reclaim, the effect is negligible since flush_reclaim_state
>>>> does nothing in that case.
>>>>
>>>> After moving flush_reclaim_state into shrink_node_memcgs, shrink_one can be
>>>> extended to support both lrugen and non-lrugen paths. It will call
>>>> try_to_shrink_lruvec for lrugen root reclaim and shrink_lruvec otherwise.
>>>>
>>>> Signed-off-by: Chen Ridong <chenridong@huawei.com>
>>>> ---
>>>> mm/vmscan.c | 57 +++++++++++++++++++++--------------------------------
>>>> 1 file changed, 23 insertions(+), 34 deletions(-)
>>>>
>>>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>>>> index 584f41eb4c14..795f5ebd9341 100644
>>>> --- a/mm/vmscan.c
>>>> +++ b/mm/vmscan.c
>>>> @@ -4758,23 +4758,7 @@ static bool try_to_shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
>>>>          return nr_to_scan < 0;
>>>>  }
>>>>
>>>> -static void shrink_one(struct lruvec *lruvec, struct scan_control *sc)
>>>> -{
>>>> -        unsigned long scanned = sc->nr_scanned;
>>>> -        unsigned long reclaimed = sc->nr_reclaimed;
>>>> -        struct pglist_data *pgdat = lruvec_pgdat(lruvec);
>>>> -        struct mem_cgroup *memcg = lruvec_memcg(lruvec);
>>>> -
>>>> -        try_to_shrink_lruvec(lruvec, sc);
>>>> -
>>>> -        shrink_slab(sc->gfp_mask, pgdat->node_id, memcg, sc->priority);
>>>> -
>>>> -        if (!sc->proactive)
>>>> -                vmpressure(sc->gfp_mask, memcg, false, sc->nr_scanned - scanned,
>>>> -                           sc->nr_reclaimed - reclaimed);
>>>> -
>>>> -        flush_reclaim_state(sc);
>>>> -}
>>>> +static void shrink_one(struct lruvec *lruvec, struct scan_control *sc);
>>>>
>>>>  static void shrink_many(struct pglist_data *pgdat, struct scan_control *sc)
>>>>  {
>>>> @@ -5760,6 +5744,27 @@ static inline bool should_continue_reclaim(struct pglist_data *pgdat,
>>>>          return inactive_lru_pages > pages_for_compaction;
>>>>  }
>>>>
>>>> +static void shrink_one(struct lruvec *lruvec, struct scan_control *sc)
>>>> +{
>>>> +        unsigned long scanned = sc->nr_scanned;
>>>> +        unsigned long reclaimed = sc->nr_reclaimed;
>>>> +        struct pglist_data *pgdat = lruvec_pgdat(lruvec);
>>>> +        struct mem_cgroup *memcg = lruvec_memcg(lruvec);
>>>> +
>>>> +        if (lru_gen_enabled() && root_reclaim(sc))
>>>> +                try_to_shrink_lruvec(lruvec, sc);
>>>> +        else
>>>> +                shrink_lruvec(lruvec, sc);
>>>
>>
>> Hi Johannes, thank you for your reply.
>>
>>> Yikes. So we end up with:
>>>
>>> shrink_node_memcgs()
>>>   shrink_one()
>>>     if lru_gen_enabled && root_reclaim(sc)
>>>       try_to_shrink_lruvec(lruvec, sc)
>>>     else
>>>       shrink_lruvec()
>>>         if lru_gen_enabled && !root_reclaim(sc)
>>>           lru_gen_shrink_lruvec(lruvec, sc)
>>>             try_to_shrink_lruvec()
>>>
>>> I think it's doing too much at once. Can you get it into the following
>>> shape:
>>>
>>
>> You're absolutely right. This refactoring is indeed what patch 5/5 implements.
>>
>> With patch 5/5 applied, the flow becomes:
>>
>> shrink_node_memcgs()
>>   shrink_one()
>>     if lru_gen_enabled
>>       lru_gen_shrink_lruvec()      --> symmetric with shrink_lruvec() in the else branch
>>         if (root_reclaim(sc))      --> handle root reclaim
>>           try_to_shrink_lruvec()
>>         else
>>           ...
>>             try_to_shrink_lruvec()
>>     else
>>       shrink_lruvec()
>>
>> This matches the structure you described.
>>
>> One note: shrink_one() is also called from lru_gen_shrink_node() when memcg is disabled, so I
>> believe it makes sense to keep this helper.
>
> I think we don't need shrink_one, as it can be inlined into its callers, and
> shrink_node_memcgs() already handles the mem_cgroup_disabled() case, so
> lru_gen_shrink_node() should not need shrink_one for that case.
>
I think you mean:
shrink_node
  lru_gen_shrink_node
    // We do not need to handle the memcg-disabled case here,
    // because shrink_node_memcgs can already handle it.
    shrink_node_memcgs
      for each memcg:
        if lru_gen_enabled:
          lru_gen_shrink_lruvec()
        else
          shrink_lruvec()
        shrink_slab(sc->gfp_mask, pgdat->node_id, memcg, sc->priority);
        if (!sc->proactive)
          vmpressure(...)
        flush_reclaim_state(sc);
With this structure, neither shrink_many nor shrink_one is needed any more. That looks much cleaner.
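For concreteness, here is a rough sketch of the shrink_node_memcgs loop I have in mind. This is only a sketch: it assumes the current helper signatures, ignores the partial-walk cookie and the existing memcg protection/limit checks, and the final code may differ:

static void shrink_node_memcgs(struct pglist_data *pgdat, struct scan_control *sc)
{
        struct mem_cgroup *target_memcg = sc->target_mem_cgroup;
        struct mem_cgroup *memcg;

        memcg = mem_cgroup_iter(target_memcg, NULL, NULL);
        do {
                struct lruvec *lruvec = mem_cgroup_lruvec(memcg, pgdat);
                unsigned long scanned = sc->nr_scanned;
                unsigned long reclaimed = sc->nr_reclaimed;

                /* the existing protection/limit checks stay here */

                if (lru_gen_enabled())
                        lru_gen_shrink_lruvec(lruvec, sc);
                else
                        shrink_lruvec(lruvec, sc);

                shrink_slab(sc->gfp_mask, pgdat->node_id, memcg, sc->priority);

                if (!sc->proactive)
                        vmpressure(sc->gfp_mask, memcg, false,
                                   sc->nr_scanned - scanned,
                                   sc->nr_reclaimed - reclaimed);

                /* fold reclaim_state->reclaimed into nr_reclaimed per lruvec */
                flush_reclaim_state(sc);
        } while ((memcg = mem_cgroup_iter(target_memcg, memcg, NULL)));
}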
I will update it accordingly.
Thank you very much.
>>
>>> shrink_node_memcgs()
>>>   for each memcg:
>>>     if lru_gen_enabled:
>>>       lru_gen_shrink_lruvec()
>>>     else
>>>       shrink_lruvec()
>>>
>
> I actually like what Johannes has requested above but if that is not
> possible without changing some behavior then let's aim to do as much as
> possible in this series while keeping the same behavior. In a followup
> we can try to combine the behavior part.
>
>>
>> Regarding the patch split, I have kept patches 3/5 and 5/5 separate to make the changes
>> in each step clearer. Would you prefer that I merge patch 3/5 with patch 5/5, so the full
>> refactoring appears in one patch?
>>
>> Looking forward to your guidance.
--
Best regards,
Ridong