* [RESEND PATCH] mm: bail out from partial cgroup_reclaim inside shrink_lruvec
@ 2026-02-10 5:43 zhaoyang.huang
2026-02-11 22:09 ` T.J. Mercier
2026-02-11 22:13 ` T.J. Mercier
0 siblings, 2 replies; 4+ messages in thread
From: zhaoyang.huang @ 2026-02-10 5:43 UTC (permalink / raw)
To: Andrew Morton, Yu Zhao, Michal Hocko, Rik van Riel, Shakeel Butt,
Roman Gushchin, Johannes Weiner, linux-mm, linux-kernel,
Zhaoyang Huang, steve.kang
From: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
Nowadays, the Android system replaces madvise with memory.reclaim to
implement user-space memory management, which aims to reclaim a certain
amount of a memcg's memory. However, oversized reclaim and high latency
are observed because there is no limit on nr_reclaimed inside
try_to_shrink_lruvec when MGLRU is enabled. Besides, this can also affect
all other non-root_reclaim paths such as reclaim_high.
Commit b82b530740b9 ("mm: vmscan: restore incremental cgroup iteration")
introduces sc->memcg_full_walk to limit the walk range of mem_cgroup_iter.
This commit makes scanning of a single memcg more precise by checking
whether nr_to_reclaim has been reached when sc->memcg_full_walk is not
set.
Signed-off-by: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
---
mm/vmscan.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 670fe9fae5ba..03bda1094621 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -4832,8 +4832,8 @@ static bool should_abort_scan(struct lruvec *lruvec, struct scan_control *sc)
int i;
enum zone_watermarks mark;
- /* don't abort memcg reclaim to ensure fairness */
- if (!root_reclaim(sc))
+ /* don't abort full walk memcg reclaim to ensure fairness */
+ if (!root_reclaim(sc) && sc->memcg_full_walk)
return false;
if (sc->nr_reclaimed >= max(sc->nr_to_reclaim, compact_gap(sc->order)))
--
2.25.1
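
For context, here is a minimal sketch of how the early-bail logic in
should_abort_scan() reads with this change applied. It is condensed from
the hunk above and is illustrative only: the zone watermark checks that
follow in the real function are elided, and only the control flow this
patch touches is shown.

static bool should_abort_scan(struct lruvec *lruvec, struct scan_control *sc)
{
        /* don't abort full walk memcg reclaim to ensure fairness */
        if (!root_reclaim(sc) && sc->memcg_full_walk)
                return false;

        /*
         * Root reclaim, and now partial (non-full-walk) memcg reclaim,
         * may stop once enough pages have been reclaimed.
         */
        if (sc->nr_reclaimed >= max(sc->nr_to_reclaim, compact_gap(sc->order)))
                return true;

        /* ... zone watermark checks elided ... */
        return false;
}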
* Re: [RESEND PATCH] mm: bail out from partial cgroup_reclaim inside shrink_lruvec
2026-02-10 5:43 [RESEND PATCH] mm: bail out from partial cgroup_reclaim inside shrink_lruvec zhaoyang.huang
@ 2026-02-11 22:09 ` T.J. Mercier
2026-02-11 22:13 ` T.J. Mercier
1 sibling, 0 replies; 4+ messages in thread
From: T.J. Mercier @ 2026-02-11 22:09 UTC (permalink / raw)
To: zhaoyang.huang
Cc: Andrew Morton, Yu Zhao, Michal Hocko, Rik van Riel, Shakeel Butt,
Roman Gushchin, Johannes Weiner, linux-mm, linux-kernel,
Zhaoyang Huang, steve.kang
On Mon, Feb 9, 2026 at 9:44 PM zhaoyang.huang <zhaoyang.huang@unisoc.com> wrote:
>
> From: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
Hi Zhaoyang,
> Nowadays, the Android system replaces madvise with memory.reclaim to
> implement user-space memory management, which aims to reclaim a certain
> amount of a memcg's memory. However, oversized reclaim and high latency
> are observed because there is no limit on nr_reclaimed inside
> try_to_shrink_lruvec when MGLRU is enabled. Besides, this can also affect
> all other non-root_reclaim paths such as reclaim_high.
> Commit b82b530740b9 ("mm: vmscan: restore incremental cgroup iteration")
> introduces sc->memcg_full_walk to limit the walk range of mem_cgroup_iter.
> This commit makes scanning of a single memcg more precise by checking
> whether nr_to_reclaim has been reached when sc->memcg_full_walk is not
> set.
>
> Signed-off-by: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
> ---
> mm/vmscan.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 670fe9fae5ba..03bda1094621 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -4832,8 +4832,8 @@ static bool should_abort_scan(struct lruvec *lruvec, struct scan_control *sc)
> int i;
> enum zone_watermarks mark;
>
> - /* don't abort memcg reclaim to ensure fairness */
> - if (!root_reclaim(sc))
> + /* don't abort full walk memcg reclaim to ensure fairness */
> + if (!root_reclaim(sc) && sc->memcg_full_walk)
Can't we just get rid of this if (!root_reclaim(sc)) check entirely now
that commit 'b82b530740b9' ("mm: vmscan: restore incremental cgroup
iteration") provides eventual fairness for the proactive reclaim case? That
wasn't true when this check was added initially.
Thanks,
T.J.
* Re: [RESEND PATCH] mm: bail out from partial cgroup_reclaim inside shrink_lruvec
2026-02-10 5:43 [RESEND PATCH] mm: bail out from partial cgroup_reclaim inside shrink_lruvec zhaoyang.huang
2026-02-11 22:09 ` T.J. Mercier
@ 2026-02-11 22:13 ` T.J. Mercier
2026-02-12 3:01 ` Zhaoyang Huang
1 sibling, 1 reply; 4+ messages in thread
From: T.J. Mercier @ 2026-02-11 22:13 UTC (permalink / raw)
To: zhaoyang.huang
Cc: Andrew Morton, Yu Zhao, Michal Hocko, Rik van Riel, Shakeel Butt,
Roman Gushchin, Johannes Weiner, linux-mm, linux-kernel,
Zhaoyang Huang, steve.kang
On Mon, Feb 9, 2026 at 9:44 PM zhaoyang.huang <zhaoyang.huang@unisoc.com> wrote:
>
> From: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
Hi Zhaoyang,
> Nowadays, the Android system replaces madvise with memory.reclaim to
> implement user-space memory management, which aims to reclaim a certain
> amount of a memcg's memory. However, oversized reclaim and high latency
> are observed because there is no limit on nr_reclaimed inside
> try_to_shrink_lruvec when MGLRU is enabled. Besides, this can also affect
> all other non-root_reclaim paths such as reclaim_high.
> Commit b82b530740b9 ("mm: vmscan: restore incremental cgroup iteration")
> introduces sc->memcg_full_walk to limit the walk range of mem_cgroup_iter.
> This commit makes scanning of a single memcg more precise by checking
> whether nr_to_reclaim has been reached when sc->memcg_full_walk is not
> set.
>
> Signed-off-by: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
> ---
> mm/vmscan.c | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 670fe9fae5ba..03bda1094621 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -4832,8 +4832,8 @@ static bool should_abort_scan(struct lruvec *lruvec, struct scan_control *sc)
> int i;
> enum zone_watermarks mark;
>
> - /* don't abort memcg reclaim to ensure fairness */
> - if (!root_reclaim(sc))
> + /* don't abort full walk memcg reclaim to ensure fairness */
> + if (!root_reclaim(sc) && sc->memcg_full_walk)
> return false;
Can't we just get rid of this if (!root_reclaim(sc)) check entirely
now that commit 'b82b530740b9' ("mm: vmscan: restore incremental cgroup
iteration") provides eventual fairness for the proactive reclaim case?
That wasn't true when this check was added initially.
Thanks,
T.J.
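
For reference, a minimal sketch of what this suggestion would amount to,
assuming the same should_abort_scan() shown in the patch: the memcg
special case is dropped entirely, so every reclaim type may stop once the
target (or the compaction gap) has been met. This only illustrates the
idea and is not a tested change.

static bool should_abort_scan(struct lruvec *lruvec, struct scan_control *sc)
{
        /*
         * With incremental cgroup iteration providing eventual fairness,
         * no memcg-reclaim special case is needed here any more.
         */
        if (sc->nr_reclaimed >= max(sc->nr_to_reclaim, compact_gap(sc->order)))
                return true;

        /* ... zone watermark checks elided ... */
        return false;
}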
* Re: [RESEND PATCH] mm: bail out from partial cgroup_reclaim inside shrink_lruvec
2026-02-11 22:13 ` T.J. Mercier
@ 2026-02-12 3:01 ` Zhaoyang Huang
0 siblings, 0 replies; 4+ messages in thread
From: Zhaoyang Huang @ 2026-02-12 3:01 UTC (permalink / raw)
To: T.J. Mercier
Cc: zhaoyang.huang, Andrew Morton, Yu Zhao, Michal Hocko,
Rik van Riel, Shakeel Butt, Roman Gushchin, Johannes Weiner,
linux-mm, linux-kernel, steve.kang
On Thu, Feb 12, 2026 at 6:13 AM T.J. Mercier <tjmercier@google.com> wrote:
>
> On Mon, Feb 9, 2026 at 9:44 PM zhaoyang.huang <zhaoyang.huang@unisoc.com> wrote:
> >
> > From: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
>
> Hi Zhaoyang,
>
> > Nowadays, the Android system replaces madvise with memory.reclaim to
> > implement user-space memory management, which aims to reclaim a certain
> > amount of a memcg's memory. However, oversized reclaim and high latency
> > are observed because there is no limit on nr_reclaimed inside
> > try_to_shrink_lruvec when MGLRU is enabled. Besides, this can also affect
> > all other non-root_reclaim paths such as reclaim_high.
> > Commit b82b530740b9 ("mm: vmscan: restore incremental cgroup iteration")
> > introduces sc->memcg_full_walk to limit the walk range of mem_cgroup_iter.
> > This commit makes scanning of a single memcg more precise by checking
> > whether nr_to_reclaim has been reached when sc->memcg_full_walk is not
> > set.
> >
> > Signed-off-by: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
> > ---
> > mm/vmscan.c | 4 ++--
> > 1 file changed, 2 insertions(+), 2 deletions(-)
> >
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index 670fe9fae5ba..03bda1094621 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -4832,8 +4832,8 @@ static bool should_abort_scan(struct lruvec *lruvec, struct scan_control *sc)
> > int i;
> > enum zone_watermarks mark;
> >
> > - /* don't abort memcg reclaim to ensure fairness */
> > - if (!root_reclaim(sc))
> > + /* don't abort full walk memcg reclaim to ensure fairness */
> > + if (!root_reclaim(sc) && sc->memcg_full_walk)
> > return false;
>
> Can't we just get rid of this if (!root_reclaim(sc)) check entirely
> now that commit 'b82b530740b9' ("mm: vmscan: restore incremental cgroup
> iteration") provides eventual fairness for the proactive reclaim case?
> That wasn't true when this check was added initially.
Thanks for the suggestion, which works. I will resend the patch.
>
> Thanks,
> T.J.