From: wangzicheng <wangzicheng@honor.com>
To: "T.J. Mercier" <tjmercier@google.com>, Yuanchu Xie <yuanchu@google.com>
Cc: Barry Song <21cnbao@gmail.com>,
"lsf-pc@lists.linux-foundation.org"
<lsf-pc@lists.linux-foundation.org>,
"linux-mm@kvack.org" <linux-mm@kvack.org>,
wangxin 00023513 <wangxin23@honor.com>, gao xu <gaoxu2@honor.com>,
wangtao <tao.wangtao@honor.com>,
liulu 00013167 <liulu.liu@honor.com>,
zhouxiaolong <zhouxiaolong9@honor.com>,
linkunli <linkunli@honor.com>,
"kasong@tencent.com" <kasong@tencent.com>,
"akpm@linux-foundation.org" <akpm@linux-foundation.org>,
"axelrasmussen@google.com" <axelrasmussen@google.com>,
"weixugc@google.com" <weixugc@google.com>,
Randy Dunlap <rdunlap@infradead.org>,
"Liam.Howlett@oracle.com" <Liam.Howlett@oracle.com>,
"willy@infradead.org" <willy@infradead.org>
Subject: RE: [LSF/MM/BPF TOPIC] MGLRU on Android: Real-World Problems and Challenges
Date: Fri, 27 Feb 2026 10:03:11 +0000 [thread overview]
Message-ID: <2e1aea6dec5849fda1c9bfffc58f9be0@honor.com> (raw)
In-Reply-To: <CABdmKX2mOD8CwAHPbn8Cha1uuMP8_mSXdH=914z5x_qTkJf7wA@mail.gmail.com>
> > >
> > > But this is not specific to MGLRU; it can also be an issue for the
> > > active/inactive LRU?
> >
> > Pardon my lack of familiarity with the use case, in what ways are
> > existing memcg protection features insufficient?
>
> I think he means that whichever memcg is next up on the memcg LRU is
> the one that gets reclaimed from first, regardless of whether that
> memcg is for a foreground app. Because there is currently no strong
> relationship between memcg LRU ordering and Android app state. Android
> updates oom_score_adj on app state transitions, so it'd be possible
> for the kernel to use that information from userspace to influence
> memcg LRU ordering, but I'm not really a great fan of that idea.
>
> Right, reclaim from foreground apps is not exclusively a MGLRU
> problem. There is an effort in both MGLRU and active/inactive LRU to
> provide some sort of fairness (now both are eventually-fair) for
> global reclaim, so that no memcg is the subject of disproportionate
> reclaim.
Hi T.J.,
Yes, exactly.
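To make the concern concrete, here is a toy model of round-robin memcg
reclaim (purely illustrative userspace Python, not kernel code): the memcg
at the head of the memcg LRU is reclaimed from first, even when it happens
to back the foreground app.

```python
from collections import deque

def reclaim(memcg_lru, nr_to_reclaim):
    """Toy round-robin reclaim: take pages from the memcg at the head
    of the LRU, regardless of which app it belongs to."""
    reclaimed = {}
    while nr_to_reclaim > 0 and memcg_lru:
        name, pages, state = memcg_lru.popleft()
        take = min(pages, nr_to_reclaim)
        reclaimed[name] = take
        nr_to_reclaim -= take
        if pages - take > 0:
            # Rotate the partially reclaimed memcg to the tail.
            memcg_lru.append((name, pages - take, state))
    return reclaimed

# The foreground app happens to be next on the memcg LRU, so it is
# reclaimed from first, even though userspace would prefer it protected.
lru = deque([("foreground", 100, "fg"), ("cached_app", 100, "bg")])
print(reclaim(lru, 50))  # all 50 pages come from "foreground"
```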
>
> I'm planning to investigate applying memory.low to foreground /
> top-app for reducing the amount of reclaim from foreground apps.
> However at the same time, we are also working on applying memory
> limits to all apps (memory.high) so that no single app (including a
> foreground app) can use all the system memory as a matter of Android
> policy.
>
> Thanks,
> T.J.
That sounds good to me; we're experimenting with a similar approach.
In my opinion, these knobs are a good opportunity to apply ML-driven
autotuning (in userspace, rather than in kernel space as the kernel-ML
patches did):
- explore a reasonable memory.low / memory.high range for each app,
  saving memory while still keeping good UX (hot launch / animations);
- classify apps into a few categories, and then reuse the tuned limits
  for apps in the same class.
Best,
Zicheng
>
> > > Additionally, I’d like to add Q5 based on my observations:
> > > Q5:
> > > MGLRU places readahead folios in the newest generation. For example, if
> > > a page fault occurs at address 5, readahead fetches addresses 1–16, and
> > > all 16 folios are put in the youngest generation, even though many may
> > > not be needed. This can seriously impact reclamation performance, as
> > > these cold readahead folios occupy active slots.
> > >
> > > See the code below and the checks performed by lru_gen_in_fault().
> > >
> > > void folio_add_lru(struct folio *folio)
> > > {
> > > 	...
> > > 	/* see the comment in lru_gen_folio_seq() */
> > > 	if (lru_gen_enabled() && !folio_test_unevictable(folio) &&
> > > 	    lru_gen_in_fault() && !(current->flags & PF_MEMALLOC))
> > > 		folio_set_active(folio);
> > >
> > > 	folio_batch_add_and_move(folio, lru_add);
> > > }
> > > EXPORT_SYMBOL(folio_add_lru);
> > >
> > > I could submit a patchset to address this by initially marking
> > > only the folio at address 5 as active, and activating the other
> > > folios later when they are actually mapped or accessed.
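A quick toy model of the proposal (illustrative Python, not kernel code):
compare how many folios land in the youngest generation when the whole
readahead window is activated versus only the faulting folio.

```python
def add_readahead(pages, fault_addr, activate_all):
    """Toy model of folio_add_lru() for a readahead window: return the
    set of addresses marked active (i.e. placed in the youngest
    generation)."""
    if activate_all:
        # Current behavior: every readahead folio is set active.
        return set(pages)
    # Proposed behavior: only the actually-faulted folio is activated;
    # the rest start inactive and are promoted on real access.
    return {fault_addr}

window = list(range(1, 17))       # readahead fetches addresses 1-16
print(len(add_readahead(window, 5, activate_all=True)))   # 16 active
print(len(add_readahead(window, 5, activate_all=False)))  # 1 active
```

With the proposed behavior, the 15 speculative folios stay cold and are
cheap to reclaim if they are never touched.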
> >
> > I need to read it more closely but that's a good idea.
> >
> > Thanks,
> > Yuanchu
> >
Thread overview: 13+ messages
2026-02-24 3:17 wangzicheng
2026-02-24 17:10 ` Suren Baghdasaryan
2026-02-25 10:46 ` wangzicheng
2026-02-26 2:04 ` Kalesh Singh
2026-02-26 13:06 ` wangzicheng
2026-02-24 20:23 ` Barry Song
2026-02-25 10:43 ` wangzicheng
2026-02-26 8:03 ` Barry Song
2026-02-26 13:29 ` wangzicheng
2026-02-26 22:10 ` Yuanchu Xie
2026-02-27 0:13 ` T.J. Mercier
2026-02-27 10:03 ` wangzicheng [this message]
-- strict thread matches above, loose matches on Subject: below --
2026-02-14 10:06 wangzicheng