From: David Rientjes <rientjes@google.com>
To: Yu Zhao <yuzhao@google.com>
Cc: "Henry Huang" <henry.hj@antgroup.com>,
yuanchu@google.com, akpm@linux-foundation.org,
谈鉴锋 <henry.tjf@antgroup.com>,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
"朱辉(茶水)" <teawater@antgroup.com>
Subject: Re: [RFC v2] mm: Multi-Gen LRU: fix use mm/page_idle/bitmap
Date: Thu, 21 Dec 2023 21:14:52 -0800 (PST)
Message-ID: <931f2e6d-30a1-5f10-e879-65cb11c89b85@google.com>
In-Reply-To: <CAOUHufZwXBs4x7GeawrjZNEwTBdV=mf-DYrZuF4j=10URHwQTw@mail.gmail.com>
On Thu, 21 Dec 2023, Yu Zhao wrote:
> > Thanks for replying.
> >
> > On Fri, Dec 22, 2023 at 07:16 AM Yuanchu Xie wrote:
> > > How does the shared memory get charged to the cgroups?
> > > Does it all go to cgroup A or B exclusively, or do some pages get
> > > charged to each one?
> >
> > Some pages get charged to cgroup A, and the others get charged to cgroup B.
>
> Just a side note:
> We can potentially "fix" this, but it doesn't mean this is a good
> practice. In fact, I think this is an anti-pattern for memcgs:
> resources should preferably be isolated between memcgs, or if a
> resource has to be shared between memcgs, it should be charged in a
> predetermined way, not randomly to one of the memcgs sharing it.
>
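To pin down what "randomly" means in practice: page cache is charged to
whichever memcg first faults a page in, so after proactive reclaim the
charge can follow the next toucher. Below is a minimal sketch of that, not
Henry's actual setup: the cgroup paths, file name and sizes are made up for
illustration, and it assumes cgroup v2 mounted at /sys/fs/cgroup with the
memory controller enabled.

/*
 * First-touch charging sketch: a file shared by tasks in A and A/B ends up
 * charged to whichever memcg faulted each page in first. Error handling is
 * omitted; run as root.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static void write_str(const char *path, const char *val)
{
    int fd = open(path, O_WRONLY);

    write(fd, val, strlen(val));
    close(fd);
}

static void move_self_to(const char *cgroup)
{
    char path[256], pid[16];

    snprintf(path, sizeof(path), "%s/cgroup.procs", cgroup);
    snprintf(pid, sizeof(pid), "%d", (int)getpid());
    write_str(path, pid);
}

static void read_whole_file(const char *file)
{
    char buf[4096];
    int fd = open(file, O_RDONLY);

    while (read(fd, buf, sizeof(buf)) > 0)
        ;    /* fault the file's pages into the page cache */
    close(fd);
}

int main(void)
{
    /* While in A, bring the shared file in: these pages are charged to A. */
    move_self_to("/sys/fs/cgroup/A");
    read_whole_file("/data/shared.bin");

    /*
     * A one-shot write to memory.reclaim stands in for the periodic
     * proactive reclaim job and evicts the now-cold page cache from A.
     */
    write_str("/sys/fs/cgroup/A/memory.reclaim", "256M");

    /*
     * The next toucher happens to be a task in A/B, so the refaulted
     * pages are charged to A/B, i.e. the charge has moved "randomly".
     */
    move_self_to("/sys/fs/cgroup/A/B");
    read_whole_file("/data/shared.bin");
    return 0;
}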
Very interesting thread. A few questions for Henry so I can understand the
situation better:
- is the lack of deterministic charging a problem for you? Are you
initially faulting the memory in a manner that charges it to the "right"
memcg, and does the refault after periodic reclaim then cause the charge
to appear "randomly," i.e. on whichever process happened to access it
next?
- are pages ever shared between different memcg hierarchies? You
mentioned sharing between processes in A and A/B, but I'm wondering
if there is sharing between two different memcg hierarchies where root
is the only common ancestor?
- do you anticipate a shorter scan period at some point? One hour is a
long time for memory to have to stay cold before it is proactively
reclaimed :) Are you concerned at all about your current idle bit
harvesting approach becoming too expensive if you significantly reduce
the scan period? (A sketch of that harvesting loop follows after this
list.)
- is proactive reclaim being driven by writing to memory.reclaim, by
enforcing a smaller memory.high, or something else? (Both knobs are
sketched below as well.)
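On the harvesting-cost question, here is a minimal sketch of one mark/check
cycle against /sys/kernel/mm/page_idle/bitmap for a single VA range of one
process. The pid, address range and scan period are placeholders, it
assumes CONFIG_IDLE_PAGE_TRACKING, 4K pages and CAP_SYS_ADMIN, and error
handling is omitted. The point is simply that every scan pays a pagemap
lookup plus a bitmap access per present page, so cutting the scan period in
half roughly doubles that work.

/*
 * Idle-bit harvest sketch: mark a range idle, wait one scan period, then
 * check which pages were not referenced in between. A real scanner would
 * walk /proc/<pid>/maps for every process of interest rather than use a
 * hard-coded range.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

#define PAGE_SIZE   4096UL
#define PFN_MASK    ((1ULL << 55) - 1)  /* pagemap bits 0-54 hold the PFN */
#define PM_PRESENT  (1ULL << 63)

enum pass { MARK_IDLE, CHECK_IDLE };

static void scan(pid_t pid, unsigned long start, unsigned long end,
                 enum pass pass)
{
    char path[64];
    int pagemap, bitmap;
    unsigned long va;

    snprintf(path, sizeof(path), "/proc/%d/pagemap", (int)pid);
    pagemap = open(path, O_RDONLY);
    bitmap = open("/sys/kernel/mm/page_idle/bitmap", O_RDWR);

    for (va = start; va < end; va += PAGE_SIZE) {
        uint64_t ent, word, pfn;

        /* One pagemap lookup per page to translate VA -> PFN. */
        pread(pagemap, &ent, 8, (va / PAGE_SIZE) * 8);
        if (!(ent & PM_PRESENT))
            continue;
        pfn = ent & PFN_MASK;

        if (pass == MARK_IDLE) {
            /*
             * Setting the bit marks the page idle; the kernel also clears
             * its referenced state so a later access can be detected. A
             * real scanner would batch all 64 pages of a word per write.
             */
            word = 1ULL << (pfn % 64);
            pwrite(bitmap, &word, 8, (pfn / 64) * 8);
        } else {
            /*
             * A bit that is still set after the scan period means the page
             * was not accessed in between, i.e. it is cold.
             */
            pread(bitmap, &word, 8, (pfn / 64) * 8);
            if (word & (1ULL << (pfn % 64)))
                printf("cold: va %#lx pfn %#llx\n", va,
                       (unsigned long long)pfn);
        }
    }
    close(pagemap);
    close(bitmap);
}

int main(void)
{
    pid_t pid = 1234;                               /* placeholder pid */
    unsigned long start = 0x400000, end = 0x800000; /* placeholder range */

    scan(pid, start, end, MARK_IDLE);
    sleep(3600);                                    /* the one-hour scan period */
    scan(pid, start, end, CHECK_IDLE);
    return 0;
}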
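And on the last question, the two knobs I had in mind, again with a made-up
cgroup path and sizes: memory.reclaim asks for a one-shot, targeted reclaim
of a given amount without changing any limits, while lowering memory.high
sets a persistent ceiling above which the tasks are throttled and reclaimed.

/*
 * Sketch of the two cgroup v2 proactive-reclaim knobs; path and sizes are
 * illustrative only, and error handling is omitted.
 */
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

static void write_str(const char *path, const char *val)
{
    int fd = open(path, O_WRONLY);

    write(fd, val, strlen(val));
    close(fd);
}

int main(void)
{
    /*
     * One-shot, targeted: try to reclaim 512M from A (and its subtree)
     * right now, without changing any limits.
     */
    write_str("/sys/fs/cgroup/A/memory.reclaim", "512M");

    /*
     * Persistent ceiling: above 4G, tasks in A are throttled and the
     * kernel keeps reclaiming to push usage back under the threshold.
     */
    write_str("/sys/fs/cgroup/A/memory.high", "4G");
    return 0;
}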
Looking forward to learning more about your particular issue.