From: Yang Shi <shy828301@gmail.com>
To: Rik van Riel <riel@fb.com>
Cc: "Alex Zhu (Kernel)" <alexlzhu@fb.com>,
Kernel Team <Kernel-team@fb.com>,
"linux-mm@kvack.org" <linux-mm@kvack.org>,
"willy@infradead.org" <willy@infradead.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"akpm@linux-foundation.org" <akpm@linux-foundation.org>
Subject: Re: [PATCH v3] mm: add thp_utilization metrics to /proc/thp_utilization
Date: Tue, 9 Aug 2022 10:11:31 -0700
Message-ID: <CAHbLzkqpn2ExBJuPD8sYJrEDCUU9=FE3GFh8kL3Bmax0KytKPw@mail.gmail.com>
In-Reply-To: <fc108f58a4616d5d7d092a7c1f150069a92ee40c.camel@fb.com>

On Mon, Aug 8, 2022 at 11:35 AM Rik van Riel <riel@fb.com> wrote:
>
> On Mon, 2022-08-08 at 10:55 -0700, Yang Shi wrote:
> >
> > On Fri, Aug 5, 2022 at 12:52 PM Alex Zhu (Kernel) <alexlzhu@fb.com>
> > wrote:
> > >
> > > Sounds good, I’ll move this to debugfs then. Will follow up with
> > > the shrinker code
> > > in another patch. The shrinker relies on this scanning thread to
> > > figure out which
> > > THPs to reclaim.
> >
> > I'm wondering whether you could reuse the THP deferred split shrinker
> > or not. It is already memcg aware.
> >
> I'm not convinced that will buy much, since there is
> very little code duplication between the two.
>
> Merging the two might also bring about another bit of
> extra complexity, due to the deferred split shrinker
> wanting to shrink every single THP on its "to split"
> list, while for Alex's shrinker we probably want to
> split just one (or a few) THPs at a time, depending on
> memory pressure.

OK, it is hard to tell what it looks like now. But couldn't the THPs
on the deferred split list be on the "low utilization split" list
too? IIUC the major difference is replacing zero-filled subpages with
the special zero page, so did you implement another THP split
function to handle that? Anyway, the code should answer most of the
questions.
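
For the zero-filled subpage detection, I'd guess it is something
along these lines (just an untested sketch I'm making up to show what
I mean; thp_subpage_is_zero() is not from the patch):

	/*
	 * Sketch only: check whether one subpage of a THP is entirely
	 * zero, i.e. it could be backed by the shared zero page once
	 * the THP is split.
	 */
	static bool thp_subpage_is_zero(struct page *subpage)
	{
		void *addr = kmap_local_page(subpage);
		bool zero = !memchr_inv(addr, 0, PAGE_SIZE);

		kunmap_local(addr);
		return zero;
	}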
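
And for splitting just one (or a few) THPs at a time, the shrinker
interface should already throttle that via sc->nr_to_scan, and such a
shrinker could be registered with SHRINKER_MEMCG_AWARE just like the
deferred split one. Roughly (again a made-up sketch;
pop_low_util_thp() is a hypothetical helper assumed to return the
next candidate THP with a reference held):

	/*
	 * Sketch only: split at most sc->nr_to_scan THPs off a "low
	 * utilization" list, so memory pressure decides how
	 * aggressively we split.
	 */
	static unsigned long low_util_thp_scan(struct shrinker *shrink,
					       struct shrink_control *sc)
	{
		unsigned long split = 0;

		while (split < sc->nr_to_scan) {
			struct page *page = pop_low_util_thp();

			if (!page)
				break;
			lock_page(page);
			/* split_huge_page() expects a locked page */
			if (!split_huge_page(page))
				split++;
			unlock_page(page);
			put_page(page);
		}

		return split ? split : SHRINK_STOP;
	}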