From: Nico Pache <npache@redhat.com>
To: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: David Hildenbrand <david@redhat.com>,
Kiryl Shutsemau <kas@kernel.org>,
linux-mm@kvack.org, linux-doc@vger.kernel.org,
linux-kernel@vger.kernel.org,
linux-trace-kernel@vger.kernel.org, ziy@nvidia.com,
baolin.wang@linux.alibaba.com, Liam.Howlett@oracle.com,
ryan.roberts@arm.com, dev.jain@arm.com, corbet@lwn.net,
rostedt@goodmis.org, mhiramat@kernel.org,
mathieu.desnoyers@efficios.com, akpm@linux-foundation.org,
baohua@kernel.org, willy@infradead.org, peterx@redhat.com,
wangkefeng.wang@huawei.com, usamaarif642@gmail.com,
sunnanyong@huawei.com, vishal.moola@gmail.com,
thomas.hellstrom@linux.intel.com, yang@os.amperecomputing.com,
aarcange@redhat.com, raquini@redhat.com,
anshuman.khandual@arm.com, catalin.marinas@arm.com,
tiwai@suse.de, will@kernel.org, dave.hansen@linux.intel.com,
jack@suse.cz, cl@gentwo.org, jglisse@google.com,
surenb@google.com, zokeefe@google.com, hannes@cmpxchg.org,
rientjes@google.com, mhocko@suse.com, rdunlap@infradead.org,
hughd@google.com, richard.weiyang@gmail.com,
lance.yang@linux.dev, vbabka@suse.cz, rppt@kernel.org,
jannh@google.com, pfalcato@suse.de
Subject: Re: [PATCH v11 00/15] khugepaged: mTHP support
Date: Fri, 12 Sep 2025 18:28:55 -0600
Message-ID: <CAA1CXcDyTR64jdhZPae2HPYOwsUxU1R1tj1hMeE=vV_ey9GXsg@mail.gmail.com>
In-Reply-To: <dcfc7e27-d3c8-4fd0-8b7b-ce8f5051d597@lucifer.local>
On Fri, Sep 12, 2025 at 12:22 PM Lorenzo Stoakes
<lorenzo.stoakes@oracle.com> wrote:
>
> On Fri, Sep 12, 2025 at 07:53:22PM +0200, David Hildenbrand wrote:
> > On 12.09.25 17:51, Lorenzo Stoakes wrote:
> > > With all this stuff said, do we have an actual plan for what we intend to do
> > > _now_?
> >
> > Oh no, now I have to use my brain and it's Friday evening.
>
> I apologise :)
>
> >
> > >
> > > As Nico has implemented a basic solution here that we all seem to agree is not
> > > what we want.
> > >
> > > Without needing special new hardware or major reworks, what would this parameter
> > > look like?
> > >
> > > What would the heuristics be? What about the eagerness scales?
> > >
> > > I'm but a simple kernel developer,
> >
> > :)
> >
> > and interested in simple pragmatic stuff :)
> > > do you have a plan right now David?
> >
> > Ehm, if you ask me that way ...
> >
> > >
> > > Maybe we can start with something simple like a rough percentage per eagerness
> > > entry that then gets scaled based on utilisation?
> >
> > ... I think we should probably:
> >
> > 1) Start with something very simple for mTHP that doesn't lock us into any particular direction.
>
> Yes.
>
> >
> > 2) Add an "eagerness" parameter with fixed scale and use that for mTHP as well
>
> Yes I think we're all pretty onboard with that it seems!
>
> >
> > 3) Improve that "eagerness" algorithm using a dynamic scale or #whatever
>
> Right, I feel like we could start with some very simple linear thing here and
> later maybe refine it?
I agree, something like 0, 32, 64, 128, 255, 511 seems to map well,
and it is not too different from what I'm doing with the scaling by
(HPAGE_PMD_ORDER - order).
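
Just to illustrate the shape of such a fixed scale (purely a sketch;
the table values and the helper name below are made up, not taken from
the series), it could be as simple as a lookup table:

/* Hypothetical fixed eagerness scale, roughly doubling per step. */
static const unsigned int eagerness_scale[] = {
	0, 32, 64, 128, 255, 511,
};

/* Hypothetical helper: clamp the eagerness level and return the
 * corresponding max_ptes_none-style threshold. */
static unsigned int eagerness_to_max_ptes_none(unsigned int eagerness)
{
	if (eagerness >= ARRAY_SIZE(eagerness_scale))
		eagerness = ARRAY_SIZE(eagerness_scale) - 1;
	return eagerness_scale[eagerness];
}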
>
> >
> > 4) Solve world peace and world hunger
>
> Yes! That would be pretty great ;)
This should probably be a higher priority.
>
> >
> > 5) Connect it all to memory pressure / reclaim / shrinker / heuristics / hw hotness / #whatever
>
> I think these are TODOs :)
>
> >
> >
> > I maintain my initial position that just using
> >
> > max_ptes_none == 511 -> collapse mTHP always
> > max_ptes_none != 511 -> collapse mTHP only if all PTEs are non-none/zero
> >
> > As a starting point is probably simple and best, and likely leaves room for any
> > changes later.
>
> Yes.
>
> >
> >
> > Of course, we could do what Nico is proposing here, as 1) and change it all later.
>
> Right.
>
> But that does mean for mTHP we're limited to 256 (or 255 was it?) but I guess
> given the 'creep' issue that's sensible.
I don't think that's much different from what David is trying to
propose, given that eagerness=9 would be 50%. At 10 (or 511), no
matter what, you will only ever collapse to the largest enabled order.
The difference in my approach is that, technically, with PMD disabled
and 511, you would still need 50% utilization to collapse, which is
not ideal if you always want to collapse to some mTHP size even with
only one page occupied. With David's solution this is solved by never
allowing anything in between 255 and 511.
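
To make the comparison concrete, here is a rough sketch of the two
behaviours as I understand them (the helper names and the exact
arithmetic are illustrative only, not lifted from the series or from
David's mail):

/* Per-order scaling (roughly what this series does): shift the global
 * max_ptes_none down by the order difference, capping mTHP collapses
 * at just under half the PTEs to avoid the creep issue. */
static unsigned int scaled_max_ptes_none(unsigned int order)
{
	unsigned int max = khugepaged_max_ptes_none;

	if (order != HPAGE_PMD_ORDER && max >= HPAGE_PMD_NR / 2)
		max = HPAGE_PMD_NR / 2 - 1;
	return max >> (HPAGE_PMD_ORDER - order);
}

/* David's suggested starting point: 511 means "always collapse",
 * anything else means "only collapse fully populated ranges" for
 * the mTHP orders. */
static unsigned int binary_max_ptes_none(unsigned int order)
{
	if (order == HPAGE_PMD_ORDER)
		return khugepaged_max_ptes_none;
	if (khugepaged_max_ptes_none == HPAGE_PMD_NR - 1)
		return (1U << order) - 1;	/* always collapse */
	return 0;	/* require a fully populated range */
}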
>
> >
> > It's just when it comes to documenting all that stuff in patch #15 that I feel like
> > "alright, we shouldn't be doing it longterm like that, so let's not make anybody
> > depend on any weird behavior here by over-documenting it".
> >
> > I mean
> >
> > "
> > +To prevent "creeping" behavior where collapses continuously promote to larger
> > +orders, if max_ptes_none >= HPAGE_PMD_NR/2 (255 on 4K page size), it is
> > +capped to HPAGE_PMD_NR/2 - 1 for mTHP collapses. This is due to the fact
> > +that introducing more than half of the pages to be non-zero it will always
> > +satisfy the eligibility check on the next scan and the region will be collapse.
> > "
> >
> > Is just way, way too detailed.
> >
> > I would just say "The kernel might decide to use a more conservative approach
> > when collapsing smaller THPs" etc.
> >
> >
> > Thoughts?
>
> Well I've sort of reviewed oppositely there :) well at least that it needs to be
> a hell of a lot clearer (I find that comment really compressed and I just don't
> really understand it).
I think your review is still valid for improving the internal code
comment. I think David is suggesting that we not be so specific in the
actual admin-guide docs as we move towards a more opaque tunable.
>
> I guess I didn't think about people reading that and relying on it, so maybe we
> could alternatively make that succinct.
>
> But I think it'd be better to say something like "mTHP collapse cannot currently
> correctly function with half or more of the PTE entries empty, so we cap at just
> below this level" in this case.
Some middle ground might be the best answer: not too specific, but
also alluding to the inner workings a little.
Cheers,
-- Nico
>
> >
> > --
> > Cheers
> >
> > David / dhildenb
> >
>
> Cheers, Lorenzo
>