From: Johannes Weiner <hannes@cmpxchg.org>
To: David Hildenbrand <david@redhat.com>
Cc: Kiryl Shutsemau <kas@kernel.org>, Nico Pache <npache@redhat.com>,
linux-mm@kvack.org, linux-doc@vger.kernel.org,
linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org,
ziy@nvidia.com, baolin.wang@linux.alibaba.com,
lorenzo.stoakes@oracle.com, Liam.Howlett@oracle.com,
ryan.roberts@arm.com, dev.jain@arm.com, corbet@lwn.net,
rostedt@goodmis.org, mhiramat@kernel.org,
mathieu.desnoyers@efficios.com, akpm@linux-foundation.org,
baohua@kernel.org, willy@infradead.org, peterx@redhat.com,
wangkefeng.wang@huawei.com, usamaarif642@gmail.com,
sunnanyong@huawei.com, vishal.moola@gmail.com,
thomas.hellstrom@linux.intel.com, yang@os.amperecomputing.com,
aarcange@redhat.com, raquini@redhat.com,
anshuman.khandual@arm.com, catalin.marinas@arm.com,
tiwai@suse.de, will@kernel.org, dave.hansen@linux.intel.com,
jack@suse.cz, cl@gentwo.org, jglisse@google.com,
surenb@google.com, zokeefe@google.com, rientjes@google.com,
mhocko@suse.com, rdunlap@infradead.org, hughd@google.com,
richard.weiyang@gmail.com, lance.yang@linux.dev, vbabka@suse.cz,
rppt@kernel.org, jannh@google.com, pfalcato@suse.de
Subject: Re: [PATCH v11 00/15] khugepaged: mTHP support
Date: Mon, 15 Sep 2025 09:43:59 -0400
Message-ID: <20250915134359.GA827803@cmpxchg.org>
In-Reply-To: <da251159-b39f-467b-a4e3-676aa761c0e8@redhat.com>

On Fri, Sep 12, 2025 at 03:46:36PM +0200, David Hildenbrand wrote:
> On 12.09.25 15:37, Johannes Weiner wrote:
> > On Fri, Sep 12, 2025 at 02:25:31PM +0200, David Hildenbrand wrote:
> >> On 12.09.25 14:19, Kiryl Shutsemau wrote:
> >>> On Thu, Sep 11, 2025 at 09:27:55PM -0600, Nico Pache wrote:
> >>>> The following series provides khugepaged with the capability to collapse
> >>>> anonymous memory regions to mTHPs.
> >>>>
> >>>> To achieve this we generalize the khugepaged functions to no longer depend
> >>>> on PMD_ORDER. Then during the PMD scan, we use a bitmap to track individual
> >>>> pages that are occupied (!none/zero). After the PMD scan is done, we do
> >>>> binary recursion on the bitmap to find the optimal mTHP sizes for the PMD
> >>>> range. The restriction on max_ptes_none is removed during the scan, to make
> >>>> sure we account for the whole PMD range. When no mTHP size is enabled, the
> >>>> legacy behavior of khugepaged is maintained. max_ptes_none will be scaled
> >>>> by the attempted collapse order to determine how full a mTHP must be to be
> >>>> eligible for the collapse to occur. If a mTHP collapse is attempted, but
> >>>> contains swapped out, or shared pages, we don't perform the collapse. It is
> >>>> now also possible to collapse to mTHPs without requiring the PMD THP size
> >>>> to be enabled.
> >>>>
> >>>> When enabling (m)THP sizes, if max_ptes_none >= HPAGE_PMD_NR/2 (255 on
> >>>> 4K page size), it will be automatically capped to HPAGE_PMD_NR/2 - 1 for
> >>>> mTHP collapses to prevent collapse "creep" behavior. This prevents
> >>>> constantly promoting mTHPs to the next available size, which would occur
> >>>> because a collapse introduces more non-zero pages that would satisfy the
> >>>> promotion condition on subsequent scans.
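
[To make the scaling and capping described above concrete, here is a rough sketch of the described behavior; this is a simplified illustration, not the actual kernel code, and assumes HPAGE_PMD_ORDER is 9 (4K pages):]

```c
#include <assert.h>

#define HPAGE_PMD_ORDER 9
#define HPAGE_PMD_NR    (1 << HPAGE_PMD_ORDER)   /* 512 on 4K pages */

/*
 * Scale the PMD-level max_ptes_none threshold down to an attempted
 * collapse order, capping it for mTHP orders to avoid the collapse
 * "creep" described above.
 */
static int scaled_max_ptes_none(int max_ptes_none, int order)
{
	if (order != HPAGE_PMD_ORDER && max_ptes_none >= HPAGE_PMD_NR / 2)
		max_ptes_none = HPAGE_PMD_NR / 2 - 1;
	return max_ptes_none >> (HPAGE_PMD_ORDER - order);
}
```

[With the default max_ptes_none of 511, a PMD collapse still tolerates 511 empty PTEs, while an order-4 mTHP attempt is first capped to 255 and then scaled to 255 >> 5 = 7 empty PTEs out of 16.]
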
> >>>
> >>> Hm. Maybe instead of capping at HPAGE_PMD_NR/2 - 1 we can count
> >>> all-zeros 4k as none_or_zero? It mirrors the logic of shrinker.
> >>>
> >>
> >> I am all for not adding any more ugliness on top of all the ugliness we
> >> added in the past.
> >>
> >> I will soon propose deprecating that parameter in favor of something
> >> that makes a bit more sense.
> >>
> >> In essence, we'll likely have an "eagerness" parameter that ranges from
> >> 0 to 10. 10 is essentially "always collapse" and 0 "never collapse if
> >> not all is populated".
> >>
> >> In between we will have more flexibility on how to set these values.
> >>
> >> Likely 9 will be around 50% to not even motivate the user to set
> >> something that does not make sense (creep).
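
[One possible shape for such a knob, purely as a sketch: the 0/9/10 anchor points follow the description above, but the intermediate percentages and the function name are made up for illustration.]

```c
#include <assert.h>

#define HPAGE_PMD_NR 512

/*
 * Hypothetical mapping from an eagerness level (0..10) to the number
 * of PTEs in a PMD range that may be none/zero while still allowing
 * a collapse: 0 = only fully populated, 10 = always collapse, 9 at
 * roughly 50%. Intermediate values are illustrative only.
 */
static int eagerness_to_max_ptes_none(int eagerness)
{
	static const int none_pct[11] = {
		0, 5, 10, 15, 20, 25, 30, 35, 40, 50, 100
	};
	return (HPAGE_PMD_NR - 1) * none_pct[eagerness] / 100;
}
```

[An indirection like this would let the kernel retune the underlying percentages, or make them dynamic, without breaking the userspace-visible scale.]
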
> >
> > One observation we've had from production experiments is that the
> > optimal number here isn't static. If you have plenty of memory, then
> > even very sparse THPs are beneficial.
>
> Exactly.
>
> And willy suggested something like "eagerness" similar to "swapinness"
> that gives us more flexibility when implementing it, including
> dynamically adjusting the values in the future.

I think we talked past each other a bit here. The point I was trying
to make is that the optimal behavior depends on the pressure situation
inside the kernel; it's fundamentally not something userspace can make
informed choices about.

So for max_ptes_none, the approach is basically: try a few settings
and see which one performs best. Okay, not great. But wouldn't that be
the same for an eagerness setting? What would be the mental model for
the user when configuring this? If it's the same empirical approach,
then the new knob would seem like a lateral move.

It would also be difficult to change the implementation without
risking regressions once production systems are tuned to the old
behavior.

> > An extreme example: if all your THPs have 2/512 pages populated,
> > that's still cutting TLB pressure in half!
>
> IIRC, you create more pressure on the huge entries, where you might have
> fewer TLB entries :) But yes, there can be cases where it is beneficial,
> if there is absolutely no memory pressure.
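
[Spelling out the arithmetic in the quoted example, as a back-of-envelope model that deliberately ignores TLB topology:]

```c
#include <assert.h>

/*
 * Touched base pages spread over PMD-sized (512-page) regions: with
 * base pages, each touched page costs one TLB entry; with a (possibly
 * sparse) THP, one huge entry covers the whole region. At 2 touched
 * pages per region, THPs halve the entry count even at 2/512
 * population.
 */
static int tlb_entries(int regions, int touched_per_region, int use_thp)
{
	return use_thp ? regions : regions * touched_per_region;
}
```
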

Ha, the TLB topology is a whole other can of worms.

We've tried deploying THP on older systems with separate TLB entries
for different page sizes and gave up. It's a nightmare to configure
and very easy to do worse than base pages.

The kernel itself is using a mix of page sizes for the identity
mapping. You basically have to complement the userspace page size
distribution in such a way that you don't compete over the wrong
entries at runtime. It's just stupid. I'm honestly not sure this is
realistically solvable.

So we're deploying THP only on newer AMD machines where TLB entries
are shared.

For split TLBs, we're sticking with hugetlb and trial-and-error.

Please don't build CPUs this way.