From: Kiryl Shutsemau <kas@kernel.org>
To: Pedro Falcato <pfalcato@suse.de>
Cc: David Hildenbrand <david@redhat.com>,
Johannes Weiner <hannes@cmpxchg.org>,
Nico Pache <npache@redhat.com>,
linux-mm@kvack.org, linux-doc@vger.kernel.org,
linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org,
ziy@nvidia.com, baolin.wang@linux.alibaba.com,
lorenzo.stoakes@oracle.com, Liam.Howlett@oracle.com,
ryan.roberts@arm.com, dev.jain@arm.com, corbet@lwn.net,
rostedt@goodmis.org, mhiramat@kernel.org,
mathieu.desnoyers@efficios.com, akpm@linux-foundation.org,
baohua@kernel.org, willy@infradead.org, peterx@redhat.com,
wangkefeng.wang@huawei.com, usamaarif642@gmail.com,
sunnanyong@huawei.com, vishal.moola@gmail.com,
thomas.hellstrom@linux.intel.com, yang@os.amperecomputing.com,
aarcange@redhat.com, raquini@redhat.com,
anshuman.khandual@arm.com, catalin.marinas@arm.com,
tiwai@suse.de, will@kernel.org, dave.hansen@linux.intel.com,
jack@suse.cz, cl@gentwo.org, jglisse@google.com,
surenb@google.com, zokeefe@google.com, rientjes@google.com,
mhocko@suse.com, rdunlap@infradead.org, hughd@google.com,
richard.weiyang@gmail.com, lance.yang@linux.dev, vbabka@suse.cz,
rppt@kernel.org, jannh@google.com
Subject: Re: [PATCH v11 00/15] khugepaged: mTHP support
Date: Fri, 12 Sep 2025 16:38:59 +0100 [thread overview]
Message-ID: <k54teuep6r63gbgivpka32tk47zvzmy5thik2mekl5xpycvead@fth2lv4kuicg> (raw)
In-Reply-To: <hcpxpo3xpqcppxlxhmyxkqkqnu4syohhkt5oeyh7qse7kvuwiw@qbhiubf2ubtm>
On Fri, Sep 12, 2025 at 04:15:23PM +0100, Pedro Falcato wrote:
> On Fri, Sep 12, 2025 at 03:46:36PM +0200, David Hildenbrand wrote:
> > On 12.09.25 15:37, Johannes Weiner wrote:
> > > On Fri, Sep 12, 2025 at 02:25:31PM +0200, David Hildenbrand wrote:
> > > > On 12.09.25 14:19, Kiryl Shutsemau wrote:
> > > > > On Thu, Sep 11, 2025 at 09:27:55PM -0600, Nico Pache wrote:
> > > > > > The following series provides khugepaged with the capability to collapse
> > > > > > anonymous memory regions to mTHPs.
> > > > > >
> > > > > > To achieve this we generalize the khugepaged functions to no longer depend
> > > > > > on PMD_ORDER. Then during the PMD scan, we use a bitmap to track individual
> > > > > > pages that are occupied (!none/zero). After the PMD scan is done, we do
> > > > > > binary recursion on the bitmap to find the optimal mTHP sizes for the PMD
> > > > > > range. The restriction on max_ptes_none is removed during the scan, to make
> > > > > > sure we account for the whole PMD range. When no mTHP size is enabled, the
> > > > > > legacy behavior of khugepaged is maintained. max_ptes_none will be scaled
> > > > > > by the attempted collapse order to determine how full a mTHP must be to be
> > > > > > eligible for the collapse to occur. If an attempted mTHP collapse range
> > > > > > contains swapped-out or shared pages, we don't perform the collapse. It is
> > > > > > now also possible to collapse to mTHPs without requiring the PMD THP size
> > > > > > to be enabled.
> > > > > >
> > > > > > When enabling (m)THP sizes, if max_ptes_none >= HPAGE_PMD_NR/2 (256 on
> > > > > > 4K page size), it will be automatically capped to HPAGE_PMD_NR/2 - 1 (255) for
> > > > > > mTHP collapses to prevent collapse "creep" behavior. This prevents
> > > > > > constantly promoting mTHPs to the next available size, which would occur
> > > > > > because a collapse introduces more non-zero pages that would satisfy the
> > > > > > promotion condition on subsequent scans.
> > > > >
> > > > > Hm. Maybe instead of capping at HPAGE_PMD_NR/2 - 1 we could count
> > > > > all-zero 4k pages as none_or_zero? That would mirror the logic of the shrinker.
> > > > >
> > > >
> > > > I am all for not adding any more ugliness on top of all the ugliness we
> > > > added in the past.
> > > >
> > > > I will soon propose deprecating that parameter in favor of something
> > > > that makes a bit more sense.
> > > >
> > > > In essence, we'll likely have an "eagerness" parameter that ranges from
> > > > 0 to 10. 10 is essentially "always collapse" and 0 "never collapse if
> > > > not all is populated".
> > > >
> > > > In between we will have more flexibility on how to set these values.
> > > >
> > > > Likely 9 will be around 50% to not even motivate the user to set
> > > > something that does not make sense (creep).
> > >
> > > One observation we've had from production experiments is that the
> > > optimal number here isn't static. If you have plenty of memory, then
> > > even very sparse THPs are beneficial.
> >
> > Exactly.
> >
> > And willy suggested something like an "eagerness" knob, similar to "swappiness", that
> > gives us more flexibility when implementing it, including dynamically
> > adjusting the values in the future.
> >
>
> Ideally we would be able to also apply this to the page faulting paths.
> In many cases, there's no good reason to create a THP on the first fault...
>
> > >
> > > An extreme example: if all your THPs have 2/512 pages populated,
> > > that's still cutting TLB pressure in half!
> >
> > IIRC, you create more pressure on the huge entries, where you might have
> > fewer TLB entries :) But yes, there can be cases where it is beneficial, if
> > there is absolutely no memory pressure.
> >
>
> Correct, but it depends on the microarchitecture. On modern x86_64 AMD, it
> happens that the L1 TLB entries are shared between 4K/2M/1G. This was not
> (is not?) the case for Intel, where, e.g., back on Kaby Lake, you had separate
> entries for 4K/2M/1G.
On Intel, the secondary (L2) TLB is shared between 4k and 2M entries; the
L2 TLB for 1G entries is separate.
> Maybe in the Great Glorious Future (how many of those do we have?!) it would
> be a good idea to take these kinds of things into account. Just because we can
> map a THP doesn't mean we should.
>
> Shower thought: it might be in these cases especially where the FreeBSD
> reservation system comes in handy - best effort allocating a THP, but not
> actually mapping it as such until you really _know_ it is hot - and until
> then, memory reclaim can just break your THP down if it really needs to.
This is just silly. All downsides without benefit until maybe later. And
for short-lived processes the "later" never comes.
--
Kiryl Shutsemau / Kirill A. Shutemov