linux-mm.kvack.org archive mirror
 help / color / mirror / Atom feed
From: Yafang Shao <laoar.shao@gmail.com>
To: David Hildenbrand <david@redhat.com>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>,
	akpm@linux-foundation.org, ziy@nvidia.com,
	 baolin.wang@linux.alibaba.com, Liam.Howlett@oracle.com,
	npache@redhat.com,  ryan.roberts@arm.com, dev.jain@arm.com,
	hannes@cmpxchg.org,  usamaarif642@gmail.com,
	gutierrez.asier@huawei-partners.com,  willy@infradead.org,
	ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org,
	 bpf@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [RFC PATCH v2 0/5] mm, bpf: BPF based THP adjustment
Date: Wed, 21 May 2025 12:02:09 +0800	[thread overview]
Message-ID: <CALOAHbAQ49iY3X91nOrsTGJO7v31j8KiCe7XJy6q3iyio_sxdA@mail.gmail.com> (raw)
In-Reply-To: <9b44fe43-155d-457d-81ce-a2c1fb86521a@redhat.com>

On Tue, May 20, 2025 at 11:54 PM David Hildenbrand <david@redhat.com> wrote:
>
> >> I totally agree with you that the key point here is how to define the
> >> API. As I replied to David, I believe we have two fundamental
> >> principles to adjust the THP policies:
> >> 1. Selective Benefit: Some tasks benefit from THP, while others do not.
> >> 2. Conditional Safety: THP allocation is safe under certain conditions
> >> but not others.
> >>
> >> Therefore, I believe we can define these APIs based on the established
> >> principles - everything else constitutes implementation details, even
> >> if core MM internals need to change.
> >
> > But if we're looking to make the concept of THP go away, we really need to
> > go further than this.
>
> Yeah. I might be wrong, but I also don't think doing control on a
> per-process level etc would be the right solution long-term.

The reality is that achieving truly 'automatic' THP behavior requires
process-level control. Given that THP provides no benefit for certain
workloads, there's no justification for incurring the overhead of
allocating higher-order pages in those cases.

>
> In a world where we do stuff automatically ("auto" mode), we would be
> much smarter about where to place a (m)THP, and which size we would use.

We still have considerable ground to cover before reaching this goal.

>
> One might use bpf to control the allocation policy. But I don't think
> this would be per-process or even per-VMA etc. Sure, we might give
> hints, but placement decisions should happen on another level (e.g.,
> during page faults, during khugepaged etc).

Nico has proposed introducing a new 'defer' mode to address this.
However, I argue that we could achieve the same functionality through
BPF instead of adding a dedicated policy mode. [0]

[0]. https://lore.kernel.org/linux-mm/CALOAHbAa7DY6+hO4RJtjg-MS+cnUmsiPXX8KS1MKSfgy6HLYAQ@mail.gmail.com/

>
> >
> > The second we have 'bpf program that figures out whether THP should be
> > used' we are permanently tied to the idea of THP on/off being a thing.
> >
> > I mean any future stuff that makes THP more automagic will probably involve
> > having new modes for the legacy THP
> > /sys/kernel/mm/transparent_hugepage/enabled and
> > /sys/kernel/mm/transparent_hugepage/hugepages-xxkB/enabled
>
> Yeah, the plan is to have "auto" in
> /sys/kernel/mm/transparent_hugepage/enabled and just have all other
> sizes "inherit" that option. And have a Kconfig that just enables that
> as default. Once we're there, just phase out the interface long-term.
>
> That's the plan. Now we "only" have to figure out how to make the
> placement actually better ;)
>
> >
> > But if people are super reliant on this stuff it's potentially really
> > limiting.
> >
> > I think you said in another post here that you were toying with the notion
> > of exposing somehow the madvise() interface and having that be the 'stable
> > API' of sorts?
> >
> > That definitely sounds more sensible than something that very explicitly
> > interacts with THP.
> >
> > Of course we have Usama's series and my proposed series for extending
> > process_madvise() along those lines also.
>
> Yes.
>


-- 
Regards
Yafang


  reply	other threads:[~2025-05-21  4:02 UTC|newest]

Thread overview: 52+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2025-05-20  6:04 Yafang Shao
2025-05-20  6:04 ` [RFC PATCH v2 1/5] mm: thp: Add a new mode "bpf" Yafang Shao
2025-05-20  6:05 ` [RFC PATCH v2 2/5] mm: thp: Add hook for BPF based THP adjustment Yafang Shao
2025-05-20  6:05 ` [RFC PATCH v2 3/5] mm: thp: add struct ops " Yafang Shao
2025-05-20  6:05 ` [RFC PATCH v2 4/5] bpf: Add get_current_comm to bpf_base_func_proto Yafang Shao
2025-05-20 23:32   ` Andrii Nakryiko
2025-05-20  6:05 ` [RFC PATCH v2 5/5] selftests/bpf: Add selftest for THP adjustment Yafang Shao
2025-05-20  6:52 ` [RFC PATCH v2 0/5] mm, bpf: BPF based " Nico Pache
2025-05-20  7:25   ` Yafang Shao
2025-05-20 13:10     ` Matthew Wilcox
2025-05-20 14:08       ` Yafang Shao
2025-05-20 14:22         ` Lorenzo Stoakes
2025-05-20 14:32           ` Usama Arif
2025-05-20 14:35             ` Lorenzo Stoakes
2025-05-20 14:42               ` Matthew Wilcox
2025-05-20 14:56                 ` David Hildenbrand
2025-05-21  4:28                 ` Yafang Shao
2025-05-20 14:46               ` Usama Arif
2025-05-20 15:00             ` David Hildenbrand
2025-05-20  9:43 ` David Hildenbrand
2025-05-20  9:49   ` Lorenzo Stoakes
2025-05-20 12:06     ` Yafang Shao
2025-05-20 13:45       ` Lorenzo Stoakes
2025-05-20 15:54         ` David Hildenbrand
2025-05-21  4:02           ` Yafang Shao [this message]
2025-05-21  3:52         ` Yafang Shao
2025-05-20 11:59   ` Yafang Shao
2025-05-25  3:01 ` Yafang Shao
2025-05-26  7:41   ` Gutierrez Asier
2025-05-26  9:37     ` Yafang Shao
2025-05-26  8:14   ` David Hildenbrand
2025-05-26  9:37     ` Yafang Shao
2025-05-26 10:49       ` David Hildenbrand
2025-05-26 14:53         ` Liam R. Howlett
2025-05-26 15:54           ` Liam R. Howlett
2025-05-26 16:51             ` David Hildenbrand
2025-05-26 17:07               ` Liam R. Howlett
2025-05-26 17:12                 ` David Hildenbrand
2025-05-26 20:30               ` Gutierrez Asier
2025-05-26 20:37                 ` David Hildenbrand
2025-05-27  5:46         ` Yafang Shao
2025-05-27  7:57           ` David Hildenbrand
2025-05-27  8:13             ` Yafang Shao
2025-05-27  8:30               ` David Hildenbrand
2025-05-27  8:40                 ` Yafang Shao
2025-05-27  9:27                   ` David Hildenbrand
2025-05-27  9:43                     ` Yafang Shao
2025-05-27 12:19                       ` David Hildenbrand
2025-05-28  2:04                         ` Yafang Shao
2025-05-28 20:32                           ` David Hildenbrand
2025-05-26 14:32   ` Zi Yan
2025-05-27  5:53     ` Yafang Shao

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=CALOAHbAQ49iY3X91nOrsTGJO7v31j8KiCe7XJy6q3iyio_sxdA@mail.gmail.com \
    --to=laoar.shao@gmail.com \
    --cc=Liam.Howlett@oracle.com \
    --cc=akpm@linux-foundation.org \
    --cc=andrii@kernel.org \
    --cc=ast@kernel.org \
    --cc=baolin.wang@linux.alibaba.com \
    --cc=bpf@vger.kernel.org \
    --cc=daniel@iogearbox.net \
    --cc=david@redhat.com \
    --cc=dev.jain@arm.com \
    --cc=gutierrez.asier@huawei-partners.com \
    --cc=hannes@cmpxchg.org \
    --cc=linux-mm@kvack.org \
    --cc=lorenzo.stoakes@oracle.com \
    --cc=npache@redhat.com \
    --cc=ryan.roberts@arm.com \
    --cc=usamaarif642@gmail.com \
    --cc=willy@infradead.org \
    --cc=ziy@nvidia.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox