From: Yafang Shao <laoar.shao@gmail.com>
To: Zi Yan <ziy@nvidia.com>
Cc: David Hildenbrand <david@redhat.com>,
Alexei Starovoitov <alexei.starovoitov@gmail.com>,
Johannes Weiner <hannes@cmpxchg.org>,
Andrew Morton <akpm@linux-foundation.org>,
baolin.wang@linux.alibaba.com,
Lorenzo Stoakes <lorenzo.stoakes@oracle.com>,
Liam Howlett <Liam.Howlett@oracle.com>,
npache@redhat.com, ryan.roberts@arm.com, dev.jain@arm.com,
usamaarif642@gmail.com, gutierrez.asier@huawei-partners.com,
Matthew Wilcox <willy@infradead.org>,
Alexei Starovoitov <ast@kernel.org>,
Daniel Borkmann <daniel@iogearbox.net>,
Andrii Nakryiko <andrii@kernel.org>,
Amery Hung <ameryhung@gmail.com>,
David Rientjes <rientjes@google.com>,
Jonathan Corbet <corbet@lwn.net>,
21cnbao@gmail.com, Shakeel Butt <shakeel.butt@linux.dev>,
Tejun Heo <tj@kernel.org>,
lance.yang@linux.dev, Randy Dunlap <rdunlap@infradead.org>,
bpf <bpf@vger.kernel.org>, linux-mm <linux-mm@kvack.org>,
"open list:DOCUMENTATION" <linux-doc@vger.kernel.org>,
LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH v9 mm-new 03/11] mm: thp: add support for BPF based THP order selection
Date: Wed, 8 Oct 2025 20:06:40 +0800
Message-ID: <CALOAHbCS0WvUSsK_rbtU8LTLuz_eynVEa1ULyYmyRcMW_hfZWg@mail.gmail.com>
In-Reply-To: <96AE1C18-3833-4EB8-9145-202517331DF5@nvidia.com>
On Wed, Oct 8, 2025 at 7:27 PM Zi Yan <ziy@nvidia.com> wrote:
>
> On 8 Oct 2025, at 5:04, Yafang Shao wrote:
>
> > On Wed, Oct 8, 2025 at 4:28 PM David Hildenbrand <david@redhat.com> wrote:
> >>
> >> On 08.10.25 10:18, Yafang Shao wrote:
> >>> On Wed, Oct 8, 2025 at 4:08 PM David Hildenbrand <david@redhat.com> wrote:
> >>>>
> >>>> On 03.10.25 04:18, Alexei Starovoitov wrote:
> >>>>> On Mon, Sep 29, 2025 at 10:59 PM Yafang Shao <laoar.shao@gmail.com> wrote:
> >>>>>>
> >>>>>> +unsigned long bpf_hook_thp_get_orders(struct vm_area_struct *vma,
> >>>>>> + enum tva_type type,
> >>>>>> + unsigned long orders)
> >>>>>> +{
> >>>>>> + thp_order_fn_t *bpf_hook_thp_get_order;
> >>>>>> + int bpf_order;
> >>>>>> +
> >>>>>> + /* No BPF program is attached */
> >>>>>> + if (!test_bit(TRANSPARENT_HUGEPAGE_BPF_ATTACHED,
> >>>>>> + &transparent_hugepage_flags))
> >>>>>> + return orders;
> >>>>>> +
> >>>>>> + rcu_read_lock();
> >>>>>> + bpf_hook_thp_get_order = rcu_dereference(bpf_thp.thp_get_order);
> >>>>>> + if (WARN_ON_ONCE(!bpf_hook_thp_get_order))
> >>>>>> + goto out;
> >>>>>> +
> >>>>>> + bpf_order = bpf_hook_thp_get_order(vma, type, orders);
> >>>>>> + orders &= BIT(bpf_order);
> >>>>>> +
> >>>>>> +out:
> >>>>>> + rcu_read_unlock();
> >>>>>> + return orders;
> >>>>>> +}
> >>>>>
> >>>>> I thought I explained it earlier.
> >>>>> Nack to a single global prog approach.
> >>>>
> >>>> I agree. We should have the option to either specify a policy globally,
> >>>> or more refined for cgroups/processes.
> >>>>
> >>>> It's an interesting question if a program would ever want to ship its
> >>>> own policy: I can see use cases for that.
> >>>>
> >>>> So I agree that we should make it more flexible right from the start.
> >>>
> >>> To achieve per-process granularity, the struct-ops must be embedded
> >>> within the mm_struct as follows:
> >>>
> >>> +#ifdef CONFIG_BPF_MM
> >>> +struct bpf_mm_ops {
> >>> +#ifdef CONFIG_BPF_THP
> >>> + struct bpf_thp_ops bpf_thp;
> >>> +#endif
> >>> +};
> >>> +#endif
> >>> +
> >>> /*
> >>> * Opaque type representing current mm_struct flag state. Must be accessed via
> >>> * mm_flags_xxx() helper functions.
> >>> @@ -1268,6 +1281,10 @@ struct mm_struct {
> >>> #ifdef CONFIG_MM_ID
> >>> mm_id_t mm_id;
> >>> #endif /* CONFIG_MM_ID */
> >>> +
> >>> +#ifdef CONFIG_BPF_MM
> >>> + struct bpf_mm_ops bpf_mm;
> >>> +#endif
> >>> } __randomize_layout;
> >>>
> >>> We should be aware that this will involve extensive changes in mm/.
> >>
> >> That's what we do on linux-mm :)
> >>
> >> It would be great to use Alexei's feedback/experience to come up with
> >> something that is flexible for various use cases.
> >
> > I'm still not entirely convinced that allowing individual processes or
> > cgroups to run independent progs is a valid use case. However, since
> > we have a consensus that this is the right direction, I will proceed
> > with this approach.
> >
> >>
> >> So I think this is likely the right direction.
> >>
> >> It would be great to evaluate which scenarios we could unlock with this
> >> (global vs. per-process vs. per-cgroup) approach, and how
> >> extensive/involved the changes will be.
> >
> > 1. Global Approach
> > - Pros:
> > Simple;
> > Can manage different THP policies for different cgroups or processes.
> > - Cons:
> > Does not allow individual processes to run their own BPF programs.
> >
> > 2. Per-Process Approach
> > - Pros:
> > Enables each process to run its own BPF program.
> > - Cons:
> > Introduces significant complexity, as it requires handling the
> > BPF program's lifecycle (creation, destruction, inheritance) within
> > every mm_struct.
> >
> > 3. Per-Cgroup Approach
> > - Pros:
> > Allows individual cgroups to run their own BPF programs.
> > Less complex than the per-process model, as it can leverage the
> > existing cgroup operations structure.
> > - Cons:
> > Creates a dependency on the cgroup subsystem.
> > Might not be easy to control at the per-process level.
>
> Another issue is how, and by whom, hierarchical cgroups should be handled,
> where one cgroup is a parent of another. Should the BPF program do that,
> or the mm code?
The cgroup subsystem handles this propagation itself: when a BPF
program is attached to a cgroup via cgroup_bpf_attach(), it is
automatically inherited by all descendant cgroups.
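
For reference, attaching a program at the cgroup level from user space
looks roughly like this with libbpf today (a minimal sketch, assuming a
hypothetical "thp_policy.bpf.o" object containing a cgroup program named
"cg_prog"; this uses the ordinary cgroup attach point, not struct-ops):

#include <fcntl.h>
#include <bpf/libbpf.h>

/* Sketch only: attach a cgroup program so it also becomes the effective
 * program for descendant cgroups that attach nothing of their own.
 */
static int attach_to_cgroup(const char *cgrp_path)
{
	struct bpf_object *obj;
	struct bpf_program *prog;
	int cg_fd;

	obj = bpf_object__open_file("thp_policy.bpf.o", NULL);
	if (!obj || bpf_object__load(obj))
		return -1;

	prog = bpf_object__find_program_by_name(obj, "cg_prog");
	cg_fd = open(cgrp_path, O_RDONLY);
	if (!prog || cg_fd < 0)
		return -1;

	return bpf_program__attach_cgroup(prog, cg_fd) ? 0 : -1;
}
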
Note: struct-ops programs aren't supported by cgroup_bpf_attach(), so
we would need to build a new attachment mechanism for cgroup-based
struct-ops.
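
As for who handles the hierarchy: if we do go per-cgroup, the mm side
could resolve the effective policy by walking from the task's cgroup
towards the root and using the nearest ancestor that has a THP
struct-ops attached. A rough sketch, purely illustrative (the
cgrp->bpf_thp field below is hypothetical and does not exist in this
series):

/* Hypothetical sketch: resolve a per-cgroup THP policy hierarchically. */
static unsigned long cgroup_thp_get_orders(struct mm_struct *mm,
					   struct vm_area_struct *vma,
					   enum tva_type type,
					   unsigned long orders)
{
	struct task_struct *owner;
	struct cgroup *cgrp;

	rcu_read_lock();
	owner = rcu_dereference(mm->owner);
	if (!owner)
		goto out;

	/* The nearest ancestor with a policy wins; the walk covers the hierarchy. */
	for (cgrp = task_dfl_cgroup(owner); cgrp; cgrp = cgroup_parent(cgrp)) {
		thp_order_fn_t *fn = rcu_dereference(cgrp->bpf_thp); /* hypothetical field */

		if (fn) {
			orders &= BIT(fn(vma, type, orders));
			break;
		}
	}
out:
	rcu_read_unlock();
	return orders;
}

Whether a walk like this belongs in mm code or inside the BPF program
itself is exactly the policy question raised above.
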
> I remember hierarchical cgroups were the main reason THP control
> at the cgroup level was rejected. If we do per-cgroup BPF control, wouldn't
> we get the same rejection from the cgroup folks?
Right, it was rejected by the cgroup maintainers [0].

[0] https://lore.kernel.org/linux-mm/20241030150851.GB706616@cmpxchg.org/
--
Regards
Yafang