From: David Hildenbrand <david@redhat.com>
To: Yafang Shao <laoar.shao@gmail.com>
Cc: akpm@linux-foundation.org, ziy@nvidia.com,
	baolin.wang@linux.alibaba.com, lorenzo.stoakes@oracle.com,
	Liam.Howlett@oracle.com, npache@redhat.com, ryan.roberts@arm.com,
	dev.jain@arm.com, hannes@cmpxchg.org, usamaarif642@gmail.com,
	gutierrez.asier@huawei-partners.com, willy@infradead.org,
	ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org,
	bpf@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [RFC PATCH v2 0/5] mm, bpf: BPF based THP adjustment
Date: Tue, 27 May 2025 14:19:16 +0200
Message-ID: <aa7ea2f4-da94-4850-8225-0fb6e0e32767@redhat.com>
In-Reply-To: <CALOAHbBUK=oPihkG22Z7L92rHNw-fB=p8zSk+1NFmzBjBENmVg@mail.gmail.com>

On 27.05.25 11:43, Yafang Shao wrote:
> On Tue, May 27, 2025 at 5:27 PM David Hildenbrand <david@redhat.com> wrote:
>>
>> On 27.05.25 10:40, Yafang Shao wrote:
>>> On Tue, May 27, 2025 at 4:30 PM David Hildenbrand <david@redhat.com> wrote:
>>>>
>>>>>> I don't think we want to add such a mechanism (new mode) where the
>>>>>> primary configuration mechanism is through bpf.
>>>>>>
>>>>>> Maybe bpf could be used as an alternative, but we should look into a
>>>>>> reasonable alternative first, like the discussed mctrl()/.../ raised in
>>>>>> the process_madvise() series.
>>>>>>
>>>>>> No "bpf" mode in disguise, please :)
>>>>>
>>>>> This goal can be readily achieved using a BPF program. In any case, it
>>>>> is a feasible solution.
>>>>
>>>> No BPF-only solution.
>>>>
>>>>>
>>>>>>
>>>>>>> We could define
>>>>>>> the API as follows:
>>>>>>>
>>>>>>> struct bpf_thp_ops {
>>>>>>>            /**
>>>>>>>             * @task_thp_mode: Get the THP mode for a specific task
>>>>>>>             *
>>>>>>>             * Return:
>>>>>>>             * - TASK_THP_ALWAYS: "always" mode
>>>>>>>             * - TASK_THP_MADVISE: "madvise" mode
>>>>>>>             * - TASK_THP_NEVER: "never" mode
>>>>>>>             * Future modes can also be added.
>>>>>>>             */
>>>>>>>            int (*task_thp_mode)(struct task_struct *p);
>>>>>>> };
>>>>>>>
>>>>>>> For observability, we could add a "THP mode" field to
>>>>>>> /proc/[pid]/status. For example:
>>>>>>>
>>>>>>> $ grep "THP mode" /proc/123/status
>>>>>>> THP mode:      always
>>>>>>> $ grep "THP mode" /proc/456/status
>>>>>>> THP mode:      madvise
>>>>>>> $ grep "THP mode" /proc/789/status
>>>>>>> THP mode:      never
>>>>>>>
>>>>>>> The THP mode for each task would be determined by the attached BPF
>>>>>>> program based on the task's attributes. We would place the BPF hook in
>>>>>>> appropriate kernel functions. Note that this setting wouldn't be
>>>>>>> inherited during fork/exec - the BPF program would make the decision
>>>>>>> dynamically for each task.
>>>>>>
>>>>>> What would be the (default) mode when the bpf program is not active?
>>>>>>
>>>>>>> This approach also enables runtime adjustments to THP modes based on
>>>>>>> system-wide conditions, such as memory fragmentation or other
>>>>>>> performance overheads. The BPF program could adapt policies
>>>>>>> dynamically, optimizing THP behavior in response to changing
>>>>>>> workloads.
>>>>>>
>>>>>> I am not sure that is the proper way to handle these scenarios: I never
>>>>>> heard that people would be adjusting the system-wide policy dynamically
>>>>>> in that way either.
>>>>>>
>>>>>> Whatever we do, we have to make sure that what we add won't
>>>>>> over-complicate things in the future. Having tooling dynamically adjust
>>>>>> the THP policy of processes in such a coarse way sounds ... very wrong
>>>>>> long-term.
>>>>>
>>>>> This is just an example demonstrating the flexibility that BPF
>>>>> provides. Notably, all these policies can be implemented
>>>>> without modifying the kernel.
>>>>
>>>> See below on "policy".
>>>>
>>>>>
>>>>>>
>>>>>>> As Liam pointed out in another thread, naming is challenging here -
>>>>>>> "process" might not be the most accurate term for this context.
>>>>>>
>>>>>> No, it's not even a per-process thing. It is per MM, and an MM might be
>>>>>> used by multiple processes ...
>>>>>
>>>>> I consistently use 'thread' for the latter case.
>>>>
>>>> You can use CLONE_VM without CLONE_THREAD ...
>>>
>>> If I understand correctly, this can only occur for shared THP but not
>>> anonymous THP. For instance, if either process allocates an anonymous
>>> THP, it would trigger the creation of a new MM. Please correct me if
>>> I'm mistaken.
>>
>> What clone(CLONE_VM) will do is essentially create a new process that
>> shares the MM with the original process. Similar to a thread, just that
>> the new process will show up in /proc/ as ... a new process, not as a
>> thread under /proc/$pid/tasks of the original process.
>>
>> Both processes will operate on the shared MM struct as if they were
>> ordinary threads. No Copy-on-Write involved.
>>
>> One example use case I've been involved in is async teardown in QEMU [1].
>>
>> [1] https://kvm-forum.qemu.org/2022/ibm_async_destroy.pdf
> 
> I understand what you mean, but what I'm really confused about is how
> this relates to allocating anonymous THP.  If either one allocates
> anon THP, it will definitely create a new MM, right?

No. They work on the same address space - same MM. Either can allocate a 
new anon THP and the other one would be able to modify it. No fork/CoW.

I only bring it up because it's two "processes" sharing the same MM. And 
the THP mode in your proposal would actually be per-MM and not per process.

It's confusing ... :)
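
Completely untested sketch of what I mean (error handling omitted; whether
the touch actually gets a THP depends on the THP mode, of course): the
clone()'d "process" gets its own PID and its own entry in /proc/, yet both
keep operating on the very same MM:

#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

static char *shared;

static int child_fn(void *arg)
{
	/* May fault in an anon THP; the parent observes the very same
	 * memory afterwards -- same MM, no fork, no CoW. */
	shared[0] = 'X';
	return 0;
}

int main(void)
{
	static char stack[64 * 1024];
	pid_t pid;

	shared = mmap(NULL, 2 * 1024 * 1024, PROT_READ | PROT_WRITE,
		      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	/* CLONE_VM without CLONE_THREAD: a separate process, not a
	 * thread under /proc/$pid/tasks, sharing the MM. */
	pid = clone(child_fn, stack + sizeof(stack), CLONE_VM | SIGCHLD,
		    NULL);
	waitpid(pid, NULL, 0);

	printf("%c\n", shared[0]);	/* prints 'X' */
	return 0;
}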

> 
>>
>>>
>>>>
>>>>> Additionally, this
>>>>> can be implemented per-MM without kernel code modifications.
>>>>> With a well-designed API, users can even implement custom THP
>>>>> policies—all without altering kernel code.
>>>>
>>>> You can switch between modes, that's all you can do. I wouldn't really
>>>> call that "custom policy" as it is extremely limited.
>>>>
>>>> And that's exactly my point: it's basic switching between modes ... a
>>>> reasonable policy in the future will make placement decisions and not
>>>> just state "always/never/madvise".
>>>
>>> Could you please elaborate further on 'make placement decisions'? As
>>> previously mentioned, we (including the broader community) really need
>>> the user input to determine whether THP allocation is appropriate in a
>>> given case.
>>
>> The glorious future where we make smarter decisions about where to
>> actually place THPs even in the "always" mode.
>>
>> E.g., just because we enable "always" for a process does not mean that
>> we really want a THP everywhere; quite the opposite.
> 
> So 'always' simply means "the system doesn't guarantee THP allocation
> will succeed"?

I mean, with THPs, there are no guarantees, ever :(

> If that's the case, we should revisit RFC v1 [0],
> where we proposed rejecting THP allocations in certain scenarios for
> specific tasks.

Hooking into the actual page allocation during page faults (e.g., which
THP size to use) and into khugepaged collapse decisions is IMHO a much
better application of ebpf than setting a THP mode per process (or MM ...)
using ebpf.

So yes, you could drive the system in "always" mode and decide to not 
allocate THPs during page faults / khugepaged for specific processes.

IMHO that also does not contradict the VM_HUGEPAGE / VM_NOHUGEPAGE 
default setting proposal: VM_HUGEPAGE could feed into the ebpf program
as yet another parameter to make a decision.
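
Purely to illustrate the shape of it (nothing below exists -- the ops
struct, hook name and arguments are all made up for this sketch):

/* Hypothetical: a fault-time THP policy hook as a BPF struct_ops. */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

/* Made-up shape of the ops; no such struct exists in the kernel. */
struct thp_fault_ops {
	/* Return nonzero if a THP of the given order may be used. */
	int (*thp_order_allowed)(struct mm_struct *mm, int order,
				 bool vm_hugepage);
};

SEC("struct_ops/thp_order_allowed")
int BPF_PROG(thp_order_allowed, struct mm_struct *mm, int order,
	     bool vm_hugepage)
{
	/* VM_HUGEPAGE on the VMA feeding in as one more input ... */
	if (vm_hugepage)
		return 1;

	/* ... while everything else only gets sub-PMD orders, even
	 * with the system in "always" mode (PMD order 9 on x86-64). */
	return order < 9;
}

SEC(".struct_ops.link")
struct thp_fault_ops hypothetical_ops = {
	.thp_order_allowed = (void *)thp_order_allowed,
};

char LICENSE[] SEC("license") = "GPL";

khugepaged could consult the same hook when deciding whether to collapse,
and VM_NOHUGEPAGE could feed in just like VM_HUGEPAGE does here.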

-- 
Cheers,

David / dhildenb



