From: Yafang Shao <laoar.shao@gmail.com>
To: Zi Yan <ziy@nvidia.com>
Cc: akpm@linux-foundation.org, ast@kernel.org, daniel@iogearbox.net,
andrii@kernel.org, David Hildenbrand <david@redhat.com>,
Baolin Wang <baolin.wang@linux.alibaba.com>,
Lorenzo Stoakes <lorenzo.stoakes@oracle.com>,
"Liam R. Howlett" <Liam.Howlett@oracle.com>,
Nico Pache <npache@redhat.com>,
Ryan Roberts <ryan.roberts@arm.com>, Dev Jain <dev.jain@arm.com>,
bpf@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [RFC PATCH 0/4] mm, bpf: BPF based THP adjustment
Date: Wed, 30 Apr 2025 10:33:33 +0800
Message-ID: <CALOAHbBfSat7-qOjKseEJy=w5MVF7um3vYKPCb0VMbEgw-KAuw@mail.gmail.com>
In-Reply-To: <D9J7UWF1S5WH.285Y0GXSUD30W@nvidia.com>
On Tue, Apr 29, 2025 at 11:09 PM Zi Yan <ziy@nvidia.com> wrote:
>
> Hi Yafang,
>
> We recently added a new THP entry in MAINTAINERS file[1], do you mind ccing
> people there in your next version? (I added them here)
>
> [1] https://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm.git/tree/MAINTAINERS?h=mm-everything#n15589
Thanks for your reminder.
I will add the maintainers and reviewers in the next version.
>
> On Mon Apr 28, 2025 at 10:41 PM EDT, Yafang Shao wrote:
> > In our container environment, we aim to enable THP selectively—allowing
> > specific services to use it while restricting others. This approach is
> > driven by the following considerations:
> >
> > 1. Memory Fragmentation
> > THP can lead to increased memory fragmentation, so we want to limit its
> > use across services.
> > 2. Performance Impact
> > Some services see no benefit from THP, making its usage unnecessary.
> > 3. Performance Gains
> > Certain workloads, such as machine learning services, experience
> > significant performance improvements with THP, so we enable it for them
> > specifically.
> >
> > Since multiple services run on a single host in a containerized environment,
> > enabling THP globally is not ideal. Previously, we set THP to madvise,
> > allowing selected services to opt in via MADV_HUGEPAGE. However, this
> > approach had a limitation:
> >
> > - Some services inadvertently used madvise(MADV_HUGEPAGE) through
> > third-party libraries, bypassing our restrictions.
>
> Basically, you want more precise control of THP enablement and the
> ability of overriding madvise() from userspace.
>
> In terms of overriding madvise(), do you have any concrete example of
> these third-party libraries? madvise() users are supposed to know what
> they are doing, so I wonder why they are causing trouble in your
> environment.
To my knowledge, jemalloc [0] supports THP.
Applications using jemalloc typically rely on its default
configuration rather than explicitly enabling or disabling THP. If
the system is configured with THP=madvise, these applications may
automatically leverage THP where appropriate.

[0] https://github.com/jemalloc/jemalloc
>
> >
> > To address this issue, we initially hooked the __x64_sys_madvise() syscall,
> > which is error-injectable, to blacklist unwanted services. While this
> > worked, it was error-prone and ineffective for services needing always mode,
> > as modifying their code to use madvise was impractical.
> >
> > To achieve finer-grained control, we introduced an fmod_ret-based solution.
> > Now, we dynamically adjust THP settings per service by hooking
> > hugepage_global_{enabled,always}() via BPF. This allows us to set THP to
> > enable or disable on a per-service basis without global impact.
>
> hugepage_global_*() are whole system knobs. How did you use it to
> achieve per-service control? In terms of per-service, does it mean
> you need per-memcg group (I assume each service has its own memcg) THP
> configuration?
With this new BPF hook, we can manage THP behavior either per-service
or per-memcg.
In our use case, we've chosen memcg-based control for finer-grained
management. Below is a simplified example of our implementation:
struct {
	__uint(type, BPF_MAP_TYPE_HASH);
	__uint(max_entries, 4096); /* usually there won't be too many cgroups */
	__type(key, u64);
	__type(value, u32);
	__uint(map_flags, BPF_F_NO_PREALLOC);
} thp_whitelist SEC(".maps");

SEC("fmod_ret/mm_bpf_thp_vma_allowable")
int BPF_PROG(thp_vma_allowable, struct vm_area_struct *vma)
{
	struct css_set *cgroups;
	struct mm_struct *mm;
	struct cgroup *cgroup;
	struct task_struct *p;
	u64 cgrp_id;

	if (!vma)
		return 0;
	mm = vma->vm_mm;
	if (!mm)
		return 0;
	p = mm->owner;
	cgroups = p->cgroups;
	cgroup = cgroups->subsys[memory_cgrp_id]->cgroup;
	cgrp_id = cgroup->kn->id;

	/* Allow the tasks in the thp_whitelist to use THP. */
	if (bpf_map_lookup_elem(&thp_whitelist, &cgrp_id))
		return 1;
	return 0;
}
I chose not to include this in the selftests to avoid the complexity
of setting up cgroups for testing purposes. However, in patch #4 of
this series, I've included a simpler example demonstrating task-level
control.
For service-level control, we could potentially utilize BPF task local
storage as an alternative approach.
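To make that concrete, an untested sketch of a task-local-storage variant
(it assumes the mm_bpf_thp_vma_allowable hook proposed in this series, plus
the usual vmlinux.h/libbpf scaffolding; thp_task_optin is a hypothetical
map name):

```c
/* Hypothetical sketch: per-task THP opt-in via BPF task local storage.
 * Not standalone; requires the hook from this series and libbpf. */
struct {
	__uint(type, BPF_MAP_TYPE_TASK_STORAGE);
	__uint(map_flags, BPF_F_NO_PREALLOC);
	__type(key, int);
	__type(value, u32);	/* nonzero => THP allowed for this task */
} thp_task_optin SEC(".maps");

SEC("fmod_ret/mm_bpf_thp_vma_allowable")
int BPF_PROG(thp_vma_allowable, struct vm_area_struct *vma)
{
	struct task_struct *p;
	u32 *allowed;

	if (!vma || !vma->vm_mm)
		return 0;
	p = vma->vm_mm->owner;
	if (!p)
		return 0;

	allowed = bpf_task_storage_get(&thp_task_optin, p, 0, 0);
	return allowed && *allowed;
}
```

Userspace would then populate the storage for the tasks of the services
that should get THP, instead of keying on cgroup IDs.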
--
Regards
Yafang