From: Yafang Shao <laoar.shao@gmail.com>
To: Zi Yan <ziy@nvidia.com>
Cc: akpm@linux-foundation.org, david@redhat.com,
baolin.wang@linux.alibaba.com, lorenzo.stoakes@oracle.com,
Liam.Howlett@oracle.com, npache@redhat.com,
ryan.roberts@arm.com, dev.jain@arm.com, hannes@cmpxchg.org,
usamaarif642@gmail.com, gutierrez.asier@huawei-partners.com,
willy@infradead.org, ast@kernel.org, daniel@iogearbox.net,
andrii@kernel.org, ameryhung@gmail.com, bpf@vger.kernel.org,
linux-mm@kvack.org
Subject: Re: [RFC PATCH v4 0/4] mm, bpf: BPF based THP order selection
Date: Wed, 30 Jul 2025 10:31:37 +0800
Message-ID: <CALOAHbDRBs8bdXB_LJjx-7gALOCLvmMxFD+c7MbHAiQ3htXawA@mail.gmail.com>
In-Reply-To: <08D7155B-84F0-4575-B192-96901CFE690A@nvidia.com>

On Tue, Jul 29, 2025 at 11:08 PM Zi Yan <ziy@nvidia.com> wrote:
>
> On 29 Jul 2025, at 5:18, Yafang Shao wrote:
>
> > Background
> > ----------
> >
> > Our production servers consistently configure THP to "never" due to
> > historical incidents caused by its behavior. Key issues include:
> > - Increased Memory Consumption
> > THP significantly raises overall memory usage, reducing available memory
> > for workloads.
> >
> > - Latency Spikes
> > Random latency spikes occur due to frequent memory compaction triggered
> > by THP.
> >
> > - Lack of Fine-Grained Control
> > THP tuning is globally configured, making it unsuitable for containerized
> > environments. When multiple workloads share a host, enabling THP without
> > per-workload control leads to unpredictable behavior.
> >
> > Because of these issues, administrators avoid switching to the madvise
> > or always modes unless per-workload THP control is available.
> >
> > To address this, we propose a BPF-based THP policy for flexible adjustment.
> > Additionally, as David mentioned [0], this mechanism can also serve as a
>
> The link to [0] is missing. :)
I forgot to add it:
https://lwn.net/ml/all/9bc57721-5287-416c-aa30-46932d605f63@redhat.com/
>
> > policy prototyping tool (test policies via BPF before upstreaming them).
> >
> > Proposed Solution
> > -----------------
> >
> > As suggested by David [0], we introduce a new BPF interface:
> >
> > /**
> > * @get_suggested_order: Get the suggested highest THP order for allocation
> > * @mm: mm_struct associated with the THP allocation
> > * @tva_flags: TVA flags for current context
> > * %TVA_IN_PF: Set when in page fault context
> > * Other flags: Reserved for future use
> > * @order: The highest order being considered for this THP allocation.
> > * %PUD_ORDER for PUD-mapped allocations
>
> There is no PUD THP yet and the highest THP order is PMD_ORDER. It is better
> to remove the line above to avoid confusion.
Thanks for catching that. I’ll remove it.
>
> > * %PMD_ORDER for PMD-mapped allocations
> > * %PMD_ORDER - 1 for mTHP allocations
> > *
> > * Return: Suggested highest THP order to use for allocation. The returned
> > * order will never exceed the input @order value.
> > */
> > int (*get_suggested_order)(struct mm_struct *mm, unsigned long tva_flags, int order);
> >
> > This interface:
> > - Supports both use cases (per-workload tuning + policy prototyping).
> > - Can be extended with BPF helpers (e.g., for memory pressure awareness).
>
> IIRC, your initial RFC works at the VMA level, but this patchset targets the mm level.
> Is mm sufficient for your use case?
Yes, mm is sufficient for our use cases.
We've already deployed a variant of this patchset in our production
environment, and it has been performing well under our workloads.
> Are you planning to extend the
> BPF interface to the VMA level in the future? Just curious.
Our use cases don’t currently require VMA-level granularity.
We can add it later if a clear need arises.
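
For anyone who wants to experiment, below is a minimal, untested sketch of
what a policy implementing this interface could look like on the BPF side.
Only the get_suggested_order() signature and the TVA_IN_PF flag come from
this series; the struct_ops type name (bpf_thp_ops), the SEC() names, and
the bit value of TVA_IN_PF are assumptions for illustration only:

	/* SPDX-License-Identifier: GPL-2.0 */
	#include <vmlinux.h>
	#include <bpf/bpf_helpers.h>
	#include <bpf/bpf_tracing.h>

	/* Illustrative only: the real value comes from the kernel's TVA flags. */
	#define TVA_IN_PF	(1UL << 0)

	/* Allow THP up to the order being considered when called from
	 * page-fault context, and fall back to order-0 (no THP) elsewhere.
	 */
	SEC("struct_ops/get_suggested_order")
	int BPF_PROG(get_suggested_order, struct mm_struct *mm,
		     unsigned long tva_flags, int order)
	{
		if (tva_flags & TVA_IN_PF)
			return order;	/* never exceeds the input @order */
		return 0;
	}

	SEC(".struct_ops.link")
	struct bpf_thp_ops thp_ops = {	/* type name is an assumption */
		.get_suggested_order = (void *)get_suggested_order,
	};

	char LICENSE[] SEC("license") = "GPL";

It would be attached from userspace as a regular struct_ops link via
libbpf, like any other struct_ops program.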
--
Regards
Yafang