From: Yafang Shao <laoar.shao@gmail.com>
To: Gutierrez Asier <gutierrez.asier@huawei-partners.com>
Cc: Zi Yan <ziy@nvidia.com>, Johannes Weiner <hannes@cmpxchg.org>,
"Liam R. Howlett" <Liam.Howlett@oracle.com>,
akpm@linux-foundation.org, ast@kernel.org, daniel@iogearbox.net,
andrii@kernel.org, David Hildenbrand <david@redhat.com>,
Baolin Wang <baolin.wang@linux.alibaba.com>,
Lorenzo Stoakes <lorenzo.stoakes@oracle.com>,
Nico Pache <npache@redhat.com>,
Ryan Roberts <ryan.roberts@arm.com>, Dev Jain <dev.jain@arm.com>,
bpf@vger.kernel.org, linux-mm@kvack.org,
Michal Hocko <mhocko@suse.com>
Subject: Re: [RFC PATCH 0/4] mm, bpf: BPF based THP adjustment
Date: Mon, 5 May 2025 17:38:36 +0800
Message-ID: <CALOAHbCmDa90+6KmikP-6L93FG+ri5yYyMDuuMPW7K4WhKGn0A@mail.gmail.com>
In-Reply-To: <88dd89b9-b2a2-47f7-bc53-1b85004e71da@huawei-partners.com>

On Mon, May 5, 2025 at 5:11 PM Gutierrez Asier
<gutierrez.asier@huawei-partners.com> wrote:
>
>
>
> On 5/2/2025 8:48 AM, Yafang Shao wrote:
> > On Fri, May 2, 2025 at 3:36 AM Gutierrez Asier
> > <gutierrez.asier@huawei-partners.com> wrote:
> >>
> >>
> >> On 4/30/2025 8:53 PM, Zi Yan wrote:
> >>> On 30 Apr 2025, at 13:45, Johannes Weiner wrote:
> >>>
> >>>> On Thu, May 01, 2025 at 12:06:31AM +0800, Yafang Shao wrote:
> >>>>>>>> If it isn't, can you state why?
> >>>>>>>>
> >>>>>>>> The main difference is that you are saying it's in a container that you
> >>>>>>>> don't control. Your plan is to violate the control the internal
> >>>>>>>> applications have over THP because you know better. I'm not sure how
> >>>>>>>> people might feel about you messing with workloads,
> >>>>>>>
> >>>>>>> It’s not a mess. They have the option to deploy their services on
> >>>>>>> dedicated servers, but they would need to pay more for that choice.
> >>>>>>> This is a two-way decision.
> >>>>>>
> >>>>>> This implies you want a container-level way of controlling the setting
> >>>>>> and not a system service-level?
> >>>>>
> >>>>> Right. We want to control the THP per container.
> >>>>
> >>>> This does strike me as a reasonable usecase.
> >>>>
> >>>> I think there is consensus that in the long-term we want this stuff to
> >>>> just work and truly be transparent to userspace.
> >>>>
> >>>> In the short-to-medium term, however, there are still quite a few
> >>>> caveats. thp=always can significantly increase the memory footprint of
> >>>> sparse virtual regions. Huge allocations are not as cheap and reliable
> >>>> as we would like them to be, which for real production systems means
> >>>> having to make workload-specific choices and tradeoffs.
> >>>>
> >>>> There is ongoing work in these areas, but we do have a bit of a
> >>>> chicken-and-egg problem: on the one hand, huge page adoption is slow
> >>>> due to limitations in how they can be deployed. For example, we can't
> >>>> do thp=always on a DC node that runs arbitrary combinations of jobs
> >>>> from a wide array of services. Some might benefit, some might hurt.
> >>>>
> >>>> Yet, it's much easier to improve the kernel based on exactly such
> >>>> production experience and data from real-world usecases. We can't
> >>>> improve the THP shrinker if we can't run THP.
> >>>>
> >>>> So I don't see it as overriding whoever wrote the software running
> >>>> inside the container. They don't know, and they shouldn't have to care
> >>>> about page sizes. It's about letting admins and kernel teams get
> >>>> started on using and experimenting with this stuff, given the very
> >>>> real constraints right now, so we can get the feedback necessary to
> >>>> improve the situation.
> >>>
> >>> Since you think it is reasonable to control THP at the container level,
> >>> namely per-cgroup, should we reconsider cgroup-based THP control[1]?
> >>> (Asier cc'd)
> >>>
> >>> In this patchset, Yafang uses BPF to adjust THP global configs based
> >>> on the VMA, which does not look like a good approach to me. WDYT?
> >>>
> >>>
> >>> [1] https://lore.kernel.org/linux-mm/20241030083311.965933-1-gutierrez.asier@huawei-partners.com/
> >>>
> >>> --
> >>> Best Regards,
> >>> Yan, Zi
> >>
> >> Hi,
> >>
> >> I believe cgroups are a better approach for containers, since they can
> >> be easily integrated with the user-space stack, such as containerd and
> >> Kubernetes, which use cgroups to control system resources.
> >
> > The integration of BPF with containerd and Kubernetes is emerging as a
> > clear trend.
> >
>
> No, eBPF is not used for resource management; it is mainly used by the
> networking stack (CNI), monitoring, and security.

Networking is indeed the most well-known use case of BPF in Kubernetes,
thanks to Cilium, but it is not the only one.

> All the resource
> management by Kubernetes is done using cgroups.

The landscape has shifted. As Johannes (the memcg maintainer)
noted[0], "Cgroups are for nested trees dividing up resources. They're
not a good fit for arbitrary, non-hierarchical policy settings."

[0]. https://lore.kernel.org/linux-mm/20250430175954.GD2020@cmpxchg.org/

> You are very unlikely
> to convince the Kubernetes community to manage memory resources using
> eBPF.

Kubernetes already natively supports this capability. As documented in
the Container Lifecycle Hooks guide[1], you can load BPF programs as
plugins from these hooks. This is exactly the approach we've
successfully implemented in our production environments; a minimal
sketch of what such a hook could run is below.

[1]. https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/
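
This is only an illustrative sketch, assuming a small Go loader built
with github.com/cilium/ebpf that a postStart exec hook runs when the
container starts. The object file path, program name, and pin path are
made up for the example, and how the program is ultimately attached to
the THP hook depends on the interface this patchset settles on.

  package main

  import (
          "log"

          "github.com/cilium/ebpf"
  )

  func main() {
          // Load the compiled BPF object shipped in the container image.
          // "/opt/bpf/thp_policy.o" is an illustrative path.
          coll, err := ebpf.LoadCollection("/opt/bpf/thp_policy.o")
          if err != nil {
                  log.Fatalf("loading BPF collection: %v", err)
          }
          defer coll.Close()

          // Pin the program to bpffs so it outlives this loader process.
          // "thp_adjust" is an illustrative program name.
          prog, ok := coll.Programs["thp_adjust"]
          if !ok {
                  log.Fatal("program thp_adjust not found in object")
          }
          if err := prog.Pin("/sys/fs/bpf/thp_adjust"); err != nil {
                  log.Fatalf("pinning program: %v", err)
          }
  }

The postStart hook in the Pod spec then only needs to exec this loader
(or an equivalent bpftool invocation) in a container that has bpffs
mounted and the required capabilities.
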
--
Regards
Yafang