From: Kyle Meyer <kyle.meyer@hpe.com>
To: David Hildenbrand <david@redhat.com>
Cc: "Luck, Tony" <tony.luck@intel.com>,
akpm@linux-foundation.org, corbet@lwn.net, linmiaohe@huawei.com,
shuah@kernel.org, Liam.Howlett@oracle.com, bp@alien8.de,
hannes@cmpxchg.org, jack@suse.cz, jane.chu@oracle.com,
jiaqiyan@google.com, joel.granados@kernel.org,
laoar.shao@gmail.com, lorenzo.stoakes@oracle.com,
mclapinski@google.com, mhocko@suse.com, nao.horiguchi@gmail.com,
osalvador@suse.de, rafael.j.wysocki@intel.com, rppt@kernel.org,
russ.anderson@hpe.com, shawn.fan@intel.com, surenb@google.com,
vbabka@suse.cz, linux-acpi@vger.kernel.org,
linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
linux-kselftest@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH] mm/memory-failure: Disable soft offline for HugeTLB pages by default
Date: Fri, 12 Sep 2025 10:17:39 -0500 [thread overview]
Message-ID: <aMQ5kyWElcW4Z8QE@hpe.com> (raw)
In-Reply-To: <a0e586dc-dce6-41a2-9607-f2f64b752df1@redhat.com>
On Fri, Sep 12, 2025 at 09:53:02AM +0200, David Hildenbrand wrote:
> On 11.09.25 19:56, Luck, Tony wrote:
> > On Thu, Sep 11, 2025 at 10:46:10AM +0200, David Hildenbrand wrote:
> > > On 10.09.25 18:15, Kyle Meyer wrote:
> > > > Soft offlining a HugeTLB page reduces the available HugeTLB page pool.
> > > > Since HugeTLB pages are preallocated, reducing the available HugeTLB
> > > > page pool can cause allocation failures.
> > > >
> > > > /proc/sys/vm/enable_soft_offline provides a sysctl interface to
> > > > disable/enable soft offline:
> > > >
> > > > 0 - Soft offline is disabled.
> > > > 1 - Soft offline is enabled.
> > > >
> > > > The current sysctl interface does not distinguish between HugeTLB pages
> > > > and other page types.
> > > >
> > > > Disable soft offline for HugeTLB pages by default (1) and extend the
> > > > sysctl interface to preserve existing behavior (2):
> > > >
> > > > 0 - Soft offline is disabled.
> > > > 1 - Soft offline is enabled (excluding HugeTLB pages).
> > > > 2 - Soft offline is enabled (including HugeTLB pages).
> > > >
> > > > Update documentation for the sysctl interface, reference the sysctl
> > > > interface in the sysfs ABI documentation, and update HugeTLB soft
> > > > offline selftests.
> > >
> > > I'm sure you spotted that the documentation for
> > > "/sys/devices/system/memory/soft_offline_page" resides under "testing".
> >
> > But that is only one of several places in the kernel that
> > feed into the page offline code.
>
> Right, I can see one more call to soft_offline_page() from
> arch/parisc/kernel/pdt.c.
>
> And there is memory_failure_work_func() that I missed.
>
> So agreed that this goes beyond testing.
>
> It caught my attention because you ended up modifying documentation residing
> in Documentation/ABI/testing/sysfs-memory-page-offline.
>
> Reading 56374430c5dfc that Kyle pointed out makes it clearer.
>
> So the patch motivation/idea makes sense to me.
>
>
> I'll note two things:
>
> (1) The interface design is not really extensible. Imagine if we want to
> exclude yet another page type.
>
> Can we maybe add a second interface that defines a filter for types?
>
> Alternatively, you could use all the remaining flags as such a filter.
>
> 0 - Soft offline is completely disabled.
> 1 - Soft offline is enabled except for manually disabled types.
>
> Filter
>
> 2 - disable hugetlb.
>
> So value 3 would give you "enable all except hugetlb" etc.
>
> We could add in the future
>
> 4 - disable guest_memfd (just some random example)
>
>
>
> (2) Changing the semantics of the value "1".
>
> IIUC, you are changing the semantics of value "1". It used to mean
> "SOFT_OFFLINE_ENABLED" now it is "SOFT_OFFLINE_ENABLED_SKIP_HUGETLB", which
> is a change in behavior.
>
> If that is the case, I don't think that's okay.
>
>
> (3) I am not sure about changing the default. That should be an admin/
> distro decision.
Thank you, that sounds good to me. I'll put something together.
Thanks,
Kyle Meyer
Thread overview: 10+ messages
2025-09-10 16:15 Kyle Meyer
2025-09-10 16:44 ` Jiaqi Yan
2025-09-10 17:50 ` Kyle Meyer
2025-09-11 21:26 ` Jiaqi Yan
2025-09-10 18:05 ` jane.chu
2025-09-11 8:46 ` David Hildenbrand
2025-09-11 17:56 ` Luck, Tony
2025-09-11 20:56 ` Kyle Meyer
2025-09-12 7:53 ` David Hildenbrand
2025-09-12 15:17 ` Kyle Meyer [this message]