From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: Breno Leitao <leitao@debian.org>,
Andrew Morton <akpm@linux-foundation.org>,
David Hildenbrand <david@kernel.org>,
Lorenzo Stoakes <lorenzo.stoakes@oracle.com>,
Zi Yan <ziy@nvidia.com>,
"Liam R. Howlett" <Liam.Howlett@oracle.com>,
Nico Pache <npache@redhat.com>,
Ryan Roberts <ryan.roberts@arm.com>, Dev Jain <dev.jain@arm.com>,
Barry Song <baohua@kernel.org>, Lance Yang <lance.yang@linux.dev>,
Vlastimil Babka <vbabka@kernel.org>,
Suren Baghdasaryan <surenb@google.com>,
Michal Hocko <mhocko@suse.com>,
Brendan Jackman <jackmanb@google.com>,
Johannes Weiner <hannes@cmpxchg.org>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
usamaarif642@gmail.com, kas@kernel.org, kernel-team@meta.com
Subject: Re: [PATCH v2 2/3] mm: huge_memory: refactor anon_enabled_store() with change_anon_orders()
Date: Fri, 6 Mar 2026 14:07:50 +0800
Message-ID: <8b238664-4375-413c-a8cc-105d98430362@linux.alibaba.com>
In-Reply-To: <20260305-thp_logs-v2-2-96b3ad795894@debian.org>
On 3/5/26 10:04 PM, Breno Leitao wrote:
> Consolidate the repeated spin_lock/set_bit/clear_bit pattern in
> anon_enabled_store() into a new change_anon_orders() helper that
> loops over an orders[] array, setting the bit for the selected mode
> and clearing the others.
>
> Introduce enum enabled_mode and enabled_mode_strings[] to be shared
> with enabled_store() in a subsequent patch.
>
> Use sysfs_match_string() with the enabled_mode_strings[] table to
> replace the if/else chain of sysfs_streq() calls.
>
> The helper uses test_and_set_bit()/test_and_clear_bit() to track
> whether the state actually changed, so start_stop_khugepaged() is
> only called when needed. When the mode is unchanged,
> set_recommended_min_free_kbytes() is called directly to preserve
> the watermark recalculation behavior of the original code.
>
> Signed-off-by: Breno Leitao <leitao@debian.org>
> ---
> mm/huge_memory.c | 84 +++++++++++++++++++++++++++++++++++---------------------
> 1 file changed, 52 insertions(+), 32 deletions(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 8e2746ea74adf..19619213f54d1 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -316,6 +316,20 @@ static ssize_t enabled_show(struct kobject *kobj,
> return sysfs_emit(buf, "%s\n", output);
> }
>
> +enum enabled_mode {
> + ENABLED_ALWAYS,
> + ENABLED_MADVISE,
> + ENABLED_INHERIT,
> + ENABLED_NEVER,
> +};
> +
> +static const char * const enabled_mode_strings[] = {
> + [ENABLED_ALWAYS] = "always",
> + [ENABLED_MADVISE] = "madvise",
> + [ENABLED_INHERIT] = "inherit",
> + [ENABLED_NEVER] = "never",
> +};
> +
> static ssize_t enabled_store(struct kobject *kobj,
> struct kobj_attribute *attr,
> const char *buf, size_t count)
> @@ -515,48 +529,54 @@ static ssize_t anon_enabled_show(struct kobject *kobj,
> return sysfs_emit(buf, "%s\n", output);
> }
>
> +static bool change_anon_orders(int order, enum enabled_mode mode)
> +{
> + static unsigned long *orders[] = {
> + &huge_anon_orders_always,
> + &huge_anon_orders_madvise,
> + &huge_anon_orders_inherit,
> + };
> + bool changed = false;
> + int i;
> +
> + spin_lock(&huge_anon_orders_lock);
> + for (i = 0; i < ARRAY_SIZE(orders); i++) {
> + if (i == mode)
> + changed |= !test_and_set_bit(order, orders[i]);
> + else
> + changed |= test_and_clear_bit(order, orders[i]);
> + }
> + spin_unlock(&huge_anon_orders_lock);
> +
> + return changed;
> +}
> +
> static ssize_t anon_enabled_store(struct kobject *kobj,
> struct kobj_attribute *attr,
> const char *buf, size_t count)
> {
> int order = to_thpsize(kobj)->order;
> - ssize_t ret = count;
> + int mode;
>
> - if (sysfs_streq(buf, "always")) {
> - spin_lock(&huge_anon_orders_lock);
> - clear_bit(order, &huge_anon_orders_inherit);
> - clear_bit(order, &huge_anon_orders_madvise);
> - set_bit(order, &huge_anon_orders_always);
> - spin_unlock(&huge_anon_orders_lock);
> - } else if (sysfs_streq(buf, "inherit")) {
> - spin_lock(&huge_anon_orders_lock);
> - clear_bit(order, &huge_anon_orders_always);
> - clear_bit(order, &huge_anon_orders_madvise);
> - set_bit(order, &huge_anon_orders_inherit);
> - spin_unlock(&huge_anon_orders_lock);
> - } else if (sysfs_streq(buf, "madvise")) {
> - spin_lock(&huge_anon_orders_lock);
> - clear_bit(order, &huge_anon_orders_always);
> - clear_bit(order, &huge_anon_orders_inherit);
> - set_bit(order, &huge_anon_orders_madvise);
> - spin_unlock(&huge_anon_orders_lock);
> - } else if (sysfs_streq(buf, "never")) {
> - spin_lock(&huge_anon_orders_lock);
> - clear_bit(order, &huge_anon_orders_always);
> - clear_bit(order, &huge_anon_orders_inherit);
> - clear_bit(order, &huge_anon_orders_madvise);
> - spin_unlock(&huge_anon_orders_lock);
> - } else
> - ret = -EINVAL;
> + mode = sysfs_match_string(enabled_mode_strings, buf);
> + if (mode < 0)
> + return -EINVAL;
>
> - if (ret > 0) {
> - int err;
> + if (change_anon_orders(order, mode)) {
> + int err = start_stop_khugepaged();
Thanks for the cleanup; the code looks better now.
>
> - err = start_stop_khugepaged();
> if (err)
> - ret = err;
> + return err;
> + } else {
> + /*
> + * Recalculate watermarks even when the mode didn't
> + * change, as the previous code always called
> + * start_stop_khugepaged() which does this internally.
> + */
> + set_recommended_min_free_kbytes();
However, this won’t fix your issue. You will still get lots of warning
messages even if no hugepage options are changed.