* [PATCH 1/2] mm: thp: avoid calling start_stop_khugepaged() in anon_enabled_store()
2026-03-04 10:22 [PATCH 0/2] mm: thp: reduce unnecessary start_stop_khugepaged() calls Breno Leitao
@ 2026-03-04 10:22 ` Breno Leitao
2026-03-04 10:22 ` [PATCH 2/2] mm: thp: avoid calling start_stop_khugepaged() in enabled_store() Breno Leitao
2026-03-04 11:18 ` [PATCH 0/2] mm: thp: reduce unnecessary start_stop_khugepaged() calls Kiryl Shutsemau
2 siblings, 0 replies; 5+ messages in thread
From: Breno Leitao @ 2026-03-04 10:22 UTC (permalink / raw)
To: Andrew Morton, David Hildenbrand, Lorenzo Stoakes, Zi Yan,
Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain,
Barry Song, Lance Yang, Vlastimil Babka, Suren Baghdasaryan,
Michal Hocko, Brendan Jackman, Johannes Weiner
Cc: linux-mm, linux-kernel, usamaarif642, kas, kernel-team, Breno Leitao
Writing "never" (or any other value) multiple times to
/sys/kernel/mm/transparent_hugepage/hugepages-*/enabled calls
start_stop_khugepaged() each time, even when nothing actually changed.
This causes set_recommended_min_free_kbytes() to run unconditionally,
which is unnecessary and floods the printk buffer with min_free_kbytes
messages ("raising min_free_kbytes", or "min_free_kbytes is not updated"
when the admin has pinned the value). Example:
# for i in $(seq 100); do
# echo never > /sys/kernel/mm/transparent_hugepage/enabled
# done
# dmesg | grep "min_free_kbytes is not updated" | wc -l
100
Use test_and_set_bit()/test_and_clear_bit() instead of the plain
variants to detect whether any bit actually flipped, and skip the
start_stop_khugepaged() call entirely when the configuration is
unchanged.
With this patch, redoing the same operation becomes a no-op.
Signed-off-by: Breno Leitao <leitao@debian.org>
---
mm/huge_memory.c | 27 ++++++++++++++-------------
1 file changed, 14 insertions(+), 13 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 8e2746ea74adf..9abfb115e9329 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -520,36 +520,37 @@ static ssize_t anon_enabled_store(struct kobject *kobj,
const char *buf, size_t count)
{
int order = to_thpsize(kobj)->order;
+ bool changed = false;
ssize_t ret = count;
if (sysfs_streq(buf, "always")) {
spin_lock(&huge_anon_orders_lock);
- clear_bit(order, &huge_anon_orders_inherit);
- clear_bit(order, &huge_anon_orders_madvise);
- set_bit(order, &huge_anon_orders_always);
+ changed = test_and_clear_bit(order, &huge_anon_orders_inherit);
+ changed |= test_and_clear_bit(order, &huge_anon_orders_madvise);
+ changed |= !test_and_set_bit(order, &huge_anon_orders_always);
spin_unlock(&huge_anon_orders_lock);
} else if (sysfs_streq(buf, "inherit")) {
spin_lock(&huge_anon_orders_lock);
- clear_bit(order, &huge_anon_orders_always);
- clear_bit(order, &huge_anon_orders_madvise);
- set_bit(order, &huge_anon_orders_inherit);
+ changed = test_and_clear_bit(order, &huge_anon_orders_always);
+ changed |= test_and_clear_bit(order, &huge_anon_orders_madvise);
+ changed |= !test_and_set_bit(order, &huge_anon_orders_inherit);
spin_unlock(&huge_anon_orders_lock);
} else if (sysfs_streq(buf, "madvise")) {
spin_lock(&huge_anon_orders_lock);
- clear_bit(order, &huge_anon_orders_always);
- clear_bit(order, &huge_anon_orders_inherit);
- set_bit(order, &huge_anon_orders_madvise);
+ changed = test_and_clear_bit(order, &huge_anon_orders_always);
+ changed |= test_and_clear_bit(order, &huge_anon_orders_inherit);
+ changed |= !test_and_set_bit(order, &huge_anon_orders_madvise);
spin_unlock(&huge_anon_orders_lock);
} else if (sysfs_streq(buf, "never")) {
spin_lock(&huge_anon_orders_lock);
- clear_bit(order, &huge_anon_orders_always);
- clear_bit(order, &huge_anon_orders_inherit);
- clear_bit(order, &huge_anon_orders_madvise);
+ changed = test_and_clear_bit(order, &huge_anon_orders_always);
+ changed |= test_and_clear_bit(order, &huge_anon_orders_inherit);
+ changed |= test_and_clear_bit(order, &huge_anon_orders_madvise);
spin_unlock(&huge_anon_orders_lock);
} else
ret = -EINVAL;
- if (ret > 0) {
+ if (ret > 0 && changed) {
int err;
err = start_stop_khugepaged();
--
2.47.3
* [PATCH 2/2] mm: thp: avoid calling start_stop_khugepaged() in enabled_store()
From: Breno Leitao @ 2026-03-04 10:22 UTC (permalink / raw)
To: Andrew Morton, David Hildenbrand, Lorenzo Stoakes, Zi Yan,
Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain,
Barry Song, Lance Yang, Vlastimil Babka, Suren Baghdasaryan,
Michal Hocko, Brendan Jackman, Johannes Weiner
Cc: linux-mm, linux-kernel, usamaarif642, kas, kernel-team, Breno Leitao
Avoid calling start_stop_khugepaged() at the top-level enabled_store()
for /sys/kernel/mm/transparent_hugepage/enabled. Use test_and_set_bit()
and test_and_clear_bit() to detect whether the configuration
actually changed before calling start_stop_khugepaged().
This avoids calling start_stop_khugepaged() unnecessarily.
Signed-off-by: Breno Leitao <leitao@debian.org>
---
mm/huge_memory.c | 21 ++++++++++++++-------
1 file changed, 14 insertions(+), 7 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 9abfb115e9329..b6ed44b6e8c02 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -320,21 +320,28 @@ static ssize_t enabled_store(struct kobject *kobj,
struct kobj_attribute *attr,
const char *buf, size_t count)
{
+ bool changed = false;
ssize_t ret = count;
if (sysfs_streq(buf, "always")) {
- clear_bit(TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG, &transparent_hugepage_flags);
- set_bit(TRANSPARENT_HUGEPAGE_FLAG, &transparent_hugepage_flags);
+ changed = test_and_clear_bit(TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG,
+ &transparent_hugepage_flags);
+ changed |= !test_and_set_bit(TRANSPARENT_HUGEPAGE_FLAG,
+ &transparent_hugepage_flags);
} else if (sysfs_streq(buf, "madvise")) {
- clear_bit(TRANSPARENT_HUGEPAGE_FLAG, &transparent_hugepage_flags);
- set_bit(TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG, &transparent_hugepage_flags);
+ changed = test_and_clear_bit(TRANSPARENT_HUGEPAGE_FLAG,
+ &transparent_hugepage_flags);
+ changed |= !test_and_set_bit(TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG,
+ &transparent_hugepage_flags);
} else if (sysfs_streq(buf, "never")) {
- clear_bit(TRANSPARENT_HUGEPAGE_FLAG, &transparent_hugepage_flags);
- clear_bit(TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG, &transparent_hugepage_flags);
+ changed = test_and_clear_bit(TRANSPARENT_HUGEPAGE_FLAG,
+ &transparent_hugepage_flags);
+ changed |= test_and_clear_bit(TRANSPARENT_HUGEPAGE_REQ_MADV_FLAG,
+ &transparent_hugepage_flags);
} else
ret = -EINVAL;
- if (ret > 0) {
+ if (ret > 0 && changed) {
int err = start_stop_khugepaged();
if (err)
ret = err;
--
2.47.3
* Re: [PATCH 0/2] mm: thp: reduce unnecessary start_stop_khugepaged() calls
From: Kiryl Shutsemau @ 2026-03-04 11:18 UTC (permalink / raw)
To: Breno Leitao
Cc: Andrew Morton, David Hildenbrand, Lorenzo Stoakes, Zi Yan,
Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain,
Barry Song, Lance Yang, Vlastimil Babka, Suren Baghdasaryan,
Michal Hocko, Brendan Jackman, Johannes Weiner, linux-mm,
linux-kernel, usamaarif642, kernel-team
On Wed, Mar 04, 2026 at 02:22:32AM -0800, Breno Leitao wrote:
> Breno Leitao (2):
> mm: thp: avoid calling start_stop_khugepaged() in anon_enabled_store()
> mm: thp: avoid calling start_stop_khugepaged() in enabled_store()
I think the same can be achieved more cleanly from within start_stop_khugepaged().
A completely untested patch is below.
One noticeable difference is that with this patch we don't kick
khugepaged_wait if khugepaged_thread is already there. But I don't think
that should make a difference.
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 1dd3cfca610d..80f818d3a094 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -2683,18 +2683,18 @@ static void set_recommended_min_free_kbytes(void)
int start_stop_khugepaged(void)
{
- int err = 0;
+ guard(mutex)(&khugepaged_mutex);
+
+ /* Check if anything has to be done */
+ if (hugepage_pmd_enabled() == !!khugepaged_thread)
+ return 0;
- mutex_lock(&khugepaged_mutex);
if (hugepage_pmd_enabled()) {
- if (!khugepaged_thread)
- khugepaged_thread = kthread_run(khugepaged, NULL,
- "khugepaged");
+ khugepaged_thread = kthread_run(khugepaged, NULL, "khugepaged");
if (IS_ERR(khugepaged_thread)) {
pr_err("khugepaged: kthread_run(khugepaged) failed\n");
- err = PTR_ERR(khugepaged_thread);
khugepaged_thread = NULL;
- goto fail;
+ return PTR_ERR(khugepaged_thread);
}
if (!list_empty(&khugepaged_scan.mm_head))
@@ -2703,10 +2703,9 @@ int start_stop_khugepaged(void)
kthread_stop(khugepaged_thread);
khugepaged_thread = NULL;
}
+
set_recommended_min_free_kbytes();
-fail:
- mutex_unlock(&khugepaged_mutex);
- return err;
+ return 0;
}
void khugepaged_min_free_kbytes_update(void)
--
Kiryl Shutsemau / Kirill A. Shutemov
* Re: [PATCH 0/2] mm: thp: reduce unnecessary start_stop_khugepaged() calls
From: Breno Leitao @ 2026-03-04 11:53 UTC (permalink / raw)
To: Kiryl Shutsemau
Cc: Andrew Morton, David Hildenbrand, Lorenzo Stoakes, Zi Yan,
Baolin Wang, Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain,
Barry Song, Lance Yang, Vlastimil Babka, Suren Baghdasaryan,
Michal Hocko, Brendan Jackman, Johannes Weiner, linux-mm,
linux-kernel, usamaarif642, kernel-team
Hello Kiryl,
On Wed, Mar 04, 2026 at 11:18:37AM +0000, Kiryl Shutsemau wrote:
> On Wed, Mar 04, 2026 at 02:22:32AM -0800, Breno Leitao wrote:
> > Breno Leitao (2):
> > mm: thp: avoid calling start_stop_khugepaged() in anon_enabled_store()
> > mm: thp: avoid calling start_stop_khugepaged() in enabled_store()
>
> I think the same can be achieved cleaner from within start_stop_khugepaged().
Thanks for the review.
I considered this approach as well. From my limited experience, the preference
is to return early in the _store() callbacks when the operation is a no-op,
rather than pushing that logic deeper into nested code.
Handling it at the _store() level results in more changes overall, but it is
arguably more straightforward to reason about (!?).
That said, I am no MM subsystem expert, and I am happy to go either way.
--breno