From: "Li Zhe" <lizhe.67@bytedance.com>
To: <muchun.song@linux.dev>, <osalvador@suse.de>, <david@kernel.org>,
<akpm@linux-foundation.org>, <fvdl@google.com>
Cc: <linux-mm@kvack.org>, <linux-kernel@vger.kernel.org>,
<lizhe.67@bytedance.com>
Subject: [PATCH v2 4/8] mm/hugetlb: introduce per-node sysfs interface "zeroable_hugepages"
Date: Wed, 7 Jan 2026 19:31:26 +0800
Message-ID: <20260107113130.37231-5-lizhe.67@bytedance.com>
In-Reply-To: <20260107113130.37231-1-lizhe.67@bytedance.com>
Fresh hugetlb pages are zeroed out when they are faulted in,
just like all other page types. This can take a significant
amount of time for larger page sizes (e.g. around 250
milliseconds for a 1 GB page on a Skylake machine).
This normally isn't a problem, since hugetlb pages are typically
mapped by the application for a long time, and the initial delay
when touching them isn't much of an issue.
However, there are some use cases where a large number of hugetlb
pages are touched when an application starts (such as a VM backed
by these pages), rendering the launch noticeably slow.
On a Skylake platform running v6.19-rc2, faulting in 64 × 1 GB huge
pages takes about 16 seconds, roughly 250 ms per page. Even with
Ankur's optimizations[1], the time only drops to ~13 seconds
(~200 ms per page), still a noticeable delay.
To accelerate the above scenario, this patch exports a per-node,
read-write "zeroable_hugepages" sysfs interface for every hugepage size.
Reading the file reports how many free hugepages on that node have not
yet been pre-zeroed; writing a decimal count N asks the kernel to zero
up to N of them in a single operation, and writing the literal string
"max" zeroes all of them.
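For illustration only (not part of this patch), here is a minimal
user-space sketch of the intended usage. The node/hstate path is an
assumption (a 1 GiB hstate on node 0); only the file name
"zeroable_hugepages" comes from this patch:

  /* zeroable_demo.c -- illustrative sketch only. */
  #include <fcntl.h>
  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>

  /* Assumed path: 1 GiB hstate on node 0; adjust for your system. */
  #define ZPATH "/sys/devices/system/node/node0/hugepages/" \
                "hugepages-1048576kB/zeroable_hugepages"

  int main(void)
  {
          char buf[32];
          ssize_t n;
          int fd = open(ZPATH, O_RDWR);

          if (fd < 0) {
                  perror("open");
                  return 1;
          }

          /* Read how many free hugepages on node 0 are not yet zeroed. */
          n = read(fd, buf, sizeof(buf) - 1);
          if (n < 0) {
                  perror("read");
                  return 1;
          }
          buf[n] = '\0';
          printf("zeroable: %s", buf);

          /* Rewind; sysfs stores expect writes at offset 0. */
          lseek(fd, 0, SEEK_SET);

          /* "max" pre-zeroes everything; a decimal count such as "8"
           * would zero at most that many pages. */
          if (write(fd, "max", strlen("max")) < 0)
                  perror("write");

          close(fd);
          return 0;
  }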
Exporting this interface offers the following advantages:
(1) User space gains full control over when zeroing is triggered,
enabling it to minimize the impact on both CPU and cache utilization.
(2) Applications can spawn as many zeroing processes as they need,
enabling concurrent background zeroing.
(3) By binding the process to specific CPUs, users can confine zeroing
threads to cores that do not run latency-critical tasks, eliminating
interference (see the sketch after this list).
(4) A zeroing process can be interrupted at any time through standard
signal mechanisms, allowing immediate cancellation.
(5) The CPU consumption incurred by zeroing can be throttled and contained
with cgroups, ensuring that the cost is not borne system-wide.
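As a minimal, purely illustrative sketch of points (3) and (4) above:
the housekeeping CPU number and the sysfs path below are assumptions,
not part of this patch:

  /* Pin this task to an assumed housekeeping CPU (CPU 0 here) before
   * triggering zeroing, so the copy loops never run on
   * latency-critical cores. */
  #define _GNU_SOURCE
  #include <fcntl.h>
  #include <sched.h>
  #include <stdio.h>
  #include <unistd.h>

  int main(void)
  {
          cpu_set_t set;
          int fd;

          CPU_ZERO(&set);
          CPU_SET(0, &set);       /* CPU 0: an arbitrary assumption */
          if (sched_setaffinity(0, sizeof(set), &set)) {
                  perror("sched_setaffinity");
                  return 1;
          }

          fd = open("/sys/devices/system/node/node0/hugepages/"
                    "hugepages-1048576kB/zeroable_hugepages", O_WRONLY);
          if (fd < 0) {
                  perror("open");
                  return 1;
          }

          /* The store loop rechecks fatal_signal_pending(), so killing
           * this process abandons the remaining work promptly. */
          if (write(fd, "max", 3) < 0)
                  perror("write");

          close(fd);
          return 0;
  }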
On the same Skylake platform as above, with the 64 GiB of hugetlb
memory pre-zeroed in advance through this mechanism, the fault-in
latency test completed in negligible time.
[1]: https://lore.kernel.org/linux-mm/202412030519.W14yll4e-lkp@intel.com/T/#t
Co-developed-by: Frank van der Linden <fvdl@google.com>
Signed-off-by: Frank van der Linden <fvdl@google.com>
Signed-off-by: Li Zhe <lizhe.67@bytedance.com>
---
mm/hugetlb_sysfs.c | 124 +++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 124 insertions(+)
diff --git a/mm/hugetlb_sysfs.c b/mm/hugetlb_sysfs.c
index 79ece91406bf..68a7372d3378 100644
--- a/mm/hugetlb_sysfs.c
+++ b/mm/hugetlb_sysfs.c
@@ -352,6 +352,129 @@ struct node_hstate {
};
static struct node_hstate node_hstates[MAX_NUMNODES];
+static ssize_t zeroable_hugepages_show(struct kobject *kobj,
+                struct kobj_attribute *attr, char *buf)
+{
+        struct hstate *h;
+        unsigned long free_huge_pages_zero;
+        int nid;
+
+        h = kobj_to_hstate(kobj, &nid);
+        if (WARN_ON(nid == NUMA_NO_NODE))
+                return -EPERM;
+
+        free_huge_pages_zero = h->free_huge_pages_node[nid] -
+                        h->free_huge_pages_zero_node[nid];
+
+        return sprintf(buf, "%lu\n", free_huge_pages_zero);
+}
+
+static inline bool zero_should_abort(struct hstate *h, int nid)
+{
+        return (h->free_huge_pages_zero_node[nid] ==
+                        h->free_huge_pages_node[nid]) ||
+                list_empty(&h->hugepage_freelists[nid]);
+}
+
+static void zero_free_hugepages_nid(struct hstate *h,
+                int nid, unsigned int nr_zero)
+{
+        struct list_head *freelist = &h->hugepage_freelists[nid];
+        unsigned int nr_zeroed = 0;
+        struct folio *folio;
+
+        if (zero_should_abort(h, nid))
+                return;
+
+        spin_lock_irq(&hugetlb_lock);
+
+        while (nr_zeroed < nr_zero) {
+
+                if (zero_should_abort(h, nid) || fatal_signal_pending(current))
+                        break;
+
+                freelist = freelist->prev;
+                folio = list_entry(freelist, struct folio, lru);
+
+                if (folio_test_hugetlb_zeroed(folio))
+                        break;
+
+                if (folio_test_hugetlb_zeroing(folio)) {
+                        if (unlikely(freelist->prev ==
+                                        &h->hugepage_freelists[nid]))
+                                break;
+                        continue;
+                }
+
+                folio_set_hugetlb_zeroing(folio);
+
+                /*
+                 * Incrementing this here is a bit of a fib, since
+                 * the page hasn't been cleared yet (it will be done
+                 * immediately after dropping the lock below). But
+                 * it keeps the count consistent with the overall
+                 * free count in case the page gets taken off the
+                 * freelist while we're working on it.
+                 */
+                h->free_huge_pages_zero_node[nid]++;
+                spin_unlock_irq(&hugetlb_lock);
+
+                /*
+                 * HWPoison pages may show up on the freelist.
+                 * Don't try to zero it out, but do set the flag
+                 * and counts, so that we don't consider it again.
+                 */
+                if (!folio_test_hwpoison(folio))
+                        folio_zero_user(folio, 0);
+
+                cond_resched();
+
+                spin_lock_irq(&hugetlb_lock);
+                folio_set_hugetlb_zeroed(folio);
+                folio_clear_hugetlb_zeroing(folio);
+
+                /*
+                 * If the page is still on the free list, move
+                 * it to the head.
+                 */
+                if (folio_test_hugetlb_freed(folio))
+                        list_move(&folio->lru, &h->hugepage_freelists[nid]);
+
+                /*
+                 * If someone was waiting for the zero to
+                 * finish, wake them up.
+                 */
+                if (waitqueue_active(&h->dqzero_wait[nid]))
+                        wake_up(&h->dqzero_wait[nid]);
+                nr_zeroed++;
+                freelist = &h->hugepage_freelists[nid];
+        }
+        spin_unlock_irq(&hugetlb_lock);
+}
+
+static ssize_t zeroable_hugepages_store(struct kobject *kobj,
+                struct kobj_attribute *attr, const char *buf, size_t len)
+{
+        unsigned int nr_zero;
+        struct hstate *h;
+        int err;
+        int nid;
+
+        if (!strcmp(buf, "max") || !strcmp(buf, "max\n")) {
+                nr_zero = UINT_MAX;
+        } else {
+                err = kstrtouint(buf, 10, &nr_zero);
+                if (err)
+                        return err;
+        }
+        h = kobj_to_hstate(kobj, &nid);
+
+        zero_free_hugepages_nid(h, nid, nr_zero);
+
+        return len;
+}
+HSTATE_ATTR(zeroable_hugepages);
+
/*
* A subset of global hstate attributes for node devices
*/
@@ -359,6 +482,7 @@ static struct attribute *per_node_hstate_attrs[] = {
        &nr_hugepages_attr.attr,
        &free_hugepages_attr.attr,
        &surplus_hugepages_attr.attr,
+       &zeroable_hugepages_attr.attr,
        NULL,
};
--
2.20.1