From: Wei Yang <richard.weiyang@gmail.com>
To: Nico Pache <npache@redhat.com>
Cc: linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org,
	linux-mm@kvack.org, linux-doc@vger.kernel.org, david@redhat.com,
	ziy@nvidia.com, baolin.wang@linux.alibaba.com,
	lorenzo.stoakes@oracle.com, Liam.Howlett@oracle.com,
	ryan.roberts@arm.com, dev.jain@arm.com, corbet@lwn.net,
	rostedt@goodmis.org, mhiramat@kernel.org,
	mathieu.desnoyers@efficios.com, akpm@linux-foundation.org,
	baohua@kernel.org, willy@infradead.org, peterx@redhat.com,
	wangkefeng.wang@huawei.com, usamaarif642@gmail.com,
	sunnanyong@huawei.com, vishal.moola@gmail.com,
	thomas.hellstrom@linux.intel.com, yang@os.amperecomputing.com,
	kas@kernel.org, aarcange@redhat.com, raquini@redhat.com,
	anshuman.khandual@arm.com, catalin.marinas@arm.com,
	tiwai@suse.de, will@kernel.org, dave.hansen@linux.intel.com,
	jack@suse.cz, cl@gentwo.org, jglisse@google.com,
	surenb@google.com, zokeefe@google.com, hannes@cmpxchg.org,
	rientjes@google.com, mhocko@suse.com, rdunlap@infradead.org,
	hughd@google.com, richard.weiyang@gmail.com,
	lance.yang@linux.dev, vbabka@suse.cz, rppt@kernel.org,
	jannh@google.com, pfalcato@suse.de
Subject: Re: [PATCH v12 mm-new 12/15] khugepaged: Introduce mTHP collapse support
Date: Sun, 9 Nov 2025 02:08:02 +0000	[thread overview]
Message-ID: <20251109020802.g6dytbixd4aygdgh@master> (raw)
In-Reply-To: <20251022183717.70829-13-npache@redhat.com>

On Wed, Oct 22, 2025 at 12:37:14PM -0600, Nico Pache wrote:
>During PMD range scanning, track occupied pages in a bitmap. If mTHPs are
>enabled we remove the restriction of max_ptes_none during the scan phase
>to avoid missing potential mTHP candidates.
>
>Implement collapse_scan_bitmap() to perform binary recursion on the bitmap
>and determine the best eligible order for the collapse. A stack struct is
>used instead of traditional recursion. The algorithm splits the bitmap
>into smaller chunks to find the best fit mTHP.  max_ptes_none is scaled by
>the attempted collapse order to determine how "full" an order must be
>before being considered for collapse.
>
>Once we determine what mTHP sizes fit best in that PMD range, a collapse
>is attempted. A minimum collapse order of 2 is used as this is the lowest
>order supported by anon memory.
>
>mTHP collapses reject regions containing swapped out or shared pages.
>This is because adding new entries can lead to new none pages, and these
>may lead to constant promotion into a higher order (m)THP. A similar
>issue can occur with "max_ptes_none > HPAGE_PMD_NR/2" due to a collapse
>introducing at least 2x the number of pages, and on a future scan will
>satisfy the promotion condition once again. This issue is prevented via
>the collapse_allowable_orders() function.
>
>Currently madv_collapse is not supported and will only attempt PMD
>collapse.
>
>We can also remove the check for is_khugepaged inside the PMD scan as
>the collapse_max_ptes_none() function handles this logic now.
>
>Signed-off-by: Nico Pache <npache@redhat.com>

Generally LGTM.

A few nits below.

>---
> include/linux/khugepaged.h |   2 +
> mm/khugepaged.c            | 128 ++++++++++++++++++++++++++++++++++---
> 2 files changed, 122 insertions(+), 8 deletions(-)
>
>diff --git a/include/linux/khugepaged.h b/include/linux/khugepaged.h
>index eb1946a70cff..179ce716e769 100644
>--- a/include/linux/khugepaged.h
>+++ b/include/linux/khugepaged.h
>@@ -1,6 +1,8 @@
> /* SPDX-License-Identifier: GPL-2.0 */
> #ifndef _LINUX_KHUGEPAGED_H
> #define _LINUX_KHUGEPAGED_H
>+#define KHUGEPAGED_MIN_MTHP_ORDER	2
>+#define MAX_MTHP_BITMAP_STACK	(1UL << (ilog2(MAX_PTRS_PER_PTE) - KHUGEPAGED_MIN_MTHP_ORDER))
> 
> #include <linux/mm.h>
> 
>diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>index 89a105124790..e2319bfd0065 100644
>--- a/mm/khugepaged.c
>+++ b/mm/khugepaged.c
>@@ -93,6 +93,11 @@ static DEFINE_READ_MOSTLY_HASHTABLE(mm_slots_hash, MM_SLOTS_HASH_BITS);
> 
> static struct kmem_cache *mm_slot_cache __ro_after_init;
> 
>+struct scan_bit_state {
>+	u8 order;
>+	u16 offset;
>+};
>+
> struct collapse_control {
> 	bool is_khugepaged;
> 
>@@ -101,6 +106,13 @@ struct collapse_control {
> 
> 	/* nodemask for allocation fallback */
> 	nodemask_t alloc_nmask;
>+
>+	/*
>+	 * bitmap used to collapse mTHP sizes.
>+	 */
>+	 DECLARE_BITMAP(mthp_bitmap, HPAGE_PMD_NR);
>+	 DECLARE_BITMAP(mthp_bitmap_mask, HPAGE_PMD_NR);
>+	struct scan_bit_state mthp_bitmap_stack[MAX_MTHP_BITMAP_STACK];

Looks like an indentation issue here: the two DECLARE_BITMAP() lines have an
extra leading space.

> };
> 
> /**
>@@ -1357,6 +1369,85 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long pmd_address,
> 	return result;
> }
> 
>+static void push_mthp_bitmap_stack(struct collapse_control *cc, int *top,
>+				   u8 order, u16 offset)
>+{
>+	cc->mthp_bitmap_stack[++*top] = (struct scan_bit_state)
>+		{ order, offset };
>+}
>+

For me, I may introduce a pop_mthp_bitmap_stack() counterpart ...

And use it ...

>+/*
>+ * collapse_scan_bitmap() consumes the bitmap that is generated during
>+ * collapse_scan_pmd() to determine what regions and mTHP orders fit best.
>+ *
>+ * Each bit in the bitmap represents a single occupied (!none/zero) page.
>+ * A stack structure cc->mthp_bitmap_stack is used to check different regions
>+ * of the bitmap for collapse eligibility. We start at the PMD order and
>+ * check if it is eligible for collapse; if not, we add two entries to the
>+ * stack at a lower order to represent the left and right halves of the region.
>+ *
>+ * For each region, we calculate the number of set bits and compare it
>+ * against a threshold derived from collapse_max_ptes_none(). A region is
>+ * eligible if the number of set bits exceeds this threshold.
>+ */
>+static int collapse_scan_bitmap(struct mm_struct *mm, unsigned long address,
>+		int referenced, int unmapped, struct collapse_control *cc,
>+		bool *mmap_locked, unsigned long enabled_orders)
>+{
>+	u8 order, next_order;
>+	u16 offset, mid_offset;
>+	int num_chunks;
>+	int bits_set, threshold_bits;
>+	int top = -1;
>+	int collapsed = 0;
>+	int ret;
>+	struct scan_bit_state state;
>+	unsigned int max_none_ptes;
>+
>+	push_mthp_bitmap_stack(cc, &top, HPAGE_PMD_ORDER - KHUGEPAGED_MIN_MTHP_ORDER, 0);
>+
>+	while (top >= 0) {
>+		state = cc->mthp_bitmap_stack[top--];

... here.
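
Just to illustrate, an untested sketch of the helper I have in mind (the name
is only a suggestion):

static struct scan_bit_state pop_mthp_bitmap_stack(struct collapse_control *cc,
						   int *top)
{
	/* Caller must make sure the stack is not empty, i.e. *top >= 0 */
	return cc->mthp_bitmap_stack[(*top)--];
}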

>+		order = state.order + KHUGEPAGED_MIN_MTHP_ORDER;

We push real_order - KHUGEPAGED_MIN_MTHP_ORDER and then recover it by adding
KHUGEPAGED_MIN_MTHP_ORDER back.

Maybe we can push real_order directly ...

>+		offset = state.offset;
>+		num_chunks = 1UL << order;
>+
>+		/* Skip mTHP orders that are not enabled */
>+		if (!test_bit(order, &enabled_orders))
>+			goto next_order;
>+
>+		max_none_ptes = collapse_max_ptes_none(order, !cc->is_khugepaged);
>+
>+		/* Calculate weight of the range */
>+		bitmap_zero(cc->mthp_bitmap_mask, HPAGE_PMD_NR);
>+		bitmap_set(cc->mthp_bitmap_mask, offset, num_chunks);
>+		bits_set = bitmap_weight_and(cc->mthp_bitmap,
>+					     cc->mthp_bitmap_mask, HPAGE_PMD_NR);
>+
>+		threshold_bits = (1UL << order) - max_none_ptes - 1;
>+
>+		/* Check if the region is eligible based on the threshold */
>+		if (bits_set > threshold_bits) {
>+			ret = collapse_huge_page(mm, address, referenced,
>+						 unmapped, cc, mmap_locked,
>+						 order, offset);
>+			if (ret == SCAN_SUCCEED) {
>+				collapsed += 1UL << order;
>+				continue;
>+			}
>+		}
>+
>+next_order:
>+		if (state.order > 0) {

... and check if (order > KHUGEPAGED_MIN_MTHP_ORDER) here?

Not sure whether you would like it, though.
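
To show what I mean, roughly (untested sketch, combined with the pop helper
above; the elided middle part stays as is):

	push_mthp_bitmap_stack(cc, &top, HPAGE_PMD_ORDER, 0);

	while (top >= 0) {
		state = pop_mthp_bitmap_stack(cc, &top);
		order = state.order;	/* real order, nothing to add back */
		offset = state.offset;
		num_chunks = 1UL << order;

		/* ... bitmap weight check and collapse attempt unchanged ... */

next_order:
		if (order > KHUGEPAGED_MIN_MTHP_ORDER) {
			next_order = order - 1;
			mid_offset = offset + (num_chunks / 2);
			push_mthp_bitmap_stack(cc, &top, next_order, mid_offset);
			push_mthp_bitmap_stack(cc, &top, next_order, offset);
		}
	}

The stack sizing should be unaffected, since only the stored value changes,
not how many entries get pushed.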

>+			next_order = state.order - 1;
>+			mid_offset = offset + (num_chunks / 2);
>+			push_mthp_bitmap_stack(cc, &top, next_order, mid_offset);
>+			push_mthp_bitmap_stack(cc, &top, next_order, offset);
>+		}
>+	}
>+	return collapsed;
>+}
>+
> static int collapse_scan_pmd(struct mm_struct *mm,
> 			     struct vm_area_struct *vma,
> 			     unsigned long start_addr, bool *mmap_locked,
>@@ -1364,11 +1455,15 @@ static int collapse_scan_pmd(struct mm_struct *mm,
> {
> 	pmd_t *pmd;
> 	pte_t *pte, *_pte;
>+	int i;
> 	int result = SCAN_FAIL, referenced = 0;
>-	int none_or_zero = 0, shared = 0;
>+	int none_or_zero = 0, shared = 0, nr_collapsed = 0;
> 	struct page *page = NULL;
>+	unsigned int max_ptes_none;
> 	struct folio *folio = NULL;
> 	unsigned long addr;
>+	unsigned long enabled_orders;
>+	bool full_scan = true;
> 	spinlock_t *ptl;
> 	int node = NUMA_NO_NODE, unmapped = 0;
> 
>@@ -1378,16 +1473,29 @@ static int collapse_scan_pmd(struct mm_struct *mm,
> 	if (result != SCAN_SUCCEED)
> 		goto out;
> 
>+	bitmap_zero(cc->mthp_bitmap, HPAGE_PMD_NR);
> 	memset(cc->node_load, 0, sizeof(cc->node_load));
> 	nodes_clear(cc->alloc_nmask);
>+
>+	enabled_orders = collapse_allowable_orders(vma, vma->vm_flags, cc->is_khugepaged);
>+
>+	/*
>+	 * If PMD is the only enabled order, enforce max_ptes_none, otherwise
>+	 * scan all pages to populate the bitmap for mTHP collapse.
>+	 */
>+	if (cc->is_khugepaged && enabled_orders == _BITUL(HPAGE_PMD_ORDER))

We sometimes use BIT(), e.g. in collapse_allowable_orders(), and sometimes
_BITUL().

I would suggest sticking to one form.
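
For example (untested; BIT() also expands to an unsigned long, so it should
be a drop-in replacement here):

	if (cc->is_khugepaged && enabled_orders == BIT(HPAGE_PMD_ORDER))
		full_scan = false;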

Nothing else, great job!

>+		full_scan = false;
>+	max_ptes_none = collapse_max_ptes_none(HPAGE_PMD_ORDER, full_scan);
>+
> 	pte = pte_offset_map_lock(mm, pmd, start_addr, &ptl);
> 	if (!pte) {
> 		result = SCAN_PMD_NULL;
> 		goto out;
> 	}
> 
>-	for (addr = start_addr, _pte = pte; _pte < pte + HPAGE_PMD_NR;
>-	     _pte++, addr += PAGE_SIZE) {
>+	for (i = 0; i < HPAGE_PMD_NR; i++) {
>+		_pte = pte + i;
>+		addr = start_addr + i * PAGE_SIZE;
> 		pte_t pteval = ptep_get(_pte);
> 		if (is_swap_pte(pteval)) {
> 			++unmapped;
>@@ -1412,8 +1520,7 @@ static int collapse_scan_pmd(struct mm_struct *mm,
> 		if (pte_none_or_zero(pteval)) {
> 			++none_or_zero;
> 			if (!userfaultfd_armed(vma) &&
>-			    (!cc->is_khugepaged ||
>-			     none_or_zero <= khugepaged_max_ptes_none)) {
>+			    none_or_zero <= max_ptes_none) {
> 				continue;
> 			} else {
> 				result = SCAN_EXCEED_NONE_PTE;
>@@ -1461,6 +1568,8 @@ static int collapse_scan_pmd(struct mm_struct *mm,
> 			}
> 		}
> 
>+		/* Set bit for occupied pages */
>+		bitmap_set(cc->mthp_bitmap, i, 1);
> 		/*
> 		 * Record which node the original page is from and save this
> 		 * information to cc->node_load[].
>@@ -1517,9 +1626,12 @@ static int collapse_scan_pmd(struct mm_struct *mm,
> out_unmap:
> 	pte_unmap_unlock(pte, ptl);
> 	if (result == SCAN_SUCCEED) {
>-		result = collapse_huge_page(mm, start_addr, referenced,
>-					    unmapped, cc, mmap_locked,
>-					    HPAGE_PMD_ORDER, 0);
>+		nr_collapsed = collapse_scan_bitmap(mm, start_addr, referenced, unmapped,
>+					      cc, mmap_locked, enabled_orders);
>+		if (nr_collapsed > 0)
>+			result = SCAN_SUCCEED;
>+		else
>+			result = SCAN_FAIL;
> 	}
> out:
> 	trace_mm_khugepaged_scan_pmd(mm, folio, referenced,
>-- 
>2.51.0

-- 
Wei Yang
Help you, Help me

