From: Baolin Wang <baolin.wang@linux.alibaba.com>
To: Nico Pache <npache@redhat.com>,
linux-mm@kvack.org, linux-doc@vger.kernel.org,
linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org
Cc: akpm@linux-foundation.org, corbet@lwn.net, rostedt@goodmis.org,
mhiramat@kernel.org, mathieu.desnoyers@efficios.com,
david@redhat.com, baohua@kernel.org, ryan.roberts@arm.com,
willy@infradead.org, peterx@redhat.com, ziy@nvidia.com,
wangkefeng.wang@huawei.com, usamaarif642@gmail.com,
sunnanyong@huawei.com, vishal.moola@gmail.com,
thomas.hellstrom@linux.intel.com, yang@os.amperecomputing.com,
kirill.shutemov@linux.intel.com, aarcange@redhat.com,
raquini@redhat.com, dev.jain@arm.com, anshuman.khandual@arm.com,
catalin.marinas@arm.com, tiwai@suse.de, will@kernel.org,
dave.hansen@linux.intel.com, jack@suse.cz, cl@gentwo.org,
jglisse@google.com, surenb@google.com, zokeefe@google.com,
hannes@cmpxchg.org, rientjes@google.com, mhocko@suse.com,
rdunlap@infradead.org
Subject: Re: [PATCH v4 07/12] khugepaged: add mTHP support
Date: Thu, 24 Apr 2025 20:21:43 +0800
Message-ID: <3f52af67-489d-46b0-867f-202b59864692@linux.alibaba.com>
In-Reply-To: <20250417000238.74567-8-npache@redhat.com>

On 2025/4/17 08:02, Nico Pache wrote:
> Introduce the ability for khugepaged to collapse to different mTHP sizes.
> While scanning PMD ranges for potential collapse candidates, keep track
> of pages in KHUGEPAGED_MIN_MTHP_ORDER chunks via a bitmap. Each bit
> represents a utilized region of order KHUGEPAGED_MIN_MTHP_ORDER ptes. If
> mTHPs are enabled, we remove the restriction of max_ptes_none during the
> scan phase so we don't bail out early and miss potential mTHP candidates.
>
> After the scan is complete, we perform binary recursion on the
> bitmap to determine which mTHP size would be most efficient to collapse
> to. max_ptes_none is scaled by the attempted collapse order to
> determine how full a THP must be to be eligible.
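
Just to check my understanding of the binary recursion described here:
is it roughly the sketch below? (Plain userspace C, purely illustrative
-- the constants, the attempt_collapse() hook and the failure handling
are mine, not the patch's, and I picked a max_ptes_none smaller than
the 511 default so the split actually triggers.)

#include <stdio.h>

#define HPAGE_PMD_ORDER	9	/* 2 MiB PMD on 4 KiB pages */
#define MIN_MTHP_ORDER	2	/* assumed: one bitmap bit covers 4 ptes */
#define PMD_BITS	(1 << (HPAGE_PMD_ORDER - MIN_MTHP_ORDER))
#define MAX_PTES_NONE	64	/* deliberately below the 511 default */

/* hypothetical stand-in for the real collapse attempt */
static int attempt_collapse(int first_pte, int order)
{
	printf("collapse order-%d at pte %d\n", order, first_pte);
	return 1;
}

/*
 * Try the current order; if too many ptes would be none, halve the
 * region and recurse, stopping at the minimum mTHP order.
 */
static int scan_bitmap(const unsigned char *bm, int off, int order)
{
	int bits = 1 << (order - MIN_MTHP_ORDER);
	int set = 0, none_ptes, i;

	for (i = 0; i < bits; i++)
		set += bm[off + i];

	/* scale max_ptes_none by the attempted collapse order */
	none_ptes = (bits - set) << MIN_MTHP_ORDER;
	if (none_ptes <= (MAX_PTES_NONE >> (HPAGE_PMD_ORDER - order)))
		return attempt_collapse(off << MIN_MTHP_ORDER, order);

	if (order == MIN_MTHP_ORDER)
		return 0;

	return scan_bitmap(bm, off, order - 1) +
	       scan_bitmap(bm, off + bits / 2, order - 1);
}

int main(void)
{
	unsigned char bm[PMD_BITS] = { 0 };
	int i;

	for (i = 0; i < 32; i++)	/* first 128 ptes populated */
		bm[i] = 1;

	/* collapses an order-7 (512 KiB) region at pte 0 */
	return scan_bitmap(bm, 0, HPAGE_PMD_ORDER) ? 0 : 1;
}
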
>
> If an mTHP collapse is attempted but the range contains swapped-out or
> shared pages, we don't perform the collapse.
>
> Signed-off-by: Nico Pache <npache@redhat.com>
> ---
> mm/khugepaged.c | 122 ++++++++++++++++++++++++++++++++++--------------
> 1 file changed, 88 insertions(+), 34 deletions(-)
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 83230e9cdf3a..ece39fd71fe6 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -1136,13 +1136,14 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> {
> LIST_HEAD(compound_pagelist);
> pmd_t *pmd, _pmd;
> - pte_t *pte;
> + pte_t *pte, mthp_pte;
> pgtable_t pgtable;
> struct folio *folio;
> spinlock_t *pmd_ptl, *pte_ptl;
> int result = SCAN_FAIL;
> struct vm_area_struct *vma;
> struct mmu_notifier_range range;
> + unsigned long _address = address + offset * PAGE_SIZE;
>
> VM_BUG_ON(address & ~HPAGE_PMD_MASK);
>
> @@ -1158,12 +1159,13 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> *mmap_locked = false;
> }
>
> - result = alloc_charge_folio(&folio, mm, cc, HPAGE_PMD_ORDER);
> + result = alloc_charge_folio(&folio, mm, cc, order);
> if (result != SCAN_SUCCEED)
> goto out_nolock;
>
> mmap_read_lock(mm);
> - result = hugepage_vma_revalidate(mm, address, true, &vma, cc, HPAGE_PMD_ORDER);
> + *mmap_locked = true;
> + result = hugepage_vma_revalidate(mm, address, true, &vma, cc, order);
> if (result != SCAN_SUCCEED) {
> mmap_read_unlock(mm);
> goto out_nolock;
> @@ -1181,13 +1183,14 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> * released when it fails. So we jump out_nolock directly in
> * that case. Continuing to collapse causes inconsistency.
> */
> - result = __collapse_huge_page_swapin(mm, vma, address, pmd,
> - referenced, HPAGE_PMD_ORDER);
> + result = __collapse_huge_page_swapin(mm, vma, _address, pmd,
> + referenced, order);
> if (result != SCAN_SUCCEED)
> goto out_nolock;
> }
>
> mmap_read_unlock(mm);
> + *mmap_locked = false;
> /*
> * Prevent all access to pagetables with the exception of
> * gup_fast later handled by the ptep_clear_flush and the VM
> @@ -1197,7 +1200,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> * mmap_lock.
> */
> mmap_write_lock(mm);
> - result = hugepage_vma_revalidate(mm, address, true, &vma, cc, HPAGE_PMD_ORDER);
> + result = hugepage_vma_revalidate(mm, address, true, &vma, cc, order);
> if (result != SCAN_SUCCEED)
> goto out_up_write;
> /* check if the pmd is still valid */
> @@ -1208,11 +1211,12 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> vma_start_write(vma);
> anon_vma_lock_write(vma->anon_vma);
>
> - mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm, address,
> - address + HPAGE_PMD_SIZE);
> + mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm, _address,
> + _address + (PAGE_SIZE << order));
> mmu_notifier_invalidate_range_start(&range);
>
> pmd_ptl = pmd_lock(mm, pmd); /* probably unnecessary */
> +
> /*
> * This removes any huge TLB entry from the CPU so we won't allow
> * huge and small TLB entries for the same virtual address to
> @@ -1226,10 +1230,10 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> mmu_notifier_invalidate_range_end(&range);
> tlb_remove_table_sync_one();
>
> - pte = pte_offset_map_lock(mm, &_pmd, address, &pte_ptl);
> + pte = pte_offset_map_lock(mm, &_pmd, _address, &pte_ptl);
> if (pte) {
> - result = __collapse_huge_page_isolate(vma, address, pte, cc,
> - &compound_pagelist, HPAGE_PMD_ORDER);
> + result = __collapse_huge_page_isolate(vma, _address, pte, cc,
> + &compound_pagelist, order);
> spin_unlock(pte_ptl);
> } else {
> result = SCAN_PMD_NULL;
> @@ -1258,8 +1262,8 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> anon_vma_unlock_write(vma->anon_vma);
>
> result = __collapse_huge_page_copy(pte, folio, pmd, _pmd,
> - vma, address, pte_ptl,
> - &compound_pagelist, HPAGE_PMD_ORDER);
> + vma, _address, pte_ptl,
> + &compound_pagelist, order);
> pte_unmap(pte);
The pte is unmapped here, but ...
> if (unlikely(result != SCAN_SUCCEED))
> goto out_up_write;
> @@ -1270,20 +1274,35 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> * write.
> */
> __folio_mark_uptodate(folio);
> - pgtable = pmd_pgtable(_pmd);
> -
> - _pmd = folio_mk_pmd(folio, vma->vm_page_prot);
> - _pmd = maybe_pmd_mkwrite(pmd_mkdirty(_pmd), vma);
> -
> - spin_lock(pmd_ptl);
> - BUG_ON(!pmd_none(*pmd));
> - folio_add_new_anon_rmap(folio, vma, address, RMAP_EXCLUSIVE);
> - folio_add_lru_vma(folio, vma);
> - pgtable_trans_huge_deposit(mm, pmd, pgtable);
> - set_pmd_at(mm, address, pmd, _pmd);
> - update_mmu_cache_pmd(vma, address, pmd);
> - deferred_split_folio(folio, false);
> - spin_unlock(pmd_ptl);
> + if (order == HPAGE_PMD_ORDER) {
> + pgtable = pmd_pgtable(_pmd);
> + _pmd = folio_mk_pmd(folio, vma->vm_page_prot);
> + _pmd = maybe_pmd_mkwrite(pmd_mkdirty(_pmd), vma);
> +
> + spin_lock(pmd_ptl);
> + BUG_ON(!pmd_none(*pmd));
> + folio_add_new_anon_rmap(folio, vma, _address, RMAP_EXCLUSIVE);
> + folio_add_lru_vma(folio, vma);
> + pgtable_trans_huge_deposit(mm, pmd, pgtable);
> + set_pmd_at(mm, address, pmd, _pmd);
> + update_mmu_cache_pmd(vma, address, pmd);
> + deferred_split_folio(folio, false);
> + spin_unlock(pmd_ptl);
> + } else { //mTHP
(Nit: use '/* xxx */' format)
> + mthp_pte = mk_pte(&folio->page, vma->vm_page_prot);
> + mthp_pte = maybe_mkwrite(pte_mkdirty(mthp_pte), vma);
> +
> + spin_lock(pmd_ptl);
> + folio_ref_add(folio, (1 << order) - 1);
> + folio_add_new_anon_rmap(folio, vma, _address, RMAP_EXCLUSIVE);
> + folio_add_lru_vma(folio, vma);
> + set_ptes(vma->vm_mm, _address, pte, mthp_pte, (1 << order));
You are still using the pte here even though it was unmapped above?
That looks incorrect to me.
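
If the mTHP path needs the ptes here, perhaps the pte_unmap() should be
deferred until after set_ptes()? Untested sketch, only to show the
reordering I have in mind:

	result = __collapse_huge_page_copy(pte, folio, pmd, _pmd,
					   vma, _address, pte_ptl,
					   &compound_pagelist, order);
	if (unlikely(result != SCAN_SUCCEED)) {
		pte_unmap(pte);
		goto out_up_write;
	}
	...
	set_ptes(vma->vm_mm, _address, pte, mthp_pte, (1 << order));
	update_mmu_cache_range(NULL, vma, _address, pte, (1 << order));
	...
	pte_unmap(pte);	/* done with the mapped ptes */
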
> + update_mmu_cache_range(NULL, vma, _address, pte, (1 << order));
> +
> + smp_wmb(); /* make pte visible before pmd */
> + pmd_populate(mm, pmd, pmd_pgtable(_pmd));
> + spin_unlock(pmd_ptl);
> + }
>
> folio = NULL;
>
> @@ -1364,31 +1383,58 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
> {
> pmd_t *pmd;
> pte_t *pte, *_pte;
> + int i;
> int result = SCAN_FAIL, referenced = 0;
> int none_or_zero = 0, shared = 0;
> struct page *page = NULL;
> struct folio *folio = NULL;
> unsigned long _address;
> + unsigned long enabled_orders;
> spinlock_t *ptl;
> int node = NUMA_NO_NODE, unmapped = 0;
> + bool is_pmd_only;
> bool writable = false;
> -
> + int chunk_none_count = 0;
> + int scaled_none = khugepaged_max_ptes_none >> (HPAGE_PMD_ORDER - KHUGEPAGED_MIN_MTHP_ORDER);
> + unsigned long tva_flags = cc->is_khugepaged ? TVA_ENFORCE_SYSFS : 0;
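
(As a concrete check of this scaling: if I'm reading the earlier patch
right, KHUGEPAGED_MIN_MTHP_ORDER is 2, so with the x86-64 defaults of
HPAGE_PMD_ORDER = 9 and khugepaged_max_ptes_none = 511 we get
scaled_none = 511 >> 7 = 3, i.e. a 4-pte chunk counts as utilized as
long as at least one of its ptes is populated.)
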
> VM_BUG_ON(address & ~HPAGE_PMD_MASK);
>
> result = find_pmd_or_thp_or_none(mm, address, &pmd);
> if (result != SCAN_SUCCEED)
> goto out;
>
> + bitmap_zero(cc->mthp_bitmap, MAX_MTHP_BITMAP_SIZE);
> + bitmap_zero(cc->mthp_bitmap_temp, MAX_MTHP_BITMAP_SIZE);
> memset(cc->node_load, 0, sizeof(cc->node_load));
> nodes_clear(cc->alloc_nmask);
> +
> + enabled_orders = thp_vma_allowable_orders(vma, vma->vm_flags,
> + tva_flags, THP_ORDERS_ALL_ANON);
> +
> + is_pmd_only = (enabled_orders == (1 << HPAGE_PMD_ORDER));
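
(For example, with only the 2 MiB PMD size enabled, enabled_orders ==
1 << 9 and is_pmd_only is true; if 64 KiB mTHP is enabled as well,
enabled_orders also carries the 1 << 4 bit and the relaxed
max_ptes_none handling below kicks in. Assuming 4 KiB base pages.)
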
> +
> pte = pte_offset_map_lock(mm, pmd, address, &ptl);
> if (!pte) {
> result = SCAN_PMD_NULL;
> goto out;
> }
>
> - for (_address = address, _pte = pte; _pte < pte + HPAGE_PMD_NR;
> - _pte++, _address += PAGE_SIZE) {
> + for (i = 0; i < HPAGE_PMD_NR; i++) {
> + /*
> + * we are reading in KHUGEPAGED_MIN_MTHP_NR page chunks. if
> + * there are pages in this chunk keep track of it in the bitmap
> + * for mTHP collapsing.
> + */
> + if (i % KHUGEPAGED_MIN_MTHP_NR == 0) {
> + if (chunk_none_count <= scaled_none)
> + bitmap_set(cc->mthp_bitmap,
> + i / KHUGEPAGED_MIN_MTHP_NR, 1);
> +
> + chunk_none_count = 0;
> + }
> +
> + _pte = pte + i;
> + _address = address + i * PAGE_SIZE;
> pte_t pteval = ptep_get(_pte);
> if (is_swap_pte(pteval)) {
> ++unmapped;
> @@ -1411,10 +1457,11 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
> }
> }
> if (pte_none(pteval) || is_zero_pfn(pte_pfn(pteval))) {
> + ++chunk_none_count;
> ++none_or_zero;
> if (!userfaultfd_armed(vma) &&
> - (!cc->is_khugepaged ||
> - none_or_zero <= khugepaged_max_ptes_none)) {
> + (!cc->is_khugepaged || !is_pmd_only ||
> + none_or_zero <= khugepaged_max_ptes_none)) {
> continue;
> } else {
> result = SCAN_EXCEED_NONE_PTE;
> @@ -1510,6 +1557,7 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
> address)))
> referenced++;
> }
> +
> if (!writable) {
> result = SCAN_PAGE_RO;
> } else if (cc->is_khugepaged &&
> @@ -1522,8 +1570,12 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
> out_unmap:
> pte_unmap_unlock(pte, ptl);
> if (result == SCAN_SUCCEED) {
> - result = collapse_huge_page(mm, address, referenced,
> - unmapped, cc, mmap_locked, HPAGE_PMD_ORDER, 0);
> + result = khugepaged_scan_bitmap(mm, address, referenced, unmapped, cc,
> + mmap_locked, enabled_orders);
> + if (result > 0)
> + result = SCAN_SUCCEED;
> + else
> + result = SCAN_FAIL;
> }
> out:
> trace_mm_khugepaged_scan_pmd(mm, &folio->page, writable, referenced,
> @@ -2479,11 +2531,13 @@ static int khugepaged_collapse_single_pmd(unsigned long addr,
> fput(file);
> if (result == SCAN_PTE_MAPPED_HUGEPAGE) {
> mmap_read_lock(mm);
> + *mmap_locked = true;
> if (khugepaged_test_exit_or_disable(mm))
> goto end;
> result = collapse_pte_mapped_thp(mm, addr,
> !cc->is_khugepaged);
> mmap_read_unlock(mm);
> + *mmap_locked = false;
> }
> } else {
> result = khugepaged_scan_pmd(mm, vma, addr,