From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton <akpm@linux-foundation.org>,
David Hildenbrand <david@redhat.com>,
Lorenzo Stoakes <lorenzo.stoakes@oracle.com>,
<linux-mm@kvack.org>
Cc: Zi Yan <ziy@nvidia.com>,
Baolin Wang <baolin.wang@linux.alibaba.com>,
Ryan Roberts <ryan.roberts@arm.com>, Dev Jain <dev.jain@arm.com>,
Barry Song <baohua@kernel.org>, Lance Yang <lance.yang@linux.dev>,
<Liam.Howlett@oracle.com>,
Kefeng Wang <wangkefeng.wang@huawei.com>,
Sidhartha Kumar <sidhartha.kumar@oracle.com>
Subject: [PATCH v4 4/4] mm: huge_memory: use folio_needs_prot_numa() for pmd folio
Date: Mon, 20 Oct 2025 14:18:45 +0800 [thread overview]
Message-ID: <20251020061845.3347258-5-wangkefeng.wang@huawei.com> (raw)
In-Reply-To: <20251020061845.3347258-1-wangkefeng.wang@huawei.com>
folio_needs_prot_numa() checks whether NUMA protection should be applied
to a folio, skipping unsuitable folios: zone device folios, shared
folios (KSM, CoW), non-movable DMA-pinned folios, dirty file-backed
folios, and folios that already have the desired NUMA affinity. Apply
the same policy to PMD-mapped folios too, which helps to avoid
unnecessary PMD changes and folio migration attempts.
Reviewed-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
mm/huge_memory.c | 22 ++++++++--------------
1 file changed, 8 insertions(+), 14 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 2764613a9b3d..121c92f5c486 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2477,8 +2477,8 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 #endif
 
 	if (prot_numa) {
-		struct folio *folio;
-		bool toptier;
+		int target_node = NUMA_NO_NODE;
+
 		/*
 		 * Avoid trapping faults against the zero page. The read-only
 		 * data is likely to be read-cached on the local CPU and
@@ -2490,19 +2490,13 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		if (pmd_protnone(*pmd))
 			goto unlock;
 
-		folio = pmd_folio(*pmd);
-		toptier = node_is_toptier(folio_nid(folio));
-		/*
-		 * Skip scanning top tier node if normal numa
-		 * balancing is disabled
-		 */
-		if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_NORMAL) &&
-		    toptier)
-			goto unlock;
-
-		if (folio_use_access_time(folio))
-			folio_xchg_access_time(folio,
-					       jiffies_to_msecs(jiffies));
+		/* Get target node for single threaded private VMAs */
+		if (!(vma->vm_flags & VM_SHARED) &&
+		    atomic_read(&vma->vm_mm->mm_users) == 1)
+			target_node = numa_node_id();
+
+		if (!folio_needs_prot_numa(pmd_folio(*pmd), vma, target_node))
+			goto unlock;
 	}
 	/*
 	 * In case prot_numa, we are under mmap_read_lock(mm). It's critical
--
2.27.0