linux-mm.kvack.org archive mirror
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton <akpm@linux-foundation.org>,
	David Hildenbrand <david@redhat.com>,
	Lorenzo Stoakes <lorenzo.stoakes@oracle.com>,
	<linux-mm@kvack.org>
Cc: Zi Yan <ziy@nvidia.com>,
	Baolin Wang <baolin.wang@linux.alibaba.com>,
	Ryan Roberts <ryan.roberts@arm.com>, Dev Jain <dev.jain@arm.com>,
	Barry Song <baohua@kernel.org>, Lance Yang <lance.yang@linux.dev>,
	<Liam.Howlett@oracle.com>,
	Kefeng Wang <wangkefeng.wang@huawei.com>
Subject: [PATCH v4 3/4] mm: mprotect: convert to folio_needs_prot_numa()
Date: Mon, 20 Oct 2025 14:18:44 +0800	[thread overview]
Message-ID: <20251020061845.3347258-4-wangkefeng.wang@huawei.com> (raw)
In-Reply-To: <20251020061845.3347258-1-wangkefeng.wang@huawei.com>

The prot_numa_skip() name is misleading: besides checking whether to
skip prot NUMA, the function also updates the folio access time. Rename
it to folio_needs_prot_numa() and clean it up a bit: drop the local ret
variable and the goto-style exits in favour of direct returns, and make
the function non-static so that it can be reused.
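
For illustration, a minimal sketch of how a caller of the renamed
helper looks (this mirrors the change_pte_range() hunk below; the
batch-skip details are elided):

	/* prot-NUMA PTE path: only touch PTEs of folios that need it */
	if (prot_numa && !folio_needs_prot_numa(folio, vma, target_node)) {
		/* determine the batch of PTEs to skip, see the hunk below */
		...
	}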

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/internal.h |  3 +++
 mm/mprotect.c | 43 ++++++++++++++++++++++---------------------
 2 files changed, 25 insertions(+), 21 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 6691d3ea55af..b521b5177d3c 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1403,6 +1403,9 @@ int numa_migrate_check(struct folio *folio, struct vm_fault *vmf,
 		      unsigned long addr, int *flags, bool writable,
 		      int *last_cpupid);
 
+bool folio_needs_prot_numa(struct folio *folio, struct vm_area_struct *vma,
+		int target_node);
+
 void free_zone_device_folio(struct folio *folio);
 int migrate_device_coherent_folio(struct folio *folio);
 
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 6236d120c8e6..1369ba6f6294 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -118,26 +118,30 @@ static int mprotect_folio_pte_batch(struct folio *folio, pte_t *ptep,
 	return folio_pte_batch_flags(folio, NULL, ptep, &pte, max_nr_ptes, flags);
 }
 
-static bool prot_numa_skip(struct vm_area_struct *vma, int target_node,
-		struct folio *folio)
+/**
+ * folio_needs_prot_numa() - Whether the folio needs prot numa
+ * @folio: The folio.
+ * @vma: The VMA mapping.
+ * @target_node: The numa node being accessed.
+ *
+ * Return: True if folio needs prot numa and the access time of
+ *	   folio is adjusted. False otherwise.
+ */
+bool folio_needs_prot_numa(struct folio *folio, struct vm_area_struct *vma,
+		int target_node)
 {
-	bool ret = true;
-	bool toptier;
 	int nid;
 
-	if (!folio)
-		goto skip;
-
-	if (folio_is_zone_device(folio) || folio_test_ksm(folio))
-		goto skip;
+	if (!folio || folio_is_zone_device(folio) || folio_test_ksm(folio))
+		return false;
 
 	/* Also skip shared copy-on-write folios */
 	if (is_cow_mapping(vma->vm_flags) && folio_maybe_mapped_shared(folio))
-		goto skip;
+		return false;
 
 	/* Folios are pinned and can't be migrated */
 	if (folio_maybe_dma_pinned(folio))
-		goto skip;
+		return false;
 
 	/*
 	 * While migration can move some dirty pages,
@@ -145,7 +149,7 @@ static bool prot_numa_skip(struct vm_area_struct *vma, int target_node,
 	 * context.
 	 */
 	if (folio_is_file_lru(folio) && folio_test_dirty(folio))
-		goto skip;
+		return false;
 
 	/*
 	 * Don't mess with PTEs if page is already on the node
@@ -153,23 +157,20 @@ static bool prot_numa_skip(struct vm_area_struct *vma, int target_node,
 	 */
 	nid = folio_nid(folio);
 	if (target_node == nid)
-		goto skip;
-
-	toptier = node_is_toptier(nid);
+		return false;
 
 	/*
 	 * Skip scanning top tier node if normal numa
 	 * balancing is disabled
 	 */
-	if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_NORMAL) && toptier)
-		goto skip;
+	if (!(sysctl_numa_balancing_mode & NUMA_BALANCING_NORMAL) &&
+	    node_is_toptier(nid))
+		return false;
 
-	ret = false;
 	if (folio_use_access_time(folio))
 		folio_xchg_access_time(folio, jiffies_to_msecs(jiffies));
 
-skip:
-	return ret;
+	return true;
 }
 
 /* Set nr_ptes number of ptes, starting from idx */
@@ -315,7 +316,7 @@ static long change_pte_range(struct mmu_gather *tlb,
 			 * pages. See similar comment in change_huge_pmd.
 			 */
 			if (prot_numa &&
-			    prot_numa_skip(vma, target_node, folio)) {
+			    !folio_needs_prot_numa(folio, vma, target_node)) {
 
 				/* determine batch to skip */
 				nr_ptes = mprotect_folio_pte_batch(folio,
-- 
2.27.0



Thread overview: 32+ messages
2025-10-20  6:18 [PATCH v4 0/4] mm: some optimizations for prot numa Kefeng Wang
2025-10-20  6:18 ` [PATCH v4 1/4] mm: mprotect: always skip dma pinned folio in prot_numa_skip() Kefeng Wang
2025-10-21  3:06   ` Barry Song
2025-10-20  6:18 ` [PATCH v4 2/4] mm: mprotect: avoid unnecessary struct page accessing if pte_protnone() Kefeng Wang
2025-10-20 12:53   ` Lorenzo Stoakes
2025-10-20 13:08   ` David Hildenbrand
2025-10-20  6:18 ` Kefeng Wang [this message]
2025-10-20 13:09   ` [PATCH v4 3/4] mm: mprotect: convert to folio_needs_prot_numa() David Hildenbrand
2025-10-20 13:10   ` Lorenzo Stoakes
2025-10-20 15:14     ` Kefeng Wang
2025-10-20 17:34       ` David Hildenbrand
2025-10-20 17:49         ` Lorenzo Stoakes
2025-10-20 18:12           ` David Hildenbrand
2025-10-20 18:56             ` Lorenzo Stoakes
2025-10-21  8:41               ` Kefeng Wang
2025-10-21  9:13                 ` David Hildenbrand
2025-10-21  9:25                   ` Lorenzo Stoakes
2025-10-21  9:45                     ` Lorenzo Stoakes
2025-10-21 12:54                       ` Kefeng Wang
2025-10-21 14:56                         ` Lorenzo Stoakes
2025-10-21 15:01                           ` David Hildenbrand
2025-10-21 16:36                             ` Lorenzo Stoakes
2025-10-22  0:51                               ` Kefeng Wang
2025-10-20  6:18 ` [PATCH v4 4/4] mm: huge_memory: use folio_needs_prot_numa() for pmd folio Kefeng Wang
2025-10-20 13:11   ` David Hildenbrand
2025-10-20 13:15   ` Lorenzo Stoakes
2025-10-20 13:23     ` David Hildenbrand
2025-10-20 15:18       ` Kefeng Wang
2025-10-21 13:37         ` Kefeng Wang
2025-10-21 15:35           ` Lorenzo Stoakes
2025-10-21 15:29         ` Lorenzo Stoakes
2025-10-22  1:33           ` Kefeng Wang
