From: Anshuman Khandual <anshuman.khandual@arm.com>
To: linux-arm-kernel@lists.infradead.org
Cc: Anshuman Khandual <anshuman.khandual@arm.com>,
Catalin Marinas <catalin.marinas@arm.com>,
Will Deacon <will@kernel.org>,
Ryan Roberts <ryan.roberts@arm.com>,
Mark Rutland <mark.rutland@arm.com>,
Lorenzo Stoakes <lorenzo.stoakes@oracle.com>,
Andrew Morton <akpm@linux-foundation.org>,
David Hildenbrand <david@kernel.org>,
Mike Rapoport <rppt@kernel.org>,
Linu Cherian <linu.cherian@arm.com>,
linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [RFC V1 02/16] mm: Add read-write accessors for vm_page_prot
Date: Tue, 24 Feb 2026 10:41:39 +0530 [thread overview]
Message-ID: <20260224051153.3150613-3-anshuman.khandual@arm.com> (raw)
In-Reply-To: <20260224051153.3150613-1-anshuman.khandual@arm.com>
Currently vma->vm_page_prot is read and written locklessly, relying on
READ_ONCE() and WRITE_ONCE() for safety. But with the introduction of D128
page tables on the arm64 platform, vm_page_prot grows to 128 bits, which
READ_ONCE() and WRITE_ONCE() cannot handle safely in a single access.

Add read and write accessors for vm_page_prot, pgprot_read_once() and
pgprot_write_once(), which any platform can override when required. They
default to READ_ONCE() and WRITE_ONCE(), preserving existing behaviour on
all other platforms.
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: David Hildenbrand <david@kernel.org>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
---
include/linux/pgtable.h | 14 ++++++++++++++
mm/huge_memory.c | 4 ++--
mm/memory.c | 2 +-
mm/migrate.c | 2 +-
mm/mmap.c | 2 +-
5 files changed, 19 insertions(+), 5 deletions(-)
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index da17139a1279..8858b8b03a02 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -495,6 +495,20 @@ static inline pgd_t pgdp_get(pgd_t *pgdp)
}
#endif
+#ifndef pgprot_read_once
+static inline pgprot_t pgprot_read_once(pgprot_t *prot)
+{
+ return READ_ONCE(*prot);
+}
+#endif
+
+#ifndef pgprot_write_once
+static inline void pgprot_write_once(pgprot_t *prot, pgprot_t val)
+{
+ WRITE_ONCE(*prot, val);
+}
+#endif
+
#ifndef __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG
static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
unsigned long address,
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index d4ca8cfd7f9d..0d9d6569367e 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3233,7 +3233,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
} else {
pte_t entry;
- entry = mk_pte(page, READ_ONCE(vma->vm_page_prot));
+ entry = mk_pte(page, pgprot_read_once(&vma->vm_page_prot));
if (write)
entry = pte_mkwrite(entry, vma);
if (!young)
@@ -4918,7 +4918,7 @@ void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new)
entry = softleaf_from_pmd(*pvmw->pmd);
folio_get(folio);
- pmde = folio_mk_pmd(folio, READ_ONCE(vma->vm_page_prot));
+ pmde = folio_mk_pmd(folio, pgprot_read_once(&vma->vm_page_prot));
if (pmd_swp_soft_dirty(*pvmw->pmd))
pmde = pmd_mksoft_dirty(pmde);
diff --git a/mm/memory.c b/mm/memory.c
index cfc3077fc52f..2d99c9212883 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -895,7 +895,7 @@ static void restore_exclusive_pte(struct vm_area_struct *vma,
VM_WARN_ON_FOLIO(!folio_test_locked(folio), folio);
- pte = pte_mkold(mk_pte(page, READ_ONCE(vma->vm_page_prot)));
+ pte = pte_mkold(mk_pte(page, pgprot_read_once(&vma->vm_page_prot)));
if (pte_swp_soft_dirty(orig_pte))
pte = pte_mksoft_dirty(pte);
diff --git a/mm/migrate.c b/mm/migrate.c
index 1bf2cf8c44dd..9db1e6ed9042 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -377,7 +377,7 @@ static bool remove_migration_pte(struct folio *folio,
continue;
folio_get(folio);
- pte = mk_pte(new, READ_ONCE(vma->vm_page_prot));
+ pte = mk_pte(new, pgprot_read_once(&vma->vm_page_prot));
entry = softleaf_from_pte(old_pte);
if (!softleaf_is_migration_young(entry))
diff --git a/mm/mmap.c b/mm/mmap.c
index 843160946aa5..af6870115a9d 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -89,7 +89,7 @@ void vma_set_page_prot(struct vm_area_struct *vma)
vm_page_prot = vm_pgprot_modify(vm_page_prot, vm_flags);
}
/* remove_protection_ptes reads vma->vm_page_prot without mmap_lock */
- WRITE_ONCE(vma->vm_page_prot, vm_page_prot);
+ pgprot_write_once(&vma->vm_page_prot, vm_page_prot);
}
/*
--
2.43.0