From: Yin Tirui <yintirui@huawei.com>
To: <linux-kernel@vger.kernel.org>, <linux-mm@kvack.org>,
<x86@kernel.org>, <linux-arm-kernel@lists.infradead.org>,
<willy@infradead.org>, <david@kernel.org>,
<catalin.marinas@arm.com>, <will@kernel.org>, <tglx@kernel.org>,
<mingo@redhat.com>, <bp@alien8.de>, <dave.hansen@linux.intel.com>,
<hpa@zytor.com>, <luto@kernel.org>, <peterz@infradead.org>,
<akpm@linux-foundation.org>, <lorenzo.stoakes@oracle.com>,
<ziy@nvidia.com>, <baolin.wang@linux.alibaba.com>,
<Liam.Howlett@oracle.com>, <npache@redhat.com>,
<ryan.roberts@arm.com>, <dev.jain@arm.com>, <baohua@kernel.org>,
<lance.yang@linux.dev>, <vbabka@suse.cz>, <rppt@kernel.org>,
<surenb@google.com>, <mhocko@suse.com>,
<anshuman.khandual@arm.com>, <rmclure@linux.ibm.com>,
<kevin.brodsky@arm.com>, <apopple@nvidia.com>,
<ajd@linux.ibm.com>, <pasha.tatashin@soleen.com>,
<bhe@redhat.com>, <thuth@redhat.com>, <coxu@redhat.com>,
<dan.j.williams@intel.com>, <yu-cheng.yu@intel.com>,
<yangyicong@hisilicon.com>, <baolu.lu@linux.intel.com>,
<jgross@suse.com>, <conor.dooley@microchip.com>,
<Jonathan.Cameron@huawei.com>, <riel@surriel.com>
Cc: <wangkefeng.wang@huawei.com>, <chenjun102@huawei.com>,
<yintirui@huawei.com>
Subject: [PATCH RFC v3 1/4] x86/mm: Use proper page table helpers for huge page generation
Date: Sat, 28 Feb 2026 15:09:03 +0800 [thread overview]
Message-ID: <20260228070906.1418911-2-yintirui@huawei.com> (raw)
In-Reply-To: <20260228070906.1418911-1-yintirui@huawei.com>
Historically, several core x86 mm subsystems (vmemmap, vmalloc, and CPA)
have abused `pfn_pte()` to generate PMD and PUD entries by passing
pgprot values containing the _PAGE_PSE flag and casting the resulting
pte_t to a pmd_t or pud_t.

This violates strict type safety and prevents us from enforcing the rule
that `pfn_pte()` must generate only PTEs, without huge page attributes.

Fix these abuses by explicitly using the correct level-specific helpers
(`pfn_pmd()` and `pfn_pud()`) and their corresponding setters
(`set_pmd()` and `set_pud()`).

For the CPA (Change Page Attribute) code, which uses `pte_t` as a
generic container for page table entries at all levels in
__should_split_large_page(), pack the correctly generated PMD/PUD values
into the pte_t container.

This cleanup prepares the ground for making `pfn_pte()` strictly filter
out huge page attributes.
Signed-off-by: Yin Tirui <yintirui@huawei.com>
---
arch/x86/mm/init_64.c | 6 +++---
arch/x86/mm/pat/set_memory.c | 6 +++++-
arch/x86/mm/pgtable.c | 4 ++--
3 files changed, 10 insertions(+), 6 deletions(-)
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index df2261fa4f98..d65f3d05c66f 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1518,11 +1518,11 @@ static int __meminitdata node_start;
void __meminit vmemmap_set_pmd(pmd_t *pmd, void *p, int node,
unsigned long addr, unsigned long next)
{
- pte_t entry;
+ pmd_t entry;
- entry = pfn_pte(__pa(p) >> PAGE_SHIFT,
+ entry = pfn_pmd(__pa(p) >> PAGE_SHIFT,
PAGE_KERNEL_LARGE);
- set_pmd(pmd, __pmd(pte_val(entry)));
+ set_pmd(pmd, entry);
/* check to see if we have contiguous blocks */
if (p_end != p || node_start != node) {
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index 40581a720fe8..87aa0e9a8f82 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -1059,7 +1059,11 @@ static int __should_split_large_page(pte_t *kpte, unsigned long address,
return 1;
/* All checks passed. Update the large page mapping. */
- new_pte = pfn_pte(old_pfn, new_prot);
+ if (level == PG_LEVEL_2M)
+ new_pte = __pte(pmd_val(pfn_pmd(old_pfn, new_prot)));
+ else
+ new_pte = __pte(pud_val(pfn_pud(old_pfn, new_prot)));
+
__set_pmd_pte(kpte, address, new_pte);
cpa->flags |= CPA_FLUSHTLB;
cpa_inc_lp_preserved(level);
diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
index 2e5ecfdce73c..61320fd44e16 100644
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -644,7 +644,7 @@ int pud_set_huge(pud_t *pud, phys_addr_t addr, pgprot_t prot)
if (pud_present(*pud) && !pud_leaf(*pud))
return 0;
- set_pte((pte_t *)pud, pfn_pte(
+ set_pud(pud, pfn_pud(
(u64)addr >> PAGE_SHIFT,
__pgprot(protval_4k_2_large(pgprot_val(prot)) | _PAGE_PSE)));
@@ -676,7 +676,7 @@ int pmd_set_huge(pmd_t *pmd, phys_addr_t addr, pgprot_t prot)
if (pmd_present(*pmd) && !pmd_leaf(*pmd))
return 0;
- set_pte((pte_t *)pmd, pfn_pte(
+ set_pmd(pmd, pfn_pmd(
(u64)addr >> PAGE_SHIFT,
__pgprot(protval_4k_2_large(pgprot_val(prot)) | _PAGE_PSE)));
--
2.22.0
Thread overview:
2026-02-28 7:09 [PATCH RFC v3 0/4] mm: add huge pfnmap support for remap_pfn_range() Yin Tirui
2026-02-28 7:09 ` Yin Tirui [this message]
2026-02-28 7:09 ` [PATCH RFC v3 2/4] mm/pgtable: Make pfn_pte() filter out huge page attributes Yin Tirui
2026-02-28 7:09 ` [PATCH RFC v3 3/4] x86/mm: Remove pte_clrhuge() and clean up init_64.c Yin Tirui
2026-02-28 7:09 ` [PATCH RFC v3 4/4] mm: add PMD-level huge page support for remap_pfn_range() Yin Tirui