linux-mm.kvack.org archive mirror
* [PATCH v2 0/3] Convert 64-bit x86/mm/pat to ptdescs
@ 2026-01-28 22:40 Vishal Moola (Oracle)
  2026-01-28 22:40 ` [PATCH v2 1/3] x86/mm/pat: Convert pte code to use ptdescs Vishal Moola (Oracle)
                   ` (3 more replies)
  0 siblings, 4 replies; 7+ messages in thread
From: Vishal Moola (Oracle) @ 2026-01-28 22:40 UTC (permalink / raw)
  To: linux-kernel, linux-mm, x86, Mike Rapoport (Microsoft)
  Cc: akpm, Matthew Wilcox (Oracle),
	Dave Hansen, Andy Lutomirski, Peter Zijlstra,
	Vishal Moola (Oracle)

x86/mm/pat should be using ptdescs. One line has already been
converted to pagetable_free(), while the allocation sites still use
get_zeroed_page()/alloc_pages(). This causes problems when allocating
ptdescs separately from struct page.
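
Concretely, the current mix looks roughly like this (simplified
sketch, not the exact code):

	pte = (pte_t *)get_zeroed_page(GFP_KERNEL);	/* alloc: bare page */
	...
	pagetable_free(virt_to_ptdesc(pte));		/* free: ptdesc */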

These patches convert the allocation/free sites to use ptdescs. In
the short term, this helps enable Matthew's work to allocate frozen
pagetables[1]. In the long term, this will help us cleanly split
ptdesc allocations from struct page[2].

[1] https://lore.kernel.org/linux-mm/20251113140448.1814860-1-willy@infradead.org/
[2] https://lore.kernel.org/linux-mm/20251020001652.2116669-1-willy@infradead.org/T/#u

------

I've also tested this on a tree that separately allocates ptdescs;
that testing didn't turn up any lingering alloc/free issues.

I've realized that the pgd_list should also be using ptdescs (for the
32-bit code in this file). That can be done in a separate patchset,
since there are other users of pgd_list that still need to be
converted.

Based on current mm-new.

v2:
  - Use pagetable_alloc() in populate_pgd() (patch 2)
  - Rename the subject line to specify 64-bit (i.e. 32-bit is not converted)
  - Add reference links to the projects mentioned in the cover letter

Vishal Moola (Oracle) (3):
  x86/mm/pat: Convert pte code to use ptdescs
  x86/mm/pat: Convert pmd code to use ptdescs
  x86/mm/pat: Convert split_large_page() to use ptdescs

 arch/x86/mm/pat/set_memory.c | 43 ++++++++++++++++++++----------------
 1 file changed, 24 insertions(+), 19 deletions(-)

-- 
2.52.0




* [PATCH v2 1/3] x86/mm/pat: Convert pte code to use ptdescs
  2026-01-28 22:40 [PATCH v2 0/3] Convert 64-bit x86/mm/pat to ptdescs Vishal Moola (Oracle)
@ 2026-01-28 22:40 ` Vishal Moola (Oracle)
  2026-01-29  8:08   ` Mike Rapoport
  2026-01-28 22:40 ` [PATCH v2 2/3] x86/mm/pat: Convert pmd " Vishal Moola (Oracle)
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 7+ messages in thread
From: Vishal Moola (Oracle) @ 2026-01-28 22:40 UTC (permalink / raw)
  To: linux-kernel, linux-mm, x86, Mike Rapoport (Microsoft)
  Cc: akpm, Matthew Wilcox (Oracle),
	Dave Hansen, Andy Lutomirski, Peter Zijlstra,
	Vishal Moola (Oracle)

In order to separately allocate ptdescs from pages, we need all allocation
and free sites to use the appropriate functions. Convert these pte
allocation/free sites to use ptdescs.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/x86/mm/pat/set_memory.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index 6c6eb486f7a6..2dcb565d8f9b 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -1408,7 +1408,7 @@ static bool try_to_free_pte_page(pte_t *pte)
 		if (!pte_none(pte[i]))
 			return false;
 
-	free_page((unsigned long)pte);
+	pagetable_free(virt_to_ptdesc((void *)pte));
 	return true;
 }
 
@@ -1537,9 +1537,10 @@ static void unmap_pud_range(p4d_t *p4d, unsigned long start, unsigned long end)
 	 */
 }
 
-static int alloc_pte_page(pmd_t *pmd)
+static int alloc_pte_ptdesc(pmd_t *pmd)
 {
-	pte_t *pte = (pte_t *)get_zeroed_page(GFP_KERNEL);
+	pte_t *pte = (pte_t *) ptdesc_address(
+			pagetable_alloc(GFP_KERNEL | __GFP_ZERO, 0));
 	if (!pte)
 		return -1;
 
@@ -1600,7 +1601,7 @@ static long populate_pmd(struct cpa_data *cpa,
 		 */
 		pmd = pmd_offset(pud, start);
 		if (pmd_none(*pmd))
-			if (alloc_pte_page(pmd))
+			if (alloc_pte_ptdesc(pmd))
 				return -1;
 
 		populate_pte(cpa, start, pre_end, cur_pages, pmd, pgprot);
@@ -1641,7 +1642,7 @@ static long populate_pmd(struct cpa_data *cpa,
 	if (start < end) {
 		pmd = pmd_offset(pud, start);
 		if (pmd_none(*pmd))
-			if (alloc_pte_page(pmd))
+			if (alloc_pte_ptdesc(pmd))
 				return -1;
 
 		populate_pte(cpa, start, end, num_pages - cur_pages,
-- 
2.52.0




* [PATCH v2 2/3] x86/mm/pat: Convert pmd code to use ptdescs
  2026-01-28 22:40 [PATCH v2 0/3] Convert 64-bit x86/mm/pat to ptdescs Vishal Moola (Oracle)
  2026-01-28 22:40 ` [PATCH v2 1/3] x86/mm/pat: Convert pte code to use ptdescs Vishal Moola (Oracle)
@ 2026-01-28 22:40 ` Vishal Moola (Oracle)
  2026-01-28 22:40 ` [PATCH v2 3/3] x86/mm/pat: Convert split_large_page() " Vishal Moola (Oracle)
  2026-01-29  8:05 ` [PATCH v2 0/3] Convert 64-bit x86/mm/pat to ptdescs Mike Rapoport
  3 siblings, 0 replies; 7+ messages in thread
From: Vishal Moola (Oracle) @ 2026-01-28 22:40 UTC (permalink / raw)
  To: linux-kernel, linux-mm, x86, Mike Rapoport (Microsoft)
  Cc: akpm, Matthew Wilcox (Oracle),
	Dave Hansen, Andy Lutomirski, Peter Zijlstra,
	Vishal Moola (Oracle)

In order to separately allocate ptdescs from pages, we need all allocation
and free sites to use the appropriate functions. Convert these pmd
allocation/free sites to use ptdescs.

populate_pgd() also allocates pagetables that may later be freed by
try_to_free_pmd_page(), so allocate ptdescs there as well.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/x86/mm/pat/set_memory.c | 19 +++++++++++--------
 1 file changed, 11 insertions(+), 8 deletions(-)

diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index 2dcb565d8f9b..ee3d0067aeea 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -1420,7 +1420,7 @@ static bool try_to_free_pmd_page(pmd_t *pmd)
 		if (!pmd_none(pmd[i]))
 			return false;
 
-	free_page((unsigned long)pmd);
+	pagetable_free(virt_to_ptdesc((void *)pmd));
 	return true;
 }
 
@@ -1548,9 +1548,10 @@ static int alloc_pte_ptdesc(pmd_t *pmd)
 	return 0;
 }
 
-static int alloc_pmd_page(pud_t *pud)
+static int alloc_pmd_ptdesc(pud_t *pud)
 {
-	pmd_t *pmd = (pmd_t *)get_zeroed_page(GFP_KERNEL);
+	pmd_t *pmd = (pmd_t *) ptdesc_address(
+			pagetable_alloc(GFP_KERNEL | __GFP_ZERO, 0));
 	if (!pmd)
 		return -1;
 
@@ -1623,7 +1624,7 @@ static long populate_pmd(struct cpa_data *cpa,
 		 * We cannot use a 1G page so allocate a PMD page if needed.
 		 */
 		if (pud_none(*pud))
-			if (alloc_pmd_page(pud))
+			if (alloc_pmd_ptdesc(pud))
 				return -1;
 
 		pmd = pmd_offset(pud, start);
@@ -1679,7 +1680,7 @@ static int populate_pud(struct cpa_data *cpa, unsigned long start, p4d_t *p4d,
 		 * Need a PMD page?
 		 */
 		if (pud_none(*pud))
-			if (alloc_pmd_page(pud))
+			if (alloc_pmd_ptdesc(pud))
 				return -1;
 
 		cur_pages = populate_pmd(cpa, start, pre_end, cur_pages,
@@ -1716,7 +1717,7 @@ static int populate_pud(struct cpa_data *cpa, unsigned long start, p4d_t *p4d,
 
 		pud = pud_offset(p4d, start);
 		if (pud_none(*pud))
-			if (alloc_pmd_page(pud))
+			if (alloc_pmd_ptdesc(pud))
 				return -1;
 
 		tmp = populate_pmd(cpa, start, end, cpa->numpages - cur_pages,
@@ -1744,7 +1745,8 @@ static int populate_pgd(struct cpa_data *cpa, unsigned long addr)
 	pgd_entry = cpa->pgd + pgd_index(addr);
 
 	if (pgd_none(*pgd_entry)) {
-		p4d = (p4d_t *)get_zeroed_page(GFP_KERNEL);
+		p4d = (p4d_t *) ptdesc_address(
+				pagetable_alloc(GFP_KERNEL | __GFP_ZERO, 0));
 		if (!p4d)
 			return -1;
 
@@ -1756,7 +1758,8 @@ static int populate_pgd(struct cpa_data *cpa, unsigned long addr)
 	 */
 	p4d = p4d_offset(pgd_entry, addr);
 	if (p4d_none(*p4d)) {
-		pud = (pud_t *)get_zeroed_page(GFP_KERNEL);
+		pud = (pud_t *) ptdesc_address(
+				pagetable_alloc(GFP_KERNEL | __GFP_ZERO, 0));
 		if (!pud)
 			return -1;
 
-- 
2.52.0




* [PATCH v2 3/3] x86/mm/pat: Convert split_large_page() to use ptdescs
  2026-01-28 22:40 [PATCH v2 0/3] Convert 64-bit x86/mm/pat to ptdescs Vishal Moola (Oracle)
  2026-01-28 22:40 ` [PATCH v2 1/3] x86/mm/pat: Convert pte code to use ptdescs Vishal Moola (Oracle)
  2026-01-28 22:40 ` [PATCH v2 2/3] x86/mm/pat: Convert pmd " Vishal Moola (Oracle)
@ 2026-01-28 22:40 ` Vishal Moola (Oracle)
  2026-01-29  8:05 ` [PATCH v2 0/3] Convert 64-bit x86/mm/pat to ptdescs Mike Rapoport
  3 siblings, 0 replies; 7+ messages in thread
From: Vishal Moola (Oracle) @ 2026-01-28 22:40 UTC (permalink / raw)
  To: linux-kernel, linux-mm, x86, Mike Rapoport (Microsoft)
  Cc: akpm, Matthew Wilcox (Oracle),
	Dave Hansen, Andy Lutomirski, Peter Zijlstra,
	Vishal Moola (Oracle)

In order to separately allocate ptdescs from pages, we need all allocation
and free sites to use the appropriate functions.

split_large_page() allocates a page to be used as a page table. This
should be allocating a ptdesc, so convert it.

Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
 arch/x86/mm/pat/set_memory.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index ee3d0067aeea..90ddd064c8c0 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -1119,9 +1119,10 @@ static void split_set_pte(struct cpa_data *cpa, pte_t *pte, unsigned long pfn,
 
 static int
 __split_large_page(struct cpa_data *cpa, pte_t *kpte, unsigned long address,
-		   struct page *base)
+		   struct ptdesc *ptdesc)
 {
 	unsigned long lpaddr, lpinc, ref_pfn, pfn, pfninc = 1;
+	struct page *base = ptdesc_page(ptdesc);
 	pte_t *pbase = (pte_t *)page_address(base);
 	unsigned int i, level;
 	pgprot_t ref_prot;
@@ -1226,18 +1227,18 @@ __split_large_page(struct cpa_data *cpa, pte_t *kpte, unsigned long address,
 static int split_large_page(struct cpa_data *cpa, pte_t *kpte,
 			    unsigned long address)
 {
-	struct page *base;
+	struct ptdesc *ptdesc;
 
 	if (!debug_pagealloc_enabled())
 		spin_unlock(&cpa_lock);
-	base = alloc_pages(GFP_KERNEL, 0);
+	ptdesc = pagetable_alloc(GFP_KERNEL, 0);
 	if (!debug_pagealloc_enabled())
 		spin_lock(&cpa_lock);
-	if (!base)
+	if (!ptdesc)
 		return -ENOMEM;
 
-	if (__split_large_page(cpa, kpte, address, base))
-		__free_page(base);
+	if (__split_large_page(cpa, kpte, address, ptdesc))
+		pagetable_free(ptdesc);
 
 	return 0;
 }
-- 
2.52.0




* Re: [PATCH v2 0/3] Convert 64-bit x86/mm/pat to ptdescs
  2026-01-28 22:40 [PATCH v2 0/3] Convert 64-bit x86/mm/pat to ptdescs Vishal Moola (Oracle)
                   ` (2 preceding siblings ...)
  2026-01-28 22:40 ` [PATCH v2 3/3] x86/mm/pat: Convert split_large_page() " Vishal Moola (Oracle)
@ 2026-01-29  8:05 ` Mike Rapoport
  3 siblings, 0 replies; 7+ messages in thread
From: Mike Rapoport @ 2026-01-29  8:05 UTC (permalink / raw)
  To: Vishal Moola (Oracle)
  Cc: linux-kernel, linux-mm, x86, akpm, Matthew Wilcox (Oracle),
	Dave Hansen, Andy Lutomirski, Peter Zijlstra

Hi Vishal,

On Wed, Jan 28, 2026 at 02:40:46PM -0800, Vishal Moola (Oracle) wrote:
> x86/mm/pat should be using ptdescs. One line has already been
> converted to pagetable_free(), while the allocation sites still use
> get_zeroed_page()/alloc_pages(). This causes problems when allocating
> ptdescs separately from struct page.
> 
> These patches convert the allocation/free sites to use ptdescs. In
> the short term, this helps enable Matthew's work to allocate frozen
> pagetables[1]. In the long term, this will help us cleanly split
> ptdesc allocations from struct page[2].
> 
> [1] https://lore.kernel.org/linux-mm/20251113140448.1814860-1-willy@infradead.org/
> [2] https://lore.kernel.org/linux-mm/20251020001652.2116669-1-willy@infradead.org/T/#u
> 
> ------
> 
> I've also tested this on a tree that separately allocates ptdescs;
> that testing didn't turn up any lingering alloc/free issues.
> 
> I've realized that the pgd_list should also be using ptdescs (for the
> 32-bit code in this file). That can be done in a separate patchset,
> since there are other users of pgd_list that still need to be
> converted.

Since Andrew merges cover-letter text into the first commit changelog, some
explanation about pgd_list should be a part of that combined changelog.
 
> Based on current mm-new.
> 
> v2:
>   - Use pagetable_alloc() in populate_pgd() (patch 2)
>   - Rename the subject line to specify 64-bit (i.e. 32-bit is not converted)
>   - Add reference links to the projects mentioned in the cover letter
> 
> Vishal Moola (Oracle) (3):
>   x86/mm/pat: Convert pte code to use ptdescs
>   x86/mm/pat: Convert pmd code to use ptdescs
>   x86/mm/pat: Convert split_large_page() to use ptdescs
> 
>  arch/x86/mm/pat/set_memory.c | 43 ++++++++++++++++++++----------------
>  1 file changed, 24 insertions(+), 19 deletions(-)
> 
> -- 
> 2.52.0
> 

-- 
Sincerely yours,
Mike.



* Re: [PATCH v2 1/3] x86/mm/pat: Convert pte code to use ptdescs
  2026-01-28 22:40 ` [PATCH v2 1/3] x86/mm/pat: Convert pte code to use ptdescs Vishal Moola (Oracle)
@ 2026-01-29  8:08   ` Mike Rapoport
  2026-01-29 17:04     ` Vishal Moola (Oracle)
  0 siblings, 1 reply; 7+ messages in thread
From: Mike Rapoport @ 2026-01-29  8:08 UTC (permalink / raw)
  To: Vishal Moola (Oracle)
  Cc: linux-kernel, linux-mm, x86, akpm, Matthew Wilcox (Oracle),
	Dave Hansen, Andy Lutomirski, Peter Zijlstra

On Wed, Jan 28, 2026 at 02:40:47PM -0800, Vishal Moola (Oracle) wrote:
> In order to separately allocate ptdescs from pages, we need all allocation
> and free sites to use the appropriate functions. Convert these pte
> allocation/free sites to use ptdescs.
> 
> Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
> ---
>  arch/x86/mm/pat/set_memory.c | 11 ++++++-----
>  1 file changed, 6 insertions(+), 5 deletions(-)
> 
> diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
> index 6c6eb486f7a6..2dcb565d8f9b 100644
> --- a/arch/x86/mm/pat/set_memory.c
> +++ b/arch/x86/mm/pat/set_memory.c
> @@ -1408,7 +1408,7 @@ static bool try_to_free_pte_page(pte_t *pte)
>  		if (!pte_none(pte[i]))
>  			return false;
>  
> -	free_page((unsigned long)pte);
> +	pagetable_free(virt_to_ptdesc((void *)pte));
>  	return true;
>  }
>  
> @@ -1537,9 +1537,10 @@ static void unmap_pud_range(p4d_t *p4d, unsigned long start, unsigned long end)
>  	 */
>  }
>  
> -static int alloc_pte_page(pmd_t *pmd)
> +static int alloc_pte_ptdesc(pmd_t *pmd)
>  {
> -	pte_t *pte = (pte_t *)get_zeroed_page(GFP_KERNEL);
> +	pte_t *pte = (pte_t *) ptdesc_address(
> +			pagetable_alloc(GFP_KERNEL | __GFP_ZERO, 0));

Sorry I missed this last time, but ptdesc_address(NULL) does not return
NULL, so the !pte check below no longer catches a failed allocation.
The allocation and the conversion should be split IMHO.
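
E.g. something along these lines (completely untested):

	struct ptdesc *ptdesc = pagetable_alloc(GFP_KERNEL | __GFP_ZERO, 0);
	pte_t *pte;

	if (!ptdesc)
		return -1;

	pte = (pte_t *)ptdesc_address(ptdesc);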

This applies to all instances in all the patches.

>  	if (!pte)
>  		return -1;
>  
> @@ -1600,7 +1601,7 @@ static long populate_pmd(struct cpa_data *cpa,
>  		 */
>  		pmd = pmd_offset(pud, start);
>  		if (pmd_none(*pmd))
> -			if (alloc_pte_page(pmd))
> +			if (alloc_pte_ptdesc(pmd))
>  				return -1;
>  
>  		populate_pte(cpa, start, pre_end, cur_pages, pmd, pgprot);
> @@ -1641,7 +1642,7 @@ static long populate_pmd(struct cpa_data *cpa,
>  	if (start < end) {
>  		pmd = pmd_offset(pud, start);
>  		if (pmd_none(*pmd))
> -			if (alloc_pte_page(pmd))
> +			if (alloc_pte_ptdesc(pmd))
>  				return -1;
>  
>  		populate_pte(cpa, start, end, num_pages - cur_pages,
> -- 
> 2.52.0
> 

-- 
Sincerely yours,
Mike.



* Re: [PATCH v2 1/3] x86/mm/pat: Convert pte code to use ptdescs
  2026-01-29  8:08   ` Mike Rapoport
@ 2026-01-29 17:04     ` Vishal Moola (Oracle)
  0 siblings, 0 replies; 7+ messages in thread
From: Vishal Moola (Oracle) @ 2026-01-29 17:04 UTC (permalink / raw)
  To: Mike Rapoport
  Cc: linux-kernel, linux-mm, x86, akpm, Matthew Wilcox (Oracle),
	Dave Hansen, Andy Lutomirski, Peter Zijlstra

On Thu, Jan 29, 2026 at 10:08:33AM +0200, Mike Rapoport wrote:
> On Wed, Jan 28, 2026 at 02:40:47PM -0800, Vishal Moola (Oracle) wrote:
> > In order to separately allocate ptdescs from pages, we need all allocation
> > and free sites to use the appropriate functions. Convert these pte
> > allocation/free sites to use ptdescs.
> > 
> > Signed-off-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
> > ---
> >  arch/x86/mm/pat/set_memory.c | 11 ++++++-----
> >  1 file changed, 6 insertions(+), 5 deletions(-)
> > 
> > diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
> > index 6c6eb486f7a6..2dcb565d8f9b 100644
> > --- a/arch/x86/mm/pat/set_memory.c
> > +++ b/arch/x86/mm/pat/set_memory.c
> > @@ -1408,7 +1408,7 @@ static bool try_to_free_pte_page(pte_t *pte)
> >  		if (!pte_none(pte[i]))
> >  			return false;
> >  
> > -	free_page((unsigned long)pte);
> > +	pagetable_free(virt_to_ptdesc((void *)pte));
> >  	return true;
> >  }
> >  
> > @@ -1537,9 +1537,10 @@ static void unmap_pud_range(p4d_t *p4d, unsigned long start, unsigned long end)
> >  	 */
> >  }
> >  
> > -static int alloc_pte_page(pmd_t *pmd)
> > +static int alloc_pte_ptdesc(pmd_t *pmd)
> >  {
> > -	pte_t *pte = (pte_t *)get_zeroed_page(GFP_KERNEL);
> > +	pte_t *pte = (pte_t *) ptdesc_address(
> > +			pagetable_alloc(GFP_KERNEL | __GFP_ZERO, 0));
> 
> Sorry I missed this last time, but ptdesc_address(NULL) does not return
> NULL, so the !pte check below no longer catches a failed allocation.
> The allocation and the conversion should be split IMHO.

Good catch.

> This applies to all instances in all the patches.

Thanks for reviewing; I'll send a v3 with your feedback incorporated
next week.

> >  	if (!pte)
> >  		return -1;
> >  
> > @@ -1600,7 +1601,7 @@ static long populate_pmd(struct cpa_data *cpa,
> >  		 */
> >  		pmd = pmd_offset(pud, start);
> >  		if (pmd_none(*pmd))
> > -			if (alloc_pte_page(pmd))
> > +			if (alloc_pte_ptdesc(pmd))
> >  				return -1;
> >  
> >  		populate_pte(cpa, start, pre_end, cur_pages, pmd, pgprot);
> > @@ -1641,7 +1642,7 @@ static long populate_pmd(struct cpa_data *cpa,
> >  	if (start < end) {
> >  		pmd = pmd_offset(pud, start);
> >  		if (pmd_none(*pmd))
> > -			if (alloc_pte_page(pmd))
> > +			if (alloc_pte_ptdesc(pmd))
> >  				return -1;
> >  
> >  		populate_pte(cpa, start, end, num_pages - cur_pages,
> > -- 
> > 2.52.0
> > 
> 
> -- 
> Sincerely yours,
> Mike.



end of thread

Thread overview: 7+ messages
2026-01-28 22:40 [PATCH v2 0/3] Convert 64-bit x86/mm/pat to ptdescs Vishal Moola (Oracle)
2026-01-28 22:40 ` [PATCH v2 1/3] x86/mm/pat: Convert pte code to use ptdescs Vishal Moola (Oracle)
2026-01-29  8:08   ` Mike Rapoport
2026-01-29 17:04     ` Vishal Moola (Oracle)
2026-01-28 22:40 ` [PATCH v2 2/3] x86/mm/pat: Convert pmd " Vishal Moola (Oracle)
2026-01-28 22:40 ` [PATCH v2 3/3] x86/mm/pat: Convert split_large_page() " Vishal Moola (Oracle)
2026-01-29  8:05 ` [PATCH v2 0/3] Convert 64-bit x86/mm/pat to ptdescs Mike Rapoport
