* [PATCH 1/3] mm/hugetlb: Refactor unmap_ref_private() to take folio instead of page
@ 2025-04-17 15:43 nifan.cxl
2025-04-17 15:43 ` [PATCH 2/3] mm/hugetlb: Refactor unmap_hugepage_range() " nifan.cxl
` (3 more replies)
0 siblings, 4 replies; 11+ messages in thread
From: nifan.cxl @ 2025-04-17 15:43 UTC (permalink / raw)
To: muchun.song, willy
Cc: mcgrof, a.manzanares, dave, akpm, david, linux-mm, linux-kernel, Fan Ni
From: Fan Ni <fan.ni@samsung.com>
The function unmap_ref_private() has only one user, which passes in
&folio->page. Let it take the folio directly.
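For a hugetlb folio the two spellings name the same head page, which is what
makes the conversion transparent for the single caller; a minimal
illustration (untested sketch, not part of the patch; head_page_example() is
a made-up name):

	#include <linux/mm.h>

	/* Untested sketch: &folio->page and folio_page(folio, 0) name the same head page. */
	static void head_page_example(struct folio *folio)
	{
		struct page *a = &folio->page;		/* what the old call site passed */
		struct page *b = folio_page(folio, 0);	/* what unmap_ref_private() now uses internally */

		WARN_ON(a != b);			/* always the same page */
	}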
Signed-off-by: Fan Ni <fan.ni@samsung.com>
---
mm/hugetlb.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index ccc4f08f8481..b5d1ac8290a7 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6064,7 +6064,7 @@ void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start,
* same region.
*/
static void unmap_ref_private(struct mm_struct *mm, struct vm_area_struct *vma,
- struct page *page, unsigned long address)
+ struct folio *folio, unsigned long address)
{
struct hstate *h = hstate_vma(vma);
struct vm_area_struct *iter_vma;
@@ -6108,7 +6108,8 @@ static void unmap_ref_private(struct mm_struct *mm, struct vm_area_struct *vma,
*/
if (!is_vma_resv_set(iter_vma, HPAGE_RESV_OWNER))
unmap_hugepage_range(iter_vma, address,
- address + huge_page_size(h), page, 0);
+ address + huge_page_size(h),
+ folio_page(folio, 0), 0);
}
i_mmap_unlock_write(mapping);
}
@@ -6231,8 +6232,7 @@ static vm_fault_t hugetlb_wp(struct folio *pagecache_folio,
hugetlb_vma_unlock_read(vma);
mutex_unlock(&hugetlb_fault_mutex_table[hash]);
- unmap_ref_private(mm, vma, &old_folio->page,
- vmf->address);
+ unmap_ref_private(mm, vma, old_folio, vmf->address);
mutex_lock(&hugetlb_fault_mutex_table[hash]);
hugetlb_vma_lock_read(vma);
--
2.47.2
* [PATCH 2/3] mm/hugetlb: Refactor unmap_hugepage_range() to take folio instead of page
2025-04-17 15:43 [PATCH 1/3] mm/hugetlb: Refactor unmap_ref_private() to take folio instead of page nifan.cxl
@ 2025-04-17 15:43 ` nifan.cxl
2025-04-17 16:13 ` Sidhartha Kumar
2025-04-18 2:51 ` Muchun Song
2025-04-17 15:43 ` [PATCH 3/3] mm/hugetlb: Refactor __unmap_hugepage_range() " nifan.cxl
` (2 subsequent siblings)
3 siblings, 2 replies; 11+ messages in thread
From: nifan.cxl @ 2025-04-17 15:43 UTC (permalink / raw)
To: muchun.song, willy
Cc: mcgrof, a.manzanares, dave, akpm, david, linux-mm, linux-kernel, Fan Ni
From: Fan Ni <fan.ni@samsung.com>
The function unmap_hugepage_range() has two kinds of users:
1) unmap_ref_private(), which passes in the head page of a folio. Since
unmap_ref_private() already takes folio and there are no other uses
of the folio struct in the function, it is natural for
unmap_hugepage_range() to take folio also.
2) All other uses, which pass in NULL pointer.
In both cases, we can pass in folio. Refactor unmap_hugepage_range() to
take folio.
Signed-off-by: Fan Ni <fan.ni@samsung.com>
---
include/linux/hugetlb.h | 2 +-
mm/hugetlb.c | 7 ++++---
2 files changed, 5 insertions(+), 4 deletions(-)
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index a57bed83c657..b7699f35c87f 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -128,7 +128,7 @@ int move_hugetlb_page_tables(struct vm_area_struct *vma,
int copy_hugetlb_page_range(struct mm_struct *, struct mm_struct *,
struct vm_area_struct *, struct vm_area_struct *);
void unmap_hugepage_range(struct vm_area_struct *,
- unsigned long, unsigned long, struct page *,
+ unsigned long, unsigned long, struct folio *folio,
zap_flags_t);
void __unmap_hugepage_range(struct mmu_gather *tlb,
struct vm_area_struct *vma,
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index b5d1ac8290a7..3181dbe0c4bb 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6039,7 +6039,7 @@ void __hugetlb_zap_end(struct vm_area_struct *vma,
}
void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start,
- unsigned long end, struct page *ref_page,
+ unsigned long end, struct folio *ref_folio,
zap_flags_t zap_flags)
{
struct mmu_notifier_range range;
@@ -6051,7 +6051,8 @@ void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start,
mmu_notifier_invalidate_range_start(&range);
tlb_gather_mmu(&tlb, vma->vm_mm);
- __unmap_hugepage_range(&tlb, vma, start, end, ref_page, zap_flags);
+ __unmap_hugepage_range(&tlb, vma, start, end,
+ folio_page(ref_folio, 0), zap_flags);
mmu_notifier_invalidate_range_end(&range);
tlb_finish_mmu(&tlb);
@@ -6109,7 +6110,7 @@ static void unmap_ref_private(struct mm_struct *mm, struct vm_area_struct *vma,
if (!is_vma_resv_set(iter_vma, HPAGE_RESV_OWNER))
unmap_hugepage_range(iter_vma, address,
address + huge_page_size(h),
- folio_page(folio, 0), 0);
+ folio, 0);
}
i_mmap_unlock_write(mapping);
}
--
2.47.2
* [PATCH 3/3] mm/hugetlb: Refactor __unmap_hugepage_range() to take folio instead of page
2025-04-17 15:43 [PATCH 1/3] mm/hugetlb: Refactor unmap_ref_private() to take folio instead of page nifan.cxl
2025-04-17 15:43 ` [PATCH 2/3] mm/hugetlb: Refactor unmap_hugepage_range() " nifan.cxl
@ 2025-04-17 15:43 ` nifan.cxl
2025-04-17 16:21 ` Sidhartha Kumar
2025-04-17 16:11 ` [PATCH 1/3] mm/hugetlb: Refactor unmap_ref_private() " Sidhartha Kumar
2025-04-18 2:51 ` Muchun Song
3 siblings, 1 reply; 11+ messages in thread
From: nifan.cxl @ 2025-04-17 15:43 UTC (permalink / raw)
To: muchun.song, willy
Cc: mcgrof, a.manzanares, dave, akpm, david, linux-mm, linux-kernel, Fan Ni
From: Fan Ni <fan.ni@samsung.com>
The function __unmap_hugepage_range() has two kinds of users:
1) unmap_hugepage_range(), which passes in the head page of a folio.
Since unmap_hugepage_range() already takes folio and there are no other
uses of the folio struct in the function, it is natural for
__unmap_hugepage_range() to take folio also.
2) All other uses, which pass in NULL pointer.
In both cases, we can pass in folio. Refactor __unmap_hugepage_range() to
take folio.
Signed-off-by: Fan Ni <fan.ni@samsung.com>
---
Question: If the change in the patch makes sense, should we try to convert all
"page" uses in __unmap_hugepage_range() to folio?
---
include/linux/hugetlb.h | 2 +-
mm/hugetlb.c | 10 +++++-----
2 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index b7699f35c87f..d6c503dd2f7d 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -133,7 +133,7 @@ void unmap_hugepage_range(struct vm_area_struct *,
void __unmap_hugepage_range(struct mmu_gather *tlb,
struct vm_area_struct *vma,
unsigned long start, unsigned long end,
- struct page *ref_page, zap_flags_t zap_flags);
+ struct folio *ref_folio, zap_flags_t zap_flags);
void hugetlb_report_meminfo(struct seq_file *);
int hugetlb_report_node_meminfo(char *buf, int len, int nid);
void hugetlb_show_meminfo_node(int nid);
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 3181dbe0c4bb..7d280ab23784 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5833,7 +5833,7 @@ int move_hugetlb_page_tables(struct vm_area_struct *vma,
void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
unsigned long start, unsigned long end,
- struct page *ref_page, zap_flags_t zap_flags)
+ struct folio *ref_folio, zap_flags_t zap_flags)
{
struct mm_struct *mm = vma->vm_mm;
unsigned long address;
@@ -5910,8 +5910,8 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
* page is being unmapped, not a range. Ensure the page we
* are about to unmap is the actual page of interest.
*/
- if (ref_page) {
- if (page != ref_page) {
+ if (ref_folio) {
+ if (page != folio_page(ref_folio, 0)) {
spin_unlock(ptl);
continue;
}
@@ -5977,7 +5977,7 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
/*
* Bail out after unmapping reference page if supplied
*/
- if (ref_page)
+ if (ref_folio)
break;
}
tlb_end_vma(tlb, vma);
@@ -6052,7 +6052,7 @@ void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start,
tlb_gather_mmu(&tlb, vma->vm_mm);
__unmap_hugepage_range(&tlb, vma, start, end,
- folio_page(ref_folio, 0), zap_flags);
+ ref_folio, zap_flags);
mmu_notifier_invalidate_range_end(&range);
tlb_finish_mmu(&tlb);
--
2.47.2
* Re: [PATCH 1/3] mm/hugetlb: Refactor unmap_ref_private() to take folio instead of page
2025-04-17 15:43 [PATCH 1/3] mm/hugetlb: Refactor unmap_ref_private() to take folio instead of page nifan.cxl
2025-04-17 15:43 ` [PATCH 2/3] mm/hugetlb: Refactor unmap_hugepage_range() " nifan.cxl
2025-04-17 15:43 ` [PATCH 3/3] mm/hugetlb: Refactor __unmap_hugepage_range() " nifan.cxl
@ 2025-04-17 16:11 ` Sidhartha Kumar
2025-04-18 2:51 ` Muchun Song
3 siblings, 0 replies; 11+ messages in thread
From: Sidhartha Kumar @ 2025-04-17 16:11 UTC (permalink / raw)
To: nifan.cxl, muchun.song, willy
Cc: mcgrof, a.manzanares, dave, akpm, david, linux-mm, linux-kernel, Fan Ni
On 4/17/25 11:43 AM, nifan.cxl@gmail.com wrote:
> From: Fan Ni <fan.ni@samsung.com>
>
> The function unmap_ref_private() has only one user, which passes in
> &folio->page. Let it take the folio directly.
>
> Signed-off-by: Fan Ni <fan.ni@samsung.com>
> ---
> mm/hugetlb.c | 8 ++++----
> 1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index ccc4f08f8481..b5d1ac8290a7 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -6064,7 +6064,7 @@ void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start,
> * same region.
> */
> static void unmap_ref_private(struct mm_struct *mm, struct vm_area_struct *vma,
> - struct page *page, unsigned long address)
> + struct folio *folio, unsigned long address)
> {
> struct hstate *h = hstate_vma(vma);
> struct vm_area_struct *iter_vma;
> @@ -6108,7 +6108,8 @@ static void unmap_ref_private(struct mm_struct *mm, struct vm_area_struct *vma,
> */
> if (!is_vma_resv_set(iter_vma, HPAGE_RESV_OWNER))
> unmap_hugepage_range(iter_vma, address,
> - address + huge_page_size(h), page, 0);
> + address + huge_page_size(h),
> + folio_page(folio, 0), 0);
> }
> i_mmap_unlock_write(mapping);
> }
> @@ -6231,8 +6232,7 @@ static vm_fault_t hugetlb_wp(struct folio *pagecache_folio,
> hugetlb_vma_unlock_read(vma);
> mutex_unlock(&hugetlb_fault_mutex_table[hash]);
>
> - unmap_ref_private(mm, vma, &old_folio->page,
> - vmf->address);
> + unmap_ref_private(mm, vma, old_folio, vmf->address);
>
> mutex_lock(&hugetlb_fault_mutex_table[hash]);
> hugetlb_vma_lock_read(vma);
Reviewed-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
* Re: [PATCH 2/3] mm/hugetlb: Refactor unmap_hugepage_range() to take folio instead of page
2025-04-17 15:43 ` [PATCH 2/3] mm/hugetlb: Refactor unmap_hugepage_range() " nifan.cxl
@ 2025-04-17 16:13 ` Sidhartha Kumar
2025-04-18 2:51 ` Muchun Song
1 sibling, 0 replies; 11+ messages in thread
From: Sidhartha Kumar @ 2025-04-17 16:13 UTC (permalink / raw)
To: nifan.cxl, muchun.song, willy
Cc: mcgrof, a.manzanares, dave, akpm, david, linux-mm, linux-kernel, Fan Ni
On 4/17/25 11:43 AM, nifan.cxl@gmail.com wrote:
> From: Fan Ni <fan.ni@samsung.com>
>
> The function unmap_hugepage_range() has two kinds of users:
> 1) unmap_ref_private(), which passes in the head page of a folio. Since
> unmap_ref_private() already takes folio and there are no other uses
> of the folio struct in the function, it is natural for
> unmap_hugepage_range() to take folio also.
> 2) All other uses, which pass in NULL pointer.
>
> In both cases, we can pass in folio. Refactor unmap_hugepage_range() to
> take folio.
>
> Signed-off-by: Fan Ni <fan.ni@samsung.com>
> ---
> include/linux/hugetlb.h | 2 +-
> mm/hugetlb.c | 7 ++++---
> 2 files changed, 5 insertions(+), 4 deletions(-)
>
> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> index a57bed83c657..b7699f35c87f 100644
> --- a/include/linux/hugetlb.h
> +++ b/include/linux/hugetlb.h
> @@ -128,7 +128,7 @@ int move_hugetlb_page_tables(struct vm_area_struct *vma,
> int copy_hugetlb_page_range(struct mm_struct *, struct mm_struct *,
> struct vm_area_struct *, struct vm_area_struct *);
> void unmap_hugepage_range(struct vm_area_struct *,
> - unsigned long, unsigned long, struct page *,
> + unsigned long, unsigned long, struct folio *folio,
> zap_flags_t);
> void __unmap_hugepage_range(struct mmu_gather *tlb,
> struct vm_area_struct *vma,
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index b5d1ac8290a7..3181dbe0c4bb 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -6039,7 +6039,7 @@ void __hugetlb_zap_end(struct vm_area_struct *vma,
> }
>
> void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start,
> - unsigned long end, struct page *ref_page,
> + unsigned long end, struct folio *ref_folio,
> zap_flags_t zap_flags)
> {
> struct mmu_notifier_range range;
> @@ -6051,7 +6051,8 @@ void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start,
> mmu_notifier_invalidate_range_start(&range);
> tlb_gather_mmu(&tlb, vma->vm_mm);
>
> - __unmap_hugepage_range(&tlb, vma, start, end, ref_page, zap_flags);
> + __unmap_hugepage_range(&tlb, vma, start, end,
> + folio_page(ref_folio, 0), zap_flags);
>
> mmu_notifier_invalidate_range_end(&range);
> tlb_finish_mmu(&tlb);
> @@ -6109,7 +6110,7 @@ static void unmap_ref_private(struct mm_struct *mm, struct vm_area_struct *vma,
> if (!is_vma_resv_set(iter_vma, HPAGE_RESV_OWNER))
> unmap_hugepage_range(iter_vma, address,
> address + huge_page_size(h),
> - folio_page(folio, 0), 0);
> + folio, 0);
> }
> i_mmap_unlock_write(mapping);
> }
Reviewed-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
* Re: [PATCH 3/3] mm/hugetlb: Refactor __unmap_hugepage_range() to take folio instead of page
2025-04-17 15:43 ` [PATCH 3/3] mm/hugetlb: Refactor __unmap_hugepage_range() " nifan.cxl
@ 2025-04-17 16:21 ` Sidhartha Kumar
2025-04-17 16:34 ` Fan Ni
0 siblings, 1 reply; 11+ messages in thread
From: Sidhartha Kumar @ 2025-04-17 16:21 UTC (permalink / raw)
To: nifan.cxl, muchun.song, willy
Cc: mcgrof, a.manzanares, dave, akpm, david, linux-mm, linux-kernel, Fan Ni
On 4/17/25 11:43 AM, nifan.cxl@gmail.com wrote:
> From: Fan Ni <fan.ni@samsung.com>
>
> The function __unmap_hugepage_range() has two kinds of users:
> 1) unmap_hugepage_range(), which passes in the head page of a folio.
> Since unmap_hugepage_range() already takes folio and there are no other
> uses of the folio struct in the function, it is natural for
> __unmap_hugepage_range() to take folio also.
> 2) All other uses, which pass in NULL pointer.
>
> In both cases, we can pass in folio. Refactor __unmap_hugepage_range() to
> take folio.
>
> Signed-off-by: Fan Ni <fan.ni@samsung.com>
> ---
>
> Question: If the change in the patch makes sense, should we try to convert all
> "page" uses in __unmap_hugepage_range() to folio?
>
For this to be correct, we have to ensure that the pte in:
page = pte_page(pte);
only refers to the pte of a head page. pte comes from:
pte = huge_ptep_get(mm, address, ptep);
and in the for loop above:
for (; address < end; address += sz)
address is incremented by the huge page size so I think address here
only points to head pages of hugetlb folios and it would make sense to
convert page to folio here.
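Something like this inside the pte walk, for example (untested sketch; the
surrounding pte/page/ptl declarations are assumed from the existing
function):

	struct folio *folio;

	pte = huge_ptep_get(mm, address, ptep);
	page = pte_page(pte);
	folio = page_folio(page);	/* head page of a hugetlb mapping -> its folio */

	/* Compare folios directly instead of head pages. */
	if (ref_folio && folio != ref_folio) {
		spin_unlock(ptl);
		continue;
	}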
> ---
> include/linux/hugetlb.h | 2 +-
> mm/hugetlb.c | 10 +++++-----
> 2 files changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> index b7699f35c87f..d6c503dd2f7d 100644
> --- a/include/linux/hugetlb.h
> +++ b/include/linux/hugetlb.h
> @@ -133,7 +133,7 @@ void unmap_hugepage_range(struct vm_area_struct *,
> void __unmap_hugepage_range(struct mmu_gather *tlb,
> struct vm_area_struct *vma,
> unsigned long start, unsigned long end,
> - struct page *ref_page, zap_flags_t zap_flags);
> + struct folio *ref_folio, zap_flags_t zap_flags);
> void hugetlb_report_meminfo(struct seq_file *);
> int hugetlb_report_node_meminfo(char *buf, int len, int nid);
> void hugetlb_show_meminfo_node(int nid);
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 3181dbe0c4bb..7d280ab23784 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -5833,7 +5833,7 @@ int move_hugetlb_page_tables(struct vm_area_struct *vma,
>
> void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
> unsigned long start, unsigned long end,
> - struct page *ref_page, zap_flags_t zap_flags)
> + struct folio *ref_folio, zap_flags_t zap_flags)
> {
> struct mm_struct *mm = vma->vm_mm;
> unsigned long address;
> @@ -5910,8 +5910,8 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
> * page is being unmapped, not a range. Ensure the page we
> * are about to unmap is the actual page of interest.
> */
> - if (ref_page) {
> - if (page != ref_page) {
> + if (ref_folio) {
> + if (page != folio_page(ref_folio, 0)) {
> spin_unlock(ptl);
> continue;
> }
> @@ -5977,7 +5977,7 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
> /*
> * Bail out after unmapping reference page if supplied
> */
> - if (ref_page)
> + if (ref_folio)
> break;
> }
> tlb_end_vma(tlb, vma);
> @@ -6052,7 +6052,7 @@ void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start,
> tlb_gather_mmu(&tlb, vma->vm_mm);
>
> __unmap_hugepage_range(&tlb, vma, start, end,
> - folio_page(ref_folio, 0), zap_flags);
> + ref_folio, zap_flags);
>
> mmu_notifier_invalidate_range_end(&range);
> tlb_finish_mmu(&tlb);
Reviewed-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
* Re: [PATCH 3/3] mm/hugetlb: Refactor __unmap_hugepage_range() to take folio instead of page
2025-04-17 16:21 ` Sidhartha Kumar
@ 2025-04-17 16:34 ` Fan Ni
2025-04-18 3:03 ` Muchun Song
0 siblings, 1 reply; 11+ messages in thread
From: Fan Ni @ 2025-04-17 16:34 UTC (permalink / raw)
To: Sidhartha Kumar
Cc: nifan.cxl, muchun.song, willy, mcgrof, a.manzanares, dave, akpm,
david, linux-mm, linux-kernel
On Thu, Apr 17, 2025 at 12:21:55PM -0400, Sidhartha Kumar wrote:
> On 4/17/25 11:43 AM, nifan.cxl@gmail.com wrote:
> > From: Fan Ni <fan.ni@samsung.com>
> >
> > The function __unmap_hugepage_range() has two kinds of users:
> > 1) unmap_hugepage_range(), which passes in the head page of a folio.
> > Since unmap_hugepage_range() already takes folio and there are no other
> > uses of the folio struct in the function, it is natural for
> > __unmap_hugepage_range() to take folio also.
> > 2) All other uses, which pass in NULL pointer.
> >
> > In both cases, we can pass in folio. Refactor __unmap_hugepage_range() to
> > take folio.
> >
> > Signed-off-by: Fan Ni <fan.ni@samsung.com>
> > ---
> >
> > Question: If the change in the patch makes sense, should we try to convert all
> > "page" uses in __unmap_hugepage_range() to folio?
> >
>
> For this to be correct, we have to ensure that the pte in:
>
> page = pte_page(pte);
>
> only refers to the pte of a head page. pte comes from:
>
> pte = huge_ptep_get(mm, address, ptep);
>
> and in the for loop above:
>
> for (; address < end; address += sz)
>
> address is incremented by the huge page size so I think address here only
> points to head pages of hugetlb folios and it would make sense to convert
> page to folio here.
>
Thanks Sidhartha for reviewing the series. I have a similar understanding and
wanted to get confirmation from experts in this area.
Thanks.
Fan
> > ---
> > include/linux/hugetlb.h | 2 +-
> > mm/hugetlb.c | 10 +++++-----
> > 2 files changed, 6 insertions(+), 6 deletions(-)
> >
> > diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> > index b7699f35c87f..d6c503dd2f7d 100644
> > --- a/include/linux/hugetlb.h
> > +++ b/include/linux/hugetlb.h
> > @@ -133,7 +133,7 @@ void unmap_hugepage_range(struct vm_area_struct *,
> > void __unmap_hugepage_range(struct mmu_gather *tlb,
> > struct vm_area_struct *vma,
> > unsigned long start, unsigned long end,
> > - struct page *ref_page, zap_flags_t zap_flags);
> > + struct folio *ref_folio, zap_flags_t zap_flags);
> > void hugetlb_report_meminfo(struct seq_file *);
> > int hugetlb_report_node_meminfo(char *buf, int len, int nid);
> > void hugetlb_show_meminfo_node(int nid);
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index 3181dbe0c4bb..7d280ab23784 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -5833,7 +5833,7 @@ int move_hugetlb_page_tables(struct vm_area_struct *vma,
> > void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
> > unsigned long start, unsigned long end,
> > - struct page *ref_page, zap_flags_t zap_flags)
> > + struct folio *ref_folio, zap_flags_t zap_flags)
> > {
> > struct mm_struct *mm = vma->vm_mm;
> > unsigned long address;
> > @@ -5910,8 +5910,8 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
> > * page is being unmapped, not a range. Ensure the page we
> > * are about to unmap is the actual page of interest.
> > */
> > - if (ref_page) {
> > - if (page != ref_page) {
> > + if (ref_folio) {
> > + if (page != folio_page(ref_folio, 0)) {
> > spin_unlock(ptl);
> > continue;
> > }
> > @@ -5977,7 +5977,7 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
> > /*
> > * Bail out after unmapping reference page if supplied
> > */
> > - if (ref_page)
> > + if (ref_folio)
> > break;
> > }
> > tlb_end_vma(tlb, vma);
> > @@ -6052,7 +6052,7 @@ void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start,
> > tlb_gather_mmu(&tlb, vma->vm_mm);
> > __unmap_hugepage_range(&tlb, vma, start, end,
> > - folio_page(ref_folio, 0), zap_flags);
> > + ref_folio, zap_flags);
> > mmu_notifier_invalidate_range_end(&range);
> > tlb_finish_mmu(&tlb);
> Reviewed-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
* Re: [PATCH 1/3] mm/hugetlb: Refactor unmap_ref_private() to take folio instead of page
2025-04-17 15:43 [PATCH 1/3] mm/hugetlb: Refactor unmap_ref_private() to take folio instead of page nifan.cxl
` (2 preceding siblings ...)
2025-04-17 16:11 ` [PATCH 1/3] mm/hugetlb: Refactor unmap_ref_private() " Sidhartha Kumar
@ 2025-04-18 2:51 ` Muchun Song
3 siblings, 0 replies; 11+ messages in thread
From: Muchun Song @ 2025-04-18 2:51 UTC (permalink / raw)
To: nifan.cxl
Cc: willy, mcgrof, a.manzanares, dave, akpm, david, linux-mm,
linux-kernel, Fan Ni
> On Apr 17, 2025, at 23:43, nifan.cxl@gmail.com wrote:
>
> From: Fan Ni <fan.ni@samsung.com>
>
> The function unmap_ref_private() has only one user, which passes in
> &folio->page. Let it take the folio directly.
>
> Signed-off-by: Fan Ni <fan.ni@samsung.com>
Reviewed-by: Muchun Song <muchun.song@linux.dev>
Thanks.
* Re: [PATCH 2/3] mm/hugetlb: Refactor unmap_hugepage_range() to take folio instead of page
2025-04-17 15:43 ` [PATCH 2/3] mm/hugetlb: Refactor unmap_hugepage_range() " nifan.cxl
2025-04-17 16:13 ` Sidhartha Kumar
@ 2025-04-18 2:51 ` Muchun Song
1 sibling, 0 replies; 11+ messages in thread
From: Muchun Song @ 2025-04-18 2:51 UTC (permalink / raw)
To: nifan.cxl
Cc: willy, mcgrof, a.manzanares, dave, akpm, david, linux-mm,
linux-kernel, Fan Ni
> On Apr 17, 2025, at 23:43, nifan.cxl@gmail.com wrote:
>
> From: Fan Ni <fan.ni@samsung.com>
>
> The function unmap_hugepage_range() has two kinds of users:
> 1) unmap_ref_private(), which passes in the head page of a folio. Since
> unmap_ref_private() already takes folio and there are no other uses
> of the folio struct in the function, it is natural for
> unmap_hugepage_range() to take folio also.
> 2) All other uses, which pass in NULL pointer.
>
> In both cases, we can pass in folio. Refactor unmap_hugepage_range() to
> take folio.
>
> Signed-off-by: Fan Ni <fan.ni@samsung.com>
Reviewed-by: Muchun Song <muchun.song@linux.dev>
Thanks.
* Re: [PATCH 3/3] mm/hugetlb: Refactor __unmap_hugepage_range() to take folio instead of page
2025-04-17 16:34 ` Fan Ni
@ 2025-04-18 3:03 ` Muchun Song
2025-04-18 4:27 ` Fan Ni
0 siblings, 1 reply; 11+ messages in thread
From: Muchun Song @ 2025-04-18 3:03 UTC (permalink / raw)
To: Fan Ni
Cc: Sidhartha Kumar, willy, mcgrof, a.manzanares, dave, akpm, david,
linux-mm, linux-kernel
> On Apr 18, 2025, at 00:34, Fan Ni <nifan.cxl@gmail.com> wrote:
>
> On Thu, Apr 17, 2025 at 12:21:55PM -0400, Sidhartha Kumar wrote:
>> On 4/17/25 11:43 AM, nifan.cxl@gmail.com wrote:
>>> From: Fan Ni <fan.ni@samsung.com>
>>>
>>> The function __unmap_hugepage_range() has two kinds of users:
>>> 1) unmap_hugepage_range(), which passes in the head page of a folio.
>>> Since unmap_hugepage_range() already takes folio and there are no other
>>> uses of the folio struct in the function, it is natural for
>>> __unmap_hugepage_range() to take folio also.
>>> 2) All other uses, which pass in NULL pointer.
>>>
>>> In both cases, we can pass in folio. Refactor __unmap_hugepage_range() to
>>> take folio.
>>>
>>> Signed-off-by: Fan Ni <fan.ni@samsung.com>
>>> ---
>>>
>>> Question: If the change in the patch makes sense, should we try to convert all
>>> "page" uses in __unmap_hugepage_range() to folio?
>>>
>>
>> For this to be correct, we have to ensure that the pte in:
>>
>> page = pte_page(pte);
>>
>> only refers to the pte of a head page. pte comes from:
>>
>> pte = huge_ptep_get(mm, address, ptep);
>>
>> and in the for loop above:
>>
>> for (; address < end; address += sz)
>>
>> address is incremented by the huge page size so I think address here only
>> points to head pages of hugetlb folios and it would make sense to convert
>> page to folio here.
>>
>
> Thanks Sidhartha for reviewing the series. I have similar understanding and
> wanted to get confirmation from experts in this area.
I think your understanding is right. BTW, you forgot to update the definition of
__unmap_hugepage_range() for the !CONFIG_HUGETLB_PAGE case.
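That stub needs to switch to the folio signature as well, i.e. something like
(untested sketch):

	static inline void __unmap_hugepage_range(struct mmu_gather *tlb,
			struct vm_area_struct *vma, unsigned long start,
			unsigned long end, struct folio *ref_folio,
			zap_flags_t zap_flags)
	{
		BUG();
	}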
>
> Thanks.
> Fan
* Re: [PATCH 3/3] mm/hugetlb: Refactor __unmap_hugepage_range() to take folio instead of page
2025-04-18 3:03 ` Muchun Song
@ 2025-04-18 4:27 ` Fan Ni
0 siblings, 0 replies; 11+ messages in thread
From: Fan Ni @ 2025-04-18 4:27 UTC (permalink / raw)
To: Muchun Song
Cc: Fan Ni, Sidhartha Kumar, willy, mcgrof, a.manzanares, dave, akpm,
david, linux-mm, linux-kernel
On Fri, Apr 18, 2025 at 11:03:59AM +0800, Muchun Song wrote:
>
>
> > On Apr 18, 2025, at 00:34, Fan Ni <nifan.cxl@gmail.com> wrote:
> >
> > On Thu, Apr 17, 2025 at 12:21:55PM -0400, Sidhartha Kumar wrote:
> >> On 4/17/25 11:43 AM, nifan.cxl@gmail.com wrote:
> >>> From: Fan Ni <fan.ni@samsung.com>
> >>>
> >>> The function __unmap_hugepage_range() has two kinds of users:
> >>> 1) unmap_hugepage_range(), which passes in the head page of a folio.
> >>> Since unmap_hugepage_range() already takes folio and there are no other
> >>> uses of the folio struct in the function, it is natural for
> >>> __unmap_hugepage_range() to take folio also.
> >>> 2) All other uses, which pass in NULL pointer.
> >>>
> >>> In both cases, we can pass in folio. Refactor __unmap_hugepage_range() to
> >>> take folio.
> >>>
> >>> Signed-off-by: Fan Ni <fan.ni@samsung.com>
> >>> ---
> >>>
> >>> Question: If the change in the patch makes sense, should we try to convert all
> >>> "page" uses in __unmap_hugepage_range() to folio?
> >>>
> >>
> >> For this to be correct, we have to ensure that the pte in:
> >>
> >> page = pte_page(pte);
> >>
> >> only refers to the pte of a head page. pte comes from:
> >>
> >> pte = huge_ptep_get(mm, address, ptep);
> >>
> >> and in the for loop above:
> >>
> >> for (; address < end; address += sz)
> >>
> >> address is incremented by the huge page size so I think address here only
> >> points to head pages of hugetlb folios and it would make sense to convert
> >> page to folio here.
> >>
> >
> > Thanks Sidhartha for reviewing the series. I have similar understanding and
> > wanted to get confirmation from experts in this area.
>
> I think your understanding is right. BTW, you forgot to update the definition of
> __unmap_hugepage_range() for the !CONFIG_HUGETLB_PAGE case.
>
Thanks Muchun. You are right, we need to update that.
Hi Andrew,
I see you picked this patch up; should I send a v2 for the series to fix the
issue mentioned above?
The fix is simple, as shown below.
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index d6c503dd2f7d..ebaf95231934 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -452,7 +452,7 @@ static inline long hugetlb_change_protection(
static inline void __unmap_hugepage_range(struct mmu_gather *tlb,
struct vm_area_struct *vma, unsigned long start,
- unsigned long end, struct page *ref_page,
+ unsigned long end, struct folio *ref_folio,
zap_flags_t zap_flags)
{
BUG();
Fan
> >
> > Thanks.
> > Fan
>