* [PATCH v3 1/4] mm/hugetlb: Pass folio instead of page to unmap_ref_private()
From: nifan.cxl @ 2025-04-28 17:11 UTC
To: muchun.song, willy
Cc: mcgrof, a.manzanares, dave, akpm, david, linux-mm, linux-kernel,
Fan Ni, Sidhartha Kumar
From: Fan Ni <fan.ni@samsung.com>
The function unmap_ref_private() has only user, which passes in
&folio->page. Let it take folio directly.
Signed-off-by: Fan Ni <fan.ni@samsung.com>
Reviewed-by: Muchun Song <muchun.song@linux.dev>
Reviewed-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
---
mm/hugetlb.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index e287d8050b40..b1268e7ca1f6 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6039,7 +6039,7 @@ void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start,
* same region.
*/
static void unmap_ref_private(struct mm_struct *mm, struct vm_area_struct *vma,
- struct page *page, unsigned long address)
+ struct folio *folio, unsigned long address)
{
struct hstate *h = hstate_vma(vma);
struct vm_area_struct *iter_vma;
@@ -6083,7 +6083,8 @@ static void unmap_ref_private(struct mm_struct *mm, struct vm_area_struct *vma,
*/
if (!is_vma_resv_set(iter_vma, HPAGE_RESV_OWNER))
unmap_hugepage_range(iter_vma, address,
- address + huge_page_size(h), page, 0);
+ address + huge_page_size(h),
+ &folio->page, 0);
}
i_mmap_unlock_write(mapping);
}
@@ -6206,8 +6207,7 @@ static vm_fault_t hugetlb_wp(struct folio *pagecache_folio,
hugetlb_vma_unlock_read(vma);
mutex_unlock(&hugetlb_fault_mutex_table[hash]);
- unmap_ref_private(mm, vma, &old_folio->page,
- vmf->address);
+ unmap_ref_private(mm, vma, old_folio, vmf->address);
mutex_lock(&hugetlb_fault_mutex_table[hash]);
hugetlb_vma_lock_read(vma);
--
2.47.2
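[Editor's note for readers following the folio conversion: a folio wraps its head page, so converting between the two views is a cheap pointer operation. A kernel-context sketch of the helpers this patch relies on (illustrative only, not a standalone program; `folio` and `page` stand for any valid pointers):

	/* Illustrative kernel-context sketch. */
	struct page *head = &folio->page;         /* folio -> its head page (as in the diff) */
	struct folio *f   = page_folio(page);     /* any page -> the folio that contains it  */
	struct page *p0   = folio_page(folio, 0); /* 0th page of the folio, i.e. the head    */
]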
* Re: [PATCH v3 1/4] mm/hugetlb: Pass folio instead of page to unmap_ref_private()
From: David Hildenbrand @ 2025-04-28 20:49 UTC
To: nifan.cxl, muchun.song, willy
Cc: mcgrof, a.manzanares, dave, akpm, linux-mm, linux-kernel, Fan Ni,
Sidhartha Kumar
On 28.04.25 19:11, nifan.cxl@gmail.com wrote:
> From: Fan Ni <fan.ni@samsung.com>
>
> The function unmap_ref_private() has only user, which passes in
"only a single user"
> &folio->page. Let it take folio directly.
"the folio"
Acked-by: David Hildenbrand <david@redhat.com>
--
Cheers,
David / dhildenb
* Re: [PATCH v3 1/4] mm/hugetlb: Pass folio instead of page to unmap_ref_private()
From: Matthew Wilcox @ 2025-04-28 21:02 UTC
To: nifan.cxl
Cc: muchun.song, mcgrof, a.manzanares, dave, akpm, david, linux-mm,
linux-kernel, Fan Ni, Sidhartha Kumar
On Mon, Apr 28, 2025 at 10:11:44AM -0700, nifan.cxl@gmail.com wrote:
> From: Fan Ni <fan.ni@samsung.com>
>
> The function unmap_ref_private() has only user, which passes in
> &folio->page. Let it take folio directly.
>
> Signed-off-by: Fan Ni <fan.ni@samsung.com>
> Reviewed-by: Muchun Song <muchun.song@linux.dev>
> Reviewed-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
* Re: [PATCH v3 1/4] mm/hugetlb: Pass folio instead of page to unmap_ref_private()
From: Oscar Salvador @ 2025-04-30 7:50 UTC
To: nifan.cxl
Cc: muchun.song, willy, mcgrof, a.manzanares, dave, akpm, david,
linux-mm, linux-kernel, Fan Ni, Sidhartha Kumar
On Mon, Apr 28, 2025 at 10:11:44AM -0700, nifan.cxl@gmail.com wrote:
> From: Fan Ni <fan.ni@samsung.com>
>
> The function unmap_ref_private() has only user, which passes in
> &folio->page. Let it take folio directly.
>
> Signed-off-by: Fan Ni <fan.ni@samsung.com>
> Reviewed-by: Muchun Song <muchun.song@linux.dev>
> Reviewed-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
> ---
> mm/hugetlb.c | 8 ++++----
> 1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index e287d8050b40..b1268e7ca1f6 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -6039,7 +6039,7 @@ void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start,
> * same region.
> */
> static void unmap_ref_private(struct mm_struct *mm, struct vm_area_struct *vma,
> - struct page *page, unsigned long address)
> + struct folio *folio, unsigned long address)
> {
> struct hstate *h = hstate_vma(vma);
> struct vm_area_struct *iter_vma;
> @@ -6083,7 +6083,8 @@ static void unmap_ref_private(struct mm_struct *mm, struct vm_area_struct *vma,
> */
> if (!is_vma_resv_set(iter_vma, HPAGE_RESV_OWNER))
> unmap_hugepage_range(iter_vma, address,
> - address + huge_page_size(h), page, 0);
> + address + huge_page_size(h),
> + &folio->page, 0);
> }
> i_mmap_unlock_write(mapping);
> }
> @@ -6206,8 +6207,7 @@ static vm_fault_t hugetlb_wp(struct folio *pagecache_folio,
> hugetlb_vma_unlock_read(vma);
> mutex_unlock(&hugetlb_fault_mutex_table[hash]);
>
> - unmap_ref_private(mm, vma, &old_folio->page,
> - vmf->address);
> + unmap_ref_private(mm, vma, old_folio, vmf->address);
>
> mutex_lock(&hugetlb_fault_mutex_table[hash]);
> hugetlb_vma_lock_read(vma);
> --
> 2.47.2
>
>
--
Oscar Salvador
SUSE Labs
* [PATCH v3 2/4] mm/hugetlb: Refactor unmap_hugepage_range() to take folio instead of page
From: nifan.cxl @ 2025-04-28 17:11 UTC
To: muchun.song, willy
Cc: mcgrof, a.manzanares, dave, akpm, david, linux-mm, linux-kernel,
Fan Ni, Sidhartha Kumar
From: Fan Ni <fan.ni@samsung.com>
The function unmap_hugepage_range() has two kinds of users:
1) unmap_ref_private(), which passes in the head page of a folio. Since
unmap_ref_private() already takes folio and there are no other uses
of the folio struct in the function, it is natural for
unmap_hugepage_range() to take folio also.
2) All other uses, which pass in NULL pointer.
In both cases, we can pass in folio. Refactor unmap_hugepage_range() to
take folio.
Signed-off-by: Fan Ni <fan.ni@samsung.com>
Reviewed-by: Muchun Song <muchun.song@linux.dev>
Reviewed-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
---
include/linux/hugetlb.h | 4 ++--
mm/hugetlb.c | 7 ++++---
2 files changed, 6 insertions(+), 5 deletions(-)
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index a57bed83c657..83d85cbb4284 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -128,8 +128,8 @@ int move_hugetlb_page_tables(struct vm_area_struct *vma,
int copy_hugetlb_page_range(struct mm_struct *, struct mm_struct *,
struct vm_area_struct *, struct vm_area_struct *);
void unmap_hugepage_range(struct vm_area_struct *,
- unsigned long, unsigned long, struct page *,
- zap_flags_t);
+ unsigned long start, unsigned long end,
+ struct folio *, zap_flags_t);
void __unmap_hugepage_range(struct mmu_gather *tlb,
struct vm_area_struct *vma,
unsigned long start, unsigned long end,
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index b1268e7ca1f6..7601e3d344bc 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6014,7 +6014,7 @@ void __hugetlb_zap_end(struct vm_area_struct *vma,
}
void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start,
- unsigned long end, struct page *ref_page,
+ unsigned long end, struct folio *folio,
zap_flags_t zap_flags)
{
struct mmu_notifier_range range;
@@ -6026,7 +6026,8 @@ void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start,
mmu_notifier_invalidate_range_start(&range);
tlb_gather_mmu(&tlb, vma->vm_mm);
- __unmap_hugepage_range(&tlb, vma, start, end, ref_page, zap_flags);
+ __unmap_hugepage_range(&tlb, vma, start, end,
+ &folio->page, zap_flags);
mmu_notifier_invalidate_range_end(&range);
tlb_finish_mmu(&tlb);
@@ -6084,7 +6085,7 @@ static void unmap_ref_private(struct mm_struct *mm, struct vm_area_struct *vma,
if (!is_vma_resv_set(iter_vma, HPAGE_RESV_OWNER))
unmap_hugepage_range(iter_vma, address,
address + huge_page_size(h),
- &folio->page, 0);
+ folio, 0);
}
i_mmap_unlock_write(mapping);
}
--
2.47.2
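[Editor's note: taken together, the two caller patterns the commit message describes look roughly like this — a sketch only; the targeted call mirrors the diff above, the NULL form paraphrases the other call sites:

	/* 1) Targeted unmap: unmap_ref_private() names the folio of interest. */
	unmap_hugepage_range(iter_vma, address, address + huge_page_size(h),
	                     folio, 0);

	/* 2) Range unmap: other callers pass NULL, meaning "unmap the whole range". */
	unmap_hugepage_range(vma, start, end, NULL, zap_flags);
]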
* Re: [PATCH v3 2/4] mm/hugetlb: Refactor unmap_hugepage_range() to take folio instead of page
From: David Hildenbrand @ 2025-04-28 20:50 UTC
To: nifan.cxl, muchun.song, willy
Cc: mcgrof, a.manzanares, dave, akpm, linux-mm, linux-kernel, Fan Ni,
Sidhartha Kumar
On 28.04.25 19:11, nifan.cxl@gmail.com wrote:
> From: Fan Ni <fan.ni@samsung.com>
>
> The function unmap_hugepage_range() has two kinds of users:
> 1) unmap_ref_private(), which passes in the head page of a folio. Since
> unmap_ref_private() already takes folio and there are no other uses
> of the folio struct in the function, it is natural for
> unmap_hugepage_range() to take folio also.
> 2) All other uses, which pass in NULL pointer.
>
> In both cases, we can pass in folio. Refactor unmap_hugepage_range() to
> take folio.
>
> Signed-off-by: Fan Ni <fan.ni@samsung.com>
> Reviewed-by: Muchun Song <muchun.song@linux.dev>
> Reviewed-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
> ---
Acked-by: David Hildenbrand <david@redhat.com>
--
Cheers,
David / dhildenb
* Re: [PATCH v3 2/4] mm/hugetlb: Refactor unmap_hugepage_range() to take folio instead of page
From: Oscar Salvador @ 2025-04-30 7:52 UTC
To: nifan.cxl
Cc: muchun.song, willy, mcgrof, a.manzanares, dave, akpm, david,
linux-mm, linux-kernel, Fan Ni, Sidhartha Kumar
On Mon, Apr 28, 2025 at 10:11:45AM -0700, nifan.cxl@gmail.com wrote:
> From: Fan Ni <fan.ni@samsung.com>
>
> The function unmap_hugepage_range() has two kinds of users:
> 1) unmap_ref_private(), which passes in the head page of a folio. Since
> unmap_ref_private() already takes folio and there are no other uses
> of the folio struct in the function, it is natural for
> unmap_hugepage_range() to take folio also.
> 2) All other uses, which pass in NULL pointer.
>
> In both cases, we can pass in folio. Refactor unmap_hugepage_range() to
> take folio.
>
> Signed-off-by: Fan Ni <fan.ni@samsung.com>
> Reviewed-by: Muchun Song <muchun.song@linux.dev>
> Reviewed-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
--
Oscar Salvador
SUSE Labs
* [PATCH v3 3/4] mm/hugetlb: Refactor __unmap_hugepage_range() to take folio instead of page
From: nifan.cxl @ 2025-04-28 17:11 UTC
To: muchun.song, willy
Cc: mcgrof, a.manzanares, dave, akpm, david, linux-mm, linux-kernel, Fan Ni
From: Fan Ni <fan.ni@samsung.com>
The function __unmap_hugepage_range() has two kinds of users:
1) unmap_hugepage_range(), which passes in the head page of a folio.
Since unmap_hugepage_range() already takes folio and there are no other
uses of the folio struct in the function, it is natural for
__unmap_hugepage_range() to take folio also.
2) All other uses, which pass in NULL pointer.
In both cases, we can pass in folio. Refactor __unmap_hugepage_range() to
take folio.
Signed-off-by: Fan Ni <fan.ni@samsung.com>
---
include/linux/hugetlb.h | 4 ++--
mm/hugetlb.c | 10 +++++-----
2 files changed, 7 insertions(+), 7 deletions(-)
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 83d85cbb4284..3a07a60c8cd9 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -133,7 +133,7 @@ void unmap_hugepage_range(struct vm_area_struct *,
void __unmap_hugepage_range(struct mmu_gather *tlb,
struct vm_area_struct *vma,
unsigned long start, unsigned long end,
- struct page *ref_page, zap_flags_t zap_flags);
+ struct folio *, zap_flags_t zap_flags);
void hugetlb_report_meminfo(struct seq_file *);
int hugetlb_report_node_meminfo(char *buf, int len, int nid);
void hugetlb_show_meminfo_node(int nid);
@@ -452,7 +452,7 @@ static inline long hugetlb_change_protection(
static inline void __unmap_hugepage_range(struct mmu_gather *tlb,
struct vm_area_struct *vma, unsigned long start,
- unsigned long end, struct page *ref_page,
+ unsigned long end, struct folio *folio,
zap_flags_t zap_flags)
{
BUG();
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 7601e3d344bc..6696206d556e 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5808,7 +5808,7 @@ int move_hugetlb_page_tables(struct vm_area_struct *vma,
void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
unsigned long start, unsigned long end,
- struct page *ref_page, zap_flags_t zap_flags)
+ struct folio *folio, zap_flags_t zap_flags)
{
struct mm_struct *mm = vma->vm_mm;
unsigned long address;
@@ -5885,8 +5885,8 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
* page is being unmapped, not a range. Ensure the page we
* are about to unmap is the actual page of interest.
*/
- if (ref_page) {
- if (page != ref_page) {
+ if (folio) {
+ if (page_folio(page) != folio) {
spin_unlock(ptl);
continue;
}
@@ -5952,7 +5952,7 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
/*
* Bail out after unmapping reference page if supplied
*/
- if (ref_page)
+ if (folio)
break;
}
tlb_end_vma(tlb, vma);
@@ -6027,7 +6027,7 @@ void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start,
tlb_gather_mmu(&tlb, vma->vm_mm);
__unmap_hugepage_range(&tlb, vma, start, end,
- &folio->page, zap_flags);
+ folio, zap_flags);
mmu_notifier_invalidate_range_end(&range);
tlb_finish_mmu(&tlb);
--
2.47.2
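[Editor's note: two details in the diff above are worth calling out. The hugetlb.h prototype legally omits the parameter name (`struct folio *,`), matching the neighbouring declarations, while the definition names it. And the per-PTE filter the patch introduces reduces to the following shape — an illustrative condensation only; the real code also sets HPAGE_RESV_UNMAPPED on the matching branch:

	/* Illustrative condensation of the filtering step. */
	if (folio && page_folio(page) != folio) {
	        spin_unlock(ptl);       /* this PTE maps some other folio */
	        continue;               /* keep walking the requested range */
	}
]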
* Re: [PATCH v3 3/4] mm/hugetlb: Refactor __unmap_hugepage_range() to take folio instead of page
From: David Hildenbrand @ 2025-04-28 20:51 UTC
To: nifan.cxl, muchun.song, willy
Cc: mcgrof, a.manzanares, dave, akpm, linux-mm, linux-kernel, Fan Ni
On 28.04.25 19:11, nifan.cxl@gmail.com wrote:
> From: Fan Ni <fan.ni@samsung.com>
>
> The function __unmap_hugepage_range() has two kinds of users:
> 1) unmap_hugepage_range(), which passes in the head page of a folio.
> Since unmap_hugepage_range() already takes folio and there are no other
> uses of the folio struct in the function, it is natural for
> __unmap_hugepage_range() to take folio also.
> 2) All other uses, which pass in NULL pointer.
>
> In both cases, we can pass in folio. Refactor __unmap_hugepage_range() to
> take folio.
>
> Signed-off-by: Fan Ni <fan.ni@samsung.com>
> ---
Acked-by: David Hildenbrand <david@redhat.com>
--
Cheers,
David / dhildenb
* Re: [PATCH v3 3/4] mm/hugetlb: Refactor __unmap_hugepage_range() to take folio instead of page
From: Oscar Salvador @ 2025-04-30 7:58 UTC
To: nifan.cxl
Cc: muchun.song, willy, mcgrof, a.manzanares, dave, akpm, david,
linux-mm, linux-kernel, Fan Ni
On Mon, Apr 28, 2025 at 10:11:46AM -0700, nifan.cxl@gmail.com wrote:
> From: Fan Ni <fan.ni@samsung.com>
>
> The function __unmap_hugepage_range() has two kinds of users:
> 1) unmap_hugepage_range(), which passes in the head page of a folio.
> Since unmap_hugepage_range() already takes folio and there are no other
> uses of the folio struct in the function, it is natural for
> __unmap_hugepage_range() to take folio also.
> 2) All other uses, which pass in NULL pointer.
>
> In both cases, we can pass in folio. Refactor __unmap_hugepage_range() to
> take folio.
>
> Signed-off-by: Fan Ni <fan.ni@samsung.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
But:
> void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
> unsigned long start, unsigned long end,
> - struct page *ref_page, zap_flags_t zap_flags)
> + struct folio *folio, zap_flags_t zap_flags)
I think we are kinda losing information here. ref_ was a good hint
and...
> struct mm_struct *mm = vma->vm_mm;
> unsigned long address;
> @@ -5885,8 +5885,8 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
> * page is being unmapped, not a range. Ensure the page we
> * are about to unmap is the actual page of interest.
> */
> - if (ref_page) {
> - if (page != ref_page) {
> + if (folio) {
> + if (page_folio(page) != folio) {
You have to update the comment above, since we are not passing a
reference page anymore but a folio.
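Something along these lines, perhaps (wording is only a suggestion):

	/*
	 * If a folio is supplied, it is because a specific folio is
	 * being unmapped, not a range.  Ensure the folio we are about
	 * to unmap is the actual folio of interest.
	 */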
--
Oscar Salvador
SUSE Labs
* Re: [PATCH v3 3/4] mm/hugetlb: Refactor __unmap_hugepage_range() to take folio instead of page
From: Fan Ni @ 2025-05-04 21:35 UTC
To: Oscar Salvador
Cc: nifan.cxl, muchun.song, willy, mcgrof, a.manzanares, dave, akpm,
david, linux-mm, linux-kernel
On Wed, Apr 30, 2025 at 09:58:35AM +0200, Oscar Salvador wrote:
> On Mon, Apr 28, 2025 at 10:11:46AM -0700, nifan.cxl@gmail.com wrote:
> > From: Fan Ni <fan.ni@samsung.com>
> >
> > The function __unmap_hugepage_range() has two kinds of users:
> > 1) unmap_hugepage_range(), which passes in the head page of a folio.
> > Since unmap_hugepage_range() already takes folio and there are no other
> > uses of the folio struct in the function, it is natural for
> > __unmap_hugepage_range() to take folio also.
> > 2) All other uses, which pass in NULL pointer.
> >
> > In both cases, we can pass in folio. Refactor __unmap_hugepage_range() to
> > take folio.
> >
> > Signed-off-by: Fan Ni <fan.ni@samsung.com>
>
> Reviewed-by: Oscar Salvador <osalvador@suse.de>
>
> But:
>
> > void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
> > unsigned long start, unsigned long end,
> > - struct page *ref_page, zap_flags_t zap_flags)
> > + struct folio *folio, zap_flags_t zap_flags)
>
> I think we are kinda losing information here. ref_ was a good hint
> and...
Hi Oscar,
Thanks for the feedback.
Since the suggested change here is minor and does not affect the
function, and we do not have an aligned opinion here:
https://lore.kernel.org/linux-mm/b23ef51b-1284-4168-8157-432c3e045788@redhat.com/
I will leave it as it is until there is more push for the change.
>
> > struct mm_struct *mm = vma->vm_mm;
> > unsigned long address;
> > @@ -5885,8 +5885,8 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
> > * page is being unmapped, not a range. Ensure the page we
> > * are about to unmap is the actual page of interest.
> > */
> > - if (ref_page) {
> > - if (page != ref_page) {
> > + if (folio) {
> > + if (page_folio(page) != folio) {
>
> You have to update the comment above, since we are not passing a
> reference page anymore but a folio.
Will update in the next version. Thanks.
Fan
>
>
>
> --
> Oscar Salvador
> SUSE Labs
>
--
Fan Ni
* Re: [PATCH v3 3/4] mm/hugetlb: Refactor __unmap_hugepage_range() to take folio instead of page
From: David Hildenbrand @ 2025-05-05 7:01 UTC
To: Fan Ni, Oscar Salvador
Cc: muchun.song, willy, mcgrof, a.manzanares, dave, akpm, linux-mm,
linux-kernel
On 04.05.25 23:35, Fan Ni wrote:
> On Wed, Apr 30, 2025 at 09:58:35AM +0200, Oscar Salvador wrote:
>> On Mon, Apr 28, 2025 at 10:11:46AM -0700, nifan.cxl@gmail.com wrote:
>>> From: Fan Ni <fan.ni@samsung.com>
>>>
>>> The function __unmap_hugepage_range() has two kinds of users:
>>> 1) unmap_hugepage_range(), which passes in the head page of a folio.
>>> Since unmap_hugepage_range() already takes folio and there are no other
>>> uses of the folio struct in the function, it is natural for
>>> __unmap_hugepage_range() to take folio also.
>>> 2) All other uses, which pass in NULL pointer.
>>>
>>> In both cases, we can pass in folio. Refactor __unmap_hugepage_range() to
>>> take folio.
>>>
>>> Signed-off-by: Fan Ni <fan.ni@samsung.com>
>>
>> Reviewed-by: Oscar Salvador <osalvador@suse.de>
>>
>> But:
>>
>>> void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
>>> unsigned long start, unsigned long end,
>>> - struct page *ref_page, zap_flags_t zap_flags)
>>> + struct folio *folio, zap_flags_t zap_flags)
>>
>> I think we are kinda losing information here. ref_ was a good hint
>> and...
>
> Hi Oscar,
>
> Thanks for the feedback.
> Since the suggested change here is minor and does not affect the
> function, and we do not have an aligned opinion here:
> https://lore.kernel.org/linux-mm/b23ef51b-1284-4168-8157-432c3e045788@redhat.com/
> I will leave it as it is until there is more push for the change.
I don't think we're losing any information. :) Especially since nowhere
else in the kernel do we use the term "ref_page" or "ref_folio". And things
like put_ref_page(), free_unref_folios() ... talk about dropping
references, not "this is the reference folio".
"folio_to_unmap" might be clearer, but then, I think we should just use
a single folio variable in that function.
This function might benefit from kerneldoc, though :)
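A sketch of what that kerneldoc could look like against the
post-series signature (wording is only a suggestion, not something
from this thread):

	/**
	 * __unmap_hugepage_range - unmap hugetlb PTEs in a VMA range
	 * @tlb: the mmu_gather that collects the unmapped pages
	 * @vma: the hugetlb VMA whose range is being unmapped
	 * @start: first address to unmap, huge-page aligned
	 * @end: end of the range to unmap
	 * @folio: if non-NULL, only unmap PTEs mapping this folio and stop
	 *         after the first match; if NULL, unmap the whole range
	 * @zap_flags: ZAP_FLAG_* flags, e.g. controlling uffd-wp markers
	 */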
--
Cheers,
David / dhildenb
* [PATCH v3 4/4] mm/hugetlb: Convert use of struct page to folio in __unmap_hugepage_range()
From: nifan.cxl @ 2025-04-28 17:11 UTC
To: muchun.song, willy
Cc: mcgrof, a.manzanares, dave, akpm, david, linux-mm, linux-kernel, Fan Ni
From: Fan Ni <fan.ni@samsung.com>
In __unmap_hugepage_range(), the "page" pointer always points to the
first page of a huge page, which guarantees there is a folio associated
with it. Convert the "page" pointer to use folio.
Signed-off-by: Fan Ni <fan.ni@samsung.com>
---
mm/hugetlb.c | 24 +++++++++++++-----------
1 file changed, 13 insertions(+), 11 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 6696206d556e..293c2afa724b 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5815,12 +5815,12 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
pte_t *ptep;
pte_t pte;
spinlock_t *ptl;
- struct page *page;
struct hstate *h = hstate_vma(vma);
unsigned long sz = huge_page_size(h);
bool adjust_reservation = false;
unsigned long last_addr_mask;
bool force_flush = false;
+ const bool folio_provided = !!folio;
WARN_ON(!is_vm_hugetlb_page(vma));
BUG_ON(start & ~huge_page_mask(h));
@@ -5879,14 +5879,13 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
continue;
}
- page = pte_page(pte);
/*
* If a reference page is supplied, it is because a specific
* page is being unmapped, not a range. Ensure the page we
* are about to unmap is the actual page of interest.
*/
- if (folio) {
- if (page_folio(page) != folio) {
+ if (folio_provided) {
+ if (folio != page_folio(pte_page(pte))) {
spin_unlock(ptl);
continue;
}
@@ -5896,12 +5895,14 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
* looking like data was lost
*/
set_vma_resv_flags(vma, HPAGE_RESV_UNMAPPED);
+ } else {
+ folio = page_folio(pte_page(pte));
}
pte = huge_ptep_get_and_clear(mm, address, ptep, sz);
tlb_remove_huge_tlb_entry(h, tlb, ptep, address);
if (huge_pte_dirty(pte))
- set_page_dirty(page);
+ folio_mark_dirty(folio);
/* Leave a uffd-wp pte marker if needed */
if (huge_pte_uffd_wp(pte) &&
!(zap_flags & ZAP_FLAG_DROP_MARKER))
@@ -5909,7 +5910,7 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
make_pte_marker(PTE_MARKER_UFFD_WP),
sz);
hugetlb_count_sub(pages_per_huge_page(h), mm);
- hugetlb_remove_rmap(page_folio(page));
+ hugetlb_remove_rmap(folio);
/*
* Restore the reservation for anonymous page, otherwise the
@@ -5918,8 +5919,8 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
* reservation bit.
*/
if (!h->surplus_huge_pages && __vma_private_lock(vma) &&
- folio_test_anon(page_folio(page))) {
- folio_set_hugetlb_restore_reserve(page_folio(page));
+ folio_test_anon(folio)) {
+ folio_set_hugetlb_restore_reserve(folio);
/* Reservation to be adjusted after the spin lock */
adjust_reservation = true;
}
@@ -5943,16 +5944,17 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
* count will not be incremented by free_huge_folio.
* Act as if we consumed the reservation.
*/
- folio_clear_hugetlb_restore_reserve(page_folio(page));
+ folio_clear_hugetlb_restore_reserve(folio);
else if (rc)
vma_add_reservation(h, vma, address);
}
- tlb_remove_page_size(tlb, page, huge_page_size(h));
+ tlb_remove_page_size(tlb, folio_page(folio, 0),
+ folio_size(folio));
/*
* Bail out after unmapping reference page if supplied
*/
- if (folio)
+ if (folio_provided)
break;
}
tlb_end_vma(tlb, vma);
--
2.47.2
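[Editor's note: reduced to its core, the pattern the patch introduces is to latch the caller's intent before reusing the parameter as per-iteration scratch. An illustrative sketch only, not the literal loop body (elisions marked "..."):

	const bool folio_provided = !!folio;    /* latch caller intent up front */

	for (; address < end; address += sz) {
	        ...
	        if (folio_provided) {
	                if (folio != page_folio(pte_page(pte)))
	                        continue;       /* skip PTEs mapping other folios */
	        } else {
	                folio = page_folio(pte_page(pte));      /* scratch reuse */
	        }
	        ...
	        if (folio_provided)
	                break;  /* only the requested folio was wanted */
	}
]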
* Re: [PATCH v3 4/4] mm/hugetlb: Convert use of struct page to folio in __unmap_hugepage_range()
From: David Hildenbrand @ 2025-04-28 20:53 UTC
To: nifan.cxl, muchun.song, willy
Cc: mcgrof, a.manzanares, dave, akpm, linux-mm, linux-kernel, Fan Ni
On 28.04.25 19:11, nifan.cxl@gmail.com wrote:
> From: Fan Ni <fan.ni@samsung.com>
>
> In __unmap_hugepage_range(), the "page" pointer always points to the
> first page of a huge page, which guarantees there is a folio associated
> with it. Convert the "page" pointer to use folio.
>
> Signed-off-by: Fan Ni <fan.ni@samsung.com>
> ---
> mm/hugetlb.c | 24 +++++++++++++-----------
> 1 file changed, 13 insertions(+), 11 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 6696206d556e..293c2afa724b 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -5815,12 +5815,12 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
> pte_t *ptep;
> pte_t pte;
> spinlock_t *ptl;
> - struct page *page;
> struct hstate *h = hstate_vma(vma);
> unsigned long sz = huge_page_size(h);
> bool adjust_reservation = false;
> unsigned long last_addr_mask;
> bool force_flush = false;
> + const bool folio_provided = !!folio;
I would put that all the way to the top.
> }
> @@ -5943,16 +5944,17 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
> * count will not be incremented by free_huge_folio.
> * Act as if we consumed the reservation.
> */
> - folio_clear_hugetlb_restore_reserve(page_folio(page));
> + folio_clear_hugetlb_restore_reserve(folio);
> else if (rc)
> vma_add_reservation(h, vma, address);
> }
>
> - tlb_remove_page_size(tlb, page, huge_page_size(h));
> + tlb_remove_page_size(tlb, folio_page(folio, 0),
> + folio_size(folio));
> /*
> * Bail out after unmapping reference page if supplied
> */
I wonder if we want to adjust that comment while at it.
/* If we were instructed to unmap a specific folio, we're done. */
Acked-by: David Hildenbrand <david@redhat.com>
--
Cheers,
David / dhildenb
* Re: [PATCH v3 4/4] mm/hugetlb: Convert use of struct page to folio in __unmap_hugepage_range()
From: Oscar Salvador @ 2025-04-30 8:09 UTC
To: nifan.cxl
Cc: muchun.song, willy, mcgrof, a.manzanares, dave, akpm, david,
linux-mm, linux-kernel, Fan Ni
On Mon, Apr 28, 2025 at 10:11:47AM -0700, nifan.cxl@gmail.com wrote:
> From: Fan Ni <fan.ni@samsung.com>
>
> In __unmap_hugepage_range(), the "page" pointer always points to the
> first page of a huge page, which guarantees there is a folio associated
> with it. Convert the "page" pointer to use folio.
>
> Signed-off-by: Fan Ni <fan.ni@samsung.com>
> ---
> mm/hugetlb.c | 24 +++++++++++++-----------
> 1 file changed, 13 insertions(+), 11 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 6696206d556e..293c2afa724b 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -5815,12 +5815,12 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
> pte_t *ptep;
> pte_t pte;
> spinlock_t *ptl;
> - struct page *page;
> struct hstate *h = hstate_vma(vma);
> unsigned long sz = huge_page_size(h);
> bool adjust_reservation = false;
> unsigned long last_addr_mask;
> bool force_flush = false;
> + const bool folio_provided = !!folio;
This might be me just nitpicking but:
Why not rename folio to ref_folio o provided_folio and do
struct folio *folio = ref_folio;
I think it is more natural than the boolean.
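In code, that suggestion would look something like this (sketch only):

	void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
	                            unsigned long start, unsigned long end,
	                            struct folio *ref_folio, zap_flags_t zap_flags)
	{
	        /* ref_folio keeps the caller's intent visible; folio stays
	         * free for per-iteration scratch use. */
	        struct folio *folio = ref_folio;
	        ...
	}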
Also you need to update the comment that references the 'reference page' and the one
David mentioned.
Anyway,
Reviewed-by: Oscar Salvador <osalvador@suse.de>
--
Oscar Salvador
SUSE Labs
* Re: [PATCH v3 0/4] mm/hugetlb: Let unmap_hugepage_range() and
From: Andrew Morton @ 2025-04-28 19:31 UTC
To: nifan.cxl
Cc: muchun.song, willy, mcgrof, a.manzanares, dave, david, linux-mm,
linux-kernel, Fan Ni
On Mon, 28 Apr 2025 10:11:43 -0700 nifan.cxl@gmail.com wrote:
> Subject: [PATCH v3 0/4] mm/hugetlb: Let unmap_hugepage_range() and
What was intended here?
* Re: [PATCH v3 0/4] mm/hugetlb: Let unmap_hugepage_range() and
From: Fan Ni @ 2025-04-28 19:41 UTC
To: Andrew Morton
Cc: nifan.cxl, muchun.song, willy, mcgrof, a.manzanares, dave, david,
linux-mm, linux-kernel
On Mon, Apr 28, 2025 at 12:31:43PM -0700, Andrew Morton wrote:
> On Mon, 28 Apr 2025 10:11:43 -0700 nifan.cxl@gmail.com wrote:
>
> > Subject: [PATCH v3 0/4] mm/hugetlb: Let unmap_hugepage_range() and
>
> What was intended here?
Since I have changes in each patch in this series, I thought it may be
better to have a cover letter to summarize all the changes in one place.
Should I just mention the changes in each patch separately?
Fan
* Re: [PATCH v3 0/4] mm/hugetlb: Let unmap_hugepage_range() and
From: David Hildenbrand @ 2025-04-28 20:54 UTC
To: Fan Ni, Andrew Morton
Cc: muchun.song, willy, mcgrof, a.manzanares, dave, linux-mm, linux-kernel
On 28.04.25 21:41, Fan Ni wrote:
> On Mon, Apr 28, 2025 at 12:31:43PM -0700, Andrew Morton wrote:
>> On Mon, 28 Apr 2025 10:11:43 -0700 nifan.cxl@gmail.com wrote:
>>
>>> Subject: [PATCH v3 0/4] mm/hugetlb: Let unmap_hugepage_range() and
>>
>> What was intended here?
>
> Since I have changes in each patch in this series, I thought it may be
> better to have a cover letter to summarize all the changes in one place.
>
> Should I just mention the changes in each patch separately?
The subject got trimmed; I think Andrew is asking what the real subject
is supposed to be:
"mm/hugetlb: Let unmap_hugepage_range() and ... " ?
--
Cheers,
David / dhildenb
* Re: [PATCH v3 0/4] mm/hugetlb: Let unmap_hugepage_range() and
From: Fan Ni @ 2025-04-28 21:30 UTC
To: David Hildenbrand
Cc: Fan Ni, Andrew Morton, muchun.song, willy, mcgrof, a.manzanares,
dave, linux-mm, linux-kernel
On Mon, Apr 28, 2025 at 10:54:51PM +0200, David Hildenbrand wrote:
> On 28.04.25 21:41, Fan Ni wrote:
> > On Mon, Apr 28, 2025 at 12:31:43PM -0700, Andrew Morton wrote:
> > > On Mon, 28 Apr 2025 10:11:43 -0700 nifan.cxl@gmail.com wrote:
> > >
> > > > Subject: [PATCH v3 0/4] mm/hugetlb: Let unmap_hugepage_range() and
> > >
> > > What was intended here?
> >
> > Since I have changes in each patch in this series, I thought it may be
> > better to have a cover letter to summarize all the changes in one place.
> >
> > Should I just mention the changes in each patch separately?
>
> The subject got trimmed; I think Andrew is asking what the real subject is
> supposed to be:
Oh. I see. Thanks.
The subject is
mm/hugetlb: Let unmap_hugepage_range() and several related functions take folio
instead of page
Fan
>
> "mm/hugetlb: Let unmap_hugepage_range() and ... " ?
>
> --
> Cheers,
>
> David / dhildenb
>
* Re: [PATCH v3 0/4] mm/hugetlb: Let unmap_hugepage_range() and
From: Matthew Wilcox @ 2025-04-28 21:01 UTC
To: nifan.cxl
Cc: muchun.song, mcgrof, a.manzanares, dave, akpm, david, linux-mm,
linux-kernel, Fan Ni
On Mon, Apr 28, 2025 at 10:11:43AM -0700, nifan.cxl@gmail.com wrote:
> From: Fan Ni <fan.ni@samsung.com>
When you're sending v2, send the same cover letter as you did for the
first one; don't make people go and look at v1 for the cover letter.
Put what was changed in v2 after it, so they have some idea what to look
at when reviewing.
> Changes compared to v2,
>
> Patch 1:
> 1) Update the commit log subject;
> 2) Use &folio->page instead of folio_page(folio) in unmap_ref_private()
> when calling unmap_hugepage_range();
>
> Patch 2:
> 1) Update the declaration of unmap_hugepage_range() in hugetlb.h;
> 2) Use &folio->page instead of folio_page(folio) in unmap_hugepage_range()
> when calling __unmap_hugepage_range();
>
> Patch 3:
> 1) Update the declaration of __unmap_hugepage_range() in hugetlb.h;
> 2) Rename ref_folio to folio;
> 3) compare folio instead of page in __unmap_hugepage_range() when folio is
> provided when calling __unmap_hugepage_range();
>
> Patch 4:
> 1) Pass folio size instead of huge_page_size() when calling
> tlb_remove_page_size(), as suggested by Matthew;
> 2) Update the processing inside __unmap_hugepage_range() when folio
> is provided, as suggested by David Hildenbrand;
> 3) Since there is some functional change in this patch, we do not pick up the
> tags;
>
> v2:
> https://lore.kernel.org/linux-mm/20250418170834.248318-2-nifan.cxl@gmail.com
>
> Fan Ni (4):
> mm/hugetlb: Pass folio instead of page to unmap_ref_private()
> mm/hugetlb: Refactor unmap_hugepage_range() to take folio instead of
> page
> mm/hugetlb: Refactor __unmap_hugepage_range() to take folio instead of
> page
> mm/hugetlb: Convert use of struct page to folio in
> __unmap_hugepage_range()
>
> include/linux/hugetlb.h | 8 ++++----
> mm/hugetlb.c | 39 +++++++++++++++++++++------------------
> 2 files changed, 25 insertions(+), 22 deletions(-)
>
> --
> 2.47.2
>
* Re: [PATCH v3 0/4] mm/hugetlb: Let unmap_hugepage_range() and
From: Fan Ni @ 2025-04-28 21:33 UTC
To: Matthew Wilcox
Cc: nifan.cxl, muchun.song, mcgrof, a.manzanares, dave, akpm, david,
linux-mm, linux-kernel
On Mon, Apr 28, 2025 at 10:01:36PM +0100, Matthew Wilcox wrote:
> On Mon, Apr 28, 2025 at 10:11:43AM -0700, nifan.cxl@gmail.com wrote:
> > From: Fan Ni <fan.ni@samsung.com>
>
> When you're sending v2, send the same cover letter as you did for the
> first one; don't make people go and look at v1 for the cover letter.
>
> Put what was changed in v2 after it, so they have some idea what to look
> at when reviewing.
Hi Matthew,
Thanks for the guidance. The previous version of the series (v2) did
not have a cover letter, so I only attached a link here.
I will keep the rule in mind for future submissions.
Fan
>
> > Changes compared to v2,
> >
> > Patch 1:
> > 1) Update the commit log subject;
> > 2) Use &folio->page instead of folio_page(folio) in unmap_ref_private()
> > when calling unmap_hugepage_range();
> >
> > Patch 2:
> > 1) Update the declaration of unmap_hugepage_range() in hugetlb.h;
> > 2) Use &folio->page instead of folio_page(folio) in unmap_hugepage_range()
> > when calling __unmap_hugepage_range();
> >
> > Patch 3:
> > 1) Update the declaration of __unmap_hugepage_range() in hugetlb.h;
> > 2) Rename ref_folio to folio;
> > 3) compare folio instead of page in __unmap_hugepage_range() when folio is
> > provided when calling __unmap_hugepage_range();
> >
> > Patch 4:
> > 1) Pass folio size instead of huge_page_size() when calling
> > tlb_remove_page_size(), as suggested by Matthew;
> > 2) Update the processing inside __unmap_hugepage_range() when folio
> > is provided, as suggested by David Hildenbrand;
> > 3) Since there is some functional change in this patch, we do not pick up the
> > tags;
> >
> > v2:
> > https://lore.kernel.org/linux-mm/20250418170834.248318-2-nifan.cxl@gmail.com
> >
> > Fan Ni (4):
> > mm/hugetlb: Pass folio instead of page to unmap_ref_private()
> > mm/hugetlb: Refactor unmap_hugepage_range() to take folio instead of
> > page
> > mm/hugetlb: Refactor __unmap_hugepage_range() to take folio instead of
> > page
> > mm/hugetlb: Convert use of struct page to folio in
> > __unmap_hugepage_range()
> >
> > include/linux/hugetlb.h | 8 ++++----
> > mm/hugetlb.c | 39 +++++++++++++++++++++------------------
> > 2 files changed, 25 insertions(+), 22 deletions(-)
> >
> > --
> > 2.47.2
> >