* [PATCH v4 0/4] Let unmap_hugepage_range() and several related functions take folio instead of page
@ 2025-05-05 18:22 nifan.cxl
2025-05-05 18:22 ` [PATCH v4 1/4] mm/hugetlb: Pass folio instead of page to unmap_ref_private() nifan.cxl
` (3 more replies)
0 siblings, 4 replies; 8+ messages in thread
From: nifan.cxl @ 2025-05-05 18:22 UTC
To: muchun.song, willy, osalvador
Cc: mcgrof, a.manzanares, dave, akpm, david, linux-mm, linux-kernel,
nifan.cxl, Fan Ni
From: Fan Ni <fan.ni@samsung.com>
Changes compared to v3:
Patch 1:
1) Pick up tags;
2) Reword the commit message a little bit based on feedback;
Patch 2:
1) Pick up tags;
Patch 3:
1) Pick up tags;
2) Update the comment to reflect the use of folio instead of page
in __unmap_hugepage_range;
Patch 4:
1) Pick up tags;
2) Move folio_provided definition up in the function as suggested;
v3: https://lore.kernel.org/linux-mm/63aec3cb-60a8-4f30-9b12-3ee19c6c14f3@redhat.com/T/#mbbe1f5bdfcf3a70e6b2b1dfba9af35110a5065ec
--------------------------------------------------------------
Cover letter from v3:
[PATCH v3 0/4] mm/hugetlb: Let unmap_hugepage_range() and several related functions take folio instead of page
From: Fan Ni <fan.ni@samsung.com>
Changes compared to v2:
Patch 1:
1) Update the commit log subject;
2) Use &folio->page instead of folio_page(folio) in unmap_ref_private()
when calling unmap_hugepage_range();
Patch 2:
1) Update the declaration of unmap_hugepage_range() in hugetlb.h;
2) Use &folio->page instead of folio_page(folio) in unmap_hugepage_range()
when calling __unmap_hugepage_range();
Patch 3:
1) Update the declaration of __unmap_hugepage_range() in hugetlb.h;
2) Rename ref_folio to folio;
3) Compare folio instead of page in __unmap_hugepage_range() when a
folio is provided;
Patch 4:
1) Pass folio size instead of huge_page_size() when calling
tlb_remove_page_size(), as suggested by Matthew;
2) Update the processing inside __unmap_hugepage_range() when a folio
is provided, as suggested by David Hildenbrand;
3) Since there is some functional change in this patch, we do not pick up the
tags;
v2:
https://lore.kernel.org/linux-mm/20250418170834.248318-2-nifan.cxl@gmail.com
--------------------------------------------------------------
Fan Ni (4):
mm/hugetlb: Pass folio instead of page to unmap_ref_private()
mm/hugetlb: Refactor unmap_hugepage_range() to take folio instead of
page
mm/hugetlb: Refactor __unmap_hugepage_range() to take folio instead of
page
mm/hugetlb: Convert use of struct page to folio in
__unmap_hugepage_range()
include/linux/hugetlb.h | 8 +++----
mm/hugetlb.c | 47 ++++++++++++++++++++++-------------------
2 files changed, 29 insertions(+), 26 deletions(-)
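For quick reference, the end-state signatures after the full series look
roughly as follows (paraphrased from the diffs in the individual patches,
which remain the authoritative versions):

void unmap_hugepage_range(struct vm_area_struct *vma,
			  unsigned long start, unsigned long end,
			  struct folio *folio, zap_flags_t zap_flags);

void __unmap_hugepage_range(struct mmu_gather *tlb,
			    struct vm_area_struct *vma,
			    unsigned long start, unsigned long end,
			    struct folio *folio, zap_flags_t zap_flags);

static void unmap_ref_private(struct mm_struct *mm, struct vm_area_struct *vma,
			      struct folio *folio, unsigned long address);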
--
2.47.2
* [PATCH v4 1/4] mm/hugetlb: Pass folio instead of page to unmap_ref_private()
2025-05-05 18:22 [PATCH v4 0/4] Let unmap_hugepage_range() and several related functions take folio instead of page nifan.cxl
@ 2025-05-05 18:22 ` nifan.cxl
2025-05-05 18:22 ` [PATCH v4 2/4] mm/hugetlb: Refactor unmap_hugepage_range() to take folio instead of page nifan.cxl
` (2 subsequent siblings)
3 siblings, 0 replies; 8+ messages in thread
From: nifan.cxl @ 2025-05-05 18:22 UTC
To: muchun.song, willy, osalvador
Cc: mcgrof, a.manzanares, dave, akpm, david, linux-mm, linux-kernel,
nifan.cxl, Fan Ni, Sidhartha Kumar
From: Fan Ni <fan.ni@samsung.com>
The function unmap_ref_private() has only a single user, which passes in
&folio->page. Let it take the folio directly.
Signed-off-by: Fan Ni <fan.ni@samsung.com>
Reviewed-by: Muchun Song <muchun.song@linux.dev>
Reviewed-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: David Hildenbrand <david@redhat.com>
---
mm/hugetlb.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 0057d1f1dc9a..0c2b264a7ab8 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6071,7 +6071,7 @@ void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start,
* same region.
*/
static void unmap_ref_private(struct mm_struct *mm, struct vm_area_struct *vma,
- struct page *page, unsigned long address)
+ struct folio *folio, unsigned long address)
{
struct hstate *h = hstate_vma(vma);
struct vm_area_struct *iter_vma;
@@ -6115,7 +6115,8 @@ static void unmap_ref_private(struct mm_struct *mm, struct vm_area_struct *vma,
*/
if (!is_vma_resv_set(iter_vma, HPAGE_RESV_OWNER))
unmap_hugepage_range(iter_vma, address,
- address + huge_page_size(h), page, 0);
+ address + huge_page_size(h),
+ &folio->page, 0);
}
i_mmap_unlock_write(mapping);
}
@@ -6238,8 +6239,7 @@ static vm_fault_t hugetlb_wp(struct folio *pagecache_folio,
hugetlb_vma_unlock_read(vma);
mutex_unlock(&hugetlb_fault_mutex_table[hash]);
- unmap_ref_private(mm, vma, &old_folio->page,
- vmf->address);
+ unmap_ref_private(mm, vma, old_folio, vmf->address);
mutex_lock(&hugetlb_fault_mutex_table[hash]);
hugetlb_vma_lock_read(vma);
--
2.47.2
* [PATCH v4 2/4] mm/hugetlb: Refactor unmap_hugepage_range() to take folio instead of page
2025-05-05 18:22 [PATCH v4 0/4] Let unmap_hugepage_range() and several related functions take folio instead of page nifan.cxl
2025-05-05 18:22 ` [PATCH v4 1/4] mm/hugetlb: Pass folio instead of page to unmap_ref_private() nifan.cxl
@ 2025-05-05 18:22 ` nifan.cxl
2025-05-05 19:59 ` Vishal Moola (Oracle)
2025-05-05 18:22 ` [PATCH v4 3/4] mm/hugetlb: Refactor __unmap_hugepage_range() " nifan.cxl
2025-05-05 18:22 ` [PATCH v4 4/4] mm/hugetlb: Convert use of struct page to folio in __unmap_hugepage_range() nifan.cxl
3 siblings, 1 reply; 8+ messages in thread
From: nifan.cxl @ 2025-05-05 18:22 UTC
To: muchun.song, willy, osalvador
Cc: mcgrof, a.manzanares, dave, akpm, david, linux-mm, linux-kernel,
nifan.cxl, Fan Ni, Sidhartha Kumar
From: Fan Ni <fan.ni@samsung.com>
The function unmap_hugepage_range() has two kinds of users:
1) unmap_ref_private(), which passes in the head page of a folio. Since
unmap_ref_private() already takes a folio and there are no other uses
of the folio struct in the function, it is natural for
unmap_hugepage_range() to take a folio as well.
2) All other callers, which pass in a NULL pointer.
In both cases, we can pass in a folio. Refactor unmap_hugepage_range()
to take a folio.
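As an illustration of the two patterns (only the unmap_ref_private() call
site below is taken from the patch; the rest is a sketch):

	/* 1) unmap_ref_private(): unmap one specific folio */
	unmap_hugepage_range(iter_vma, address,
			     address + huge_page_size(h), folio, 0);

	/* 2) every other caller: unmap the whole range, no folio filter;
	 * "zap_flags" stands in for whatever flags that caller holds */
	unmap_hugepage_range(vma, start, end, NULL, zap_flags);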
Signed-off-by: Fan Ni <fan.ni@samsung.com>
Reviewed-by: Muchun Song <muchun.song@linux.dev>
Reviewed-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Acked-by: David Hildenbrand <david@redhat.com>
---
include/linux/hugetlb.h | 4 ++--
mm/hugetlb.c | 7 ++++---
2 files changed, 6 insertions(+), 5 deletions(-)
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 23ebf49c5d6a..f6d5f24e793c 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -129,8 +129,8 @@ int move_hugetlb_page_tables(struct vm_area_struct *vma,
int copy_hugetlb_page_range(struct mm_struct *, struct mm_struct *,
struct vm_area_struct *, struct vm_area_struct *);
void unmap_hugepage_range(struct vm_area_struct *,
- unsigned long, unsigned long, struct page *,
- zap_flags_t);
+ unsigned long start, unsigned long end,
+ struct folio *, zap_flags_t);
void __unmap_hugepage_range(struct mmu_gather *tlb,
struct vm_area_struct *vma,
unsigned long start, unsigned long end,
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 0c2b264a7ab8..c339ffe05556 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6046,7 +6046,7 @@ void __hugetlb_zap_end(struct vm_area_struct *vma,
}
void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start,
- unsigned long end, struct page *ref_page,
+ unsigned long end, struct folio *folio,
zap_flags_t zap_flags)
{
struct mmu_notifier_range range;
@@ -6058,7 +6058,8 @@ void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start,
mmu_notifier_invalidate_range_start(&range);
tlb_gather_mmu(&tlb, vma->vm_mm);
- __unmap_hugepage_range(&tlb, vma, start, end, ref_page, zap_flags);
+ __unmap_hugepage_range(&tlb, vma, start, end,
+ &folio->page, zap_flags);
mmu_notifier_invalidate_range_end(&range);
tlb_finish_mmu(&tlb);
@@ -6116,7 +6117,7 @@ static void unmap_ref_private(struct mm_struct *mm, struct vm_area_struct *vma,
if (!is_vma_resv_set(iter_vma, HPAGE_RESV_OWNER))
unmap_hugepage_range(iter_vma, address,
address + huge_page_size(h),
- &folio->page, 0);
+ folio, 0);
}
i_mmap_unlock_write(mapping);
}
--
2.47.2
* [PATCH v4 3/4] mm/hugetlb: Refactor __unmap_hugepage_range() to take folio instead of page
2025-05-05 18:22 [PATCH v4 0/4] Let unmap_hugepage_range() and several related functions take folio instead of page nifan.cxl
2025-05-05 18:22 ` [PATCH v4 1/4] mm/hugetlb: Pass folio instead of page to unmap_ref_private() nifan.cxl
2025-05-05 18:22 ` [PATCH v4 2/4] mm/hugetlb: Refactor unmap_hugepage_range() to take folio instead of page nifan.cxl
@ 2025-05-05 18:22 ` nifan.cxl
2025-05-05 18:22 ` [PATCH v4 4/4] mm/hugetlb: Convert use of struct page to folio in __unmap_hugepage_range() nifan.cxl
3 siblings, 0 replies; 8+ messages in thread
From: nifan.cxl @ 2025-05-05 18:22 UTC
To: muchun.song, willy, osalvador
Cc: mcgrof, a.manzanares, dave, akpm, david, linux-mm, linux-kernel,
nifan.cxl, Fan Ni
From: Fan Ni <fan.ni@samsung.com>
The function __unmap_hugepage_range() has two kinds of users:
1) unmap_hugepage_range(), which passes in the head page of a folio.
Since unmap_hugepage_range() already takes a folio and there are no
other uses of the folio struct in the function, it is natural for
__unmap_hugepage_range() to take a folio as well.
2) All other callers, which pass in a NULL pointer.
In both cases, we can pass in a folio. Refactor __unmap_hugepage_range()
to take a folio.
Signed-off-by: Fan Ni <fan.ni@samsung.com>
Acked-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
---
include/linux/hugetlb.h | 4 ++--
mm/hugetlb.c | 18 +++++++++---------
2 files changed, 11 insertions(+), 11 deletions(-)
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index f6d5f24e793c..eb21619206af 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -134,7 +134,7 @@ void unmap_hugepage_range(struct vm_area_struct *,
void __unmap_hugepage_range(struct mmu_gather *tlb,
struct vm_area_struct *vma,
unsigned long start, unsigned long end,
- struct page *ref_page, zap_flags_t zap_flags);
+ struct folio *, zap_flags_t zap_flags);
void hugetlb_report_meminfo(struct seq_file *);
int hugetlb_report_node_meminfo(char *buf, int len, int nid);
void hugetlb_show_meminfo_node(int nid);
@@ -455,7 +455,7 @@ static inline long hugetlb_change_protection(
static inline void __unmap_hugepage_range(struct mmu_gather *tlb,
struct vm_area_struct *vma, unsigned long start,
- unsigned long end, struct page *ref_page,
+ unsigned long end, struct folio *folio,
zap_flags_t zap_flags)
{
BUG();
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index c339ffe05556..443b75e116cf 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5840,7 +5840,7 @@ int move_hugetlb_page_tables(struct vm_area_struct *vma,
void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
unsigned long start, unsigned long end,
- struct page *ref_page, zap_flags_t zap_flags)
+ struct folio *folio, zap_flags_t zap_flags)
{
struct mm_struct *mm = vma->vm_mm;
unsigned long address;
@@ -5913,12 +5913,12 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
page = pte_page(pte);
/*
- * If a reference page is supplied, it is because a specific
- * page is being unmapped, not a range. Ensure the page we
- * are about to unmap is the actual page of interest.
+ * If a folio is supplied, it is because a specific
+ * folio is being unmapped, not a range. Ensure the folio we
+ * are about to unmap is the actual folio of interest.
*/
- if (ref_page) {
- if (page != ref_page) {
+ if (folio) {
+ if (page_folio(page) != folio) {
spin_unlock(ptl);
continue;
}
@@ -5982,9 +5982,9 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
tlb_remove_page_size(tlb, page, huge_page_size(h));
/*
- * Bail out after unmapping reference page if supplied
+ * If we were instructed to unmap a specific folio, we're done.
*/
- if (ref_page)
+ if (folio)
break;
}
tlb_end_vma(tlb, vma);
@@ -6059,7 +6059,7 @@ void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start,
tlb_gather_mmu(&tlb, vma->vm_mm);
__unmap_hugepage_range(&tlb, vma, start, end,
- &folio->page, zap_flags);
+ folio, zap_flags);
mmu_notifier_invalidate_range_end(&range);
tlb_finish_mmu(&tlb);
--
2.47.2
* [PATCH v4 4/4] mm/hugetlb: Convert use of struct page to folio in __unmap_hugepage_range()
2025-05-05 18:22 [PATCH v4 0/4] Let unmap_hugepage_range() and several related functions take folio instead of page nifan.cxl
` (2 preceding siblings ...)
2025-05-05 18:22 ` [PATCH v4 3/4] mm/hugetlb: Refactor __unmap_hugepage_range() " nifan.cxl
@ 2025-05-05 18:22 ` nifan.cxl
2025-05-05 22:03 ` Andrew Morton
3 siblings, 1 reply; 8+ messages in thread
From: nifan.cxl @ 2025-05-05 18:22 UTC
To: muchun.song, willy, osalvador
Cc: mcgrof, a.manzanares, dave, akpm, david, linux-mm, linux-kernel,
nifan.cxl, Fan Ni
From: Fan Ni <fan.ni@samsung.com>
In __unmap_hugepage_range(), the "page" pointer always points to the
first page of a huge page, which guarantees there is a folio associated
with it. Convert the "page" pointer to a folio.
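For background, a minimal sketch (illustrative only, not part of the
patch): the pte in this loop always maps a head page, so page_folio()
and folio_page(folio, 0) round-trip losslessly, which is why carrying
the folio instead of the page loses no information:

	struct folio *folio = page_folio(pte_page(pte)); /* head page -> folio */
	struct page *head = folio_page(folio, 0);        /* folio -> head page */
	/* here, head == pte_page(pte) for a hugetlb mapping */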
Signed-off-by: Fan Ni <fan.ni@samsung.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Acked-by: David Hildenbrand <david@redhat.com>
---
mm/hugetlb.c | 24 +++++++++++++-----------
1 file changed, 13 insertions(+), 11 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 443b75e116cf..d53caf96a4b2 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5843,11 +5843,11 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
struct folio *folio, zap_flags_t zap_flags)
{
struct mm_struct *mm = vma->vm_mm;
+ const bool folio_provided = !!folio;
unsigned long address;
pte_t *ptep;
pte_t pte;
spinlock_t *ptl;
- struct page *page;
struct hstate *h = hstate_vma(vma);
unsigned long sz = huge_page_size(h);
bool adjust_reservation = false;
@@ -5911,14 +5911,13 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
continue;
}
- page = pte_page(pte);
/*
* If a folio is supplied, it is because a specific
* folio is being unmapped, not a range. Ensure the folio we
* are about to unmap is the actual folio of interest.
*/
- if (folio) {
- if (page_folio(page) != folio) {
+ if (folio_provided) {
+ if (folio != page_folio(pte_page(pte))) {
spin_unlock(ptl);
continue;
}
@@ -5928,12 +5927,14 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
* looking like data was lost
*/
set_vma_resv_flags(vma, HPAGE_RESV_UNMAPPED);
+ } else {
+ folio = page_folio(pte_page(pte));
}
pte = huge_ptep_get_and_clear(mm, address, ptep, sz);
tlb_remove_huge_tlb_entry(h, tlb, ptep, address);
if (huge_pte_dirty(pte))
- set_page_dirty(page);
+ folio_mark_dirty(folio);
/* Leave a uffd-wp pte marker if needed */
if (huge_pte_uffd_wp(pte) &&
!(zap_flags & ZAP_FLAG_DROP_MARKER))
@@ -5941,7 +5942,7 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
make_pte_marker(PTE_MARKER_UFFD_WP),
sz);
hugetlb_count_sub(pages_per_huge_page(h), mm);
- hugetlb_remove_rmap(page_folio(page));
+ hugetlb_remove_rmap(folio);
/*
* Restore the reservation for anonymous page, otherwise the
@@ -5950,8 +5951,8 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
* reservation bit.
*/
if (!h->surplus_huge_pages && __vma_private_lock(vma) &&
- folio_test_anon(page_folio(page))) {
- folio_set_hugetlb_restore_reserve(page_folio(page));
+ folio_test_anon(folio)) {
+ folio_set_hugetlb_restore_reserve(folio);
/* Reservation to be adjusted after the spin lock */
adjust_reservation = true;
}
@@ -5975,16 +5976,17 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
* count will not be incremented by free_huge_folio.
* Act as if we consumed the reservation.
*/
- folio_clear_hugetlb_restore_reserve(page_folio(page));
+ folio_clear_hugetlb_restore_reserve(folio);
else if (rc)
vma_add_reservation(h, vma, address);
}
- tlb_remove_page_size(tlb, page, huge_page_size(h));
+ tlb_remove_page_size(tlb, folio_page(folio, 0),
+ folio_size(folio));
/*
* If we were instructed to unmap a specific folio, we're done.
*/
- if (folio)
+ if (folio_provided)
break;
}
tlb_end_vma(tlb, vma);
--
2.47.2
* Re: [PATCH v4 2/4] mm/hugetlb: Refactor unmap_hugepage_range() to take folio instead of page
2025-05-05 18:22 ` [PATCH v4 2/4] mm/hugetlb: Refactor unmap_hugepage_range() to take folio instead of page nifan.cxl
@ 2025-05-05 19:59 ` Vishal Moola (Oracle)
2025-05-28 2:08 ` Andrew Morton
0 siblings, 1 reply; 8+ messages in thread
From: Vishal Moola (Oracle) @ 2025-05-05 19:59 UTC
To: nifan.cxl
Cc: muchun.song, willy, osalvador, mcgrof, a.manzanares, dave, akpm,
david, linux-mm, linux-kernel, Fan Ni, Sidhartha Kumar
On Mon, May 05, 2025 at 11:22:42AM -0700, nifan.cxl@gmail.com wrote:
> From: Fan Ni <fan.ni@samsung.com>
>
> The function unmap_hugepage_range() has two kinds of users:
> 1) unmap_ref_private(), which passes in the head page of a folio. Since
> unmap_ref_private() already takes a folio and there are no other uses
> of the folio struct in the function, it is natural for
> unmap_hugepage_range() to take a folio as well.
> 2) All other callers, which pass in a NULL pointer.
>
> In both cases, we can pass in a folio. Refactor unmap_hugepage_range()
> to take a folio.
It looks like unmap_ref_private() is the only caller that cares about
passing a particular folio to unmap_hugepage_range(). Is there any
reason we shouldn't drop the folio argument and call
__unmap_hugepage_range() directly?
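For concreteness, a rough sketch of that alternative (untested; the
mmu_notifier invalidate calls that unmap_hugepage_range() wraps around
__unmap_hugepage_range() would need to be duplicated in
unmap_ref_private()):

	struct mmu_gather tlb;

	/* set up and start the mmu_notifier range for
	 * [address, address + huge_page_size(h)), as
	 * unmap_hugepage_range() does today */
	tlb_gather_mmu(&tlb, iter_vma->vm_mm);
	__unmap_hugepage_range(&tlb, iter_vma, address,
			       address + huge_page_size(h), folio, 0);
	/* mmu_notifier_invalidate_range_end(&range); */
	tlb_finish_mmu(&tlb);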
> Signed-off-by: Fan Ni <fan.ni@samsung.com>
> Reviewed-by: Muchun Song <muchun.song@linux.dev>
> Reviewed-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
> Reviewed-by: Oscar Salvador <osalvador@suse.de>
> Acked-by: David Hildenbrand <david@redhat.com>
> ---
> include/linux/hugetlb.h | 4 ++--
> mm/hugetlb.c | 7 ++++---
> 2 files changed, 6 insertions(+), 5 deletions(-)
>
> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> index 23ebf49c5d6a..f6d5f24e793c 100644
> --- a/include/linux/hugetlb.h
> +++ b/include/linux/hugetlb.h
> @@ -129,8 +129,8 @@ int move_hugetlb_page_tables(struct vm_area_struct *vma,
> int copy_hugetlb_page_range(struct mm_struct *, struct mm_struct *,
> struct vm_area_struct *, struct vm_area_struct *);
> void unmap_hugepage_range(struct vm_area_struct *,
> - unsigned long, unsigned long, struct page *,
> - zap_flags_t);
> + unsigned long start, unsigned long end,
> + struct folio *, zap_flags_t);
> void __unmap_hugepage_range(struct mmu_gather *tlb,
> struct vm_area_struct *vma,
> unsigned long start, unsigned long end,
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 0c2b264a7ab8..c339ffe05556 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -6046,7 +6046,7 @@ void __hugetlb_zap_end(struct vm_area_struct *vma,
> }
>
> void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start,
> - unsigned long end, struct page *ref_page,
> + unsigned long end, struct folio *folio,
> zap_flags_t zap_flags)
> {
> struct mmu_notifier_range range;
> @@ -6058,7 +6058,8 @@ void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start,
> mmu_notifier_invalidate_range_start(&range);
> tlb_gather_mmu(&tlb, vma->vm_mm);
>
> - __unmap_hugepage_range(&tlb, vma, start, end, ref_page, zap_flags);
> + __unmap_hugepage_range(&tlb, vma, start, end,
> + &folio->page, zap_flags);
>
> mmu_notifier_invalidate_range_end(&range);
> tlb_finish_mmu(&tlb);
> @@ -6116,7 +6117,7 @@ static void unmap_ref_private(struct mm_struct *mm, struct vm_area_struct *vma,
> if (!is_vma_resv_set(iter_vma, HPAGE_RESV_OWNER))
> unmap_hugepage_range(iter_vma, address,
> address + huge_page_size(h),
> - &folio->page, 0);
> + folio, 0);
> }
> i_mmap_unlock_write(mapping);
> }
> --
> 2.47.2
>
* Re: [PATCH v4 4/4] mm/hugetlb: Convert use of struct page to folio in __unmap_hugepage_range()
2025-05-05 18:22 ` [PATCH v4 4/4] mm/hugetlb: Convert use of struct page to folio in __unmap_hugepage_range() nifan.cxl
@ 2025-05-05 22:03 ` Andrew Morton
0 siblings, 0 replies; 8+ messages in thread
From: Andrew Morton @ 2025-05-05 22:03 UTC
To: nifan.cxl
Cc: muchun.song, willy, osalvador, mcgrof, a.manzanares, dave, david,
linux-mm, linux-kernel, Fan Ni
On Mon, 5 May 2025 11:22:44 -0700 nifan.cxl@gmail.com wrote:
> From: Fan Ni <fan.ni@samsung.com>
>
> In __unmap_hugepage_range(), the "page" pointer always points to the
> first page of a huge page, which guarantees there is a folio associated
> with it. Convert the "page" pointer to a folio.
>
> ...
>
> * Restore the reservation for anonymous page, otherwise the
> @@ -5950,8 +5951,8 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
> * reservation bit.
> */
> if (!h->surplus_huge_pages && __vma_private_lock(vma) &&
> - folio_test_anon(page_folio(page))) {
> - folio_set_hugetlb_restore_reserve(page_folio(page));
> + folio_test_anon(folio)) {
> + folio_set_hugetlb_restore_reserve(folio);
> /* Reservation to be adjusted after the spin lock */
> adjust_reservation = true;
> }
> @@ -5975,16 +5976,17 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
> * count will not be incremented by free_huge_folio.
> * Act as if we consumed the reservation.
> */
I did not enjoy reading the above comment, so I did this to it.
The comment would be better if it described why we "Act as if we
consumed the reservation" ("why, not what").
From: Andrew Morton <akpm@linux-foundation.org>
Subject: mm/hugetlb.c: __unmap_hugepage_range(): comment cleanup
Date: Mon May 5 02:54:25 PM PDT 2025
Wrap to 80 cols, fix a typo, use regular layout, parenthesize function
identifiers, fix grammar and add braces.
Cc: David Hildenbrand <david@redhat.com>
Cc: Fan Ni <fan.ni@samsung.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Muchun Song <muchun.song@linux.dev>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Cc: "Vishal Moola (Oracle)" <vishal.moola@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
mm/hugetlb.c | 17 ++++++++++-------
1 file changed, 10 insertions(+), 7 deletions(-)
--- a/mm/hugetlb.c~mm-hugetlbc-__unmap_hugepage_range-comment-cleanup
+++ a/mm/hugetlb.c
@@ -5969,16 +5969,19 @@ void __unmap_hugepage_range(struct mmu_g
if (adjust_reservation) {
int rc = vma_needs_reservation(h, vma, address);
- if (rc < 0)
- /* Pressumably allocate_file_region_entries failed
- * to allocate a file_region struct. Clear
- * hugetlb_restore_reserve so that global reserve
- * count will not be incremented by free_huge_folio.
- * Act as if we consumed the reservation.
+ if (rc < 0) {
+ /*
+ * Presumably allocate_file_region_entries()
+ * failed to allocate a file_region struct.
+ * Clear hugetlb_restore_reserve so that the
+ * global reserve count will not be incremented
+ * by free_huge_folio(). Act as if we consumed
+ * the reservation.
*/
folio_clear_hugetlb_restore_reserve(folio);
- else if (rc)
+ } else if (rc) {
vma_add_reservation(h, vma, address);
+ }
}
tlb_remove_page_size(tlb, folio_page(folio, 0),
_
* Re: [PATCH v4 2/4] mm/hugetlb: Refactor unmap_hugepage_range() to take folio instead of page
2025-05-05 19:59 ` Vishal Moola (Oracle)
@ 2025-05-28 2:08 ` Andrew Morton
0 siblings, 0 replies; 8+ messages in thread
From: Andrew Morton @ 2025-05-28 2:08 UTC
To: Vishal Moola (Oracle)
Cc: nifan.cxl, muchun.song, willy, osalvador, mcgrof, a.manzanares,
dave, david, linux-mm, linux-kernel, Fan Ni, Sidhartha Kumar
On Mon, 5 May 2025 12:59:20 -0700 "Vishal Moola (Oracle)" <vishal.moola@gmail.com> wrote:
> On Mon, May 05, 2025 at 11:22:42AM -0700, nifan.cxl@gmail.com wrote:
> > From: Fan Ni <fan.ni@samsung.com>
> >
> > The function unmap_hugepage_range() has two kinds of users:
> > 1) unmap_ref_private(), which passes in the head page of a folio. Since
> > unmap_ref_private() already takes a folio and there are no other uses
> > of the folio struct in the function, it is natural for
> > unmap_hugepage_range() to take a folio as well.
> > 2) All other callers, which pass in a NULL pointer.
> >
> > In both cases, we can pass in a folio. Refactor unmap_hugepage_range()
> > to take a folio.
>
> It looks like unmap_ref_private() is the only caller that cares about
> passing a particular folio to unmap_hugepage_range(). Is there any
> reason we shouldn't drop the folio argument and call
> __unmap_hugepage_range() directly?
afaict there was no response to this review comment.
I'll proceed with the patchset, but please let's not lose sight of this.