From: nifan.cxl@gmail.com
To: muchun.song@linux.dev, willy@infradead.org, osalvador@suse.de
Cc: mcgrof@kernel.org, a.manzanares@samsung.com, dave@stgolabs.net,
akpm@linux-foundation.org, david@redhat.com, linux-mm@kvack.org,
linux-kernel@vger.kernel.org, nifan.cxl@gmail.com,
Fan Ni <fan.ni@samsung.com>
Subject: [PATCH v4 4/4] mm/hugetlb: Convert use of struct page to folio in __unmap_hugepage_range()
Date: Mon, 5 May 2025 11:22:44 -0700
Message-ID: <20250505182345.506888-6-nifan.cxl@gmail.com>
In-Reply-To: <20250505182345.506888-2-nifan.cxl@gmail.com>
From: Fan Ni <fan.ni@samsung.com>
In __unmap_hugepage_range(), the "page" pointer always points to the
first page of a huge page, which guarantees there is a folio associated
with it. Convert the "page" pointer to use a folio.
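
For illustration, a minimal sketch of the conversion pattern (using the
existing mm helpers pte_page(), page_folio(), set_page_dirty() and
folio_mark_dirty(); the surrounding loop and locking are elided):

	/* Before: operate on the raw head page */
	struct page *page = pte_page(pte);
	if (huge_pte_dirty(pte))
		set_page_dirty(page);

	/* After: resolve the folio once, then use folio APIs throughout */
	struct folio *folio = page_folio(pte_page(pte));
	if (huge_pte_dirty(pte))
		folio_mark_dirty(folio);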
Signed-off-by: Fan Ni <fan.ni@samsung.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Acked-by: David Hildenbrand <david@redhat.com>
---
mm/hugetlb.c | 24 +++++++++++++-----------
1 file changed, 13 insertions(+), 11 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 443b75e116cf..d53caf96a4b2 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5843,11 +5843,11 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
struct folio *folio, zap_flags_t zap_flags)
{
struct mm_struct *mm = vma->vm_mm;
+ const bool folio_provided = !!folio;
unsigned long address;
pte_t *ptep;
pte_t pte;
spinlock_t *ptl;
- struct page *page;
struct hstate *h = hstate_vma(vma);
unsigned long sz = huge_page_size(h);
bool adjust_reservation = false;
@@ -5911,14 +5911,13 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
continue;
}
- page = pte_page(pte);
/*
* If a folio is supplied, it is because a specific
* folio is being unmapped, not a range. Ensure the folio we
* are about to unmap is the actual folio of interest.
*/
- if (folio) {
- if (page_folio(page) != folio) {
+ if (folio_provided) {
+ if (folio != page_folio(pte_page(pte))) {
spin_unlock(ptl);
continue;
}
@@ -5928,12 +5927,14 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
* looking like data was lost
*/
set_vma_resv_flags(vma, HPAGE_RESV_UNMAPPED);
+ } else {
+ folio = page_folio(pte_page(pte));
}
pte = huge_ptep_get_and_clear(mm, address, ptep, sz);
tlb_remove_huge_tlb_entry(h, tlb, ptep, address);
if (huge_pte_dirty(pte))
- set_page_dirty(page);
+ folio_mark_dirty(folio);
/* Leave a uffd-wp pte marker if needed */
if (huge_pte_uffd_wp(pte) &&
!(zap_flags & ZAP_FLAG_DROP_MARKER))
@@ -5941,7 +5942,7 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
make_pte_marker(PTE_MARKER_UFFD_WP),
sz);
hugetlb_count_sub(pages_per_huge_page(h), mm);
- hugetlb_remove_rmap(page_folio(page));
+ hugetlb_remove_rmap(folio);
/*
* Restore the reservation for anonymous page, otherwise the
@@ -5950,8 +5951,8 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
* reservation bit.
*/
if (!h->surplus_huge_pages && __vma_private_lock(vma) &&
- folio_test_anon(page_folio(page))) {
- folio_set_hugetlb_restore_reserve(page_folio(page));
+ folio_test_anon(folio)) {
+ folio_set_hugetlb_restore_reserve(folio);
/* Reservation to be adjusted after the spin lock */
adjust_reservation = true;
}
@@ -5975,16 +5976,17 @@ void __unmap_hugepage_range(struct mmu_gather *tlb, struct vm_area_struct *vma,
* count will not be incremented by free_huge_folio.
* Act as if we consumed the reservation.
*/
- folio_clear_hugetlb_restore_reserve(page_folio(page));
+ folio_clear_hugetlb_restore_reserve(folio);
else if (rc)
vma_add_reservation(h, vma, address);
}
- tlb_remove_page_size(tlb, page, huge_page_size(h));
+ tlb_remove_page_size(tlb, folio_page(folio, 0),
+ folio_size(folio));
/*
* If we were instructed to unmap a specific folio, we're done.
*/
- if (folio)
+ if (folio_provided)
break;
}
tlb_end_vma(tlb, vma);
--
2.47.2