* [PATCH v2] mm: Introduce free_folio_and_swap_cache() to replace free_page_and_swap_cache()
@ 2025-04-13 4:22 nifan.cxl
2025-04-13 20:05 ` David Hildenbrand
2025-04-13 21:34 ` Matthew Wilcox
From: nifan.cxl @ 2025-04-13 4:22 UTC
To: willy
Cc: mcgrof, a.manzanares, dave, akpm, david, linux-mm, linux-kernel,
will, aneesh.kumar, hca, gor, linux-s390, ziy, Fan Ni,
Vishal Moola (Oracle)
From: Fan Ni <fan.ni@samsung.com>
The function free_page_and_swap_cache() takes a struct page pointer as
its input parameter, but it immediately converts it to a folio, and all
subsequent operations use the folio rather than the page. It makes more
sense to pass in the folio directly.

Introduce free_folio_and_swap_cache(), which takes a folio as input, to
replace free_page_and_swap_cache(), and convert all call sites that
used free_page_and_swap_cache().
Signed-off-by: Fan Ni <fan.ni@samsung.com>
Reviewed-by: Zi Yan <ziy@nvidia.com>
Acked-by: Davidlohr Bueso <dave@stgolabs.net>
Reviewed-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
---
arch/s390/include/asm/tlb.h | 4 ++--
include/linux/swap.h | 6 +++---
mm/huge_memory.c | 2 +-
mm/khugepaged.c | 2 +-
mm/swap_state.c | 8 +++-----
5 files changed, 10 insertions(+), 12 deletions(-)
diff --git a/arch/s390/include/asm/tlb.h b/arch/s390/include/asm/tlb.h
index f20601995bb0..e5103e8e697d 100644
--- a/arch/s390/include/asm/tlb.h
+++ b/arch/s390/include/asm/tlb.h
@@ -40,7 +40,7 @@ static inline bool __tlb_remove_folio_pages(struct mmu_gather *tlb,
/*
* Release the page cache reference for a pte removed by
* tlb_ptep_clear_flush. In both flush modes the tlb for a page cache page
- * has already been freed, so just do free_page_and_swap_cache.
+ * has already been freed, so just do free_folio_and_swap_cache.
*
* s390 doesn't delay rmap removal.
*/
@@ -49,7 +49,7 @@ static inline bool __tlb_remove_page_size(struct mmu_gather *tlb,
{
VM_WARN_ON_ONCE(delay_rmap);
- free_page_and_swap_cache(page);
+ free_folio_and_swap_cache(page_folio(page));
return false;
}
diff --git a/include/linux/swap.h b/include/linux/swap.h
index db46b25a65ae..ef76f65686ee 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -450,7 +450,7 @@ static inline unsigned long total_swapcache_pages(void)
}
void free_swap_cache(struct folio *folio);
-void free_page_and_swap_cache(struct page *);
+void free_folio_and_swap_cache(struct folio *folio);
void free_pages_and_swap_cache(struct encoded_page **, int);
/* linux/mm/swapfile.c */
extern atomic_long_t nr_swap_pages;
@@ -522,8 +522,8 @@ static inline void put_swap_device(struct swap_info_struct *si)
do { (val)->freeswap = (val)->totalswap = 0; } while (0)
/* only sparc can not include linux/pagemap.h in this file
* so leave put_page and release_pages undeclared... */
-#define free_page_and_swap_cache(page) \
- put_page(page)
+#define free_folio_and_swap_cache(folio) \
+ folio_put(folio)
#define free_pages_and_swap_cache(pages, nr) \
release_pages((pages), (nr));
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 28c87e0e036f..65a5ddf60ec7 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3640,7 +3640,7 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
* requires taking the lru_lock so we do the put_page
* of the tail pages after the split is complete.
*/
- free_page_and_swap_cache(&new_folio->page);
+ free_folio_and_swap_cache(new_folio);
}
return ret;
}
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index b8838ba8207a..5cf204ab6af0 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -746,7 +746,7 @@ static void __collapse_huge_page_copy_succeeded(pte_t *pte,
ptep_clear(vma->vm_mm, address, _pte);
folio_remove_rmap_pte(src, src_page, vma);
spin_unlock(ptl);
- free_page_and_swap_cache(src_page);
+ free_folio_and_swap_cache(src);
}
}
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 68fd981b514f..ac4e0994931c 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -232,13 +232,11 @@ void free_swap_cache(struct folio *folio)
}
/*
- * Perform a free_page(), also freeing any swap cache associated with
- * this page if it is the last user of the page.
+ * Free a folio, also freeing any swap cache associated with
+ * this folio if it is the last user.
*/
-void free_page_and_swap_cache(struct page *page)
+void free_folio_and_swap_cache(struct folio *folio)
{
- struct folio *folio = page_folio(page);
-
free_swap_cache(folio);
if (!is_huge_zero_folio(folio))
folio_put(folio);
--
2.47.2
* Re: [PATCH v2] mm: Introduce free_folio_and_swap_cache() to replace free_page_and_swap_cache()
From: David Hildenbrand @ 2025-04-13 20:05 UTC
To: nifan.cxl, willy
Cc: mcgrof, a.manzanares, dave, akpm, linux-mm, linux-kernel, will,
aneesh.kumar, hca, gor, linux-s390, ziy, Fan Ni,
Vishal Moola (Oracle)
On 13.04.25 06:22, nifan.cxl@gmail.com wrote:
> From: Fan Ni <fan.ni@samsung.com>
>
> The function free_page_and_swap_cache() takes a struct page pointer as
> its input parameter, but it immediately converts it to a folio, and all
> subsequent operations use the folio rather than the page. It makes more
> sense to pass in the folio directly.
>
> Introduce free_folio_and_swap_cache(), which takes a folio as input, to
> replace free_page_and_swap_cache(), and convert all call sites that
> used free_page_and_swap_cache().
A better patch title would be
"mm: convert free_page_and_swap_cache() to free_folio_and_swap_cache()"
with the patch description adjusted similarly. Thanks!
Acked-by: David Hildenbrand <david@redhat.com>
--
Cheers,
David / dhildenb
* Re: [PATCH v2] mm: Introduce free_folio_and_swap_cache() to replace free_page_and_swap_cache()
From: Matthew Wilcox @ 2025-04-13 21:34 UTC
To: nifan.cxl
Cc: mcgrof, a.manzanares, dave, akpm, david, linux-mm, linux-kernel,
will, aneesh.kumar, hca, gor, linux-s390, ziy, Fan Ni,
Vishal Moola (Oracle)
On Sat, Apr 12, 2025 at 09:22:21PM -0700, nifan.cxl@gmail.com wrote:
> From: Fan Ni <fan.ni@samsung.com>
>
> The function free_page_and_swap_cache() takes a struct page pointer as
> its input parameter, but it immediately converts it to a folio, and all
> subsequent operations use the folio rather than the page. It makes more
> sense to pass in the folio directly.
>
> Introduce free_folio_and_swap_cache(), which takes a folio as input, to
> replace free_page_and_swap_cache(), and convert all call sites that
> used free_page_and_swap_cache().
>
> Signed-off-by: Fan Ni <fan.ni@samsung.com>
> Reviewed-by: Zi Yan <ziy@nvidia.com>
> Acked-by: Davidlohr Bueso <dave@stgolabs.net>
> Reviewed-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> @@ -522,8 +522,8 @@ static inline void put_swap_device(struct swap_info_struct *si)
> do { (val)->freeswap = (val)->totalswap = 0; } while (0)
> /* only sparc can not include linux/pagemap.h in this file
> * so leave put_page and release_pages undeclared... */
> -#define free_page_and_swap_cache(page) \
> - put_page(page)
> +#define free_folio_and_swap_cache(folio) \
> + folio_put(folio)
Since you're respinning this patch anyway, you can delete the comment
about sparc; this file has included linux/pagemap.h since commit
4ee60ec156d9 in 2021.