* [PATCH 0/4] Clean up __folio_put()
From: Matthew Wilcox (Oracle) @ 2024-03-02 7:00 UTC
To: linux-mm; +Cc: Matthew Wilcox (Oracle), Kent Overstreet
With all the changes over the last few years, __folio_put_small and
__folio_put_large have become almost identical to each other ... except
you can't tell because they're spread over two files. Rearrange it all
so that you can tell, and then inline them both into __folio_put().
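
For reference, the end state this series arrives at (see patch 4/4
below) is a single function, roughly:

void __folio_put(struct folio *folio)
{
	if (unlikely(folio_is_zone_device(folio))) {
		free_zone_device_page(&folio->page);
		return;
	} else if (folio_test_hugetlb(folio)) {
		free_huge_folio(folio);
		return;
	}

	page_cache_release(folio);
	if (folio_test_large(folio) && folio_test_large_rmappable(folio))
		folio_undo_large_rmappable(folio);
	mem_cgroup_uncharge(folio);
	free_unref_page(&folio->page, folio_order(folio));
}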
Matthew Wilcox (Oracle) (4):
mm/swap: Free non-hugetlb large folios in a batch
page_alloc: Combine free_the_page() and free_unref_page()
page_alloc: Inline destroy_large_folio() into __folio_put_large()
swap: Combine __folio_put_small, __folio_put_large and __folio_put
include/linux/mm.h | 2 --
mm/page_alloc.c | 37 ++++++++++---------------------------
mm/swap.c | 41 ++++++++++++++---------------------------
3 files changed, 24 insertions(+), 56 deletions(-)
--
2.43.0
* [PATCH 1/4] mm/swap: Free non-hugetlb large folios in a batch
From: Matthew Wilcox (Oracle) @ 2024-03-02 7:00 UTC
To: linux-mm; +Cc: Matthew Wilcox (Oracle), Kent Overstreet
free_unref_folios() can now handle non-hugetlb large folios, so
keep normal large folios in the batch. hugetlb folios still need
special treatment (though I think they could be freed this way too).
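
For illustration only (not part of this patch; reconstructed around
the hunk below, so treat names and the final-flush details as an
approximation), put_pages_list() ends up looking roughly like this,
with free_unref_folios() and the existing folio_batch helpers doing
the batching and only hugetlb diverted:

void put_pages_list(struct list_head *pages)
{
	struct folio_batch fbatch;
	struct folio *folio;

	folio_batch_init(&fbatch);
	list_for_each_entry(folio, pages, lru) {
		if (!folio_put_testzero(folio))
			continue;
		/* hugetlb has its own pool and its own cgroup accounting */
		if (folio_test_hugetlb(folio)) {
			free_huge_folio(folio);
			continue;
		}
		/* LRU flag must be clear because it's passed using the lru */
		if (folio_batch_add(&fbatch, folio) > 0)
			continue;
		free_unref_folios(&fbatch);
	}

	if (fbatch.nr)
		free_unref_folios(&fbatch);
	INIT_LIST_HEAD(pages);
}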
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
mm/swap.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/swap.c b/mm/swap.c
index 6b697d33fa5b..d1e016e9ee1a 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -158,8 +158,8 @@ void put_pages_list(struct list_head *pages)
 	list_for_each_entry(folio, pages, lru) {
 		if (!folio_put_testzero(folio))
 			continue;
-		if (folio_test_large(folio)) {
-			__folio_put_large(folio);
+		if (folio_test_hugetlb(folio)) {
+			free_huge_folio(folio);
 			continue;
 		}
 		/* LRU flag must be clear because it's passed using the lru */
--
2.43.0
* [PATCH 2/4] page_alloc: Combine free_the_page() and free_unref_page()
From: Matthew Wilcox (Oracle) @ 2024-03-02 7:00 UTC
To: linux-mm; +Cc: Matthew Wilcox (Oracle), Kent Overstreet
The pcp_allowed_order() check in free_the_page() was only being
skipped by __folio_put_small(), which is about to be rearranged.
Move the check into free_unref_page(), remove free_the_page() and
convert its callers to call free_unref_page() directly.
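
For reference, pcp_allowed_order() itself is unchanged; it reads
approximately as follows (sketch from the kernel around this point,
not part of this patch), so order-0 and other small orders still take
the pcp path and only orders the pcp lists cannot hold fall through
to __free_pages_ok():

static inline bool pcp_allowed_order(unsigned int order)
{
	if (order <= PAGE_ALLOC_COSTLY_ORDER)
		return true;
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
	if (order == HPAGE_PMD_ORDER)
		return true;
#endif
	return false;
}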
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
mm/page_alloc.c | 25 +++++++++++--------------
1 file changed, 11 insertions(+), 14 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 16241906a368..a51cbae62501 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -558,14 +558,6 @@ static inline bool pcp_allowed_order(unsigned int order)
 	return false;
 }
 
-static inline void free_the_page(struct page *page, unsigned int order)
-{
-	if (pcp_allowed_order(order))		/* Via pcp? */
-		free_unref_page(page, order);
-	else
-		__free_pages_ok(page, order, FPI_NONE);
-}
-
 /*
  * Higher-order pages are called "compound pages".  They are structured thusly:
  *
@@ -601,7 +593,7 @@ void destroy_large_folio(struct folio *folio)
 		folio_undo_large_rmappable(folio);
 
 	mem_cgroup_uncharge(folio);
-	free_the_page(&folio->page, folio_order(folio));
+	free_unref_page(&folio->page, folio_order(folio));
 }
 
 static inline void set_buddy_order(struct page *page, unsigned int order)
@@ -2520,6 +2512,11 @@ void free_unref_page(struct page *page, unsigned int order)
 	unsigned long pfn = page_to_pfn(page);
 	int migratetype, pcpmigratetype;
 
+	if (!pcp_allowed_order(order)) {
+		__free_pages_ok(page, order, FPI_NONE);
+		return;
+	}
+
 	if (!free_unref_page_prepare(page, pfn, order))
 		return;
 
@@ -4694,10 +4691,10 @@ void __free_pages(struct page *page, unsigned int order)
 	int head = PageHead(page);
 
 	if (put_page_testzero(page))
-		free_the_page(page, order);
+		free_unref_page(page, order);
 	else if (!head)
 		while (order-- > 0)
-			free_the_page(page + (1 << order), order);
+			free_unref_page(page + (1 << order), order);
 }
 EXPORT_SYMBOL(__free_pages);
 
@@ -4748,7 +4745,7 @@ void __page_frag_cache_drain(struct page *page, unsigned int count)
 	VM_BUG_ON_PAGE(page_ref_count(page) == 0, page);
 
 	if (page_ref_sub_and_test(page, count))
-		free_the_page(page, compound_order(page));
+		free_unref_page(page, compound_order(page));
 }
 EXPORT_SYMBOL(__page_frag_cache_drain);
 
@@ -4789,7 +4786,7 @@ void *page_frag_alloc_align(struct page_frag_cache *nc,
 			goto refill;
 
 		if (unlikely(nc->pfmemalloc)) {
-			free_the_page(page, compound_order(page));
+			free_unref_page(page, compound_order(page));
 			goto refill;
 		}
 
@@ -4833,7 +4830,7 @@ void page_frag_free(void *addr)
 	struct page *page = virt_to_head_page(addr);
 
 	if (unlikely(put_page_testzero(page)))
-		free_the_page(page, compound_order(page));
+		free_unref_page(page, compound_order(page));
 }
 EXPORT_SYMBOL(page_frag_free);
--
2.43.0
* [PATCH 3/4] page_alloc: Inline destroy_large_folio() into __folio_put_large()
From: Matthew Wilcox (Oracle) @ 2024-03-02 7:00 UTC
To: linux-mm; +Cc: Matthew Wilcox (Oracle), Kent Overstreet
destroy_large_folio() has only one caller; move its contents there.
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
include/linux/mm.h | 2 --
mm/page_alloc.c | 14 --------------
mm/swap.c | 13 ++++++++++---
3 files changed, 10 insertions(+), 19 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index cfbf2bbc6200..9445155a0873 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1315,8 +1315,6 @@ void folio_copy(struct folio *dst, struct folio *src);
 
 unsigned long nr_free_buffer_pages(void);
 
-void destroy_large_folio(struct folio *folio);
-
 /* Returns the number of bytes in this potentially compound page. */
 static inline unsigned long page_size(struct page *page)
 {
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index a51cbae62501..8d1f065f6a32 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -582,20 +582,6 @@ void prep_compound_page(struct page *page, unsigned int order)
 	prep_compound_head(page, order);
 }
 
-void destroy_large_folio(struct folio *folio)
-{
-	if (folio_test_hugetlb(folio)) {
-		free_huge_folio(folio);
-		return;
-	}
-
-	if (folio_test_large_rmappable(folio))
-		folio_undo_large_rmappable(folio);
-
-	mem_cgroup_uncharge(folio);
-	free_unref_page(&folio->page, folio_order(folio));
-}
-
 static inline void set_buddy_order(struct page *page, unsigned int order)
 {
 	set_page_private(page, order);
diff --git a/mm/swap.c b/mm/swap.c
index d1e016e9ee1a..d8c24300ea3d 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -127,9 +127,16 @@ static void __folio_put_large(struct folio *folio)
 	 * (it's never listed to any LRU lists) and no memcg routines should
 	 * be called for hugetlb (it has a separate hugetlb_cgroup.)
 	 */
-	if (!folio_test_hugetlb(folio))
-		page_cache_release(folio);
-	destroy_large_folio(folio);
+	if (folio_test_hugetlb(folio)) {
+		free_huge_folio(folio);
+		return;
+	}
+
+	page_cache_release(folio);
+	if (folio_test_large_rmappable(folio))
+		folio_undo_large_rmappable(folio);
+	mem_cgroup_uncharge(folio);
+	free_unref_page(&folio->page, folio_order(folio));
 }
 
 void __folio_put(struct folio *folio)
--
2.43.0
* [PATCH 4/4] swap: Combine __folio_put_small, __folio_put_large and __folio_put
From: Matthew Wilcox (Oracle) @ 2024-03-02 7:00 UTC
To: linux-mm; +Cc: Matthew Wilcox (Oracle), Kent Overstreet
It's now obvious that __folio_put_small and __folio_put_large do
almost exactly the same thing. Inline them both into __folio_put.
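
For context (unchanged by this patch), the inline wrapper in
include/linux/mm.h that funnels the final reference drop here is
roughly:

static inline void folio_put(struct folio *folio)
{
	if (folio_put_testzero(folio))
		__folio_put(folio);
}

so every last-reference folio_put() now lands in the one combined
function below.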
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
mm/swap.c | 32 ++++++--------------------------
1 file changed, 6 insertions(+), 26 deletions(-)
diff --git a/mm/swap.c b/mm/swap.c
index d8c24300ea3d..a910af21ba68 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -112,42 +112,22 @@ static void page_cache_release(struct folio *folio)
 		unlock_page_lruvec_irqrestore(lruvec, flags);
 }
 
-static void __folio_put_small(struct folio *folio)
-{
-	page_cache_release(folio);
-	mem_cgroup_uncharge(folio);
-	free_unref_page(&folio->page, 0);
-}
-
-static void __folio_put_large(struct folio *folio)
+void __folio_put(struct folio *folio)
 {
-	/*
-	 * __page_cache_release() is supposed to be called for thp, not for
-	 * hugetlb. This is because hugetlb page does never have PageLRU set
-	 * (it's never listed to any LRU lists) and no memcg routines should
-	 * be called for hugetlb (it has a separate hugetlb_cgroup.)
-	 */
-	if (folio_test_hugetlb(folio)) {
+	if (unlikely(folio_is_zone_device(folio))) {
+		free_zone_device_page(&folio->page);
+		return;
+	} else if (folio_test_hugetlb(folio)) {
 		free_huge_folio(folio);
 		return;
 	}
 
 	page_cache_release(folio);
-	if (folio_test_large_rmappable(folio))
+	if (folio_test_large(folio) && folio_test_large_rmappable(folio))
 		folio_undo_large_rmappable(folio);
 	mem_cgroup_uncharge(folio);
 	free_unref_page(&folio->page, folio_order(folio));
 }
-
-void __folio_put(struct folio *folio)
-{
-	if (unlikely(folio_is_zone_device(folio)))
-		free_zone_device_page(&folio->page);
-	else if (unlikely(folio_test_large(folio)))
-		__folio_put_large(folio);
-	else
-		__folio_put_small(folio);
-}
 EXPORT_SYMBOL(__folio_put);
 
 /**
--
2.43.0