From: Ryan Roberts <ryan.roberts@arm.com>
To: "Matthew Wilcox (Oracle)" <willy@infradead.org>, linux-mm@kvack.org
Subject: Re: [RFC PATCH 09/14] mm: Handle large folios in free_unref_folios()
Date: Thu, 31 Aug 2023 16:21:53 +0100 [thread overview]
Message-ID: <fd082020-10c2-48fe-b9e0-5f3c7a2ae91c@arm.com> (raw)
In-Reply-To: <20230825135918.4164671-10-willy@infradead.org>
On 25/08/2023 14:59, Matthew Wilcox (Oracle) wrote:
> Call folio_undo_large_rmappable() if needed. free_unref_page_prepare()
> destroys the ability to call folio_order(), so stash the order in
> folio->private for the benefit of the second loop.
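Just to check my understanding, a minimal sketch of the stash-and-retrieve pattern being described (this only paraphrases the patch below; the names are the ones it uses):

	/* First loop: folio_order() is still valid, so read and stash it. */
	unsigned int order = folio_order(folio);
	if (!free_unref_page_prepare(&folio->page, pfn, order))
		continue;
	/* Compound metadata is gone now; remember the order instead. */
	folio->private = (void *)(unsigned long)order;

	/* Second loop: recover the order and clear the stash. */
	unsigned int order = (unsigned long)folio->private;
	folio->private = NULL;
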
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
> mm/page_alloc.c | 21 +++++++++++++++------
> 1 file changed, 15 insertions(+), 6 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index bca5c70b5576..e586d17fb7f2 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2465,7 +2465,7 @@ void free_unref_page(struct page *page, unsigned int order)
>  }
>
>  /*
> - * Free a batch of 0-order pages
> + * Free a batch of folios
>   */
>  void free_unref_folios(struct folio_batch *folios)
>  {
> @@ -2478,7 +2478,11 @@ void free_unref_folios(struct folio_batch *folios)
>  	for (i = 0, j = 0; i < folios->nr; i++) {
>  		struct folio *folio = folios->folios[i];
>  		unsigned long pfn = folio_pfn(folio);
> -		if (!free_unref_page_prepare(&folio->page, pfn, 0))
> +		unsigned int order = folio_order(folio);
Do you need to do anything special for hugetlb folios? I see that
destroy_large_folio() has:

	if (folio_test_hugetlb(folio)) {
		free_huge_folio(folio);
		return;
	}
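
If hugetlb folios can get here, I guess the equivalent in this loop would be something along these lines (untested sketch, reusing the helpers quoted above):

	if (unlikely(folio_test_hugetlb(folio))) {
		free_huge_folio(folio);
		continue;
	}

or alternatively a guarantee from the callers that hugetlb folios never reach free_unref_folios().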
> +
> +		if (order > 0 && folio_test_large_rmappable(folio))
> +			folio_undo_large_rmappable(folio);
> +		if (!free_unref_page_prepare(&folio->page, pfn, order))
>  			continue;
>
>  		/*
> @@ -2486,11 +2490,13 @@ void free_unref_folios(struct folio_batch *folios)
>  		 * comment in free_unref_page.
>  		 */
>  		migratetype = get_pcppage_migratetype(&folio->page);
> -		if (unlikely(is_migrate_isolate(migratetype))) {
> +		if (order > PAGE_ALLOC_COSTLY_ORDER ||
Should this be `if (!pcp_allowed_order(order) ||` ? That helper includes the THP
pageblock_order too.
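
For reference, pcp_allowed_order() is roughly this (quoting from memory, so worth double-checking):

static inline bool pcp_allowed_order(unsigned int order)
{
	if (order <= PAGE_ALLOC_COSTLY_ORDER)
		return true;
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
	if (order == pageblock_order)
		return true;
#endif
	return false;
}

so with the open-coded check, pageblock_order (THP-sized) folios would always be sent to free_one_page() instead of being allowed onto the pcp lists.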
> +		    is_migrate_isolate(migratetype)) {
>  			free_one_page(folio_zone(folio), &folio->page, pfn,
> -					0, migratetype, FPI_NONE);
> +					order, migratetype, FPI_NONE);
>  			continue;
>  		}
> +		folio->private = (void *)(unsigned long)order;
>  		if (j != i)
>  			folios->folios[j] = folio;
>  		j++;
> @@ -2500,7 +2506,9 @@ void free_unref_folios(struct folio_batch *folios)
>  	for (i = 0; i < folios->nr; i++) {
>  		struct folio *folio = folios->folios[i];
>  		struct zone *zone = folio_zone(folio);
> +		unsigned int order = (unsigned long)folio->private;
>
> +		folio->private = NULL;
>  		migratetype = get_pcppage_migratetype(&folio->page);
>
>  		/* Different zone requires a different pcp lock */
> @@ -2519,7 +2527,7 @@ void free_unref_folios(struct folio_batch *folios)
>  			if (unlikely(!pcp)) {
>  				pcp_trylock_finish(UP_flags);
>  				free_one_page(zone, &folio->page,
> -						folio_pfn(folio), 0,
> +						folio_pfn(folio), order,
>  						migratetype, FPI_NONE);
>  				locked_zone = NULL;
>  				continue;
> @@ -2535,7 +2543,8 @@ void free_unref_folios(struct folio_batch *folios)
>  			migratetype = MIGRATE_MOVABLE;
>
>  		trace_mm_page_free_batched(&folio->page);
> -		free_unref_page_commit(zone, pcp, &folio->page, migratetype, 0);
> +		free_unref_page_commit(zone, pcp, &folio->page, migratetype,
> +				order);
>  	}
>
>  	if (pcp) {