linux-mm.kvack.org archive mirror
From: Matthew Wilcox <willy@infradead.org>
To: linux-mm@kvack.org
Cc: Mel Gorman <mgorman@suse.de>
Subject: Re: [RFC PATCH 11/14] mm: Free folios in a batch in shrink_folio_list()
Date: Fri, 5 Jan 2024 17:00:43 +0000
Message-ID: <ZZg1u2bb1sN8n+Ks@casper.infradead.org>
In-Reply-To: <ZPVSWjcO/hiEtcvn@casper.infradead.org>

On Mon, Sep 04, 2023 at 04:43:22AM +0100, Matthew Wilcox wrote:
> On Fri, Aug 25, 2023 at 02:59:15PM +0100, Matthew Wilcox (Oracle) wrote:
> > Use free_unref_page_batch() to free the folios.  This may increase
> > the number of IPIs from calling try_to_unmap_flush() more often,
> > but that's going to be very workload-dependent.
> 
> I'd like to propose this as a replacement for this patch.  Queue the
> mapped folios up so we can flush them all in one go.  Free the unmapped
> ones, and the mapped ones after the flush.

Any reaction to this patch?  I'm putting together a v2 for posting after
the merge window, and I got no feedback on whether the former version
or this one is better.

> It does change the ordering of mem_cgroup_uncharge_folios() and
> the page flush.  I think that's OK.  This is only build-tested;
> something has messed up my laptop and I can no longer launch VMs.
> 
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 6f13394b112e..526d5bb84622 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1706,14 +1706,16 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
>  		struct pglist_data *pgdat, struct scan_control *sc,
>  		struct reclaim_stat *stat, bool ignore_references)
>  {
> +	struct folio_batch free_folios;
>  	LIST_HEAD(ret_folios);
> -	LIST_HEAD(free_folios);
> +	LIST_HEAD(mapped_folios);
>  	LIST_HEAD(demote_folios);
>  	unsigned int nr_reclaimed = 0;
>  	unsigned int pgactivate = 0;
>  	bool do_demote_pass;
>  	struct swap_iocb *plug = NULL;
>  
> +	folio_batch_init(&free_folios);
>  	memset(stat, 0, sizeof(*stat));
>  	cond_resched();
>  	do_demote_pass = can_demote(pgdat->node_id, sc);
> @@ -1723,7 +1725,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
>  		struct address_space *mapping;
>  		struct folio *folio;
>  		enum folio_references references = FOLIOREF_RECLAIM;
> -		bool dirty, writeback;
> +		bool dirty, writeback, mapped = false;
>  		unsigned int nr_pages;
>  
>  		cond_resched();
> @@ -1957,6 +1959,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
>  					stat->nr_lazyfree_fail += nr_pages;
>  				goto activate_locked;
>  			}
> +			mapped = true;
>  		}
>  
>  		/*
> @@ -2111,14 +2114,12 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
>  		 */
>  		nr_reclaimed += nr_pages;
>  
> -		/*
> -		 * Is there need to periodically free_folio_list? It would
> -		 * appear not as the counts should be low
> -		 */
> -		if (unlikely(folio_test_large(folio)))
> -			destroy_large_folio(folio);
> -		else
> -			list_add(&folio->lru, &free_folios);
> +		if (mapped) {
> +			list_add(&folio->lru, &mapped_folios);
> +		} else if (folio_batch_add(&free_folios, folio) == 0) {
> +			mem_cgroup_uncharge_folios(&free_folios);
> +			free_unref_folios(&free_folios);
> +		}
>  		continue;
>  
>  activate_locked_split:
> @@ -2182,9 +2183,22 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
>  
>  	pgactivate = stat->nr_activate[0] + stat->nr_activate[1];
>  
> -	mem_cgroup_uncharge_list(&free_folios);
>  	try_to_unmap_flush();
> -	free_unref_page_list(&free_folios);
> +	while (!list_empty(&mapped_folios)) {
> +		struct folio *folio = list_first_entry(&mapped_folios,
> +					struct folio, lru);
> +
> +		list_del(&folio->lru);
> +		if (folio_batch_add(&free_folios, folio) > 0)
> +			continue;
> +		mem_cgroup_uncharge_folios(&free_folios);
> +		free_unref_folios(&free_folios);
> +	}
> +
> +	if (free_folios.nr) {
> +		mem_cgroup_uncharge_folios(&free_folios);
> +		free_unref_folios(&free_folios);
> +	}
>  
>  	list_splice(&ret_folios, folio_list);
>  	count_vm_events(PGACTIVATE, pgactivate);
> 
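
For reference, the pattern both hunks rely on is the usual folio_batch
accumulate-and-drain: folio_batch_add() returns the number of slots still
free, so a return value of 0 means that add consumed the last slot and the
batch has to be drained before anything more can be queued.  A minimal
sketch of that drain logic (the queue_or_drain()/drain_remaining() helper
names are made up for illustration; free_unref_folios() and
mem_cgroup_uncharge_folios() are the batch helpers added earlier in this
series, and the final loop above assumes the drain leaves the batch empty):

#include <linux/pagevec.h>
#include <linux/memcontrol.h>
#include "internal.h"		/* free_unref_folios(), mm-internal */

static void queue_or_drain(struct folio_batch *fbatch, struct folio *folio)
{
	/*
	 * folio_batch_add() returns the remaining capacity; 0 means this
	 * add took the last slot, so uncharge and free the whole batch.
	 */
	if (folio_batch_add(fbatch, folio) == 0) {
		mem_cgroup_uncharge_folios(fbatch);
		free_unref_folios(fbatch);
	}
}

static void drain_remaining(struct folio_batch *fbatch)
{
	/* Free whatever is still queued after the main loop finishes. */
	if (folio_batch_count(fbatch)) {
		mem_cgroup_uncharge_folios(fbatch);
		free_unref_folios(fbatch);
	}
}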



Thread overview: 49+ messages
2023-08-25 13:59 [RFC PATCH 00/14] Rearrange batched folio freeing Matthew Wilcox (Oracle)
2023-08-25 13:59 ` [RFC PATCH 01/14] mm: Make folios_put() the basis of release_pages() Matthew Wilcox (Oracle)
2023-08-31 14:21   ` Ryan Roberts
2023-09-01  3:58     ` Matthew Wilcox
2023-09-01  8:14       ` Ryan Roberts
2023-08-25 13:59 ` [RFC PATCH 02/14] mm: Convert free_unref_page_list() to use folios Matthew Wilcox (Oracle)
2023-08-31 14:29   ` Ryan Roberts
2023-09-01  4:03     ` Matthew Wilcox
2023-09-01  8:15       ` Ryan Roberts
2023-08-25 13:59 ` [RFC PATCH 03/14] mm: Add free_unref_folios() Matthew Wilcox (Oracle)
2023-08-31 14:39   ` Ryan Roberts
2023-08-25 13:59 ` [RFC PATCH 04/14] mm: Use folios_put() in __folio_batch_release() Matthew Wilcox (Oracle)
2023-08-31 14:41   ` Ryan Roberts
2023-08-25 13:59 ` [RFC PATCH 05/14] memcg: Add mem_cgroup_uncharge_folios() Matthew Wilcox (Oracle)
2023-08-31 14:49   ` Ryan Roberts
2023-08-25 13:59 ` [RFC PATCH 06/14] mm: Remove use of folio list from folios_put() Matthew Wilcox (Oracle)
2023-08-31 14:53   ` Ryan Roberts
2023-08-25 13:59 ` [RFC PATCH 07/14] mm: Use free_unref_folios() in put_pages_list() Matthew Wilcox (Oracle)
2023-08-25 13:59 ` [RFC PATCH 08/14] mm: use __page_cache_release() in folios_put() Matthew Wilcox (Oracle)
2023-08-25 13:59 ` [RFC PATCH 09/14] mm: Handle large folios in free_unref_folios() Matthew Wilcox (Oracle)
2023-08-31 15:21   ` Ryan Roberts
2023-09-01  4:09     ` Matthew Wilcox
2023-08-25 13:59 ` [RFC PATCH 10/14] mm: Allow non-hugetlb large folios to be batch processed Matthew Wilcox (Oracle)
2023-08-31 15:28   ` Ryan Roberts
2023-09-01  4:10     ` Matthew Wilcox
2023-08-25 13:59 ` [RFC PATCH 11/14] mm: Free folios in a batch in shrink_folio_list() Matthew Wilcox (Oracle)
2023-09-04  3:43   ` Matthew Wilcox
2024-01-05 17:00     ` Matthew Wilcox [this message]
2023-08-25 13:59 ` [RFC PATCH 12/14] mm: Free folios directly in move_folios_to_lru() Matthew Wilcox (Oracle)
2023-08-31 15:46   ` Ryan Roberts
2023-09-01  4:16     ` Matthew Wilcox
2023-08-25 13:59 ` [RFC PATCH 13/14] memcg: Remove mem_cgroup_uncharge_list() Matthew Wilcox (Oracle)
2023-08-31 18:26   ` Ryan Roberts
2023-08-25 13:59 ` [RFC PATCH 14/14] mm: Remove free_unref_page_list() Matthew Wilcox (Oracle)
2023-08-31 18:27   ` Ryan Roberts
2023-08-30 18:50 ` [RFC PATCH 15/18] mm: Convert free_pages_and_swap_cache() to use folios_put() Matthew Wilcox (Oracle)
2023-08-30 18:50 ` [RFC PATCH 16/18] mm: Use a folio in __collapse_huge_page_copy_succeeded() Matthew Wilcox (Oracle)
2023-08-30 18:50 ` [RFC PATCH 17/18] mm: Convert free_swap_cache() to take a folio Matthew Wilcox (Oracle)
2023-08-31 18:49   ` Ryan Roberts
2023-08-30 18:50 ` [RFC PATCH 18/18] mm: Add pfn_range_put() Matthew Wilcox (Oracle)
2023-08-31 19:03   ` Ryan Roberts
2023-09-01  4:27     ` Matthew Wilcox
2023-09-01  7:59       ` Ryan Roberts
2023-09-04 13:25 ` [RFC PATCH 00/14] Rearrange batched folio freeing Ryan Roberts
2023-09-05 13:15   ` Matthew Wilcox
2023-09-05 13:26     ` Ryan Roberts
2023-09-05 14:00       ` Matthew Wilcox
2023-09-06  3:48         ` Matthew Wilcox
2023-09-06 10:23           ` Ryan Roberts
