From: David Hildenbrand <david@redhat.com>
To: Rik van Riel <riel@surriel.com>,
Andrew Morton <akpm@linux-foundation.org>
Cc: "Huang, Ying" <ying.huang@intel.com>,
Chris Li <chrisl@kernel.org>, Ryan Roberts <ryan.roberts@arm.com>,
"Matthew Wilcox (Oracle)" <willy@infradead.org>,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
kernel-team@meta.com
Subject: Re: [PATCH] mm: add maybe_lru_add_drain() that only drains when threshold is exceeded
Date: Thu, 19 Dec 2024 14:47:05 +0100
Message-ID: <189f4767-e7c2-4522-b943-b644126bf897@redhat.com>
In-Reply-To: <20241218115604.7e56bedb@fangorn>
On 18.12.24 17:56, Rik van Riel wrote:
> The lru_add_drain() call in zap_page_range_single() always takes some locks,
> and will drain the buffers even when there is only a single page pending.
>
> We probably don't need to do that, since we already deal fine with zap_page_range
> encountering pages that are still in the buffers of other CPUs.
>
> On an AMD Milan CPU, performance of the will-it-scale tlb_flush2_threads test
> with 36 threads (one per core) increases from 526k to 730k loops per second.
>
> The overhead in this case was in the lruvec locks: taking the lock just to
> flush out a single page. There may be other spots where this variant could
> be appropriate.
>
> Signed-off-by: Rik van Riel <riel@surriel.com>
> ---
>  include/linux/swap.h |  1 +
>  mm/memory.c          |  2 +-
>  mm/swap.c            | 18 ++++++++++++++++++
>  mm/swap_state.c      |  2 +-
>  4 files changed, 21 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/swap.h b/include/linux/swap.h
> index dd5ac833150d..a2f06317bd4b 100644
> --- a/include/linux/swap.h
> +++ b/include/linux/swap.h
> @@ -391,6 +391,7 @@ static inline void lru_cache_enable(void)
>  }
>
>  extern void lru_cache_disable(void);
> +extern void maybe_lru_add_drain(void);
>  extern void lru_add_drain(void);
>  extern void lru_add_drain_cpu(int cpu);
>  extern void lru_add_drain_cpu_zone(struct zone *zone);
> diff --git a/mm/memory.c b/mm/memory.c
> index 2635f7bceab5..1767c65b93ad 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -1919,7 +1919,7 @@ void zap_page_range_single(struct vm_area_struct *vma, unsigned long address,
>  	struct mmu_notifier_range range;
>  	struct mmu_gather tlb;
>
> -	lru_add_drain();
> +	maybe_lru_add_drain();
>  	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma->vm_mm,
>  				address, end);
>  	hugetlb_zap_begin(vma, &range.start, &range.end);
> diff --git a/mm/swap.c b/mm/swap.c
> index 9caf6b017cf0..001664a652ff 100644
> --- a/mm/swap.c
> +++ b/mm/swap.c
> @@ -777,6 +777,24 @@ void lru_add_drain(void)
>  	mlock_drain_local();
>  }
>
> +static bool should_lru_add_drain(void)
> +{
> +	struct cpu_fbatches *fbatches = this_cpu_ptr(&cpu_fbatches);
> +	int pending = folio_batch_count(&fbatches->lru_add);
> +	pending += folio_batch_count(&fbatches->lru_deactivate);
> +	pending += folio_batch_count(&fbatches->lru_deactivate_file);
> +	pending += folio_batch_count(&fbatches->lru_lazyfree);
> +
> +	/* Don't bother draining unless we have several pages pending. */
> +	return pending > SWAP_CLUSTER_MAX;
> +}
> +
> +void maybe_lru_add_drain(void)
> +{
> +	if (should_lru_add_drain())
> +		lru_add_drain();
> +}
> +
>  /*
>   * It's called from per-cpu workqueue context in SMP case so
>   * lru_add_drain_cpu and invalidate_bh_lrus_cpu should run on
> diff --git a/mm/swap_state.c b/mm/swap_state.c
> index 3a0cf965f32b..1ae4cd7b041e 100644
> --- a/mm/swap_state.c
> +++ b/mm/swap_state.c
> @@ -317,7 +317,7 @@ void free_pages_and_swap_cache(struct encoded_page **pages, int nr)
>  	struct folio_batch folios;
>  	unsigned int refs[PAGEVEC_SIZE];
>
> -	lru_add_drain();
> +	maybe_lru_add_drain();
I'm wondering about the reason for, and the effect of, this existing call.
It seems to date back to the beginning of git.
Likely it doesn't make sense to have effectively-free pages in the
LRU+mlock cache. But then, this only considers the local CPU LRU/mlock
caches ... hmmm
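
To make concrete what the drain does and does not cover, here is a tiny
user-space model. It is not kernel code: every name in it is invented for
the sketch, and THRESHOLD merely stands in for SWAP_CLUSTER_MAX (currently
32).

/*
 * toy_drain.c - user-space model only, NOT kernel code; all names
 * (cpu_batches, maybe_drain, THRESHOLD, ...) are invented here.
 */
#include <stdio.h>

#define NR_CPUS		4
#define THRESHOLD	32	/* stand-in for SWAP_CLUSTER_MAX */

/* pending pages per CPU, standing in for the per-CPU cpu_fbatches */
static int cpu_batches[NR_CPUS];

/* like should_lru_add_drain(): looks at the *local* CPU only */
static int should_drain(int cpu)
{
	return cpu_batches[cpu] > THRESHOLD;
}

static void drain(int cpu)
{
	printf("cpu%d: draining %d pages (would take the lruvec lock)\n",
	       cpu, cpu_batches[cpu]);
	cpu_batches[cpu] = 0;
}

static void maybe_drain(int cpu)
{
	if (should_drain(cpu))
		drain(cpu);
	else
		printf("cpu%d: only %d pending, skipping drain\n",
		       cpu, cpu_batches[cpu]);
}

int main(void)
{
	cpu_batches[0] = 1;	/* one page, e.g. from a small munmap() */
	cpu_batches[1] = 40;	/* some other CPU with a full batch */

	maybe_drain(0);		/* skipped: below threshold, no lock taken */
	maybe_drain(1);		/* drained: above threshold */

	/*
	 * Note what did NOT happen: cpu0's drain decision never saw
	 * cpu1's 40 pending pages.  Callers already have to tolerate
	 * pages lingering in remote batches, patched or not.
	 */
	return 0;
}

Even the unconditional lru_add_drain() only empties the batches of the CPU
it runs on; pages parked in another CPU's batch stay there either way.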
So .... do we need this at all? :)
--
Cheers,
David / dhildenb