From: Ryan Roberts <ryan.roberts@arm.com>
To: David Hildenbrand <david@redhat.com>, linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, Andrew Morton <akpm@linux-foundation.org>,
Matthew Wilcox <willy@infradead.org>,
Catalin Marinas <catalin.marinas@arm.com>,
Yin Fengwei <fengwei.yin@intel.com>,
Michal Hocko <mhocko@suse.com>, Will Deacon <will@kernel.org>,
"Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>,
Nick Piggin <npiggin@gmail.com>,
Peter Zijlstra <peterz@infradead.org>,
Michael Ellerman <mpe@ellerman.id.au>,
Christophe Leroy <christophe.leroy@csgroup.eu>,
"Naveen N. Rao" <naveen.n.rao@linux.ibm.com>,
Heiko Carstens <hca@linux.ibm.com>,
Vasily Gorbik <gor@linux.ibm.com>,
Alexander Gordeev <agordeev@linux.ibm.com>,
Christian Borntraeger <borntraeger@linux.ibm.com>,
Sven Schnelle <svens@linux.ibm.com>,
Arnd Bergmann <arnd@arndb.de>,
linux-arch@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
linux-s390@vger.kernel.org
Subject: Re: [PATCH v2 09/10] mm/mmu_gather: improve cond_resched() handling with large folios and expensive page freeing
Date: Mon, 12 Feb 2024 09:26:36 +0000
Message-ID: <f1578e92-4de0-4718-bf79-ec29e9a19fe0@arm.com>
In-Reply-To: <20240209221509.585251-10-david@redhat.com>
On 09/02/2024 22:15, David Hildenbrand wrote:
> It's a pain that we have to handle cond_resched() in
> tlb_batch_pages_flush() manually and cannot simply handle it in
> release_pages() -- release_pages() can be called from atomic context.
> Well, in a perfect world we wouldn't have to make our code more complicated at all.
>
> With page poisoning and init_on_free, we might now run into soft lockups
> when we free a lot of rather large folio fragments, because page freeing
> time then depends on the actual memory size we are freeing instead of on
> the number of folios that are involved.
>
> In the absolute (unlikely) worst case, on arm64 with 64k we will be able
> to free up to 256 folio fragments that each span 512 MiB: zeroing out 128
> GiB does sound like it might take a while. But instead of ignoring this
> unlikely case, let's just handle it.
>
> So, let's teach tlb_batch_pages_flush() that there are some
> configurations where page freeing is horribly slow, and let's reschedule
> more frequently -- similar to what we did up to now, before we had large folio
> fragments in there. Note that we might end up freeing only a single folio
> fragment at a time that might exceed the old 512 pages limit: but if we
> cannot even free a single MAX_ORDER page on a system without running into
> soft lockups, something else is already completely bogus.
>
> In the future, we might want to detect if handling cond_resched() is
> required at all, and just not do any of that with full preemption enabled.
>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
> mm/mmu_gather.c | 50 ++++++++++++++++++++++++++++++++++++++++---------
> 1 file changed, 41 insertions(+), 9 deletions(-)
>
> diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
> index d175c0f1e2c8..2774044b5790 100644
> --- a/mm/mmu_gather.c
> +++ b/mm/mmu_gather.c
> @@ -91,18 +91,19 @@ void tlb_flush_rmaps(struct mmu_gather *tlb, struct vm_area_struct *vma)
> }
> #endif
>
> -static void tlb_batch_pages_flush(struct mmu_gather *tlb)
> +static void __tlb_batch_free_encoded_pages(struct mmu_gather_batch *batch)
> {
> - struct mmu_gather_batch *batch;
> -
> - for (batch = &tlb->local; batch && batch->nr; batch = batch->next) {
> - struct encoded_page **pages = batch->encoded_pages;
> + struct encoded_page **pages = batch->encoded_pages;
> + unsigned int nr, nr_pages;
>
> + /*
> + * We might end up freeing a lot of pages. Reschedule on a regular
> + * basis to avoid soft lockups in configurations without full
> + * preemption enabled. The magic number of 512 folios seems to work.
> + */
> + if (!page_poisoning_enabled_static() && !want_init_on_free()) {
Is the performance win really worth maintaining 2 separate implementations keyed off this?
It seems a bit fragile, in case any other operation that is proportional to size gets
added to the freeing path in future. Why not just always do the conservative version?
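
To make that concrete: by "the conservative version" I just mean using your
size-based loop unconditionally, i.e. roughly (untested sketch, just your second
loop with the fast path dropped):

static void __tlb_batch_free_encoded_pages(struct mmu_gather_batch *batch)
{
	struct encoded_page **pages = batch->encoded_pages;
	unsigned int nr, nr_pages;

	while (batch->nr) {
		/*
		 * Limit on the number of pages actually being freed, not
		 * the number of encoded entries, so that cond_resched()
		 * runs often enough even with large folio fragments.
		 */
		for (nr = 0, nr_pages = 0;
		     nr < batch->nr && nr_pages < 512; nr++) {
			if (unlikely(encoded_page_flags(pages[nr]) &
				     ENCODED_PAGE_BIT_NR_PAGES_NEXT))
				nr_pages += encoded_nr_pages(pages[++nr]);
			else
				nr_pages++;
		}

		free_pages_and_swap_cache(pages, nr);
		pages += nr;
		batch->nr -= nr;

		cond_resched();
	}
}
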
> while (batch->nr) {
> - /*
> - * limit free batch count when PAGE_SIZE > 4K
> - */
> - unsigned int nr = min(512U, batch->nr);
> + nr = min(512, batch->nr);
If any entries are for more than 1 page, the nr_pages value is also encoded in the
batch, so effectively this could be limiting us to 256 actual folios (half of 512).
Is it worth checking for ENCODED_PAGE_BIT_NR_PAGES_NEXT and limiting on the folio
count instead?
nit: you're using the 512 magic number in 2 places now; perhaps make it a macro?
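
Something like the below is what I had in mind (totally untested; the
FREE_BATCH_MAX_FOLIOS and nr_folios names are just made up for illustration):

#define FREE_BATCH_MAX_FOLIOS	512

		/*
		 * Count folios rather than encoded entries, so that a
		 * page + nr_pages pair only consumes one slot of the limit.
		 */
		for (nr = 0, nr_folios = 0;
		     nr < batch->nr && nr_folios < FREE_BATCH_MAX_FOLIOS;
		     nr++, nr_folios++) {
			if (unlikely(encoded_page_flags(pages[nr]) &
				     ENCODED_PAGE_BIT_NR_PAGES_NEXT))
				nr++;	/* skip the encoded nr_pages entry */
		}

The same macro could then also replace the 512 in the nr_pages-limited loop below.
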
>
> /*
> * Make sure we cover page + nr_pages, and don't leave
> @@ -119,6 +120,37 @@ static void tlb_batch_pages_flush(struct mmu_gather *tlb)
> cond_resched();
> }
> }
> +
> + /*
> + * With page poisoning and init_on_free, the time it takes to free
> + * memory grows proportionally with the actual memory size. Therefore,
> + * limit based on the actual memory size and not the number of involved
> + * folios.
> + */
> + while (batch->nr) {
> + for (nr = 0, nr_pages = 0;
> + nr < batch->nr && nr_pages < 512; nr++) {
> + if (unlikely(encoded_page_flags(pages[nr]) &
> + ENCODED_PAGE_BIT_NR_PAGES_NEXT))
> + nr_pages += encoded_nr_pages(pages[++nr]);
> + else
> + nr_pages++;
> + }
I guess the worst case here is freeing (511 + 8192) 64K pages = ~544 MiB in one go.
That's up from the old limit of 512 * 64K = 32 MiB, and 511 pages more than the
figure in your commit log. Are you comfortable with this? I guess the only
alternative is to start splitting a batch, which would be really messy. I agree
your approach is preferable if ~544 MiB is acceptable.
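
To show my working: a 512 MiB folio is 8192 pages at 64K, and nr_pages can
already have reached 511 before that folio is picked up as the final entry, so:

	(511 + 8192) pages * 64 KiB = 8703 * 64 KiB ~= 544 MiB
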
> +
> + free_pages_and_swap_cache(pages, nr);
> + pages += nr;
> + batch->nr -= nr;
> +
> + cond_resched();
> + }
> +}
> +
> +static void tlb_batch_pages_flush(struct mmu_gather *tlb)
> +{
> + struct mmu_gather_batch *batch;
> +
> + for (batch = &tlb->local; batch && batch->nr; batch = batch->next)
> + __tlb_batch_free_encoded_pages(batch);
> tlb->active = &tlb->local;
> }
>