From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, David Hildenbrand <david@redhat.com>,
Andrew Morton <akpm@linux-foundation.org>,
Matthew Wilcox <willy@infradead.org>,
Ryan Roberts <ryan.roberts@arm.com>,
Catalin Marinas <catalin.marinas@arm.com>,
Yin Fengwei <fengwei.yin@intel.com>,
Michal Hocko <mhocko@suse.com>, Will Deacon <will@kernel.org>,
"Aneesh Kumar K.V" <aneesh.kumar@linux.ibm.com>,
Nick Piggin <npiggin@gmail.com>,
Peter Zijlstra <peterz@infradead.org>,
Michael Ellerman <mpe@ellerman.id.au>,
Christophe Leroy <christophe.leroy@csgroup.eu>,
"Naveen N. Rao" <naveen.n.rao@linux.ibm.com>,
Heiko Carstens <hca@linux.ibm.com>,
Vasily Gorbik <gor@linux.ibm.com>,
Alexander Gordeev <agordeev@linux.ibm.com>,
Christian Borntraeger <borntraeger@linux.ibm.com>,
Sven Schnelle <svens@linux.ibm.com>,
Arnd Bergmann <arnd@arndb.de>,
linux-arch@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
linux-s390@vger.kernel.org
Subject: [PATCH v2 09/10] mm/mmu_gather: improve cond_resched() handling with large folios and expensive page freeing
Date: Fri, 9 Feb 2024 23:15:08 +0100
Message-ID: <20240209221509.585251-10-david@redhat.com>
In-Reply-To: <20240209221509.585251-1-david@redhat.com>
It's a pain that we have to handle cond_resched() in
tlb_batch_pages_flush() manually and cannot simply handle it in
release_pages() -- release_pages() can be called from atomic context.
Well, in a perfect world we wouldn't have to make our code more
complicated at all.
With page poisoning and init_on_free, we might now run into soft lockups
when we free a lot of rather large folio fragments, because page freeing
time then depends on the actual memory size we are freeing instead of on
the number of folios that are involved.
In the absolute (unlikely) worst case, on arm64 with 64k base pages we will
be able
to free up to 256 folio fragments that each span 512 MiB: zeroing out 128
GiB does sound like it might take a while. But instead of ignoring this
unlikely case, let's just handle it.
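(A quick back-of-the-envelope for those numbers, assuming 64k base pages:
a folio fragment occupies two encoded-page slots -- the page pointer plus
the number of pages -- so one 512-entry batch covers up to 512 / 2 = 256
fragments; a PMD-sized folio spans 8192 * 64 KiB = 512 MiB; and
256 * 512 MiB = 128 GiB.)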
So, let's teach tlb_batch_pages_flush() that there are some
configurations where page freeing is horribly slow, and let's reschedule
more frequently -- similar to what we did before we had large folio
fragments in there. Note that we might end up freeing only a single folio
fragment at a time that might exceed the old 512 pages limit: but if we
cannot even free a single MAX_ORDER page on a system without running into
soft lockups, something else is already completely bogus.
In the future, we might want to detect if handling cond_resched() is
required at all, and just not do any of that with full preemption enabled.
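
For illustration only (this is not part of the patch): a hypothetical
helper that counts how many actual pages the first "nr" encoded entries
of a batch cover, using the ENCODED_PAGE_BIT_NR_PAGES_NEXT encoding
introduced earlier in this series (patch #8), could look roughly like:

	/*
	 * Illustration only: a folio fragment is encoded as two consecutive
	 * entries -- the page pointer with ENCODED_PAGE_BIT_NR_PAGES_NEXT
	 * set, followed by an entry holding the number of pages.
	 */
	static unsigned int nr_pages_covered(struct encoded_page **pages,
					     unsigned int nr)
	{
		unsigned int i, nr_pages = 0;

		for (i = 0; i < nr; i++) {
			if (unlikely(encoded_page_flags(pages[i]) &
				     ENCODED_PAGE_BIT_NR_PAGES_NEXT))
				nr_pages += encoded_nr_pages(pages[++i]);
			else
				nr_pages++;
		}
		return nr_pages;
	}

This is the same walk the second loop below performs, while additionally
capping at 512 actual pages per free + cond_resched() round.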
Signed-off-by: David Hildenbrand <david@redhat.com>
---
mm/mmu_gather.c | 50 ++++++++++++++++++++++++++++++++++++++++---------
1 file changed, 41 insertions(+), 9 deletions(-)
diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index d175c0f1e2c8..2774044b5790 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -91,18 +91,19 @@ void tlb_flush_rmaps(struct mmu_gather *tlb, struct vm_area_struct *vma)
}
#endif
 
-static void tlb_batch_pages_flush(struct mmu_gather *tlb)
+static void __tlb_batch_free_encoded_pages(struct mmu_gather_batch *batch)
 {
-	struct mmu_gather_batch *batch;
-
-	for (batch = &tlb->local; batch && batch->nr; batch = batch->next) {
-		struct encoded_page **pages = batch->encoded_pages;
+	struct encoded_page **pages = batch->encoded_pages;
+	unsigned int nr, nr_pages;
 
+	/*
+	 * We might end up freeing a lot of pages. Reschedule on a regular
+	 * basis to avoid soft lockups in configurations without full
+	 * preemption enabled. The magic number of 512 folios seems to work.
+	 */
+	if (!page_poisoning_enabled_static() && !want_init_on_free()) {
 		while (batch->nr) {
-			/*
-			 * limit free batch count when PAGE_SIZE > 4K
-			 */
-			unsigned int nr = min(512U, batch->nr);
+			nr = min(512, batch->nr);
 
 			/*
 			 * Make sure we cover page + nr_pages, and don't leave
@@ -119,6 +120,37 @@ static void tlb_batch_pages_flush(struct mmu_gather *tlb)
 
 			cond_resched();
 		}
 	}
+
+	/*
+	 * With page poisoning and init_on_free, the time it takes to free
+	 * memory grows proportionally with the actual memory size. Therefore,
+	 * limit based on the actual memory size and not the number of involved
+	 * folios.
+	 */
+	while (batch->nr) {
+		for (nr = 0, nr_pages = 0;
+		     nr < batch->nr && nr_pages < 512; nr++) {
+			if (unlikely(encoded_page_flags(pages[nr]) &
+				     ENCODED_PAGE_BIT_NR_PAGES_NEXT))
+				nr_pages += encoded_nr_pages(pages[++nr]);
+			else
+				nr_pages++;
+		}
+
+		free_pages_and_swap_cache(pages, nr);
+		pages += nr;
+		batch->nr -= nr;
+
+		cond_resched();
+	}
+}
+
+static void tlb_batch_pages_flush(struct mmu_gather *tlb)
+{
+	struct mmu_gather_batch *batch;
+
+	for (batch = &tlb->local; batch && batch->nr; batch = batch->next)
+		__tlb_batch_free_encoded_pages(batch);
 	tlb->active = &tlb->local;
 }
--
2.43.0