* [PATCH v2 0/5] Optimize mmap_exit for large folios
@ 2023-08-30 9:50 Ryan Roberts
2023-08-30 9:50 ` [PATCH v2 1/5] mm: Implement folio_remove_rmap_range() Ryan Roberts
` (4 more replies)
0 siblings, 5 replies; 15+ messages in thread
From: Ryan Roberts @ 2023-08-30 9:50 UTC (permalink / raw)
To: Will Deacon, Aneesh Kumar K.V, Andrew Morton, Nick Piggin,
Peter Zijlstra, Christian Borntraeger, Sven Schnelle,
Arnd Bergmann, Matthew Wilcox (Oracle),
David Hildenbrand, Yu Zhao, Kirill A. Shutemov, Yin Fengwei,
Yang Shi, Huang, Ying, Zi Yan
Cc: Ryan Roberts, linux-mm, linux-kernel
Hi All,
This is v2 of a series to improve performance of process teardown,
taking advantage of the fact that large folios are increasingly
often pte-mapped in user space; supporting filesystems already use
large folios for pagecache memory, and large folios for anonymous memory
are (hopefully) on the horizon.
See last patch for performance numbers, including measurements that show
this approach doesn't regress (and actually improves a little bit) when
all folios are small.
The basic approach is to accumulate contiguous ranges of pages in the
mmu_gather structure (instead of storing each individual page pointer),
then take advantage of this internal format to efficiently batch rmap
removal, swapcache removal and page release - see the commit messages
for more details.
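As a purely illustrative aside (not part of the series), the
accumulation idea can be modelled in a few lines of standalone C:
consecutive pfns are run-length encoded into [start, end) pairs, the
same shape as the "pfn_range" entries introduced in patch 5. The
accumulate() helper below is hypothetical and only sketches the rule;
the real logic lives in __tlb_remove_page_size() and additionally never
lets a range span two folios.

#include <stdio.h>

/* Same shape as the [start, end) pfn pairs used by the mmu_gather batches. */
struct pfn_range {
	unsigned long start;	/* first pfn in the range */
	unsigned long end;	/* one past the last pfn in the range */
};

/* Hypothetical helper: extend the last range if pfn is contiguous with it. */
static int accumulate(struct pfn_range *ranges, int nr, unsigned long pfn)
{
	if (nr && ranges[nr - 1].end == pfn) {
		ranges[nr - 1].end++;		/* contiguous: extend */
		return nr;
	}
	ranges[nr].start = pfn;			/* discontiguous: new range */
	ranges[nr].end = pfn + 1;
	return nr + 1;
}

int main(void)
{
	unsigned long pfns[] = { 100, 101, 102, 200, 201, 300 };
	struct pfn_range ranges[6];
	int nr = 0;

	for (unsigned long i = 0; i < sizeof(pfns) / sizeof(pfns[0]); i++)
		nr = accumulate(ranges, nr, pfns[i]);

	/* Prints three ranges: [100,103) [200,202) [300,301) */
	for (int i = 0; i < nr; i++)
		printf("[%lu,%lu)\n", ranges[i].start, ranges[i].end);

	return 0;
}

Each batched operation (rmap removal, swapcache removal, refcount put)
can then operate on a whole range at once instead of page-by-page.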
This series replaces the previous approach I took at [2], which was much
smaller in scope, only attempting to batch rmap removal for anon pages.
Feedback was that I should do something more general that would also
batch-remove pagecache pages from the rmap. But while designing that, I
found it was also possible to improve swapcache removal and page
release. Hopefully I haven't gone too far the other way now! Note that
patch 1 is unchanged from that original series.
Note that this series will conflict with Matthew's series at [3]. I
figure we both race to mm-unstable and the loser has to do the conflict
resolution?
This series is based on mm-unstable (b93868dbf9bc).
Changes since v1 [1]
--------------------
- Now using pfns for start and end of page ranges within a folio.
`struct page`s may not be contiguous on some setups so using pointers
breaks these systems. (Thanks to Zi Yan).
- Fixed zone_device folio reference putting. (Thanks to Matthew and
David).
- Refactored release_pages() and folios_put_refs() so that they now
share a common implementation.
[1] https://lore.kernel.org/linux-mm/20230810103332.3062143-1-ryan.roberts@arm.com/
[2] https://lore.kernel.org/linux-mm/20230727141837.3386072-1-ryan.roberts@arm.com/
[3] https://lore.kernel.org/linux-mm/20230825135918.4164671-1-willy@infradead.org/
Thanks,
Ryan
Ryan Roberts (5):
mm: Implement folio_remove_rmap_range()
mm/mmu_gather: generalize mmu_gather rmap removal mechanism
mm/mmu_gather: Remove encoded_page infrastructure
mm: Refactor release_pages()
mm/mmu_gather: Store and process pages in contig ranges
arch/s390/include/asm/tlb.h | 9 +-
include/asm-generic/tlb.h | 49 ++++-----
include/linux/mm.h | 11 +-
include/linux/mm_types.h | 34 +-----
include/linux/rmap.h | 2 +
include/linux/swap.h | 6 +-
mm/memory.c | 24 +++--
mm/mmu_gather.c | 114 ++++++++++++++------
mm/rmap.c | 125 ++++++++++++++++------
mm/swap.c | 201 ++++++++++++++++++++++--------------
mm/swap_state.c | 11 +-
11 files changed, 367 insertions(+), 219 deletions(-)
--
2.25.1
^ permalink raw reply [flat|nested] 15+ messages in thread
* [PATCH v2 1/5] mm: Implement folio_remove_rmap_range()
2023-08-30 9:50 [PATCH v2 0/5] Optimize mmap_exit for large folios Ryan Roberts
@ 2023-08-30 9:50 ` Ryan Roberts
2023-08-30 14:51 ` Matthew Wilcox
2023-08-30 9:50 ` [PATCH v2 2/5] mm/mmu_gather: generalize mmu_gather rmap removal mechanism Ryan Roberts
` (3 subsequent siblings)
4 siblings, 1 reply; 15+ messages in thread
From: Ryan Roberts @ 2023-08-30 9:50 UTC (permalink / raw)
To: Will Deacon, Aneesh Kumar K.V, Andrew Morton, Nick Piggin,
Peter Zijlstra, Christian Borntraeger, Sven Schnelle,
Arnd Bergmann, Matthew Wilcox (Oracle),
David Hildenbrand, Yu Zhao, Kirill A. Shutemov, Yin Fengwei,
Yang Shi, Huang, Ying, Zi Yan
Cc: Ryan Roberts, linux-mm, linux-kernel
Like page_remove_rmap() but batch-removes the rmap for a range of pages
belonging to a folio. This can provide a small speedup due to less
manipulation of the various counters. But more crucially, when removing the
rmap for all pages of a folio in a batch, there is no need to
(spuriously) add it to the deferred split list, which saves significant
cost when there is contention for the split queue lock.
All contained pages are accounted using the order-0 folio (or base page)
scheme.
page_remove_rmap() is refactored so that it forwards to
folio_remove_rmap_range() for !compound cases, and both functions now
share a common epilogue function. The intention here is to avoid
duplication of code.
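As an illustrative sketch only (not part of the patch), a hypothetical
caller that has just cleared nr contiguous PTEs of a single folio, with
the PTE lock still held, might use the new API as follows; only the
surrounding caller context is invented, the two functions and their
signatures are as in the hunks below:

	/*
	 * Hypothetical caller: "page" is the first of "nr" contiguous
	 * pages of "folio" whose PTEs were just cleared; the PTE lock
	 * is still held.
	 *
	 * Before, each page needed its own call, each adjusting the
	 * counters and potentially queueing the folio for deferred
	 * split:
	 *
	 *	for (int i = 0; i < nr; i++)
	 *		page_remove_rmap(page + i, vma, false);
	 *
	 * After, one batched call: the per-folio counters are adjusted
	 * once, and a folio that becomes fully unmapped here is never
	 * spuriously queued for deferred split.
	 */
	folio_remove_rmap_range(folio, page, nr, vma);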
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
include/linux/rmap.h | 2 +
mm/rmap.c | 125 ++++++++++++++++++++++++++++++++-----------
2 files changed, 97 insertions(+), 30 deletions(-)
diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index 51cc21ebb568..f2e5af3c0e7f 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -202,6 +202,8 @@ void folio_add_file_rmap_range(struct folio *, struct page *, unsigned int nr,
struct vm_area_struct *, bool compound);
void page_remove_rmap(struct page *, struct vm_area_struct *,
bool compound);
+void folio_remove_rmap_range(struct folio *folio, struct page *page,
+ int nr, struct vm_area_struct *vma);
void hugepage_add_anon_rmap(struct page *, struct vm_area_struct *,
unsigned long address, rmap_t flags);
diff --git a/mm/rmap.c b/mm/rmap.c
index ec7f8e6c9e48..5ea51c70ecd6 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1379,6 +1379,94 @@ void page_add_file_rmap(struct page *page, struct vm_area_struct *vma,
folio_add_file_rmap_range(folio, page, nr_pages, vma, compound);
}
+/**
+ * __remove_rmap_finish - common operations when taking down a mapping.
+ * @folio: Folio containing all pages taken down.
+ * @vma: The VM area containing the range.
+ * @compound: True if pages were taken down from PMD or false if from PTE(s).
+ * @nr_unmapped: Number of pages within folio that are now unmapped.
+ * @nr_mapped: Number of pages within folio that are still mapped.
+ */
+static void __remove_rmap_finish(struct folio *folio,
+ struct vm_area_struct *vma, bool compound,
+ int nr_unmapped, int nr_mapped)
+{
+ enum node_stat_item idx;
+
+ if (nr_unmapped) {
+ idx = folio_test_anon(folio) ? NR_ANON_MAPPED : NR_FILE_MAPPED;
+ __lruvec_stat_mod_folio(folio, idx, -nr_unmapped);
+
+ /*
+ * Queue large anon folio for deferred split if at least one
+ * page of the folio is unmapped and at least one page is still
+ * mapped.
+ */
+ if (folio_test_large(folio) &&
+ folio_test_anon(folio) && nr_mapped)
+ deferred_split_folio(folio);
+ }
+
+ /*
+ * It would be tidy to reset folio_test_anon mapping when fully
+ * unmapped, but that might overwrite a racing page_add_anon_rmap
+ * which increments mapcount after us but sets mapping before us:
+ * so leave the reset to free_pages_prepare, and remember that
+ * it's only reliable while mapped.
+ */
+
+ munlock_vma_folio(folio, vma, compound);
+}
+
+/**
+ * folio_remove_rmap_range - Take down PTE mappings from a range of pages.
+ * @folio: Folio containing all pages in range.
+ * @page: First page in range to unmap.
+ * @nr: Number of pages to unmap.
+ * @vma: The VM area containing the range.
+ *
+ * All pages in the range must belong to the same VMA & folio. They must be
+ * mapped with PTEs, not a PMD.
+ *
+ * Context: Caller holds the pte lock.
+ */
+void folio_remove_rmap_range(struct folio *folio, struct page *page,
+ int nr, struct vm_area_struct *vma)
+{
+ atomic_t *mapped = &folio->_nr_pages_mapped;
+ int nr_unmapped = 0;
+ int nr_mapped = 0;
+ bool last;
+
+ if (unlikely(folio_test_hugetlb(folio))) {
+ VM_WARN_ON_FOLIO(1, folio);
+ return;
+ }
+
+ VM_WARN_ON_ONCE(page < &folio->page ||
+ page + nr > (&folio->page + folio_nr_pages(folio)));
+
+ if (!folio_test_large(folio)) {
+ /* Is this the page's last map to be removed? */
+ last = atomic_add_negative(-1, &page->_mapcount);
+ nr_unmapped = last;
+ } else {
+ for (; nr != 0; nr--, page++) {
+ /* Is this the page's last map to be removed? */
+ last = atomic_add_negative(-1, &page->_mapcount);
+ if (last)
+ nr_unmapped++;
+ }
+
+ /* Pages still mapped if folio mapped entirely */
+ nr_mapped = atomic_sub_return_relaxed(nr_unmapped, mapped);
+ if (nr_mapped >= COMPOUND_MAPPED)
+ nr_unmapped = 0;
+ }
+
+ __remove_rmap_finish(folio, vma, false, nr_unmapped, nr_mapped);
+}
+
/**
* page_remove_rmap - take down pte mapping from a page
* @page: page to remove mapping from
@@ -1405,15 +1493,13 @@ void page_remove_rmap(struct page *page, struct vm_area_struct *vma,
return;
}
- /* Is page being unmapped by PTE? Is this its last map to be removed? */
+ /* Is page being unmapped by PTE? */
if (likely(!compound)) {
- last = atomic_add_negative(-1, &page->_mapcount);
- nr = last;
- if (last && folio_test_large(folio)) {
- nr = atomic_dec_return_relaxed(mapped);
- nr = (nr < COMPOUND_MAPPED);
- }
- } else if (folio_test_pmd_mappable(folio)) {
+ folio_remove_rmap_range(folio, page, 1, vma);
+ return;
+ }
+
+ if (folio_test_pmd_mappable(folio)) {
/* That test is redundant: it's for safety or to optimize out */
last = atomic_add_negative(-1, &folio->_entire_mapcount);
@@ -1441,29 +1527,8 @@ void page_remove_rmap(struct page *page, struct vm_area_struct *vma,
idx = NR_FILE_PMDMAPPED;
__lruvec_stat_mod_folio(folio, idx, -nr_pmdmapped);
}
- if (nr) {
- idx = folio_test_anon(folio) ? NR_ANON_MAPPED : NR_FILE_MAPPED;
- __lruvec_stat_mod_folio(folio, idx, -nr);
-
- /*
- * Queue anon THP for deferred split if at least one
- * page of the folio is unmapped and at least one page
- * is still mapped.
- */
- if (folio_test_pmd_mappable(folio) && folio_test_anon(folio))
- if (!compound || nr < nr_pmdmapped)
- deferred_split_folio(folio);
- }
-
- /*
- * It would be tidy to reset folio_test_anon mapping when fully
- * unmapped, but that might overwrite a racing page_add_anon_rmap
- * which increments mapcount after us but sets mapping before us:
- * so leave the reset to free_pages_prepare, and remember that
- * it's only reliable while mapped.
- */
- munlock_vma_folio(folio, vma, compound);
+ __remove_rmap_finish(folio, vma, compound, nr, nr_pmdmapped - nr);
}
/*
--
2.25.1
^ permalink raw reply [flat|nested] 15+ messages in thread
* [PATCH v2 2/5] mm/mmu_gather: generalize mmu_gather rmap removal mechanism
2023-08-30 9:50 [PATCH v2 0/5] Optimize mmap_exit for large folios Ryan Roberts
2023-08-30 9:50 ` [PATCH v2 1/5] mm: Implement folio_remove_rmap_range() Ryan Roberts
@ 2023-08-30 9:50 ` Ryan Roberts
2023-08-30 9:50 ` [PATCH v2 3/5] mm/mmu_gather: Remove encoded_page infrastructure Ryan Roberts
` (2 subsequent siblings)
4 siblings, 0 replies; 15+ messages in thread
From: Ryan Roberts @ 2023-08-30 9:50 UTC (permalink / raw)
To: Will Deacon, Aneesh Kumar K.V, Andrew Morton, Nick Piggin,
Peter Zijlstra, Christian Borntraeger, Sven Schnelle,
Arnd Bergmann, Matthew Wilcox (Oracle),
David Hildenbrand, Yu Zhao, Kirill A. Shutemov, Yin Fengwei,
Yang Shi, Huang, Ying, Zi Yan
Cc: Ryan Roberts, linux-mm, linux-kernel
Commit 5df397dec7c4 ("mm: delay page_remove_rmap() until after the TLB
has been flushed") added a mechanism whereby pages added to the
mmu_gather buffer could indicate whether they should also be removed
from the rmap. Then a call to the new tlb_flush_rmaps() API would
iterate through the buffer and remove each flagged page from the rmap.
This mechanism was intended for use with !PageAnon(page) pages only.
Let's generalize this rmap removal mechanism so that any type of page
can be removed from the rmap. This is done as preparation for batching
rmap removals with folio_remove_rmap_range(), whereby we will pass a
contiguous range of pages belonging to the same folio to be removed in
one shot for a performance improvement.
The mmu_gather now maintains a "pointer" that identifies the batch, and
the index within that batch, of the next page in the queue that has not
yet been removed from the rmap. tlb_discard_rmaps() resets this "pointer" to the
first empty location in the queue. Whenever tlb_flush_rmaps() is called,
every page from "pointer" to the end of the queue is removed from the
rmap. Once the mmu is flushed (tlb_flush_mmu()/tlb_finish_mmu()), any
pending rmap removals are discarded. This pointer mechanism ensures that
tlb_flush_rmaps() only has to walk the part of the queue for which rmap
removal is pending, avoiding the (potentially large) early portion of the
queue for which rmap removal has already been performed but for which
tlb invalidation/page freeing is still pending.
tlb_flush_rmaps() must always be called under the same PTL as was used
to clear the corresponding PTEs. So in practice rmap removal will be
done in a batch for each PTE table, while the tlbi/freeing can continue
to be done in much bigger batches outside the PTL. See this example
flow:
tlb_gather_mmu()
for each pte table {
with ptl held {
for each pte {
tlb_remove_tlb_entry()
__tlb_remove_page()
}
if (any removed pages require rmap after tlbi)
tlb_flush_mmu_tlbonly()
tlb_flush_rmaps()
}
if (full)
tlb_flush_mmu()
}
tlb_finish_mmu()
So this more general mechanism is no longer just for delaying rmap
removal until after tlbi, but can be used that way when required.
Note that s390 does not gather pages, but does immediate tlbi and page
freeing. In this case we continue to do the rmap removal page-by-page
without gathering them in the mmu_gather.
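To make the bookkeeping above concrete, here is a deliberately
simplified, standalone model (not part of the patch; all names are
hypothetical, and the real code spreads the queue over a list of
batches rather than a single array):

#include <stdio.h>

#define QUEUE_MAX 16

struct toy_gather {
	unsigned long queue[QUEUE_MAX];	/* gathered pfns */
	int nr;				/* number of gathered entries */
	int rmap_pend;			/* first entry still needing rmap removal */
};

/* Nothing is pending any more: point at the first empty slot. */
static void toy_discard_rmaps(struct toy_gather *tlb)
{
	tlb->rmap_pend = tlb->nr;
}

/* Walk only the tail of the queue that still needs rmap removal. */
static void toy_flush_rmaps(struct toy_gather *tlb)
{
	for (int i = tlb->rmap_pend; i < tlb->nr; i++)
		printf("remove rmap for pfn %lu\n", tlb->queue[i]);
	toy_discard_rmaps(tlb);
}

int main(void)
{
	struct toy_gather tlb = { .nr = 0, .rmap_pend = 0 };

	/* First PTE table: gather four pages, then flush their rmaps. */
	for (unsigned long pfn = 100; pfn < 104; pfn++)
		tlb.queue[tlb.nr++] = pfn;
	toy_flush_rmaps(&tlb);			/* handles 100..103 */

	/* Second PTE table: only the newly gathered pages are walked. */
	for (unsigned long pfn = 200; pfn < 202; pfn++)
		tlb.queue[tlb.nr++] = pfn;
	toy_flush_rmaps(&tlb);			/* handles 200..201 only */

	return 0;
}

The pages from the first table stay in the queue for the eventual TLB
flush and free, but are never rewalked for rmap removal.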
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
include/asm-generic/tlb.h | 34 ++++++++++++------------
mm/memory.c | 24 ++++++++++-------
mm/mmu_gather.c | 54 +++++++++++++++++++++++----------------
3 files changed, 65 insertions(+), 47 deletions(-)
diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index 129a3a759976..f339d68cf44f 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -266,25 +266,30 @@ extern bool __tlb_remove_page_size(struct mmu_gather *tlb,
#ifdef CONFIG_SMP
/*
- * This both sets 'delayed_rmap', and returns true. It would be an inline
- * function, except we define it before the 'struct mmu_gather'.
+ * For configurations that support batching the rmap removal, the removal is
+ * triggered by calling tlb_flush_rmaps(), which must be called after the pte(s)
+ * are cleared and the page has been added to the mmu_gather, and before the ptl
+ * lock that was held for clearing the pte is released.
*/
-#define tlb_delay_rmap(tlb) (((tlb)->delayed_rmap = 1), true)
+#define tlb_batch_rmap(tlb) (true)
extern void tlb_flush_rmaps(struct mmu_gather *tlb, struct vm_area_struct *vma);
+extern void tlb_discard_rmaps(struct mmu_gather *tlb);
#endif
#endif
/*
- * We have a no-op version of the rmap removal that doesn't
- * delay anything. That is used on S390, which flushes remote
- * TLBs synchronously, and on UP, which doesn't have any
- * remote TLBs to flush and is not preemptible due to this
- * all happening under the page table lock.
+ * We have a no-op version of the rmap removal that doesn't do anything. That is
+ * used on S390, which flushes remote TLBs synchronously, and on UP, which
+ * doesn't have any remote TLBs to flush and is not preemptible due to this all
+ * happening under the page table lock. Here, the caller must manage each rmap
+ * removal separately.
*/
-#ifndef tlb_delay_rmap
-#define tlb_delay_rmap(tlb) (false)
-static inline void tlb_flush_rmaps(struct mmu_gather *tlb, struct vm_area_struct *vma) { }
+#ifndef tlb_batch_rmap
+#define tlb_batch_rmap(tlb) (false)
+static inline void tlb_flush_rmaps(struct mmu_gather *tlb,
+ struct vm_area_struct *vma) { }
+static inline void tlb_discard_rmaps(struct mmu_gather *tlb) { }
#endif
/*
@@ -317,11 +322,6 @@ struct mmu_gather {
*/
unsigned int freed_tables : 1;
- /*
- * Do we have pending delayed rmap removals?
- */
- unsigned int delayed_rmap : 1;
-
/*
* at which levels have we cleared entries?
*/
@@ -343,6 +343,8 @@ struct mmu_gather {
struct mmu_gather_batch *active;
struct mmu_gather_batch local;
struct page *__pages[MMU_GATHER_BUNDLE];
+ struct mmu_gather_batch *rmap_pend;
+ unsigned int rmap_pend_first;
#ifdef CONFIG_MMU_GATHER_PAGE_SIZE
unsigned int page_size;
diff --git a/mm/memory.c b/mm/memory.c
index 12647d139a13..823c8a6813d1 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1405,6 +1405,7 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
swp_entry_t entry;
tlb_change_page_size(tlb, PAGE_SIZE);
+ tlb_discard_rmaps(tlb);
init_rss_vec(rss);
start_pte = pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
if (!pte)
@@ -1423,7 +1424,7 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
break;
if (pte_present(ptent)) {
- unsigned int delay_rmap;
+ unsigned int batch_rmap;
page = vm_normal_page(vma, addr, ptent);
if (unlikely(!should_zap_page(details, page)))
@@ -1438,12 +1439,15 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
continue;
}
- delay_rmap = 0;
+ batch_rmap = tlb_batch_rmap(tlb);
if (!PageAnon(page)) {
if (pte_dirty(ptent)) {
set_page_dirty(page);
- if (tlb_delay_rmap(tlb)) {
- delay_rmap = 1;
+ if (batch_rmap) {
+ /*
+ * Ensure tlb flush happens
+ * before rmap remove.
+ */
force_flush = 1;
}
}
@@ -1451,12 +1455,12 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
mark_page_accessed(page);
}
rss[mm_counter(page)]--;
- if (!delay_rmap) {
+ if (!batch_rmap) {
page_remove_rmap(page, vma, false);
if (unlikely(page_mapcount(page) < 0))
print_bad_pte(vma, addr, ptent, page);
}
- if (unlikely(__tlb_remove_page(tlb, page, delay_rmap))) {
+ if (unlikely(__tlb_remove_page(tlb, page, 0))) {
force_flush = 1;
addr += PAGE_SIZE;
break;
@@ -1517,10 +1521,12 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
arch_leave_lazy_mmu_mode();
/* Do the actual TLB flush before dropping ptl */
- if (force_flush) {
+ if (force_flush)
tlb_flush_mmu_tlbonly(tlb);
- tlb_flush_rmaps(tlb, vma);
- }
+
+ /* Rmap removal must always happen before dropping ptl */
+ tlb_flush_rmaps(tlb, vma);
+
pte_unmap_unlock(start_pte, ptl);
/*
diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index 4f559f4ddd21..fb34151c0da9 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -19,10 +19,6 @@ static bool tlb_next_batch(struct mmu_gather *tlb)
{
struct mmu_gather_batch *batch;
- /* Limit batching if we have delayed rmaps pending */
- if (tlb->delayed_rmap && tlb->active != &tlb->local)
- return false;
-
batch = tlb->active;
if (batch->next) {
tlb->active = batch->next;
@@ -48,37 +44,49 @@ static bool tlb_next_batch(struct mmu_gather *tlb)
}
#ifdef CONFIG_SMP
-static void tlb_flush_rmap_batch(struct mmu_gather_batch *batch, struct vm_area_struct *vma)
+static void tlb_flush_rmap_batch(struct mmu_gather_batch *batch,
+ unsigned int first,
+ struct vm_area_struct *vma)
{
- for (int i = 0; i < batch->nr; i++) {
+ for (int i = first; i < batch->nr; i++) {
struct encoded_page *enc = batch->encoded_pages[i];
+ struct page *page = encoded_page_ptr(enc);
- if (encoded_page_flags(enc)) {
- struct page *page = encoded_page_ptr(enc);
- page_remove_rmap(page, vma, false);
- }
+ page_remove_rmap(page, vma, false);
}
}
/**
- * tlb_flush_rmaps - do pending rmap removals after we have flushed the TLB
+ * tlb_flush_rmaps - do pending rmap removals
* @tlb: the current mmu_gather
* @vma: The memory area from which the pages are being removed.
*
- * Note that because of how tlb_next_batch() above works, we will
- * never start multiple new batches with pending delayed rmaps, so
- * we only need to walk through the current active batch and the
- * original local one.
+ * Removes rmap from all pages added via (e.g.) __tlb_remove_page_size() since
+ * the last call to tlb_discard_rmaps() or tlb_flush_rmaps(). All of those pages
+ * must have been mapped by vma. Must be called after the pte(s) are cleared,
+ * and before the ptl lock that was held for clearing the pte is released. Pages
+ * are accounted using the order-0 folio (or base page) scheme.
*/
void tlb_flush_rmaps(struct mmu_gather *tlb, struct vm_area_struct *vma)
{
- if (!tlb->delayed_rmap)
- return;
+ struct mmu_gather_batch *batch = tlb->rmap_pend;
- tlb_flush_rmap_batch(&tlb->local, vma);
- if (tlb->active != &tlb->local)
- tlb_flush_rmap_batch(tlb->active, vma);
- tlb->delayed_rmap = 0;
+ tlb_flush_rmap_batch(batch, tlb->rmap_pend_first, vma);
+
+ for (batch = batch->next; batch && batch->nr; batch = batch->next)
+ tlb_flush_rmap_batch(batch, 0, vma);
+
+ tlb_discard_rmaps(tlb);
+}
+
+/**
+ * tlb_discard_rmaps - discard any pending rmap removals
+ * @tlb: the current mmu_gather
+ */
+void tlb_discard_rmaps(struct mmu_gather *tlb)
+{
+ tlb->rmap_pend = tlb->active;
+ tlb->rmap_pend_first = tlb->active->nr;
}
#endif
@@ -103,6 +111,7 @@ static void tlb_batch_pages_flush(struct mmu_gather *tlb)
} while (batch->nr);
}
tlb->active = &tlb->local;
+ tlb_discard_rmaps(tlb);
}
static void tlb_batch_list_free(struct mmu_gather *tlb)
@@ -313,8 +322,9 @@ static void __tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
tlb->local.max = ARRAY_SIZE(tlb->__pages);
tlb->active = &tlb->local;
tlb->batch_count = 0;
+ tlb->rmap_pend = &tlb->local;
+ tlb->rmap_pend_first = 0;
#endif
- tlb->delayed_rmap = 0;
tlb_table_init(tlb);
#ifdef CONFIG_MMU_GATHER_PAGE_SIZE
--
2.25.1
^ permalink raw reply [flat|nested] 15+ messages in thread
* [PATCH v2 3/5] mm/mmu_gather: Remove encoded_page infrastructure
2023-08-30 9:50 [PATCH v2 0/5] Optimize mmap_exit for large folios Ryan Roberts
2023-08-30 9:50 ` [PATCH v2 1/5] mm: Implement folio_remove_rmap_range() Ryan Roberts
2023-08-30 9:50 ` [PATCH v2 2/5] mm/mmu_gather: generalize mmu_gather rmap removal mechanism Ryan Roberts
@ 2023-08-30 9:50 ` Ryan Roberts
2023-08-30 9:50 ` [PATCH v2 4/5] mm: Refactor release_pages() Ryan Roberts
2023-08-30 9:50 ` [PATCH v2 5/5] mm/mmu_gather: Store and process pages in contig ranges Ryan Roberts
4 siblings, 0 replies; 15+ messages in thread
From: Ryan Roberts @ 2023-08-30 9:50 UTC (permalink / raw)
To: Will Deacon, Aneesh Kumar K.V, Andrew Morton, Nick Piggin,
Peter Zijlstra, Christian Borntraeger, Sven Schnelle,
Arnd Bergmann, Matthew Wilcox (Oracle),
David Hildenbrand, Yu Zhao, Kirill A. Shutemov, Yin Fengwei,
Yang Shi, Huang, Ying, Zi Yan
Cc: Ryan Roberts, linux-mm, linux-kernel
commit 70fb4fdff582 ("mm: introduce 'encoded' page pointers with
embedded extra bits") and commit 7cc8f9c7146a ("mm: mmu_gather: prepare
to gather encoded page pointers with flags") converted mmu_gather to
deal with encoded_page, where the bottom 2 bits could encode extra
flags. Only 1 bit was ever used: to flag whether the page should
participate in delayed rmap removal.
Now that the mmu_gather batched rmap removal mechanism has been
generalized, all pages participate and therefore the flag is unused. So
let's remove encoded_page to simplify the code. It also gets in the way
of further optimization, which will be done in a follow-up patch.
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
arch/s390/include/asm/tlb.h | 9 +++------
include/asm-generic/tlb.h | 10 +++++-----
include/linux/mm.h | 4 +---
include/linux/mm_types.h | 34 +---------------------------------
include/linux/swap.h | 2 +-
mm/memory.c | 2 +-
mm/mmu_gather.c | 11 +++++------
mm/swap.c | 8 +++-----
mm/swap_state.c | 4 ++--
9 files changed, 22 insertions(+), 62 deletions(-)
diff --git a/arch/s390/include/asm/tlb.h b/arch/s390/include/asm/tlb.h
index 383b1f91442c..c40b44f6a31b 100644
--- a/arch/s390/include/asm/tlb.h
+++ b/arch/s390/include/asm/tlb.h
@@ -25,7 +25,7 @@
void __tlb_remove_table(void *_table);
static inline void tlb_flush(struct mmu_gather *tlb);
static inline bool __tlb_remove_page_size(struct mmu_gather *tlb,
- struct encoded_page *page,
+ struct page *page,
int page_size);
#define tlb_flush tlb_flush
@@ -41,15 +41,12 @@ static inline bool __tlb_remove_page_size(struct mmu_gather *tlb,
* Release the page cache reference for a pte removed by
* tlb_ptep_clear_flush. In both flush modes the tlb for a page cache page
* has already been freed, so just do free_page_and_swap_cache.
- *
- * s390 doesn't delay rmap removal, so there is nothing encoded in
- * the page pointer.
*/
static inline bool __tlb_remove_page_size(struct mmu_gather *tlb,
- struct encoded_page *page,
+ struct page *page,
int page_size)
{
- free_page_and_swap_cache(encoded_page_ptr(page));
+ free_page_and_swap_cache(page);
return false;
}
diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index f339d68cf44f..d874415aaa33 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -246,7 +246,7 @@ struct mmu_gather_batch {
struct mmu_gather_batch *next;
unsigned int nr;
unsigned int max;
- struct encoded_page *encoded_pages[];
+ struct page *pages[];
};
#define MAX_GATHER_BATCH \
@@ -261,7 +261,7 @@ struct mmu_gather_batch {
#define MAX_GATHER_BATCH_COUNT (10000UL/MAX_GATHER_BATCH)
extern bool __tlb_remove_page_size(struct mmu_gather *tlb,
- struct encoded_page *page,
+ struct page *page,
int page_size);
#ifdef CONFIG_SMP
@@ -464,13 +464,13 @@ static inline void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
static inline void tlb_remove_page_size(struct mmu_gather *tlb,
struct page *page, int page_size)
{
- if (__tlb_remove_page_size(tlb, encode_page(page, 0), page_size))
+ if (__tlb_remove_page_size(tlb, page, page_size))
tlb_flush_mmu(tlb);
}
-static __always_inline bool __tlb_remove_page(struct mmu_gather *tlb, struct page *page, unsigned int flags)
+static __always_inline bool __tlb_remove_page(struct mmu_gather *tlb, struct page *page)
{
- return __tlb_remove_page_size(tlb, encode_page(page, flags), PAGE_SIZE);
+ return __tlb_remove_page_size(tlb, page, PAGE_SIZE);
}
/* tlb_remove_page
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 53efddc4d178..9cd20a38089c 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1481,8 +1481,7 @@ static inline void folio_put_refs(struct folio *folio, int refs)
*
* release_pages() releases a simple array of multiple pages, and
* accepts various different forms of said page array: either
- * a regular old boring array of pages, an array of folios, or
- * an array of encoded page pointers.
+ * a regular old boring array of pages or an array of folios.
*
* The transparent union syntax for this kind of "any of these
* argument types" is all kinds of ugly, so look away.
@@ -1490,7 +1489,6 @@ static inline void folio_put_refs(struct folio *folio, int refs)
typedef union {
struct page **pages;
struct folio **folios;
- struct encoded_page **encoded_pages;
} release_pages_arg __attribute__ ((__transparent_union__));
void release_pages(release_pages_arg, int nr);
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 5a995089cbf5..e9a0daf0c8d4 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -68,7 +68,7 @@ struct mem_cgroup;
#ifdef CONFIG_HAVE_ALIGNED_STRUCT_PAGE
#define _struct_page_alignment __aligned(2 * sizeof(unsigned long))
#else
-#define _struct_page_alignment __aligned(sizeof(unsigned long))
+#define _struct_page_alignment
#endif
struct page {
@@ -216,38 +216,6 @@ struct page {
#endif
} _struct_page_alignment;
-/*
- * struct encoded_page - a nonexistent type marking this pointer
- *
- * An 'encoded_page' pointer is a pointer to a regular 'struct page', but
- * with the low bits of the pointer indicating extra context-dependent
- * information. Not super-common, but happens in mmu_gather and mlock
- * handling, and this acts as a type system check on that use.
- *
- * We only really have two guaranteed bits in general, although you could
- * play with 'struct page' alignment (see CONFIG_HAVE_ALIGNED_STRUCT_PAGE)
- * for more.
- *
- * Use the supplied helper functions to endcode/decode the pointer and bits.
- */
-struct encoded_page;
-#define ENCODE_PAGE_BITS 3ul
-static __always_inline struct encoded_page *encode_page(struct page *page, unsigned long flags)
-{
- BUILD_BUG_ON(flags > ENCODE_PAGE_BITS);
- return (struct encoded_page *)(flags | (unsigned long)page);
-}
-
-static inline unsigned long encoded_page_flags(struct encoded_page *page)
-{
- return ENCODE_PAGE_BITS & (unsigned long)page;
-}
-
-static inline struct page *encoded_page_ptr(struct encoded_page *page)
-{
- return (struct page *)(~ENCODE_PAGE_BITS & (unsigned long)page);
-}
-
/*
* A swap entry has to fit into a "unsigned long", as the entry is hidden
* in the "index" field of the swapper address space.
diff --git a/include/linux/swap.h b/include/linux/swap.h
index 493487ed7c38..9e12c6d49997 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -451,7 +451,7 @@ static inline unsigned long total_swapcache_pages(void)
extern void free_swap_cache(struct page *page);
extern void free_page_and_swap_cache(struct page *);
-extern void free_pages_and_swap_cache(struct encoded_page **, int);
+extern void free_pages_and_swap_cache(struct page **, int);
/* linux/mm/swapfile.c */
extern atomic_long_t nr_swap_pages;
extern long total_swap_pages;
diff --git a/mm/memory.c b/mm/memory.c
index 823c8a6813d1..3d5d395caba4 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1460,7 +1460,7 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
if (unlikely(page_mapcount(page) < 0))
print_bad_pte(vma, addr, ptent, page);
}
- if (unlikely(__tlb_remove_page(tlb, page, 0))) {
+ if (unlikely(__tlb_remove_page(tlb, page))) {
force_flush = 1;
addr += PAGE_SIZE;
break;
diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index fb34151c0da9..cdebb5b9f5c4 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -49,8 +49,7 @@ static void tlb_flush_rmap_batch(struct mmu_gather_batch *batch,
struct vm_area_struct *vma)
{
for (int i = first; i < batch->nr; i++) {
- struct encoded_page *enc = batch->encoded_pages[i];
- struct page *page = encoded_page_ptr(enc);
+ struct page *page = batch->pages[i];
page_remove_rmap(page, vma, false);
}
@@ -95,7 +94,7 @@ static void tlb_batch_pages_flush(struct mmu_gather *tlb)
struct mmu_gather_batch *batch;
for (batch = &tlb->local; batch && batch->nr; batch = batch->next) {
- struct encoded_page **pages = batch->encoded_pages;
+ struct page **pages = batch->pages;
do {
/*
@@ -125,7 +124,7 @@ static void tlb_batch_list_free(struct mmu_gather *tlb)
tlb->local.next = NULL;
}
-bool __tlb_remove_page_size(struct mmu_gather *tlb, struct encoded_page *page, int page_size)
+bool __tlb_remove_page_size(struct mmu_gather *tlb, struct page *page, int page_size)
{
struct mmu_gather_batch *batch;
@@ -140,13 +139,13 @@ bool __tlb_remove_page_size(struct mmu_gather *tlb, struct encoded_page *page, i
* Add the page and check if we are full. If so
* force a flush.
*/
- batch->encoded_pages[batch->nr++] = page;
+ batch->pages[batch->nr++] = page;
if (batch->nr == batch->max) {
if (!tlb_next_batch(tlb))
return true;
batch = tlb->active;
}
- VM_BUG_ON_PAGE(batch->nr > batch->max, encoded_page_ptr(page));
+ VM_BUG_ON_PAGE(batch->nr > batch->max, page);
return false;
}
diff --git a/mm/swap.c b/mm/swap.c
index cd8f0150ba3a..b05cce475202 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -953,14 +953,12 @@ void lru_cache_disable(void)
* Decrement the reference count on all the pages in @arg. If it
* fell to zero, remove the page from the LRU and free it.
*
- * Note that the argument can be an array of pages, encoded pages,
- * or folio pointers. We ignore any encoded bits, and turn any of
- * them into just a folio that gets free'd.
+ * Note that the argument can be an array of pages or folio pointers.
*/
void release_pages(release_pages_arg arg, int nr)
{
int i;
- struct encoded_page **encoded = arg.encoded_pages;
+ struct page **pages = arg.pages;
LIST_HEAD(pages_to_free);
struct lruvec *lruvec = NULL;
unsigned long flags = 0;
@@ -970,7 +968,7 @@ void release_pages(release_pages_arg arg, int nr)
struct folio *folio;
/* Turn any of the argument types into a folio */
- folio = page_folio(encoded_page_ptr(encoded[i]));
+ folio = page_folio(pages[i]);
/*
* Make sure the IRQ-safe lock-holding time does not get
diff --git a/mm/swap_state.c b/mm/swap_state.c
index b3b14bd0dd64..2132340c6e61 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -307,11 +307,11 @@ void free_page_and_swap_cache(struct page *page)
* Passed an array of pages, drop them all from swapcache and then release
* them. They are removed from the LRU and freed if this is their last use.
*/
-void free_pages_and_swap_cache(struct encoded_page **pages, int nr)
+void free_pages_and_swap_cache(struct page **pages, int nr)
{
lru_add_drain();
for (int i = 0; i < nr; i++)
- free_swap_cache(encoded_page_ptr(pages[i]));
+ free_swap_cache(pages[i]);
release_pages(pages, nr);
}
--
2.25.1
^ permalink raw reply [flat|nested] 15+ messages in thread
* [PATCH v2 4/5] mm: Refactor release_pages()
2023-08-30 9:50 [PATCH v2 0/5] Optimize mmap_exit for large folios Ryan Roberts
` (2 preceding siblings ...)
2023-08-30 9:50 ` [PATCH v2 3/5] mm/mmu_gather: Remove encoded_page infrastructure Ryan Roberts
@ 2023-08-30 9:50 ` Ryan Roberts
2023-08-30 19:11 ` Matthew Wilcox
2023-08-30 9:50 ` [PATCH v2 5/5] mm/mmu_gather: Store and process pages in contig ranges Ryan Roberts
4 siblings, 1 reply; 15+ messages in thread
From: Ryan Roberts @ 2023-08-30 9:50 UTC (permalink / raw)
To: Will Deacon, Aneesh Kumar K.V, Andrew Morton, Nick Piggin,
Peter Zijlstra, Christian Borntraeger, Sven Schnelle,
Arnd Bergmann, Matthew Wilcox (Oracle),
David Hildenbrand, Yu Zhao, Kirill A. Shutemov, Yin Fengwei,
Yang Shi, Huang, Ying, Zi Yan
Cc: Ryan Roberts, linux-mm, linux-kernel
In preparation for implementing folios_put_refs() in the next patch,
refactor release_pages() into a set of helper functions, which can be
reused. The primary difference between release_pages() and
folios_put_refs() is how they iterate over the set of folios. The
per-folio actions are identical.
No functional change intended.
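For orientation only (this just restates the shape of the refactor
using the helpers introduced in the diff below), both callers end up
following the same init / per-folio / complete pattern:

	struct folios_put_refs_ctx ctx;

	__folios_put_refs_init(&ctx);
	for (i = 0; i < nr; i++)
		__folios_put_refs_do_one(&ctx, page_folio(pages[i]), 1);
	__folios_put_refs_complete(&ctx);

release_pages() iterates a page array and drops a single reference per
entry, as above; the folios_put_refs() added in the next patch iterates
pfn ranges and drops one reference per pfn in each range.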
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
mm/swap.c | 167 +++++++++++++++++++++++++++++++-----------------------
1 file changed, 97 insertions(+), 70 deletions(-)
diff --git a/mm/swap.c b/mm/swap.c
index b05cce475202..5d3e35668929 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -945,6 +945,98 @@ void lru_cache_disable(void)
#endif
}
+struct folios_put_refs_ctx {
+ struct list_head pages_to_free;
+ struct lruvec *lruvec;
+ unsigned long flags;
+ unsigned int lock_batch;
+};
+
+static void __folios_put_refs_init(struct folios_put_refs_ctx *ctx)
+{
+ *ctx = (struct folios_put_refs_ctx) {
+ .pages_to_free = LIST_HEAD_INIT(ctx->pages_to_free),
+ .lruvec = NULL,
+ .flags = 0,
+ };
+}
+
+static void __folios_put_refs_complete(struct folios_put_refs_ctx *ctx)
+{
+ if (ctx->lruvec)
+ unlock_page_lruvec_irqrestore(ctx->lruvec, ctx->flags);
+
+ mem_cgroup_uncharge_list(&ctx->pages_to_free);
+ free_unref_page_list(&ctx->pages_to_free);
+}
+
+static void __folios_put_refs_do_one(struct folios_put_refs_ctx *ctx,
+ struct folio *folio, int refs)
+{
+ /*
+ * Make sure the IRQ-safe lock-holding time does not get
+ * excessive with a continuous string of pages from the
+ * same lruvec. The lock is held only if lruvec != NULL.
+ */
+ if (ctx->lruvec && ++ctx->lock_batch == SWAP_CLUSTER_MAX) {
+ unlock_page_lruvec_irqrestore(ctx->lruvec, ctx->flags);
+ ctx->lruvec = NULL;
+ }
+
+ if (is_huge_zero_page(&folio->page))
+ return;
+
+ if (folio_is_zone_device(folio)) {
+ if (ctx->lruvec) {
+ unlock_page_lruvec_irqrestore(ctx->lruvec, ctx->flags);
+ ctx->lruvec = NULL;
+ }
+ if (put_devmap_managed_page_refs(&folio->page, refs))
+ return;
+ if (folio_ref_sub_and_test(folio, refs))
+ free_zone_device_page(&folio->page);
+ return;
+ }
+
+ if (!folio_ref_sub_and_test(folio, refs))
+ return;
+
+ if (folio_test_large(folio)) {
+ if (ctx->lruvec) {
+ unlock_page_lruvec_irqrestore(ctx->lruvec, ctx->flags);
+ ctx->lruvec = NULL;
+ }
+ __folio_put_large(folio);
+ return;
+ }
+
+ if (folio_test_lru(folio)) {
+ struct lruvec *prev_lruvec = ctx->lruvec;
+
+ ctx->lruvec = folio_lruvec_relock_irqsave(folio, ctx->lruvec,
+ &ctx->flags);
+ if (prev_lruvec != ctx->lruvec)
+ ctx->lock_batch = 0;
+
+ lruvec_del_folio(ctx->lruvec, folio);
+ __folio_clear_lru_flags(folio);
+ }
+
+ /*
+ * In rare cases, when truncation or holepunching raced with
+ * munlock after VM_LOCKED was cleared, Mlocked may still be
+ * found set here. This does not indicate a problem, unless
+ * "unevictable_pgs_cleared" appears worryingly large.
+ */
+ if (unlikely(folio_test_mlocked(folio))) {
+ __folio_clear_mlocked(folio);
+ zone_stat_sub_folio(folio, NR_MLOCK);
+ count_vm_event(UNEVICTABLE_PGCLEARED);
+ }
+
+ list_add(&folio->lru, &ctx->pages_to_free);
+}
+
/**
* release_pages - batched put_page()
* @arg: array of pages to release
@@ -959,10 +1051,9 @@ void release_pages(release_pages_arg arg, int nr)
{
int i;
struct page **pages = arg.pages;
- LIST_HEAD(pages_to_free);
- struct lruvec *lruvec = NULL;
- unsigned long flags = 0;
- unsigned int lock_batch;
+ struct folios_put_refs_ctx ctx;
+
+ __folios_put_refs_init(&ctx);
for (i = 0; i < nr; i++) {
struct folio *folio;
@@ -970,74 +1061,10 @@ void release_pages(release_pages_arg arg, int nr)
/* Turn any of the argument types into a folio */
folio = page_folio(pages[i]);
- /*
- * Make sure the IRQ-safe lock-holding time does not get
- * excessive with a continuous string of pages from the
- * same lruvec. The lock is held only if lruvec != NULL.
- */
- if (lruvec && ++lock_batch == SWAP_CLUSTER_MAX) {
- unlock_page_lruvec_irqrestore(lruvec, flags);
- lruvec = NULL;
- }
-
- if (is_huge_zero_page(&folio->page))
- continue;
-
- if (folio_is_zone_device(folio)) {
- if (lruvec) {
- unlock_page_lruvec_irqrestore(lruvec, flags);
- lruvec = NULL;
- }
- if (put_devmap_managed_page(&folio->page))
- continue;
- if (folio_put_testzero(folio))
- free_zone_device_page(&folio->page);
- continue;
- }
-
- if (!folio_put_testzero(folio))
- continue;
-
- if (folio_test_large(folio)) {
- if (lruvec) {
- unlock_page_lruvec_irqrestore(lruvec, flags);
- lruvec = NULL;
- }
- __folio_put_large(folio);
- continue;
- }
-
- if (folio_test_lru(folio)) {
- struct lruvec *prev_lruvec = lruvec;
-
- lruvec = folio_lruvec_relock_irqsave(folio, lruvec,
- &flags);
- if (prev_lruvec != lruvec)
- lock_batch = 0;
-
- lruvec_del_folio(lruvec, folio);
- __folio_clear_lru_flags(folio);
- }
-
- /*
- * In rare cases, when truncation or holepunching raced with
- * munlock after VM_LOCKED was cleared, Mlocked may still be
- * found set here. This does not indicate a problem, unless
- * "unevictable_pgs_cleared" appears worryingly large.
- */
- if (unlikely(folio_test_mlocked(folio))) {
- __folio_clear_mlocked(folio);
- zone_stat_sub_folio(folio, NR_MLOCK);
- count_vm_event(UNEVICTABLE_PGCLEARED);
- }
-
- list_add(&folio->lru, &pages_to_free);
+ __folios_put_refs_do_one(&ctx, folio, 1);
}
- if (lruvec)
- unlock_page_lruvec_irqrestore(lruvec, flags);
- mem_cgroup_uncharge_list(&pages_to_free);
- free_unref_page_list(&pages_to_free);
+ __folios_put_refs_complete(&ctx);
}
EXPORT_SYMBOL(release_pages);
--
2.25.1
^ permalink raw reply [flat|nested] 15+ messages in thread
* [PATCH v2 5/5] mm/mmu_gather: Store and process pages in contig ranges
2023-08-30 9:50 [PATCH v2 0/5] Optimize mmap_exit for large folios Ryan Roberts
` (3 preceding siblings ...)
2023-08-30 9:50 ` [PATCH v2 4/5] mm: Refactor release_pages() Ryan Roberts
@ 2023-08-30 9:50 ` Ryan Roberts
2023-08-30 15:07 ` Matthew Wilcox
4 siblings, 1 reply; 15+ messages in thread
From: Ryan Roberts @ 2023-08-30 9:50 UTC (permalink / raw)
To: Will Deacon, Aneesh Kumar K.V, Andrew Morton, Nick Piggin,
Peter Zijlstra, Christian Borntraeger, Sven Schnelle,
Arnd Bergmann, Matthew Wilcox (Oracle),
David Hildenbrand, Yu Zhao, Kirill A. Shutemov, Yin Fengwei,
Yang Shi, Huang, Ying, Zi Yan
Cc: Ryan Roberts, linux-mm, linux-kernel
mmu_gather accumulates a set of pages into a buffer for later rmap
removal and freeing. Page pointers were previously stored in a "linked
list of arrays", then at flush time, each page in the buffer was removed
from the rmap, removed from the swapcache and its refcount was
decremented; if the refcount reached 0, then it was freed.
With increasing numbers of large folios (or at least contiguous parts of
large folios) mapped into userspace processes (pagecache pages for
supporting filesystems currently, but in future also large anonymous
folios), we can measurably improve performance of process teardown:
- For rmap removal, we can batch-remove a range of pages belonging to
the same folio with folio_remove_rmap_range(), which is more efficient
because atomics can be manipulated just once per range. In the common
case, it also allows us to elide adding the (anon) folio to the
deferred split queue, only to remove it a bit later, once all pages of
the folio have been removed from the rmap.
- For swapcache removal, we only need to check and remove the folio from
the swap cache once, rather than trying for each individual page.
- For page release, we can batch-decrement the refcount for each page in
the folio and free it if it hits zero.
Change the page pointer storage format within the mmu_gather batch
structure to store "pfn_range"s: a [start, end) pfn pair. This allows us
to run-length encode a contiguous range of pages that all belong to the
same folio. This likely allows us to improve cache locality a bit. But
it also gives us a convenient format for implementing the above 3
optimizations.
Of course if running on a system that does not extensively use large
pte-mapped folios, then the RLE approach uses twice as much memory,
because each range is 1 page long and uses 2 words. But performance
measurements show no impact in terms of performance.
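Restated in isolation (illustration only: the kernel version is the
__tlb_remove_page_size() hunk below, and the helper name here is
hypothetical), the accumulation rule, including the never-span-a-folio
limit, looks like this:

#include <limits.h>

struct pfn_range {
	unsigned long start;
	unsigned long end;
};

/*
 * Add one pfn to the batch. "folio_end" is the pfn one past the folio
 * that contains "pfn"; "*range_limit" is ULONG_MAX whenever no range is
 * open, which guarantees a range never crosses a folio boundary.
 */
int toy_add_pfn(struct pfn_range *ranges, int nr, unsigned long *range_limit,
		unsigned long pfn, unsigned long folio_end)
{
	struct pfn_range *range;

	if (nr && *range_limit != ULONG_MAX && ranges[nr - 1].end == pfn) {
		range = &ranges[nr - 1];	/* contiguous, same folio */
		range->end++;
	} else {
		range = &ranges[nr++];		/* start a new range */
		range->start = pfn;
		range->end = pfn + 1;
		*range_limit = folio_end;
	}

	if (range->end == *range_limit)		/* folio fully covered */
		*range_limit = ULONG_MAX;

	return nr;
}

With order-0 folios every call takes the "start a new range" branch,
which is where the two-words-per-page worst case mentioned above comes
from.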
Macro Performance Results
-------------------------
Test: Timed kernel compilation on Ampere Altra (arm64), 80 jobs
Configs: Comparing with and without large anon folios
Without large anon folios:
| kernel | real-time | kern-time | user-time |
|:-----------------|------------:|------------:|------------:|
| baseline-laf-off | 0.0% | 0.0% | 0.0% |
| mmugather-range | -0.3% | -0.3% | 0.1% |
With large anon folios (order-3):
| kernel | real-time | kern-time | user-time |
|:-----------------|------------:|------------:|------------:|
| baseline-laf-on | 0.0% | 0.0% | 0.0% |
| mmugather-range | -0.7% | -3.9% | -0.1% |
Test: Timed kernel compilation in VM on Apple M2 MacBook Pro, 8 jobs
Configs: Comparing with and without large anon folios
Without large anon folios:
| kernel | real-time | kern-time | user-time |
|:-----------------|------------:|------------:|------------:|
| baseline-laf-off | 0.0% | 0.0% | 0.0% |
| mmugather-range | -0.9% | -2.9% | -0.6% |
With large anon folios (order-3):
| kernel | real-time | kern-time | user-time |
|:-----------------|------------:|------------:|------------:|
| baseline-laf-on | 0.0% | 0.0% | 0.0% |
| mmugather-range | -0.4% | -3.7% | -0.2% |
Micro Performance Results
-------------------------
Flame graphs for kernel compilation on Ampere Altra show reduction in
cycles consumed by __arm64_sys_exit_group syscall:
Without large anon folios: -2%
With large anon folios: -26%
For the large anon folios case, it also shows a big difference in cost
of rmap removal:
baseline: cycles in page_remove_rmap(): 24.7B
mmugather-range: cycles in folio_remove_rmap_range(): 5.5B
Furthermore, the baseline shows 5.2B cycles used by
deferred_split_folio() which has completely disappeared after
applying this series.
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
include/asm-generic/tlb.h | 7 +++--
include/linux/mm.h | 7 +++++
include/linux/swap.h | 6 ++--
mm/mmu_gather.c | 59 +++++++++++++++++++++++++++++++++------
mm/swap.c | 26 +++++++++++++++++
mm/swap_state.c | 11 ++++----
6 files changed, 96 insertions(+), 20 deletions(-)
diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index d874415aaa33..04e23cad6d1f 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -246,11 +246,11 @@ struct mmu_gather_batch {
struct mmu_gather_batch *next;
unsigned int nr;
unsigned int max;
- struct page *pages[];
+ struct pfn_range folios[];
};
#define MAX_GATHER_BATCH \
- ((PAGE_SIZE - sizeof(struct mmu_gather_batch)) / sizeof(void *))
+ ((PAGE_SIZE - sizeof(struct mmu_gather_batch)) / sizeof(struct pfn_range))
/*
* Limit the maximum number of mmu_gather batches to reduce a risk of soft
@@ -342,7 +342,8 @@ struct mmu_gather {
#ifndef CONFIG_MMU_GATHER_NO_GATHER
struct mmu_gather_batch *active;
struct mmu_gather_batch local;
- struct page *__pages[MMU_GATHER_BUNDLE];
+ struct pfn_range __folios[MMU_GATHER_BUNDLE];
+ unsigned long range_limit;
struct mmu_gather_batch *rmap_pend;
unsigned int rmap_pend_first;
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 9cd20a38089c..f62d3d8ea0e4 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1476,6 +1476,13 @@ static inline void folio_put_refs(struct folio *folio, int refs)
__folio_put(folio);
}
+struct pfn_range {
+ unsigned long start;
+ unsigned long end;
+};
+
+void folios_put_refs(struct pfn_range *folios, int nr);
+
/*
* union release_pages_arg - an array of pages or folios
*
diff --git a/include/linux/swap.h b/include/linux/swap.h
index 9e12c6d49997..cd12056a6a5f 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -451,7 +451,7 @@ static inline unsigned long total_swapcache_pages(void)
extern void free_swap_cache(struct page *page);
extern void free_page_and_swap_cache(struct page *);
-extern void free_pages_and_swap_cache(struct page **, int);
+extern void free_folios_and_swap_cache(struct pfn_range *, int);
/* linux/mm/swapfile.c */
extern atomic_long_t nr_swap_pages;
extern long total_swap_pages;
@@ -528,8 +528,8 @@ static inline void put_swap_device(struct swap_info_struct *si)
* so leave put_page and release_pages undeclared... */
#define free_page_and_swap_cache(page) \
put_page(page)
-#define free_pages_and_swap_cache(pages, nr) \
- release_pages((pages), (nr));
+#define free_folios_and_swap_cache(folios, nr) \
+ folios_put_refs((folios), (nr))
/* used to sanity check ptes in zap_pte_range when CONFIG_SWAP=0 */
#define free_swap_and_cache(e) is_pfn_swap_entry(e)
diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index cdebb5b9f5c4..c8687ce07ce0 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -22,6 +22,7 @@ static bool tlb_next_batch(struct mmu_gather *tlb)
batch = tlb->active;
if (batch->next) {
tlb->active = batch->next;
+ tlb->range_limit = ULONG_MAX;
return true;
}
@@ -39,6 +40,7 @@ static bool tlb_next_batch(struct mmu_gather *tlb)
tlb->active->next = batch;
tlb->active = batch;
+ tlb->range_limit = ULONG_MAX;
return true;
}
@@ -49,9 +51,12 @@ static void tlb_flush_rmap_batch(struct mmu_gather_batch *batch,
struct vm_area_struct *vma)
{
for (int i = first; i < batch->nr; i++) {
- struct page *page = batch->pages[i];
+ struct pfn_range *range = &batch->folios[i];
+ int nr = range->end - range->start;
+ struct page *start = pfn_to_page(range->start);
+ struct folio *folio = page_folio(start);
- page_remove_rmap(page, vma, false);
+ folio_remove_rmap_range(folio, start, nr, vma);
}
}
@@ -75,6 +80,11 @@ void tlb_flush_rmaps(struct mmu_gather *tlb, struct vm_area_struct *vma)
for (batch = batch->next; batch && batch->nr; batch = batch->next)
tlb_flush_rmap_batch(batch, 0, vma);
+ /*
+ * Move to the next range on next page insertion to prevent any future
+ * pages from being accumulated into the range we just did the rmap for.
+ */
+ tlb->range_limit = ULONG_MAX;
tlb_discard_rmaps(tlb);
}
@@ -94,7 +104,7 @@ static void tlb_batch_pages_flush(struct mmu_gather *tlb)
struct mmu_gather_batch *batch;
for (batch = &tlb->local; batch && batch->nr; batch = batch->next) {
- struct page **pages = batch->pages;
+ struct pfn_range *folios = batch->folios;
do {
/*
@@ -102,14 +112,15 @@ static void tlb_batch_pages_flush(struct mmu_gather *tlb)
*/
unsigned int nr = min(512U, batch->nr);
- free_pages_and_swap_cache(pages, nr);
- pages += nr;
+ free_folios_and_swap_cache(folios, nr);
+ folios += nr;
batch->nr -= nr;
cond_resched();
} while (batch->nr);
}
tlb->active = &tlb->local;
+ tlb->range_limit = ULONG_MAX;
tlb_discard_rmaps(tlb);
}
@@ -127,6 +138,8 @@ static void tlb_batch_list_free(struct mmu_gather *tlb)
bool __tlb_remove_page_size(struct mmu_gather *tlb, struct page *page, int page_size)
{
struct mmu_gather_batch *batch;
+ struct pfn_range *range;
+ unsigned long pfn = page_to_pfn(page);
VM_BUG_ON(!tlb->end);
@@ -135,11 +148,38 @@ bool __tlb_remove_page_size(struct mmu_gather *tlb, struct page *page, int page_
#endif
batch = tlb->active;
+ range = &batch->folios[batch->nr - 1];
+
+ /*
+ * If there is a range being accumulated, add the pfn to the range if
+ * it's contiguous, else start the next range. range_limit is always
+ * ULONG_MAX when nr is 0, which protects the batch->folios[-1] case.
+ */
+ if (tlb->range_limit != ULONG_MAX && pfn == range->end) {
+ range->end++;
+ } else {
+ struct folio *folio = page_folio(page);
+
+ range = &batch->folios[batch->nr++];
+ range->start = pfn;
+ range->end = pfn + 1;
+
+ tlb->range_limit = page_to_pfn(&folio->page) +
+ folio_nr_pages(folio);
+ }
+
+ /*
+ * If we have reached the end of the folio, move to the next range when
+ * we add the next page; Never span multiple folios in the same range.
+ */
+ if (range->end == tlb->range_limit)
+ tlb->range_limit = ULONG_MAX;
+
/*
- * Add the page and check if we are full. If so
- * force a flush.
+ * Check if we are full. If so force a flush. In order to ensure we
+ * always have a free range for the next added page, the last range in a
+ * batch always only has a single page.
*/
- batch->pages[batch->nr++] = page;
if (batch->nr == batch->max) {
if (!tlb_next_batch(tlb))
return true;
@@ -318,8 +358,9 @@ static void __tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
tlb->need_flush_all = 0;
tlb->local.next = NULL;
tlb->local.nr = 0;
- tlb->local.max = ARRAY_SIZE(tlb->__pages);
+ tlb->local.max = ARRAY_SIZE(tlb->__folios);
tlb->active = &tlb->local;
+ tlb->range_limit = ULONG_MAX;
tlb->batch_count = 0;
tlb->rmap_pend = &tlb->local;
tlb->rmap_pend_first = 0;
diff --git a/mm/swap.c b/mm/swap.c
index 5d3e35668929..d777f2b47674 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -1037,6 +1037,32 @@ static void __folios_put_refs_do_one(struct folios_put_refs_ctx *ctx,
list_add(&folio->lru, &ctx->pages_to_free);
}
+/**
+ * folios_put_refs - batched folio_put_refs()
+ * @folios: array of `struct pfn_range`s to release
+ * @nr: number of folio ranges
+ *
+ * Each `struct pfn_range` describes the start and end pfn of a range within a
+ * single folio. The folio reference count is decremented once for each pfn in
+ * the range. If it fell to zero, remove the page from the LRU and free it.
+ */
+void folios_put_refs(struct pfn_range *folios, int nr)
+{
+ int i;
+ struct folios_put_refs_ctx ctx;
+
+ __folios_put_refs_init(&ctx);
+
+ for (i = 0; i < nr; i++) {
+ struct folio *folio = pfn_folio(folios[i].start);
+ int refs = folios[i].end - folios[i].start;
+
+ __folios_put_refs_do_one(&ctx, folio, refs);
+ }
+
+ __folios_put_refs_complete(&ctx);
+}
+
/**
* release_pages - batched put_page()
* @arg: array of pages to release
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 2132340c6e61..900e57bf9882 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -304,15 +304,16 @@ void free_page_and_swap_cache(struct page *page)
}
/*
- * Passed an array of pages, drop them all from swapcache and then release
- * them. They are removed from the LRU and freed if this is their last use.
+ * Passed an array of folio ranges, drop all folios from swapcache and then put
+ * a folio reference for each page in the range. They are removed from the LRU
+ * and freed if this is their last use.
*/
-void free_pages_and_swap_cache(struct page **pages, int nr)
+void free_folios_and_swap_cache(struct pfn_range *folios, int nr)
{
lru_add_drain();
for (int i = 0; i < nr; i++)
- free_swap_cache(pages[i]);
- release_pages(pages, nr);
+ free_swap_cache(pfn_to_page(folios[i].start));
+ folios_put_refs(folios, nr);
}
static inline bool swap_use_vma_readahead(void)
--
2.25.1
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH v2 1/5] mm: Implement folio_remove_rmap_range()
2023-08-30 9:50 ` [PATCH v2 1/5] mm: Implement folio_remove_rmap_range() Ryan Roberts
@ 2023-08-30 14:51 ` Matthew Wilcox
2023-08-30 15:42 ` Ryan Roberts
0 siblings, 1 reply; 15+ messages in thread
From: Matthew Wilcox @ 2023-08-30 14:51 UTC (permalink / raw)
To: Ryan Roberts
Cc: Will Deacon, Aneesh Kumar K.V, Andrew Morton, Nick Piggin,
Peter Zijlstra, Christian Borntraeger, Sven Schnelle,
Arnd Bergmann, David Hildenbrand, Yu Zhao, Kirill A. Shutemov,
Yin Fengwei, Yang Shi, Huang, Ying, Zi Yan, linux-mm,
linux-kernel
On Wed, Aug 30, 2023 at 10:50:07AM +0100, Ryan Roberts wrote:
> Like page_remove_rmap() but batch-removes the rmap for a range of pages
> belonging to a folio. This can provide a small speedup due to less
> manipulation of the various counters. But more crucially, when removing the
> rmap for all pages of a folio in a batch, there is no need to
> (spuriously) add it to the deferred split list, which saves significant
> cost when there is contention for the split queue lock.
>
> All contained pages are accounted using the order-0 folio (or base page)
> scheme.
>
> page_remove_rmap() is refactored so that it forwards to
> folio_remove_rmap_range() for !compound cases, and both functions now
> share a common epilogue function. The intention here is to avoid
> duplication of code.
What would you think of doing it like this instead? This probably doesn't
even compile and it's definitely not sanity checked; just trying to get
across an idea of the shape of this code. I think this is more like
what DavidH was asking for (but he's on holiday this week so won't be
able to confirm).
diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index a3825ce81102..d442d1e5425d 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -202,6 +202,8 @@ void folio_add_file_rmap_range(struct folio *, struct page *, unsigned int nr,
struct vm_area_struct *, bool compound);
void page_remove_rmap(struct page *, struct vm_area_struct *,
bool compound);
+void folio_remove_rmap_range(struct folio *folio, struct page *page,
+ int nr, struct vm_area_struct *vma);
void hugepage_add_anon_rmap(struct page *, struct vm_area_struct *,
unsigned long address, rmap_t flags);
diff --git a/mm/rmap.c b/mm/rmap.c
index ec7f8e6c9e48..2592be47452e 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1380,24 +1380,26 @@ void page_add_file_rmap(struct page *page, struct vm_area_struct *vma,
}
/**
- * page_remove_rmap - take down pte mapping from a page
- * @page: page to remove mapping from
- * @vma: the vm area from which the mapping is removed
- * @compound: uncharge the page as compound or small page
+ * folio_remove_rmap_range - Take down PTE mappings from a range of pages.
+ * @folio: Folio containing all pages in range.
+ * @page: First page in range to unmap.
+ * @nr: Number of pages to unmap. -1 to unmap a PMD.
+ * @vma: The VM area containing the range.
*
- * The caller needs to hold the pte lock.
+ * All pages in the range must belong to the same VMA & folio.
+ *
+ * Context: Caller holds the pte lock.
*/
-void page_remove_rmap(struct page *page, struct vm_area_struct *vma,
- bool compound)
+void folio_remove_rmap_range(struct folio *folio, struct page *page,
+ int pages, struct vm_area_struct *vma)
{
- struct folio *folio = page_folio(page);
atomic_t *mapped = &folio->_nr_pages_mapped;
+ int nr_unmapped = 0;
+ int nr_mapped = 0;
int nr = 0, nr_pmdmapped = 0;
bool last;
enum node_stat_item idx;
- VM_BUG_ON_PAGE(compound && !PageHead(page), page);
-
/* Hugetlb pages are not counted in NR_*MAPPED */
if (unlikely(folio_test_hugetlb(folio))) {
/* hugetlb pages are always mapped with pmds */
@@ -1405,14 +1407,25 @@ void page_remove_rmap(struct page *page, struct vm_area_struct *vma,
return;
}
- /* Is page being unmapped by PTE? Is this its last map to be removed? */
- if (likely(!compound)) {
- last = atomic_add_negative(-1, &page->_mapcount);
- nr = last;
- if (last && folio_test_large(folio)) {
- nr = atomic_dec_return_relaxed(mapped);
- nr = (nr < COMPOUND_MAPPED);
+ /* Are we taking down a PMD mapping? */
+ if (likely(pages > 0)) {
+ VM_WARN_ON_ONCE(page < folio_page(folio, 0) ||
+ page + pages > folio_page(folio,
+ folio_nr_pages(folio)));
+ while (pages) {
+ /* Is this the page's last map to be removed? */
+ last = atomic_add_negative(-1, &page->_mapcount);
+ if (last)
+ nr_unmapped++;
+ pages--;
+ page++;
}
+
+ /* Pages still mapped if folio mapped entirely */
+ nr_mapped = atomic_sub_return_relaxed(nr_unmapped, mapped);
+ if (nr_mapped >= COMPOUND_MAPPED)
+ nr_unmapped = 0;
+
} else if (folio_test_pmd_mappable(folio)) {
/* That test is redundant: it's for safety or to optimize out */
@@ -1441,18 +1454,19 @@ void page_remove_rmap(struct page *page, struct vm_area_struct *vma,
idx = NR_FILE_PMDMAPPED;
__lruvec_stat_mod_folio(folio, idx, -nr_pmdmapped);
}
+
if (nr) {
idx = folio_test_anon(folio) ? NR_ANON_MAPPED : NR_FILE_MAPPED;
__lruvec_stat_mod_folio(folio, idx, -nr);
/*
- * Queue anon THP for deferred split if at least one
- * page of the folio is unmapped and at least one page
- * is still mapped.
+ * Queue large anon folio for deferred split if at least one
+ * page of the folio is unmapped and at least one page is still
+ * mapped.
*/
- if (folio_test_pmd_mappable(folio) && folio_test_anon(folio))
- if (!compound || nr < nr_pmdmapped)
- deferred_split_folio(folio);
+ if (folio_test_large(folio) && folio_test_anon(folio) &&
+ nr_mapped)
+ deferred_split_folio(folio);
}
/*
@@ -1466,6 +1480,20 @@ void page_remove_rmap(struct page *page, struct vm_area_struct *vma,
munlock_vma_folio(folio, vma, compound);
}
+/**
+ * page_remove_rmap - take down pte mapping from a page
+ * @page: page to remove mapping from
+ * @vma: the vm area from which the mapping is removed
+ * @compound: uncharge the page as compound or small page
+ *
+ * The caller needs to hold the pte lock.
+ */
+void page_remove_rmap(struct page *page, struct vm_area_struct *vma,
+ bool compound)
+{
+ folio_remove_rmap_range(page_folio(page), page, compound ? -1 : 1, vma);
+}
+
/*
* @arg: enum ttu_flags will be passed to this argument
*/
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH v2 5/5] mm/mmu_gather: Store and process pages in contig ranges
2023-08-30 9:50 ` [PATCH v2 5/5] mm/mmu_gather: Store and process pages in contig ranges Ryan Roberts
@ 2023-08-30 15:07 ` Matthew Wilcox
2023-08-30 15:32 ` Ryan Roberts
0 siblings, 1 reply; 15+ messages in thread
From: Matthew Wilcox @ 2023-08-30 15:07 UTC (permalink / raw)
To: Ryan Roberts
Cc: Will Deacon, Aneesh Kumar K.V, Andrew Morton, Nick Piggin,
Peter Zijlstra, Christian Borntraeger, Sven Schnelle,
Arnd Bergmann, David Hildenbrand, Yu Zhao, Kirill A. Shutemov,
Yin Fengwei, Yang Shi, Huang, Ying, Zi Yan, linux-mm,
linux-kernel
On Wed, Aug 30, 2023 at 10:50:11AM +0100, Ryan Roberts wrote:
> +++ b/include/asm-generic/tlb.h
> @@ -246,11 +246,11 @@ struct mmu_gather_batch {
> struct mmu_gather_batch *next;
> unsigned int nr;
> unsigned int max;
> - struct page *pages[];
> + struct pfn_range folios[];
I think it's dangerous to call this 'folios' as it lets you think that
each entry is a single folio. But as I understand this patch, you can
coalesce contiguous ranges across multiple folios.
> -void free_pages_and_swap_cache(struct page **pages, int nr)
> +void free_folios_and_swap_cache(struct pfn_range *folios, int nr)
> {
> lru_add_drain();
> for (int i = 0; i < nr; i++)
> - free_swap_cache(pages[i]);
> - release_pages(pages, nr);
> + free_swap_cache(pfn_to_page(folios[i].start));
... but here, you only put the swapcache for the first folio covered by
the range, not for each folio.
> + folios_put_refs(folios, nr);
It's kind of confusing to have folios_put() which takes a struct folio *
and then folios_put_refs() which takes a struct pfn_range *.
pfn_range_put()?
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH v2 5/5] mm/mmu_gather: Store and process pages in contig ranges
2023-08-30 15:07 ` Matthew Wilcox
@ 2023-08-30 15:32 ` Ryan Roberts
0 siblings, 0 replies; 15+ messages in thread
From: Ryan Roberts @ 2023-08-30 15:32 UTC (permalink / raw)
To: Matthew Wilcox
Cc: Will Deacon, Aneesh Kumar K.V, Andrew Morton, Nick Piggin,
Peter Zijlstra, Christian Borntraeger, Sven Schnelle,
Arnd Bergmann, David Hildenbrand, Yu Zhao, Kirill A. Shutemov,
Yin Fengwei, Yang Shi, Huang, Ying, Zi Yan, linux-mm,
linux-kernel
On 30/08/2023 16:07, Matthew Wilcox wrote:
> On Wed, Aug 30, 2023 at 10:50:11AM +0100, Ryan Roberts wrote:
>> +++ b/include/asm-generic/tlb.h
>> @@ -246,11 +246,11 @@ struct mmu_gather_batch {
>> struct mmu_gather_batch *next;
>> unsigned int nr;
>> unsigned int max;
>> - struct page *pages[];
>> + struct pfn_range folios[];
>
> I think it's dangerous to call this 'folios' as it lets you think that
> each entry is a single folio. But as I understand this patch, you can
> coalesce contiguous ranges across multiple folios.
No, that's not quite the case; each contiguous range only ever spans a *single*
folio. If there are 2 contiguous folios, they will be represented as separate
ranges. This is done so that we can subsequently do the per-folio operations
without having to figure out how many folios are within each range - one range =
one (contiguous part of a) folio.
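To make that invariant concrete, the shape is roughly this (field names as in
the patch; the helper below is purely illustrative and not part of the series):

struct pfn_range {
	unsigned long start;	/* first pfn of the run */
	unsigned long end;	/* one past the last pfn */
};

/* Illustrative only: every range the gather code emits satisfies this. */
static inline bool pfn_range_within_one_folio(struct pfn_range *r)
{
	struct folio *folio = pfn_folio(r->start);

	return r->end > r->start &&
	       r->end <= folio_pfn(folio) + folio_nr_pages(folio);
}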
On naming, I was calling this variable "ranges" in v1 but thought folios was
actually clearer. How about "folio_regions"?
>
>> -void free_pages_and_swap_cache(struct page **pages, int nr)
>> +void free_folios_and_swap_cache(struct pfn_range *folios, int nr)
>> {
>> lru_add_drain();
>> for (int i = 0; i < nr; i++)
>> - free_swap_cache(pages[i]);
>> - release_pages(pages, nr);
>> + free_swap_cache(pfn_to_page(folios[i].start));
>
> ... but here, you only put the swapcache for the first folio covered by
> the range, not for each folio.
Yes, that's intentional - one range only ever covers one folio, so I only need to
call free_swap_cache() once for the folio. Unless I've misunderstood and
free_swap_cache() is actually decrementing a reference count and needs to be
called for every page? (but it doesn't look like that in the code).
>
>> + folios_put_refs(folios, nr);
>
> It's kind of confusing to have folios_put() which takes a struct folio *
> and then folios_put_refs() which takes a struct pfn_range *.
> pfn_range_put()?
I think it's less confusing if you know that each pfn_range represents a single
contig range of pages within a *single* folio. pfn_range_put() would make it
sound like it's ok to pass a pfn_range that spans multiple folios (this would
break). I could rename `struct pfn_range` to `struct sub_folio` or something
like that. Would that help make the semantics clearer?
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH v2 1/5] mm: Implement folio_remove_rmap_range()
2023-08-30 14:51 ` Matthew Wilcox
@ 2023-08-30 15:42 ` Ryan Roberts
2023-08-30 16:24 ` David Hildenbrand
0 siblings, 1 reply; 15+ messages in thread
From: Ryan Roberts @ 2023-08-30 15:42 UTC (permalink / raw)
To: Matthew Wilcox
Cc: Will Deacon, Aneesh Kumar K.V, Andrew Morton, Nick Piggin,
Peter Zijlstra, Christian Borntraeger, Sven Schnelle,
Arnd Bergmann, David Hildenbrand, Yu Zhao, Kirill A. Shutemov,
Yin Fengwei, Yang Shi, Huang, Ying, Zi Yan, linux-mm,
linux-kernel
On 30/08/2023 15:51, Matthew Wilcox wrote:
> On Wed, Aug 30, 2023 at 10:50:07AM +0100, Ryan Roberts wrote:
>> Like page_remove_rmap() but batch-removes the rmap for a range of pages
>> belonging to a folio. This can provide a small speedup due to less
>> manipulation of the various counters. But more crucially, if removing the
>> rmap for all pages of a folio in a batch, there is no need to
>> (spuriously) add it to the deferred split list, which saves significant
>> cost when there is contention for the split queue lock.
>>
>> All contained pages are accounted using the order-0 folio (or base page)
>> scheme.
>>
>> page_remove_rmap() is refactored so that it forwards to
>> folio_remove_rmap_range() for !compound cases, and both functions now
>> share a common epilogue function. The intention here is to avoid
>> duplication of code.
>
> What would you think to doing it like this instead? This probably doesn't
> even compile and it's definitely not sanity checked; just trying to get
> across an idea of the shape of this code. I think this is more like
> what DavidH was asking for (but he's on holiday this week so won't be
> able to confirm).
I think it was actually Yu Zhao who was arguing for something more like this?
I don't love it, because as I've expressed previously, I don't think an API that
operates on a "range" of pages has any business manipulating a PMD mapping,
because it can't be subdivided. So my preference is for what I've already got.
But certainly your proposal to use nr=-1 to mean compound is the nicest way I've
seen, since you no longer make nr redundant in this case.
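For clarity, the call sites would then look something like this (purely
illustrative):

	folio_remove_rmap_range(folio, page, 1, vma);		/* single pte */
	folio_remove_rmap_range(folio, page, nr, vma);		/* pte batch within one folio */
	folio_remove_rmap_range(folio, &folio->page, -1, vma);	/* pmd-mapped (compound) case */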
Given you and Yu both have a lot more experience with this code than me, I'll
follow your advice and do it this way for the next rev.
Thanks,
Ryan
>
>
> diff --git a/include/linux/rmap.h b/include/linux/rmap.h
> index a3825ce81102..d442d1e5425d 100644
> --- a/include/linux/rmap.h
> +++ b/include/linux/rmap.h
> @@ -202,6 +202,8 @@ void folio_add_file_rmap_range(struct folio *, struct page *, unsigned int nr,
> struct vm_area_struct *, bool compound);
> void page_remove_rmap(struct page *, struct vm_area_struct *,
> bool compound);
> +void folio_remove_rmap_range(struct folio *folio, struct page *page,
> + int nr, struct vm_area_struct *vma);
>
> void hugepage_add_anon_rmap(struct page *, struct vm_area_struct *,
> unsigned long address, rmap_t flags);
> diff --git a/mm/rmap.c b/mm/rmap.c
> index ec7f8e6c9e48..2592be47452e 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1380,24 +1380,26 @@ void page_add_file_rmap(struct page *page, struct vm_area_struct *vma,
> }
>
> /**
> - * page_remove_rmap - take down pte mapping from a page
> - * @page: page to remove mapping from
> - * @vma: the vm area from which the mapping is removed
> - * @compound: uncharge the page as compound or small page
> + * folio_remove_rmap_range - Take down PTE mappings from a range of pages.
> + * @folio: Folio containing all pages in range.
> + * @page: First page in range to unmap.
> + * @nr: Number of pages to unmap. -1 to unmap a PMD.
> + * @vma: The VM area containing the range.
> *
> - * The caller needs to hold the pte lock.
> + * All pages in the range must belong to the same VMA & folio.
> + *
> + * Context: Caller holds the pte lock.
> */
> -void page_remove_rmap(struct page *page, struct vm_area_struct *vma,
> - bool compound)
> +void folio_remove_rmap_range(struct folio *folio, struct page *page,
> + int pages, struct vm_area_struct *vma)
> {
> - struct folio *folio = page_folio(page);
> atomic_t *mapped = &folio->_nr_pages_mapped;
> + int nr_unmapped = 0;
> + int nr_mapped = 0;
> int nr = 0, nr_pmdmapped = 0;
> bool last;
> enum node_stat_item idx;
>
> - VM_BUG_ON_PAGE(compound && !PageHead(page), page);
> -
> /* Hugetlb pages are not counted in NR_*MAPPED */
> if (unlikely(folio_test_hugetlb(folio))) {
> /* hugetlb pages are always mapped with pmds */
> @@ -1405,14 +1407,25 @@ void page_remove_rmap(struct page *page, struct vm_area_struct *vma,
> return;
> }
>
> - /* Is page being unmapped by PTE? Is this its last map to be removed? */
> - if (likely(!compound)) {
> - last = atomic_add_negative(-1, &page->_mapcount);
> - nr = last;
> - if (last && folio_test_large(folio)) {
> - nr = atomic_dec_return_relaxed(mapped);
> - nr = (nr < COMPOUND_MAPPED);
> + /* Are we taking down a PMD mapping? */
> + if (likely(pages > 0)) {
> + VM_WARN_ON_ONCE(page < folio_page(folio, 0) ||
> + page + pages > folio_page(folio,
> + folio_nr_pages(folio)));
> + while (pages) {
> + /* Is this the page's last map to be removed? */
> + last = atomic_add_negative(-1, &page->_mapcount);
> + if (last)
> + nr_unmapped++;
> + pages--;
> + page++;
> }
> +
> + /* Pages still mapped if folio mapped entirely */
> + nr_mapped = atomic_sub_return_relaxed(nr_unmapped, mapped);
> + if (nr_mapped >= COMPOUND_MAPPED)
> + nr_unmapped = 0;
> +
> } else if (folio_test_pmd_mappable(folio)) {
> /* That test is redundant: it's for safety or to optimize out */
>
> @@ -1441,18 +1454,19 @@ void page_remove_rmap(struct page *page, struct vm_area_struct *vma,
> idx = NR_FILE_PMDMAPPED;
> __lruvec_stat_mod_folio(folio, idx, -nr_pmdmapped);
> }
> +
> if (nr) {
> idx = folio_test_anon(folio) ? NR_ANON_MAPPED : NR_FILE_MAPPED;
> __lruvec_stat_mod_folio(folio, idx, -nr);
>
> /*
> - * Queue anon THP for deferred split if at least one
> - * page of the folio is unmapped and at least one page
> - * is still mapped.
> + * Queue large anon folio for deferred split if at least one
> + * page of the folio is unmapped and at least one page is still
> + * mapped.
> */
> - if (folio_test_pmd_mappable(folio) && folio_test_anon(folio))
> - if (!compound || nr < nr_pmdmapped)
> - deferred_split_folio(folio);
> + if (folio_test_large(folio) && folio_test_anon(folio) &&
> + nr_mapped)
> + deferred_split_folio(folio);
> }
>
> /*
> @@ -1466,6 +1480,20 @@ void page_remove_rmap(struct page *page, struct vm_area_struct *vma,
> munlock_vma_folio(folio, vma, compound);
> }
>
> +/**
> + * page_remove_rmap - take down pte mapping from a page
> + * @page: page to remove mapping from
> + * @vma: the vm area from which the mapping is removed
> + * @compound: uncharge the page as compound or small page
> + *
> + * The caller needs to hold the pte lock.
> + */
> +void page_remove_rmap(struct page *page, struct vm_area_struct *vma,
> + bool compound)
> +{
> + folio_remove_rmap_range(page_folio(page), page, compound ? -1 : 1, vma);
> +}
> +
> /*
> * @arg: enum ttu_flags will be passed to this argument
> */
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH v2 1/5] mm: Implement folio_remove_rmap_range()
2023-08-30 15:42 ` Ryan Roberts
@ 2023-08-30 16:24 ` David Hildenbrand
0 siblings, 0 replies; 15+ messages in thread
From: David Hildenbrand @ 2023-08-30 16:24 UTC (permalink / raw)
To: Ryan Roberts, Matthew Wilcox
Cc: Will Deacon, Aneesh Kumar K.V, Andrew Morton, Nick Piggin,
Peter Zijlstra, Christian Borntraeger, Sven Schnelle,
Arnd Bergmann, Yu Zhao, Kirill A. Shutemov, Yin Fengwei,
Yang Shi, Huang, Ying, Zi Yan, linux-mm, linux-kernel
On 30.08.23 17:42, Ryan Roberts wrote:
> On 30/08/2023 15:51, Matthew Wilcox wrote:
>> On Wed, Aug 30, 2023 at 10:50:07AM +0100, Ryan Roberts wrote:
>>> Like page_remove_rmap() but batch-removes the rmap for a range of pages
>>> belonging to a folio. This can provide a small speedup due to less
>>> manipulation of the various counters. But more crucially, if removing the
>>> rmap for all pages of a folio in a batch, there is no need to
>>> (spuriously) add it to the deferred split list, which saves significant
>>> cost when there is contention for the split queue lock.
>>>
>>> All contained pages are accounted using the order-0 folio (or base page)
>>> scheme.
>>>
>>> page_remove_rmap() is refactored so that it forwards to
>>> folio_remove_rmap_range() for !compound cases, and both functions now
>>> share a common epilogue function. The intention here is to avoid
>>> duplication of code.
>>
>> What would you think to doing it like this instead? This probably doesn't
>> even compile and it's definitely not sanity checked; just trying to get
>> across an idea of the shape of this code. I think this is more like
>> what DavidH was asking for (but he's on holiday this week so won't be
>> able to confirm).
>
> I think it was actually Yu Zhao who was arguing for something more like this?
I think so, not me.
... but the second variant is certainly shorter.
--
Cheers,
David / dhildenb
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH v2 4/5] mm: Refector release_pages()
2023-08-30 9:50 ` [PATCH v2 4/5] mm: Refector release_pages() Ryan Roberts
@ 2023-08-30 19:11 ` Matthew Wilcox
2023-08-30 21:17 ` Matthew Wilcox
2023-08-31 19:57 ` Ryan Roberts
0 siblings, 2 replies; 15+ messages in thread
From: Matthew Wilcox @ 2023-08-30 19:11 UTC (permalink / raw)
To: Ryan Roberts
Cc: Will Deacon, Aneesh Kumar K.V, Andrew Morton, Nick Piggin,
Peter Zijlstra, Christian Borntraeger, Sven Schnelle,
Arnd Bergmann, David Hildenbrand, Yu Zhao, Kirill A. Shutemov,
Yin Fengwei, Yang Shi, Huang, Ying, Zi Yan, linux-mm,
linux-kernel
On Wed, Aug 30, 2023 at 10:50:10AM +0100, Ryan Roberts wrote:
> In preparation for implementing folios_put_refs() in the next patch,
> refactor release_pages() into a set of helper functions, which can be
> reused. The primary difference between release_pages() and
> folios_put_refs() is how they iterate over the set of folios. The
> per-folio actions are identical.
As you noted, we have colliding patchsets. I'm not hugely happy with
how patch 4 turned out, so I thought I'd send some addendum patches to
my RFC series that implement pfn_range_put() (maybe should have been
pfn_ranges_put()?) on top of my patch series. I think it's a bit nicer,
but not quite as nice as it could be.
I'm thinking about doing ...
void release_unref_folios(struct folio_batch *folios)
{
struct lruvec *lruvec = NULL;
unsigned long flags = 0;
int i;
for (i = 0; i < folios->nr; i++) {
struct folio *folio = folios->folios[i];
free_swap_cache(folio);
__page_cache_release(folio, &lruvec, &flags);
}
mem_cgroup_uncharge_folios(folios);
free_unref_folios(folios);
}
then this becomes:
void folios_put(struct folio_batch *folios)
{
int i, j;
for (i = 0, j = 0; i < folios->nr; i++) {
struct folio *folio = folios->folios[i];
if (is_huge_zero_page(&folio->page))
continue;
if (folio_is_zone_device(folio)) {
if (put_devmap_managed_page(&folio->page))
continue;
if (folio_put_testzero(folio))
free_zone_device_page(&folio->page);
continue;
}
if (!folio_put_testzero(folio))
continue;
if (folio_test_hugetlb(folio)) {
free_huge_folio(folio);
continue;
}
if (j != i)
folios->folios[j] = folio;
j++;
}
folios->nr = j;
if (!j)
return;
release_unref_folios(folios);
}
and pfn_range_put() also becomes shorter and loses all the lruvec work.
Thoughts?
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH v2 4/5] mm: Refector release_pages()
2023-08-30 19:11 ` Matthew Wilcox
@ 2023-08-30 21:17 ` Matthew Wilcox
2023-08-31 19:57 ` Ryan Roberts
1 sibling, 0 replies; 15+ messages in thread
From: Matthew Wilcox @ 2023-08-30 21:17 UTC (permalink / raw)
To: Ryan Roberts
Cc: Will Deacon, Aneesh Kumar K.V, Andrew Morton, Nick Piggin,
Peter Zijlstra, Christian Borntraeger, Sven Schnelle,
Arnd Bergmann, David Hildenbrand, Yu Zhao, Kirill A. Shutemov,
Yin Fengwei, Yang Shi, Huang, Ying, Zi Yan, linux-mm,
linux-kernel
On Wed, Aug 30, 2023 at 08:11:07PM +0100, Matthew Wilcox wrote:
> I'm thinking about doing ...
>
> void release_unref_folios(struct folio_batch *folios)
> {
> struct lruvec *lruvec = NULL;
> unsigned long flags = 0;
> int i;
>
> for (i = 0; i < folios->nr; i++) {
> struct folio *folio = folios->folios[i];
> free_swap_cache(folio);
No, can't do that here. Swap cache has refs on the folio, so it'll
never trigger.
> __page_cache_release(folio, &lruvec, &flags);
> }
> mem_cgroup_uncharge_folios(folios);
> free_unref_folios(folios);
> }
>
> then this becomes:
>
> void folios_put(struct folio_batch *folios)
> {
> int i, j;
>
> for (i = 0, j = 0; i < folios->nr; i++) {
> struct folio *folio = folios->folios[i];
>
> if (is_huge_zero_page(&folio->page))
> continue;
> if (folio_is_zone_device(folio)) {
> if (put_devmap_managed_page(&folio->page))
> continue;
> if (folio_put_testzero(folio))
> free_zone_device_page(&folio->page);
> continue;
> }
Must go at least here, maybe earlier.
> if (!folio_put_testzero(folio))
> continue;
> if (folio_test_hugetlb(folio)) {
> free_huge_folio(folio);
> continue;
> }
>
> if (j != i)
> folios->folios[j] = folio;
> j++;
> }
>
> folios->nr = j;
> if (!j)
> return;
> release_unref_folios(folios);
> }
>
> and pfn_range_put() also becomes shorter and loses all the lruvec work.
>
> Thoughts?
>
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH v2 4/5] mm: Refector release_pages()
2023-08-30 19:11 ` Matthew Wilcox
2023-08-30 21:17 ` Matthew Wilcox
@ 2023-08-31 19:57 ` Ryan Roberts
2023-09-01 4:36 ` Matthew Wilcox
1 sibling, 1 reply; 15+ messages in thread
From: Ryan Roberts @ 2023-08-31 19:57 UTC (permalink / raw)
To: Matthew Wilcox
Cc: Will Deacon, Aneesh Kumar K.V, Andrew Morton, Nick Piggin,
Peter Zijlstra, Christian Borntraeger, Sven Schnelle,
Arnd Bergmann, David Hildenbrand, Yu Zhao, Kirill A. Shutemov,
Yin Fengwei, Yang Shi, Huang, Ying, Zi Yan, linux-mm,
linux-kernel
On 30/08/2023 20:11, Matthew Wilcox wrote:
> On Wed, Aug 30, 2023 at 10:50:10AM +0100, Ryan Roberts wrote:
>> In preparation for implementing folios_put_refs() in the next patch,
>> refactor release_pages() into a set of helper functions, which can be
>> reused. The primary difference between release_pages() and
>> folios_put_refs() is how they iterate over the set of folios. The
>> per-folio actions are identical.
>
> As you noted, we have colliding patchsets. I'm not hugely happy with
> how patch 4 turned out,
Could you describe the issues as you see them? I'm keen not to repeat the same
bad patterns in future.
> so I thought I'd send some addendum patches to
> my RFC series that implement pfn_range_put() (maybe should have been
> pfn_ranges_put()?) on top of my patch series. I think it's a bit nicer,
> but not quite as nice as it could be.
>
> I'm thinking about doing ...
>
> void release_unref_folios(struct folio_batch *folios)
> {
> struct lruvec *lruvec = NULL;
> unsigned long flags = 0;
> int i;
>
> for (i = 0; i < folios->nr; i++) {
> struct folio *folio = folios->folios[i];
> free_swap_cache(folio);
Agree this can't go here - would put it in pfn_range_put(). But would not want
it in folios_put() as you suggested in the other email - that would surely change
the behaviour of folios_put()?
> __page_cache_release(folio, &lruvec, &flags);
> }
I think you would need to add:
if (lruvec)
unlock_page_lruvec_irqrestore(lruvec, flags);
> mem_cgroup_uncharge_folios(folios);
> free_unref_folios(folios);
> }
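So, pulling that together, the whole helper might look something like this
(untested; __page_cache_release()/mem_cgroup_uncharge_folios()/free_unref_folios()
as per your series, and free_swap_cache() left to the caller per your follow-up):

void release_unref_folios(struct folio_batch *folios)
{
	struct lruvec *lruvec = NULL;
	unsigned long flags = 0;
	int i;

	for (i = 0; i < folios->nr; i++) {
		struct folio *folio = folios->folios[i];

		/* Drops LRU state; may take or hand back the lruvec lock. */
		__page_cache_release(folio, &lruvec, &flags);
	}
	if (lruvec)
		unlock_page_lruvec_irqrestore(lruvec, flags);

	mem_cgroup_uncharge_folios(folios);
	free_unref_folios(folios);
}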
>
> then this becomes:
>
> void folios_put(struct folio_batch *folios)
> {
> int i, j;
>
> for (i = 0, j = 0; i < folios->nr; i++) {
> struct folio *folio = folios->folios[i];
>
> if (is_huge_zero_page(&folio->page))
> continue;
> if (folio_is_zone_device(folio)) {
> if (put_devmap_managed_page(&folio->page))
> continue;
> if (folio_put_testzero(folio))
> free_zone_device_page(&folio->page);
> continue;
> }
>
> if (!folio_put_testzero(folio))
> continue;
> if (folio_test_hugetlb(folio)) {
> free_huge_folio(folio);
> continue;
> }
>
> if (j != i)
> folios->folios[j] = folio;
> j++;
> }
>
> folios->nr = j;
> if (!j)
> return;
> release_unref_folios(folios);
> }
>
> and pfn_range_put() also becomes shorter and loses all the lruvec work.
Something like this?
void pfn_range_put(struct pfn_range *range, unsigned int nr)
{
struct folio_batch folios;
unsigned int i;
folio_batch_init(&folios);
for (i = 0; i < nr; i++) {
struct folio *folio = pfn_folio(range[i].start);
unsigned int refs = range[i].end - range[i].start;
free_swap_cache(folio); // <<<<< HERE? To be equivalent to
// free_pages_and_swap_cache()
if (is_huge_zero_page(&folio->page))
continue;
if (folio_is_zone_device(folio)) {
if (put_devmap_managed_page_refs(&folio->page, refs))
continue;
if (folio_ref_sub_and_test(folio, refs))
free_zone_device_page(&folio->page);
continue;
}
if (!folio_ref_sub_and_test(folio, refs))
continue;
/* hugetlb has its own memcg */
if (folio_test_hugetlb(folio)) {
free_huge_folio(folio);
continue;
}
if (folio_batch_add(&folios, folio) == 0)
release_unref_folios(&folios);
}
if (folios.nr)
release_unref_folios(&folios);
}
>
> Thoughts?
Looks like it's getting there to me. Although the bulk of the logic inside the
loop is still common between folios_put() and pfn_range_put(); perhaps we can
have a common helper for that, which would just need to take (folio, refs)?
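Something like this, perhaps (helper name invented, untested; callers would do
the free_swap_cache() and folio_batch handling around it):

/*
 * Hypothetical common helper: returns true if the folio should be added
 * to the caller's batch for release_unref_folios(), false if it has
 * already been dealt with here.
 */
static bool folio_put_refs_prepare(struct folio *folio, unsigned int refs)
{
	if (is_huge_zero_page(&folio->page))
		return false;

	if (folio_is_zone_device(folio)) {
		if (put_devmap_managed_page_refs(&folio->page, refs))
			return false;
		if (folio_ref_sub_and_test(folio, refs))
			free_zone_device_page(&folio->page);
		return false;
	}

	if (!folio_ref_sub_and_test(folio, refs))
		return false;

	/* hugetlb has its own memcg */
	if (folio_test_hugetlb(folio)) {
		free_huge_folio(folio);
		return false;
	}

	return true;
}

folios_put() would call it with refs == 1, and pfn_range_put() with the range size.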
So by adding free_swap_cache() above, we can call pfn_range_put() directly and can
remove free_pages_and_swap_cache() entirely.
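i.e. at the mmu_gather flush site, roughly (illustrative only; member names as
in patch 5):

-	free_folios_and_swap_cache(batch->folios, batch->nr);
+	pfn_range_put(batch->folios, batch->nr);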
What's the best way to move forward here? Two options as I see it:
- I drop patch 4 and create a version of pfn_range_put() that has the same
semantics as above but essentially just copies the old release_pages() (similar
to my v1). That keeps my series independent. Then you can replace with the new
pfn_range_put() as part of your series.
- We can just hook my patches up to the end of your series and do it all in one go.
Opinion?
Thanks,
Ryan
^ permalink raw reply [flat|nested] 15+ messages in thread
* Re: [PATCH v2 4/5] mm: Refector release_pages()
2023-08-31 19:57 ` Ryan Roberts
@ 2023-09-01 4:36 ` Matthew Wilcox
0 siblings, 0 replies; 15+ messages in thread
From: Matthew Wilcox @ 2023-09-01 4:36 UTC (permalink / raw)
To: Ryan Roberts
Cc: Will Deacon, Aneesh Kumar K.V, Andrew Morton, Nick Piggin,
Peter Zijlstra, Christian Borntraeger, Sven Schnelle,
Arnd Bergmann, David Hildenbrand, Yu Zhao, Kirill A. Shutemov,
Yin Fengwei, Yang Shi, Huang, Ying, Zi Yan, linux-mm,
linux-kernel
On Thu, Aug 31, 2023 at 08:57:51PM +0100, Ryan Roberts wrote:
> On 30/08/2023 20:11, Matthew Wilcox wrote:
> > On Wed, Aug 30, 2023 at 10:50:10AM +0100, Ryan Roberts wrote:
> >> In preparation for implementing folios_put_refs() in the next patch,
> >> refactor release_pages() into a set of helper functions, which can be
> >> reused. The primary difference between release_pages() and
> >> folios_put_refs() is how they iterate over the set of folios. The
> >> per-folio actions are identical.
> >
> > As you noted, we have colliding patchsets. I'm not hugely happy with
> > how patch 4 turned out,
>
> Could you describe the issues as you see them? I'm keen not to repeat the same
> bad patterns in future.
I think you absolutely had the right idea. I'd've probably done the same
as you in the same circumstances. It's just that I'd done this
patch series getting rid of / simplifying a bunch of the code you were
modifying, and I thought it'd make your 4/5 irrelevant and 5/5 simpler.
> > void release_unref_folios(struct folio_batch *folios)
> > {
> > struct lruvec *lruvec = NULL;
> > unsigned long flags = 0;
> > int i;
> >
> > for (i = 0; i < folios->nr; i++) {
> > struct folio *folio = folios->folios[i];
> > free_swap_cache(folio);
>
> Agree this can't go here - would put it in pfn_range_put(). But would not want
> it in folios_put() as you suggeted in the other email - that would surely change
> the behaviour of folios_put()?
Yes; perhaps there are times when we want to put a batch of folios and
_don't_ want to rip them out of the swapcache. I haven't thought about
this in much detail, and we're definitely venturing into areas of the
MM where I get myself in trouble (i.e. anonymous memory).
> > __page_cache_release(folio, &lruvec, &flags);
> > }
>
> I think you would need to add:
>
> if (lruvec)
> unlock_page_lruvec_irqrestore(lruvec, flags);
Indeed.
> > mem_cgroup_uncharge_folios(folios);
> > free_unref_folios(folios);
> > }
> >
> > then this becomes:
> >
> > void folios_put(struct folio_batch *folios)
> > {
> > int i, j;
> >
> > for (i = 0, j = 0; i < folios->nr; i++) {
> > struct folio *folio = folios->folios[i];
> >
> > if (is_huge_zero_page(&folio->page))
> > continue;
> > if (folio_is_zone_device(folio)) {
> > if (put_devmap_managed_page(&folio->page))
> > continue;
> > if (folio_put_testzero(folio))
> > free_zone_device_page(&folio->page);
> > continue;
> > }
> >
> > if (!folio_put_testzero(folio))
> > continue;
> > if (folio_test_hugetlb(folio)) {
> > free_huge_folio(folio);
> > continue;
> > }
> >
> > if (j != i)
> > folios->folios[j] = folio;
> > j++;
> > }
> >
> > folios->nr = j;
> > if (!j)
> > return;
> > release_unref_folios(folios);
> > }
> >
> > and pfn_range_put() also becomes shorter and loses all the lruvec work.
>
> something like this?
>
> void pfn_range_put(struct pfn_range *range, unsigned int nr)
> {
> struct folio_batch folios;
> unsigned int i;
>
> folio_batch_init(&folios);
> for (i = 0; i < nr; i++) {
> struct folio *folio = pfn_folio(range[i].start);
> unsigned int refs = range[i].end - range[i].start;
>
> free_swap_cache(folio); // <<<<< HERE? To be equivalent to
> // free_pages_and_swap_cache()
>
> if (is_huge_zero_page(&folio->page))
> continue;
>
> if (folio_is_zone_device(folio)) {
> if (put_devmap_managed_page_refs(&folio->page, refs))
> continue;
> if (folio_ref_sub_and_test(folio, refs))
> free_zone_device_page(&folio->page);
> continue;
> }
>
> if (!folio_ref_sub_and_test(folio, refs))
> continue;
>
> /* hugetlb has its own memcg */
> if (folio_test_hugetlb(folio)) {
> free_huge_folio(folio);
> continue;
> }
>
> if (folio_batch_add(&folios, folio) == 0)
> release_unref_folios(&folios);
> }
>
> if (folios.nr)
> release_unref_folios(&folios);
> }
>
> >
> > Thoughts?
>
> Looks like it's getting there to me. Although the bulk of the logic inside the
> loop is still common between folios_put() and pfn_range_put(); perhaps we can
> have a common helper for that, which would just need to take (folio, refs)?
>
> So by adding free_swap_cache() above, we can call pfn_range_put() directly and can
> remove free_pages_and_swap_cache() entirely.
Yes. Maybe this works? I'd like it to!
> What's the best way to move forward here? Two options as I see it:
>
> - I drop patch 4 and create a version of pfn_range_put() that has the same
> semantics as above but essentially just copies the old release_pages() (similar
> to my v1). That keeps my series independent. Then you can replace with the new
> pfn_range_put() as part of your series.
>
> - We can just hook my patches up to the end of your series and do it all in one go.
>
> Opinion?
I'm reluctant to tell you "hey, delay until after this series of mine".
We don't have good data yet on whether my patch series helps or hurts.
Plus I'm about to take a few days off (I'll be back for next Wednesday's
meeting). I think your first option is better (and do feel free to use
a different name from pfn_range_put()) just to keep us from colliding
in ways that ultimately don't matter.
^ permalink raw reply [flat|nested] 15+ messages in thread
end of thread, other threads:[~2023-09-01 4:36 UTC | newest]
Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-08-30 9:50 [PATCH v2 0/5] Optimize mmap_exit for large folios Ryan Roberts
2023-08-30 9:50 ` [PATCH v2 1/5] mm: Implement folio_remove_rmap_range() Ryan Roberts
2023-08-30 14:51 ` Matthew Wilcox
2023-08-30 15:42 ` Ryan Roberts
2023-08-30 16:24 ` David Hildenbrand
2023-08-30 9:50 ` [PATCH v2 2/5] mm/mmu_gather: generalize mmu_gather rmap removal mechanism Ryan Roberts
2023-08-30 9:50 ` [PATCH v2 3/5] mm/mmu_gather: Remove encoded_page infrastructure Ryan Roberts
2023-08-30 9:50 ` [PATCH v2 4/5] mm: Refector release_pages() Ryan Roberts
2023-08-30 19:11 ` Matthew Wilcox
2023-08-30 21:17 ` Matthew Wilcox
2023-08-31 19:57 ` Ryan Roberts
2023-09-01 4:36 ` Matthew Wilcox
2023-08-30 9:50 ` [PATCH v2 5/5] mm/mmu_gather: Store and process pages in contig ranges Ryan Roberts
2023-08-30 15:07 ` Matthew Wilcox
2023-08-30 15:32 ` Ryan Roberts