* [PATCH v2 0/3] Optimize large folio interaction with deferred split
@ 2023-07-19 13:54 Ryan Roberts
2023-07-19 13:54 ` [PATCH v2 1/3] mm: Allow deferred splitting of arbitrary large anon folios Ryan Roberts
` (2 more replies)
0 siblings, 3 replies; 6+ messages in thread
From: Ryan Roberts @ 2023-07-19 13:54 UTC (permalink / raw)
To: Andrew Morton, Matthew Wilcox, Yin Fengwei, David Hildenbrand,
Yu Zhao, Yang Shi, Huang, Ying, Zi Yan
Cc: Ryan Roberts, linux-kernel, linux-mm
Hi All,
This is v2 of a small series in support of my work to enable the use of large
folios for anonymous memory (known as "FLEXIBLE_THP" or "LARGE_ANON_FOLIO") [1].
It first makes it possible to add large, non-pmd-mappable folios to the deferred
split queue. Then it modifies zap_pte_range() to batch-remove spans of
physically contiguous pages from the rmap, which means that in the common case
we avoid ever having to put the folio on the deferred split queue, thus
reducing lock contention and improving performance.
The lock contention becomes more visible once there are lots of large anonymous
folios in the system, and Huang Ying has suggested that solving it should be a
prerequisite for merging the main FLEXIBLE_THP/LARGE_ANON_FOLIO work.
The series applies on top of v6.5-rc2 and a branch is available at [2].
I don't have a full test run with the latest versions of all the patches on top
of the latest baseline, so I'm not posting results formally; I can get them if
people feel they are necessary. Anecdotally, though, for the kernel compilation
workload this series reduces kernel time by ~4% and real time by ~0.4% compared
with [1].
Changes since v1 [3]
--------------------
- patch 2: Modified doc comment for folio_remove_rmap_range()
- patch 2: Hoisted _nr_pages_mapped manipulation out of the page loop so it's
now modified once per folio_remove_rmap_range() call.
- patch 2: Added check that page range is fully contained by folio in
folio_remove_rmap_range()
- patch 2: Fixed some nits raised by Huang, Ying for folio_remove_rmap_range()
- patch 3: Support batch-zap of all anon pages, not just those in anon vmas
- patch 3: Renamed various functions to make their use clear
- patch 3: Various minor refactoring/cleanups
- Added Reviewed-by tags - thanks!
[1] https://lore.kernel.org/linux-mm/20230714160407.4142030-1-ryan.roberts@arm.com/
[2] https://gitlab.arm.com/linux-arm/linux-rr/-/tree/features/granule_perf/deferredsplit-lkml_v2
[3] https://lore.kernel.org/linux-mm/20230717143110.260162-1-ryan.roberts@arm.com/
Thanks,
Ryan
Ryan Roberts (3):
mm: Allow deferred splitting of arbitrary large anon folios
mm: Implement folio_remove_rmap_range()
mm: Batch-zap large anonymous folio PTE mappings
include/linux/rmap.h | 2 +
mm/memory.c | 120 +++++++++++++++++++++++++++++++++++++++++++
mm/rmap.c | 76 ++++++++++++++++++++++++++-
3 files changed, 196 insertions(+), 2 deletions(-)
--
2.25.1
* [PATCH v2 1/3] mm: Allow deferred splitting of arbitrary large anon folios
2023-07-19 13:54 [PATCH v2 0/3] Optimize large folio interaction with deferred split Ryan Roberts
@ 2023-07-19 13:54 ` Ryan Roberts
2023-07-19 13:54 ` [PATCH v2 2/3] mm: Implement folio_remove_rmap_range() Ryan Roberts
2023-07-19 13:54 ` [PATCH v2 3/3] mm: Batch-zap large anonymous folio PTE mappings Ryan Roberts
2 siblings, 0 replies; 6+ messages in thread
From: Ryan Roberts @ 2023-07-19 13:54 UTC (permalink / raw)
To: Andrew Morton, Matthew Wilcox, Yin Fengwei, David Hildenbrand,
Yu Zhao, Yang Shi, Huang, Ying, Zi Yan
Cc: Ryan Roberts, linux-kernel, linux-mm
In preparation for the introduction of large folios for anonymous
memory, we would like to be able to split them when they have unmapped
subpages, in order to free those unused pages under memory pressure. So
remove the artificial requirement that the large folio be at least
PMD-sized.
Reviewed-by: Yu Zhao <yuzhao@google.com>
Reviewed-by: Yin Fengwei <fengwei.yin@intel.com>
Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
mm/rmap.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/rmap.c b/mm/rmap.c
index 0c0d8857dfce..eb0bb00dae34 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1426,11 +1426,11 @@ void page_remove_rmap(struct page *page, struct vm_area_struct *vma,
__lruvec_stat_mod_folio(folio, idx, -nr);
/*
- * Queue anon THP for deferred split if at least one
+ * Queue anon large folio for deferred split if at least one
* page of the folio is unmapped and at least one page
* is still mapped.
*/
- if (folio_test_pmd_mappable(folio) && folio_test_anon(folio))
+ if (folio_test_large(folio) && folio_test_anon(folio))
if (!compound || nr < nr_pmdmapped)
deferred_split_folio(folio);
}
--
2.25.1
* [PATCH v2 2/3] mm: Implement folio_remove_rmap_range()
2023-07-19 13:54 [PATCH v2 0/3] Optimize large folio interaction with deferred split Ryan Roberts
2023-07-19 13:54 ` [PATCH v2 1/3] mm: Allow deferred splitting of arbitrary large anon folios Ryan Roberts
@ 2023-07-19 13:54 ` Ryan Roberts
2023-07-19 18:23 ` Yu Zhao
2023-07-19 13:54 ` [PATCH v2 3/3] mm: Batch-zap large anonymous folio PTE mappings Ryan Roberts
2 siblings, 1 reply; 6+ messages in thread
From: Ryan Roberts @ 2023-07-19 13:54 UTC (permalink / raw)
To: Andrew Morton, Matthew Wilcox, Yin Fengwei, David Hildenbrand,
Yu Zhao, Yang Shi, Huang, Ying, Zi Yan
Cc: Ryan Roberts, linux-kernel, linux-mm
Like page_remove_rmap() but batch-removes the rmap for a range of pages
belonging to a folio. This can provide a small speedup due to less
manipulation of the various counters. But more crucially, if removing the
rmap for all pages of a folio in a batch, there is no need to
(spuriously) add it to the deferred split list, which saves significant
cost when there is contention for the split queue lock.
All contained pages are accounted using the order-0 folio (or base page)
scheme.
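As an illustration (not part of this patch), a hypothetical caller might look
like the sketch below; it assumes the PTE lock is held and that the range is
PTE-mapped within a single folio and VMA:

#include <linux/mm.h>
#include <linux/rmap.h>

/* Hypothetical example: tear down the PTE rmap for nr pages of one folio. */
static void example_remove_range(struct folio *folio, struct page *page,
				 int nr, struct vm_area_struct *vma)
{
	/*
	 * Replaces a loop of nr page_remove_rmap(page + i, vma, false)
	 * calls; the folio is only queued for deferred split if part of
	 * it genuinely remains mapped afterwards.
	 */
	folio_remove_rmap_range(folio, page, nr, vma);
}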
Reviewed-by: Yin Fengwei <fengwei.yin@intel.com>
Reviewed-by: Zi Yan <ziy@nvidia.com>
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
include/linux/rmap.h | 2 ++
mm/rmap.c | 72 ++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 74 insertions(+)
diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index b87d01660412..f578975c12c0 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -200,6 +200,8 @@ void page_add_file_rmap(struct page *, struct vm_area_struct *,
bool compound);
void page_remove_rmap(struct page *, struct vm_area_struct *,
bool compound);
+void folio_remove_rmap_range(struct folio *folio, struct page *page,
+ int nr, struct vm_area_struct *vma);
void hugepage_add_anon_rmap(struct page *, struct vm_area_struct *,
unsigned long address, rmap_t flags);
diff --git a/mm/rmap.c b/mm/rmap.c
index eb0bb00dae34..4022a3ab73a8 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1359,6 +1359,78 @@ void page_add_file_rmap(struct page *page, struct vm_area_struct *vma,
mlock_vma_folio(folio, vma, compound);
}
+/**
+ * folio_remove_rmap_range - Take down PTE mappings from a range of pages.
+ * @folio: Folio containing all pages in range.
+ * @page: First page in range to unmap.
+ * @nr: Number of pages to unmap.
+ * @vma: The VM area containing the range.
+ *
+ * All pages in the range must belong to the same VMA & folio. They
+ * must be mapped with PTEs, not a PMD.
+ *
+ * Context: Caller holds the pte lock.
+ */
+void folio_remove_rmap_range(struct folio *folio, struct page *page,
+ int nr, struct vm_area_struct *vma)
+{
+ atomic_t *mapped = &folio->_nr_pages_mapped;
+ int nr_unmapped = 0;
+ int nr_mapped;
+ bool last;
+ enum node_stat_item idx;
+
+ if (unlikely(folio_test_hugetlb(folio))) {
+ VM_WARN_ON_FOLIO(1, folio);
+ return;
+ }
+
+ VM_WARN_ON_ONCE(page < &folio->page ||
+ page + nr > (&folio->page + folio_nr_pages(folio)));
+
+ if (!folio_test_large(folio)) {
+ /* Is this the page's last map to be removed? */
+ last = atomic_add_negative(-1, &page->_mapcount);
+ nr_unmapped = last;
+ } else {
+ for (; nr != 0; nr--, page++) {
+ /* Is this the page's last map to be removed? */
+ last = atomic_add_negative(-1, &page->_mapcount);
+ if (last)
+ nr_unmapped++;
+ }
+
+ /* Pages still mapped if folio mapped entirely */
+ nr_mapped = atomic_sub_return_relaxed(nr_unmapped, mapped);
+ if (nr_mapped >= COMPOUND_MAPPED)
+ nr_unmapped = 0;
+ }
+
+ if (nr_unmapped) {
+ idx = folio_test_anon(folio) ? NR_ANON_MAPPED : NR_FILE_MAPPED;
+ __lruvec_stat_mod_folio(folio, idx, -nr_unmapped);
+
+ /*
+ * Queue anon large folio for deferred split if at least one
+ * page of the folio is unmapped and at least one page is still
+ * mapped.
+ */
+ if (folio_test_large(folio) &&
+ folio_test_anon(folio) && nr_mapped)
+ deferred_split_folio(folio);
+ }
+
+ /*
+ * It would be tidy to reset folio_test_anon mapping when fully
+ * unmapped, but that might overwrite a racing page_add_anon_rmap
+ * which increments mapcount after us but sets mapping before us:
+ * so leave the reset to free_pages_prepare, and remember that
+ * it's only reliable while mapped.
+ */
+
+ munlock_vma_folio(folio, vma, false);
+}
+
/**
* page_remove_rmap - take down pte mapping from a page
* @page: page to remove mapping from
--
2.25.1
* [PATCH v2 3/3] mm: Batch-zap large anonymous folio PTE mappings
2023-07-19 13:54 [PATCH v2 0/3] Optimize large folio interaction with deferred split Ryan Roberts
2023-07-19 13:54 ` [PATCH v2 1/3] mm: Allow deferred splitting of arbitrary large anon folios Ryan Roberts
2023-07-19 13:54 ` [PATCH v2 2/3] mm: Implement folio_remove_rmap_range() Ryan Roberts
@ 2023-07-19 13:54 ` Ryan Roberts
2 siblings, 0 replies; 6+ messages in thread
From: Ryan Roberts @ 2023-07-19 13:54 UTC (permalink / raw)
To: Andrew Morton, Matthew Wilcox, Yin Fengwei, David Hildenbrand,
Yu Zhao, Yang Shi, Huang, Ying, Zi Yan
Cc: Ryan Roberts, linux-kernel, linux-mm
Zap the PTE mappings of a large anonymous folio in a single batch. This
allows the rmap removal to be batched with folio_remove_rmap_range(), which
means we avoid spuriously adding a partially unmapped folio to the deferred
split queue in the common case, reducing split queue lock contention.

Previously, each page was removed from the rmap individually with
page_remove_rmap(). If the first page belonged to a large folio, this
would cause page_remove_rmap() to conclude that the folio was now
partially mapped and to add the folio to the deferred split queue. But
subsequent calls would leave the folio fully unmapped, meaning there was
no value in adding it to the split queue in the first place.
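For reference, a simplified sketch of the shape of the new fast path in
zap_pte_range() (illustrative only; it assumes 'page' is a non-NULL anon page,
and the real hunk below also handles rss accounting, pte/addr advancement and
forcing a TLB flush when the batch fills):

static int example_zap_anon_batch(struct mmu_gather *tlb,
				  struct vm_area_struct *vma,
				  struct page *page, pte_t *pte,
				  unsigned long addr, unsigned long end,
				  struct zap_details *details)
{
	struct folio *folio = page_folio(page);
	int nr_req, nr;

	/* How many PTEs from here map consecutive pages of this folio? */
	nr_req = folio_nr_pages_cont_mapped(folio, page, pte, addr, end);

	/* Clear those PTEs, then drop the rmap for the whole span at once. */
	nr = try_zap_anon_pte_range(tlb, vma, folio, page, pte, addr,
				    nr_req, details);

	/* Number of PTEs actually zapped (short if the TLB batch filled). */
	return nr;
}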
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
mm/memory.c | 120 ++++++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 120 insertions(+)
diff --git a/mm/memory.c b/mm/memory.c
index 01f39e8144ef..189b1cfd823d 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1391,6 +1391,94 @@ zap_install_uffd_wp_if_needed(struct vm_area_struct *vma,
pte_install_uffd_wp_if_needed(vma, addr, pte, pteval);
}
+static inline unsigned long page_cont_mapped_vaddr(struct page *page,
+ struct page *anchor, unsigned long anchor_vaddr)
+{
+ unsigned long offset;
+ unsigned long vaddr;
+
+ offset = (page_to_pfn(page) - page_to_pfn(anchor)) << PAGE_SHIFT;
+ vaddr = anchor_vaddr + offset;
+
+ if (anchor > page) {
+ if (vaddr > anchor_vaddr)
+ return 0;
+ } else {
+ if (vaddr < anchor_vaddr)
+ return ULONG_MAX;
+ }
+
+ return vaddr;
+}
+
+static int folio_nr_pages_cont_mapped(struct folio *folio,
+ struct page *page, pte_t *pte,
+ unsigned long addr, unsigned long end)
+{
+ pte_t ptent;
+ int floops;
+ int i;
+ unsigned long pfn;
+ struct page *folio_end;
+
+ if (!folio_test_large(folio))
+ return 1;
+
+ folio_end = &folio->page + folio_nr_pages(folio);
+ end = min(page_cont_mapped_vaddr(folio_end, page, addr), end);
+ floops = (end - addr) >> PAGE_SHIFT;
+ pfn = page_to_pfn(page);
+ pfn++;
+ pte++;
+
+ for (i = 1; i < floops; i++) {
+ ptent = ptep_get(pte);
+
+ if (!pte_present(ptent) || pte_pfn(ptent) != pfn)
+ break;
+
+ pfn++;
+ pte++;
+ }
+
+ return i;
+}
+
+static unsigned long try_zap_anon_pte_range(struct mmu_gather *tlb,
+ struct vm_area_struct *vma,
+ struct folio *folio,
+ struct page *page, pte_t *pte,
+ unsigned long addr, int nr_pages,
+ struct zap_details *details)
+{
+ struct mm_struct *mm = tlb->mm;
+ pte_t ptent;
+ bool full;
+ int i;
+
+ for (i = 0; i < nr_pages;) {
+ ptent = ptep_get_and_clear_full(mm, addr, pte, tlb->fullmm);
+ tlb_remove_tlb_entry(tlb, pte, addr);
+ zap_install_uffd_wp_if_needed(vma, addr, pte, details, ptent);
+ full = __tlb_remove_page(tlb, page, 0);
+
+ if (unlikely(page_mapcount(page) < 1))
+ print_bad_pte(vma, addr, ptent, page);
+
+ i++;
+ page++;
+ pte++;
+ addr += PAGE_SIZE;
+
+ if (unlikely(full))
+ break;
+ }
+
+ folio_remove_rmap_range(folio, page - i, i, vma);
+
+ return i;
+}
+
static unsigned long zap_pte_range(struct mmu_gather *tlb,
struct vm_area_struct *vma, pmd_t *pmd,
unsigned long addr, unsigned long end,
@@ -1428,6 +1516,38 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
page = vm_normal_page(vma, addr, ptent);
if (unlikely(!should_zap_page(details, page)))
continue;
+
+ /*
+ * Batch zap large anonymous folio mappings. This allows
+ * batching the rmap removal, which means we avoid
+ * spuriously adding a partially unmapped folio to the
+ * deferred split queue in the common case, which
+ * reduces split queue lock contention.
+ */
+ if (page && PageAnon(page)) {
+ struct folio *folio = page_folio(page);
+ int nr_pages_req, nr_pages;
+
+ nr_pages_req = folio_nr_pages_cont_mapped(
+ folio, page, pte, addr, end);
+
+ nr_pages = try_zap_anon_pte_range(tlb, vma,
+ folio, page, pte, addr,
+ nr_pages_req, details);
+
+ rss[mm_counter(page)] -= nr_pages;
+ nr_pages--;
+ pte += nr_pages;
+ addr += nr_pages << PAGE_SHIFT;
+
+ if (unlikely(nr_pages < nr_pages_req)) {
+ force_flush = 1;
+ addr += PAGE_SIZE;
+ break;
+ }
+ continue;
+ }
+
ptent = ptep_get_and_clear_full(mm, addr, pte,
tlb->fullmm);
tlb_remove_tlb_entry(tlb, pte, addr);
--
2.25.1
* Re: [PATCH v2 2/3] mm: Implement folio_remove_rmap_range()
2023-07-19 13:54 ` [PATCH v2 2/3] mm: Implement folio_remove_rmap_range() Ryan Roberts
@ 2023-07-19 18:23 ` Yu Zhao
2023-07-19 18:46 ` Ryan Roberts
0 siblings, 1 reply; 6+ messages in thread
From: Yu Zhao @ 2023-07-19 18:23 UTC (permalink / raw)
To: Ryan Roberts
Cc: Andrew Morton, Matthew Wilcox, Yin Fengwei, David Hildenbrand,
Yang Shi, Huang, Ying, Zi Yan, linux-kernel, linux-mm
On Wed, Jul 19, 2023 at 7:55 AM Ryan Roberts <ryan.roberts@arm.com> wrote:
>
> Like page_remove_rmap() but batch-removes the rmap for a range of pages
> belonging to a folio. This can provide a small speedup due to less
> manipulation of the various counters. But more crucially, if removing the
> rmap for all pages of a folio in a batch, there is no need to
> (spuriously) add it to the deferred split list, which saves significant
> cost when there is contention for the split queue lock.
>
> All contained pages are accounted using the order-0 folio (or base page)
> scheme.
>
> Reviewed-by: Yin Fengwei <fengwei.yin@intel.com>
> Reviewed-by: Zi Yan <ziy@nvidia.com>
> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
I have asked for this before but let me be very clear this time: we
need to generalize the existing functions rather than add more
specialized functions. Otherwise it'd get even harder to maintain down
the road.
folio_remove_rmap_range() needs to replace page_remove_rmap(). IOW,
page_remove_rmap() is just a wrapper around folio_remove_rmap_range().
* Re: [PATCH v2 2/3] mm: Implement folio_remove_rmap_range()
2023-07-19 18:23 ` Yu Zhao
@ 2023-07-19 18:46 ` Ryan Roberts
0 siblings, 0 replies; 6+ messages in thread
From: Ryan Roberts @ 2023-07-19 18:46 UTC (permalink / raw)
To: Yu Zhao
Cc: Andrew Morton, Matthew Wilcox, Yin Fengwei, David Hildenbrand,
Yang Shi, Huang, Ying, Zi Yan, linux-kernel, linux-mm
On 19/07/2023 19:23, Yu Zhao wrote:
> On Wed, Jul 19, 2023 at 7:55 AM Ryan Roberts <ryan.roberts@arm.com> wrote:
>>
>> Like page_remove_rmap() but batch-removes the rmap for a range of pages
>> belonging to a folio. This can provide a small speedup due to less
>> manipulation of the various counters. But more crucially, if removing the
>> rmap for all pages of a folio in a batch, there is no need to
>> (spuriously) add it to the deferred split list, which saves significant
>> cost when there is contention for the split queue lock.
>>
>> All contained pages are accounted using the order-0 folio (or base page)
>> scheme.
>>
>> Reviewed-by: Yin Fengwei <fengwei.yin@intel.com>
>> Reviewed-by: Zi Yan <ziy@nvidia.com>
>> Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
>
> I have asked for this before but let me be very clear this time: we
> need to generalize the existing functions rather than add more
> specialized functions. Otherwise it'd get even harder to maintain down
> the road.
Yeah fair enough, my fault; I wrote this before I had your feedback on the other
rmap function and overlooked it when refactoring this. I'll fix it and repost.
>
> folio_remove_rmap_range() needs to replace page_remove_rmap(). IOW,
> page_remove_rmap() is just a wrapper around folio_remove_rmap_range().
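Yes - something like the below is roughly what I have in mind (sketch only;
the compound/PMD-mapped path would keep its existing handling):

void page_remove_rmap(struct page *page, struct vm_area_struct *vma,
		      bool compound)
{
	struct folio *folio = page_folio(page);

	if (!compound) {
		/* PTE-mapped: a range of exactly one page. */
		folio_remove_rmap_range(folio, page, 1, vma);
		return;
	}

	/* ... existing compound (PMD-mapped) handling ... */
}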