* [PATCH 0/4] large folios swap-in: handle refault cases first
@ 2024-04-02 7:32 Barry Song
2024-04-02 7:32 ` [PATCH 1/4] mm: swap: introduce swap_free_nr() for batched swap_free() Barry Song
` (3 more replies)
0 siblings, 4 replies; 9+ messages in thread
From: Barry Song @ 2024-04-02 7:32 UTC (permalink / raw)
To: akpm, linux-mm
Cc: david, willy, ryan.roberts, yosryahmed, hughd, hannes, surenb,
xiang, yuzhao, ying.huang, chrisl, kasong, ziy, baolin.wang,
hanchuanhua, Barry Song
From: Barry Song <v-songbaohua@oppo.com>
This patchset is extracted from the large folio swap-in series[1],
focusing initially on handling the scenario where large folios are found
in the swap cache. This should facilitate code review and allow this
portion to be merged into the MM tree sooner.
It relies on Ryan's swap-out series[2], leveraging the helper function
swap_pte_batch() introduced by that series.
At present, do_swap_page() only encounters a large folio in the swap
cache when the folio has not yet been released by vmscan, i.e. the
refault case. However, the code should remain equally useful once we
support large folio swap-in via swapin_readahead(). This approach can
effectively reduce page faults and eliminate most of the redundant
checks and early exits for MTE restoration in the recent MTE patchset[3].
The large folio swap-in for SWP_SYNCHRONOUS_IO and swapin_readahead()
will be split into separate patch sets and sent at a later time.
Differences from the original large folio swap-in series
- collected Reviewed-by and Acked-by tags;
- renamed swap_nr_free() to swap_free_nr(), per Ryan;
- limited the maximum kernel stack usage of swap_free_nr(), per Ryan;
- added an output argument to swap_pte_batch() to report whether all
  entries are exclusive;
- many cleanups and refinements; handled the corner case where a folio's
  virtual address might not be naturally aligned.
[1] https://lore.kernel.org/linux-mm/20240304081348.197341-1-21cnbao@gmail.com/
[2] https://lore.kernel.org/linux-mm/20240327144537.4165578-1-ryan.roberts@arm.com/
[3] https://lore.kernel.org/linux-mm/20240322114136.61386-1-21cnbao@gmail.com/
Barry Song (1):
mm: swap_pte_batch: add an output argument to return if all swap
entries are exclusive
Chuanhua Han (3):
mm: swap: introduce swap_free_nr() for batched swap_free()
mm: swap: make should_try_to_free_swap() support large-folio
mm: swap: entirely map large folios found in swapcache
include/linux/swap.h | 5 ++++
mm/internal.h | 5 +++-
mm/madvise.c | 2 +-
mm/memory.c | 65 ++++++++++++++++++++++++++++++++++----------
mm/swapfile.c | 51 ++++++++++++++++++++++++++++++++++
5 files changed, 112 insertions(+), 16 deletions(-)
Appendix:
The following program can generate numerous instances where large folios
are hit in the swap cache when 64KiB mTHP is enabled:
#echo always > /sys/kernel/mm/transparent_hugepage/hugepages-64kB/enabled
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>

#define DATA_SIZE (128UL * 1024)
#define PAGE_SIZE (4UL * 1024)
#define LARGE_FOLIO_SIZE (64UL * 1024)

static void *write_data(void *addr)
{
	unsigned long i;

	for (i = 0; i < DATA_SIZE; i += PAGE_SIZE)
		memset((char *)addr + i, (char)i, PAGE_SIZE);
	return NULL;
}

static void *read_data(void *addr)
{
	unsigned long i;

	for (i = 0; i < DATA_SIZE; i += PAGE_SIZE) {
		if (*((char *)addr + i) != (char)i) {
			fprintf(stderr, "mismatched data at offset %lu\n", i);
			_exit(-1);
		}
	}
	return NULL;
}

static void *pgout_data(void *addr)
{
	madvise(addr, DATA_SIZE, MADV_PAGEOUT);
	return NULL;
}

int main(int argc, char **argv)
{
	for (int i = 0; i < 10000; i++) {
		pthread_t tid1, tid2;
		unsigned long aligned_addr;
		void *addr = mmap(NULL, DATA_SIZE * 2, PROT_READ | PROT_WRITE,
				  MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);

		if (addr == MAP_FAILED) {
			perror("fail to mmap");
			return -1;
		}
		/* align the buffer so it can be backed by 64KiB large folios */
		aligned_addr = ((unsigned long)addr + LARGE_FOLIO_SIZE) &
				~(LARGE_FOLIO_SIZE - 1);

		write_data((void *)aligned_addr);

		if (pthread_create(&tid1, NULL, pgout_data, (void *)aligned_addr)) {
			perror("fail to pthread_create");
			return -1;
		}
		if (pthread_create(&tid2, NULL, read_data, (void *)aligned_addr)) {
			perror("fail to pthread_create");
			return -1;
		}
		pthread_join(tid1, NULL);
		pthread_join(tid2, NULL);
		munmap(addr, DATA_SIZE * 2);
	}
	return 0;
}
--
2.34.1
* [PATCH 1/4] mm: swap: introduce swap_free_nr() for batched swap_free()
2024-04-02 7:32 [PATCH 0/4] large folios swap-in: handle refault cases first Barry Song
@ 2024-04-02 7:32 ` Barry Song
2024-04-02 7:32 ` [PATCH 2/4] mm: swap: make should_try_to_free_swap() support large-folio Barry Song
` (2 subsequent siblings)
3 siblings, 0 replies; 9+ messages in thread
From: Barry Song @ 2024-04-02 7:32 UTC (permalink / raw)
To: akpm, linux-mm
Cc: david, willy, ryan.roberts, yosryahmed, hughd, hannes, surenb,
xiang, yuzhao, ying.huang, chrisl, kasong, ziy, baolin.wang,
hanchuanhua, Barry Song
From: Chuanhua Han <hanchuanhua@oppo.com>
While swapping in a large folio, we need to free the swap entries for the
whole folio. To avoid frequently acquiring and releasing swap locks, it is
better to introduce an API for batched freeing.
Signed-off-by: Chuanhua Han <hanchuanhua@oppo.com>
Co-developed-by: Barry Song <v-songbaohua@oppo.com>
Signed-off-by: Barry Song <v-songbaohua@oppo.com>
---
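
A minimal usage sketch (illustration only, not part of this patch; the
real caller is added in patch 4/4): where a caller would otherwise free
the entries of a large folio one by one, it can free them in a single
batched call, assuming the entries are contiguous and within one cluster:

        /* before: one swap_free() call, and one lock cycle, per subpage */
        for (i = 0; i < nr_pages; i++)
                swap_free(swp_entry(swp_type(entry), swp_offset(entry) + i));

        /* after: one batched call for the whole large folio */
        swap_free_nr(entry, nr_pages);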
include/linux/swap.h | 5 +++++
mm/swapfile.c | 51 ++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 56 insertions(+)
diff --git a/include/linux/swap.h b/include/linux/swap.h
index 11c53692f65f..b7a107e983b8 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -483,6 +483,7 @@ extern void swap_shmem_alloc(swp_entry_t);
extern int swap_duplicate(swp_entry_t);
extern int swapcache_prepare(swp_entry_t);
extern void swap_free(swp_entry_t);
+extern void swap_free_nr(swp_entry_t entry, int nr_pages);
extern void swapcache_free_entries(swp_entry_t *entries, int n);
extern void free_swap_and_cache_nr(swp_entry_t entry, int nr);
int swap_type_of(dev_t device, sector_t offset);
@@ -564,6 +565,10 @@ static inline void swap_free(swp_entry_t swp)
{
}
+static inline void swap_free_nr(swp_entry_t entry, int nr_pages)
+{
+}
+
static inline void put_swap_folio(struct folio *folio, swp_entry_t swp)
{
}
diff --git a/mm/swapfile.c b/mm/swapfile.c
index d56cdc547a06..b6a63095ae67 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1357,6 +1357,57 @@ void swap_free(swp_entry_t entry)
__swap_entry_free(p, entry);
}
+/*
+ * Limit the number of swap entries processed per batch to bound the
+ * on-stack bitmap and thus the kernel stack usage.
+ */
+#define SWAP_BATCH_NR (SWAPFILE_CLUSTER > 512 ? 512 : SWAPFILE_CLUSTER)
+
+/*
+ * Called after swapping in a large folio; batch-free the swap entries
+ * for the whole folio. @entry must be for the first subpage, and its
+ * offset must be aligned with nr_pages.
+ */
+void swap_free_nr(swp_entry_t entry, int nr_pages)
+{
+ int i, j;
+ struct swap_cluster_info *ci;
+ struct swap_info_struct *p;
+ unsigned int type = swp_type(entry);
+ unsigned long offset = swp_offset(entry);
+ int batch_nr, remain_nr;
+ DECLARE_BITMAP(usage, SWAP_BATCH_NR) = { 0 };
+
+ /* all swap entries are within a cluster for mTHP */
+ VM_BUG_ON(offset % SWAPFILE_CLUSTER + nr_pages > SWAPFILE_CLUSTER);
+
+ if (nr_pages == 1) {
+ swap_free(entry);
+ return;
+ }
+
+ remain_nr = nr_pages;
+ p = _swap_info_get(entry);
+ if (p) {
+ for (i = 0; i < nr_pages; i += batch_nr) {
+ batch_nr = min_t(int, SWAP_BATCH_NR, remain_nr);
+
+ ci = lock_cluster_or_swap_info(p, offset);
+ for (j = 0; j < batch_nr; j++) {
+ if (__swap_entry_free_locked(p, offset + i + j, 1))
+ __bitmap_set(usage, j, 1);
+ }
+ unlock_cluster_or_swap_info(p, ci);
+
+ for_each_clear_bit(j, usage, batch_nr)
+ free_swap_slot(swp_entry(type, offset + i + j));
+
+ bitmap_clear(usage, 0, SWAP_BATCH_NR);
+ remain_nr -= batch_nr;
+ }
+ }
+}
+
/*
* Called after dropping swapcache to decrease refcnt to swap entries.
*/
--
2.34.1
* [PATCH 2/4] mm: swap: make should_try_to_free_swap() support large-folio
2024-04-02 7:32 [PATCH 0/4] large folios swap-in: handle refault cases first Barry Song
2024-04-02 7:32 ` [PATCH 1/4] mm: swap: introduce swap_free_nr() for batched swap_free() Barry Song
@ 2024-04-02 7:32 ` Barry Song
2024-04-02 7:32 ` [PATCH 3/4] mm: swap_pte_batch: add an output argument to return if all swap entries are exclusive Barry Song
2024-04-02 7:32 ` [PATCH 4/4] mm: swap: entirely map large folios found in swapcache Barry Song
3 siblings, 0 replies; 9+ messages in thread
From: Barry Song @ 2024-04-02 7:32 UTC (permalink / raw)
To: akpm, linux-mm
Cc: david, willy, ryan.roberts, yosryahmed, hughd, hannes, surenb,
xiang, yuzhao, ying.huang, chrisl, kasong, ziy, baolin.wang,
hanchuanhua, Barry Song
From: Chuanhua Han <hanchuanhua@oppo.com>
The function should_try_to_free_swap() operates under the assumption that
swap-in always occurs at normal page granularity, i.e. folio_nr_pages()
== 1. However, in reality, for large folios, add_to_swap_cache() will
have invoked folio_ref_add(folio, nr), so the folio carries one swap
cache reference per subpage. To accommodate large folio swap-in, this
patch eliminates this assumption.
Signed-off-by: Chuanhua Han <hanchuanhua@oppo.com>
Co-developed-by: Barry Song <v-songbaohua@oppo.com>
Signed-off-by: Barry Song <v-songbaohua@oppo.com>
Acked-by: Chris Li <chrisl@kernel.org>
Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
---
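
To illustrate the expected counts (a simplified sketch, assuming the
folio sits in the swap cache and only the faulting task holds an extra
reference):

        /*
         * swap cache : folio_nr_pages(folio) references, taken via
         *              folio_ref_add(folio, nr) in add_to_swap_cache()
         * fault path : 1 reference
         *
         * => likely exclusive iff
         *    folio_ref_count(folio) == 1 + folio_nr_pages(folio)
         */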
mm/memory.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/memory.c b/mm/memory.c
index 010e7bb20d2b..f6377cc4c1ca 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3850,7 +3850,7 @@ static inline bool should_try_to_free_swap(struct folio *folio,
* reference only in case it's likely that we'll be the exlusive user.
*/
return (fault_flags & FAULT_FLAG_WRITE) && !folio_test_ksm(folio) &&
- folio_ref_count(folio) == 2;
+ folio_ref_count(folio) == (1 + folio_nr_pages(folio));
}
static vm_fault_t pte_marker_clear(struct vm_fault *vmf)
--
2.34.1
* [PATCH 3/4] mm: swap_pte_batch: add an output argument to return if all swap entries are exclusive
2024-04-02 7:32 [PATCH 0/4] large folios swap-in: handle refault cases first Barry Song
2024-04-02 7:32 ` [PATCH 1/4] mm: swap: introduce swap_free_nr() for batched swap_free() Barry Song
2024-04-02 7:32 ` [PATCH 2/4] mm: swap: make should_try_to_free_swap() support large-folio Barry Song
@ 2024-04-02 7:32 ` Barry Song
2024-04-02 7:32 ` [PATCH 4/4] mm: swap: entirely map large folios found in swapcache Barry Song
3 siblings, 0 replies; 9+ messages in thread
From: Barry Song @ 2024-04-02 7:32 UTC (permalink / raw)
To: akpm, linux-mm
Cc: david, willy, ryan.roberts, yosryahmed, hughd, hannes, surenb,
xiang, yuzhao, ying.huang, chrisl, kasong, ziy, baolin.wang,
hanchuanhua, Barry Song
From: Barry Song <v-songbaohua@oppo.com>
Add a boolean output argument, any_shared, to swap_pte_batch(). If any
of the batched swap entries are non-exclusive, any_shared is set to
true. do_swap_page() can then use this information to determine whether
the entire large folio can be reused.
Signed-off-by: Barry Song <v-songbaohua@oppo.com>
---
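
A simplified view of the intended usage in do_swap_page() (see patch
4/4 for the actual code):

        bool any_swap_shared = false;
        int nr;

        nr = swap_pte_batch(folio_pte, folio_nr_pages(folio), folio->swap,
                            &any_swap_shared);
        ...
        /* reuse the whole large folio only if no entry is shared */
        if (nr > 1 && any_swap_shared)
                exclusive = false;

Callers that don't care, such as madvise_free_pte_range() and
zap_pte_range(), simply pass NULL.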
mm/internal.h | 5 ++++-
mm/madvise.c | 2 +-
mm/memory.c | 2 +-
3 files changed, 6 insertions(+), 3 deletions(-)
diff --git a/mm/internal.h b/mm/internal.h
index 9512de7398d5..ffdd1b049c77 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -216,7 +216,7 @@ static inline int folio_pte_batch(struct folio *folio, unsigned long addr,
* Return: the number of table entries in the batch.
*/
static inline int swap_pte_batch(pte_t *start_ptep, int max_nr,
- swp_entry_t entry)
+ swp_entry_t entry, bool *any_shared)
{
const pte_t *end_ptep = start_ptep + max_nr;
unsigned long expected_offset = swp_offset(entry) + 1;
@@ -239,6 +239,9 @@ static inline int swap_pte_batch(pte_t *start_ptep, int max_nr,
swp_offset(entry) != expected_offset)
break;
+ if (any_shared)
+ *any_shared |= !pte_swp_exclusive(pte);
+
expected_offset++;
ptep++;
}
diff --git a/mm/madvise.c b/mm/madvise.c
index bd00b83e7c50..d4624fb92665 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -672,7 +672,7 @@ static int madvise_free_pte_range(pmd_t *pmd, unsigned long addr,
entry = pte_to_swp_entry(ptent);
if (!non_swap_entry(entry)) {
max_nr = (end - addr) / PAGE_SIZE;
- nr = swap_pte_batch(pte, max_nr, entry);
+ nr = swap_pte_batch(pte, max_nr, entry, NULL);
nr_swap -= nr;
free_swap_and_cache_nr(entry, nr);
clear_not_present_full_ptes(mm, addr, pte, nr, tlb->fullmm);
diff --git a/mm/memory.c b/mm/memory.c
index f6377cc4c1ca..0a80e75af22c 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1632,7 +1632,7 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
folio_put(folio);
} else if (!non_swap_entry(entry)) {
max_nr = (end - addr) / PAGE_SIZE;
- nr = swap_pte_batch(pte, max_nr, entry);
+ nr = swap_pte_batch(pte, max_nr, entry, NULL);
/* Genuine swap entries, hence a private anon pages */
if (!should_zap_cows(details))
continue;
--
2.34.1
* [PATCH 4/4] mm: swap: entirely map large folios found in swapcache
2024-04-02 7:32 [PATCH 0/4] large folios swap-in: handle refault cases first Barry Song
` (2 preceding siblings ...)
2024-04-02 7:32 ` [PATCH 3/4] mm: swap_pte_batch: add an output argument to return if all swap entries are exclusive Barry Song
@ 2024-04-02 7:32 ` Barry Song
2024-04-07 2:24 ` Barry Song
2024-04-08 7:18 ` Huang, Ying
3 siblings, 2 replies; 9+ messages in thread
From: Barry Song @ 2024-04-02 7:32 UTC (permalink / raw)
To: akpm, linux-mm
Cc: david, willy, ryan.roberts, yosryahmed, hughd, hannes, surenb,
xiang, yuzhao, ying.huang, chrisl, kasong, ziy, baolin.wang,
hanchuanhua, Barry Song
From: Chuanhua Han <hanchuanhua@oppo.com>
When a large folio is found in the swapcache, the current implementation
requires calling do_swap_page() nr_pages times, resulting in nr_pages
page faults. This patch opts to map the entire large folio at once to
minimize page faults. Additionally, redundant checks and early exits
for ARM64 MTE restoring are removed.
Signed-off-by: Chuanhua Han <hanchuanhua@oppo.com>
Co-developed-by: Barry Song <v-songbaohua@oppo.com>
Signed-off-by: Barry Song <v-songbaohua@oppo.com>
---
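
In short, once the whole folio passes the range and batch checks, the
fault handler performs roughly the following (simplified from the diff
below):

        nr_pages = folio_nr_pages(folio);
        swap_free_nr(entry, nr_pages);
        folio_ref_add(folio, nr_pages - 1);
        add_mm_counter(vma->vm_mm, MM_ANONPAGES, nr_pages);
        add_mm_counter(vma->vm_mm, MM_SWAPENTS, -nr_pages);
        folio_add_anon_rmap_ptes(folio, page, nr_pages, vma, start_address,
                                 rmap_flags);
        set_ptes(vma->vm_mm, start_address, start_pte, pte, nr_pages);
        update_mmu_cache_range(vmf, vma, start_address, start_pte, nr_pages);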
mm/memory.c | 61 ++++++++++++++++++++++++++++++++++++++++++-----------
1 file changed, 49 insertions(+), 12 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index 0a80e75af22c..5f52db6eb494 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3941,6 +3941,10 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
pte_t pte;
vm_fault_t ret = 0;
void *shadow = NULL;
+ int nr_pages = 1;
+ unsigned long start_address = vmf->address;
+ pte_t *start_pte = vmf->pte;
+ bool any_swap_shared = false;
if (!pte_unmap_same(vmf))
goto out;
@@ -4131,6 +4135,30 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
*/
vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address,
&vmf->ptl);
+
+ /* We hit large folios in swapcache */
+ if (start_pte && folio_test_large(folio) && folio_test_swapcache(folio)) {
+ unsigned long folio_start = vmf->address - folio_page_idx(folio, page) * PAGE_SIZE;
+ unsigned long folio_end = folio_start + folio_nr_pages(folio) * PAGE_SIZE;
+ pte_t *folio_pte = vmf->pte - folio_page_idx(folio, page);
+ int nr = folio_nr_pages(folio);
+
+ if (unlikely(folio_start < max(vmf->address & PMD_MASK, vma->vm_start)))
+ goto check_pte;
+ if (unlikely(folio_end > pmd_addr_end(vmf->address, vma->vm_end)))
+ goto check_pte;
+
+ if (swap_pte_batch(folio_pte, nr, folio->swap, &any_swap_shared) != nr)
+ goto check_pte;
+
+ start_address = folio_start;
+ start_pte = folio_pte;
+ nr_pages = nr;
+ entry = folio->swap;
+ page = &folio->page;
+ }
+
+check_pte:
if (unlikely(!vmf->pte || !pte_same(ptep_get(vmf->pte), vmf->orig_pte)))
goto out_nomap;
@@ -4184,6 +4212,10 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
*/
exclusive = false;
}
+
+ /* Reuse the whole large folio iff all entries are exclusive */
+ if (nr_pages > 1 && any_swap_shared)
+ exclusive = false;
}
/*
@@ -4198,12 +4230,14 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
* We're already holding a reference on the page but haven't mapped it
* yet.
*/
- swap_free(entry);
+ swap_free_nr(entry, nr_pages);
if (should_try_to_free_swap(folio, vma, vmf->flags))
folio_free_swap(folio);
- inc_mm_counter(vma->vm_mm, MM_ANONPAGES);
- dec_mm_counter(vma->vm_mm, MM_SWAPENTS);
+ folio_ref_add(folio, nr_pages - 1);
+ add_mm_counter(vma->vm_mm, MM_ANONPAGES, nr_pages);
+ add_mm_counter(vma->vm_mm, MM_SWAPENTS, -nr_pages);
+
pte = mk_pte(page, vma->vm_page_prot);
/*
@@ -4213,33 +4247,36 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
* exclusivity.
*/
if (!folio_test_ksm(folio) &&
- (exclusive || folio_ref_count(folio) == 1)) {
+ (exclusive || (folio_ref_count(folio) == nr_pages &&
+ folio_nr_pages(folio) == nr_pages))) {
if (vmf->flags & FAULT_FLAG_WRITE) {
pte = maybe_mkwrite(pte_mkdirty(pte), vma);
vmf->flags &= ~FAULT_FLAG_WRITE;
}
rmap_flags |= RMAP_EXCLUSIVE;
}
- flush_icache_page(vma, page);
+ flush_icache_pages(vma, page, nr_pages);
if (pte_swp_soft_dirty(vmf->orig_pte))
pte = pte_mksoft_dirty(pte);
if (pte_swp_uffd_wp(vmf->orig_pte))
pte = pte_mkuffd_wp(pte);
- vmf->orig_pte = pte;
/* ksm created a completely new copy */
if (unlikely(folio != swapcache && swapcache)) {
- folio_add_new_anon_rmap(folio, vma, vmf->address);
+ folio_add_new_anon_rmap(folio, vma, start_address);
folio_add_lru_vma(folio, vma);
+ } else if (!folio_test_anon(folio)) {
+ folio_add_new_anon_rmap(folio, vma, start_address);
} else {
- folio_add_anon_rmap_pte(folio, page, vma, vmf->address,
- rmap_flags);
+ folio_add_anon_rmap_ptes(folio, page, nr_pages, vma, start_address,
+ rmap_flags);
}
VM_BUG_ON(!folio_test_anon(folio) ||
(pte_write(pte) && !PageAnonExclusive(page)));
- set_pte_at(vma->vm_mm, vmf->address, vmf->pte, pte);
- arch_do_swap_page(vma->vm_mm, vma, vmf->address, pte, vmf->orig_pte);
+ set_ptes(vma->vm_mm, start_address, start_pte, pte, nr_pages);
+ vmf->orig_pte = ptep_get(vmf->pte);
+ arch_do_swap_page(vma->vm_mm, vma, start_address, pte, pte);
folio_unlock(folio);
if (folio != swapcache && swapcache) {
@@ -4263,7 +4300,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
}
/* No need to invalidate - it was non-present before */
- update_mmu_cache_range(vmf, vma, vmf->address, vmf->pte, 1);
+ update_mmu_cache_range(vmf, vma, start_address, start_pte, nr_pages);
unlock:
if (vmf->pte)
pte_unmap_unlock(vmf->pte, vmf->ptl);
--
2.34.1
* Re: [PATCH 4/4] mm: swap: entirely map large folios found in swapcache
2024-04-02 7:32 ` [PATCH 4/4] mm: swap: entirely map large folios found in swapcache Barry Song
@ 2024-04-07 2:24 ` Barry Song
2024-04-08 7:18 ` Huang, Ying
1 sibling, 0 replies; 9+ messages in thread
From: Barry Song @ 2024-04-07 2:24 UTC (permalink / raw)
To: akpm, linux-mm
Cc: david, willy, ryan.roberts, yosryahmed, hughd, hannes, surenb,
xiang, yuzhao, ying.huang, chrisl, kasong, ziy, baolin.wang,
hanchuanhua, Barry Song
On Tue, Apr 2, 2024 at 8:33 PM Barry Song <21cnbao@gmail.com> wrote:
>
> From: Chuanhua Han <hanchuanhua@oppo.com>
>
> When a large folio is found in the swapcache, the current implementation
> requires calling do_swap_page() nr_pages times, resulting in nr_pages
> page faults. This patch opts to map the entire large folio at once to
> minimize page faults. Additionally, redundant checks and early exits
> for ARM64 MTE restoring are removed.
>
> Signed-off-by: Chuanhua Han <hanchuanhua@oppo.com>
> Co-developed-by: Barry Song <v-songbaohua@oppo.com>
> Signed-off-by: Barry Song <v-songbaohua@oppo.com>
> ---
> mm/memory.c | 61 ++++++++++++++++++++++++++++++++++++++++++-----------
> 1 file changed, 49 insertions(+), 12 deletions(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index 0a80e75af22c..5f52db6eb494 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -3941,6 +3941,10 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> pte_t pte;
> vm_fault_t ret = 0;
> void *shadow = NULL;
> + int nr_pages = 1;
> + unsigned long start_address = vmf->address;
> + pte_t *start_pte = vmf->pte;
> + bool any_swap_shared = false;
>
> if (!pte_unmap_same(vmf))
> goto out;
> @@ -4131,6 +4135,30 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> */
> vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address,
> &vmf->ptl);
> +
> + /* We hit large folios in swapcache */
> + if (start_pte && folio_test_large(folio) && folio_test_swapcache(folio)) {
> + unsigned long folio_start = vmf->address - folio_page_idx(folio, page) * PAGE_SIZE;
> + unsigned long folio_end = folio_start + folio_nr_pages(folio) * PAGE_SIZE;
> + pte_t *folio_pte = vmf->pte - folio_page_idx(folio, page);
> + int nr = folio_nr_pages(folio);
> +
> + if (unlikely(folio_start < max(vmf->address & PMD_MASK, vma->vm_start)))
> + goto check_pte;
> + if (unlikely(folio_end > pmd_addr_end(vmf->address, vma->vm_end)))
> + goto check_pte;
> +
> + if (swap_pte_batch(folio_pte, nr, folio->swap, &any_swap_shared) != nr)
> + goto check_pte;
> +
> + start_address = folio_start;
> + start_pte = folio_pte;
> + nr_pages = nr;
> + entry = folio->swap;
> + page = &folio->page;
> + }
> +
> +check_pte:
> if (unlikely(!vmf->pte || !pte_same(ptep_get(vmf->pte), vmf->orig_pte)))
> goto out_nomap;
>
> @@ -4184,6 +4212,10 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> */
> exclusive = false;
> }
> +
> + /* Reuse the whole large folio iff all entries are exclusive */
> + if (nr_pages > 1 && any_swap_shared)
> + exclusive = false;
> }
>
> /*
> @@ -4198,12 +4230,14 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> * We're already holding a reference on the page but haven't mapped it
> * yet.
> */
> - swap_free(entry);
> + swap_free_nr(entry, nr_pages);
> if (should_try_to_free_swap(folio, vma, vmf->flags))
> folio_free_swap(folio);
>
> - inc_mm_counter(vma->vm_mm, MM_ANONPAGES);
> - dec_mm_counter(vma->vm_mm, MM_SWAPENTS);
> + folio_ref_add(folio, nr_pages - 1);
> + add_mm_counter(vma->vm_mm, MM_ANONPAGES, nr_pages);
> + add_mm_counter(vma->vm_mm, MM_SWAPENTS, -nr_pages);
> +
> pte = mk_pte(page, vma->vm_page_prot);
>
> /*
> @@ -4213,33 +4247,36 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> * exclusivity.
> */
> if (!folio_test_ksm(folio) &&
> - (exclusive || folio_ref_count(folio) == 1)) {
> + (exclusive || (folio_ref_count(folio) == nr_pages &&
> + folio_nr_pages(folio) == nr_pages))) {
> if (vmf->flags & FAULT_FLAG_WRITE) {
> pte = maybe_mkwrite(pte_mkdirty(pte), vma);
> vmf->flags &= ~FAULT_FLAG_WRITE;
> }
> rmap_flags |= RMAP_EXCLUSIVE;
> }
> - flush_icache_page(vma, page);
> + flush_icache_pages(vma, page, nr_pages);
> if (pte_swp_soft_dirty(vmf->orig_pte))
> pte = pte_mksoft_dirty(pte);
> if (pte_swp_uffd_wp(vmf->orig_pte))
> pte = pte_mkuffd_wp(pte);
> - vmf->orig_pte = pte;
>
> /* ksm created a completely new copy */
> if (unlikely(folio != swapcache && swapcache)) {
> - folio_add_new_anon_rmap(folio, vma, vmf->address);
> + folio_add_new_anon_rmap(folio, vma, start_address);
> folio_add_lru_vma(folio, vma);
> + } else if (!folio_test_anon(folio)) {
> + folio_add_new_anon_rmap(folio, vma, start_address);
The above two lines of code should be removed, since we're solely
addressing refault cases of large folios in this patchset.
In the refault case we are always dealing with large folios that are
already anonymous. However, as we prepare to address non-refault cases
of large folio swap-in, per David's suggestion in a separate thread,
we'll need to add a wrapper function, folio_add_shared_new_anon_rmap(),
to accommodate non-exclusive new anonymous folios[1].
[1] https://lore.kernel.org/linux-mm/CAGsJ_4xKTj1PwmJAAZAzAvEN53kze5wSPHb01pVg9LBy80axGA@mail.gmail.com/
> } else {
> - folio_add_anon_rmap_pte(folio, page, vma, vmf->address,
> - rmap_flags);
> + folio_add_anon_rmap_ptes(folio, page, nr_pages, vma, start_address,
> + rmap_flags);
> }
>
> VM_BUG_ON(!folio_test_anon(folio) ||
> (pte_write(pte) && !PageAnonExclusive(page)));
> - set_pte_at(vma->vm_mm, vmf->address, vmf->pte, pte);
> - arch_do_swap_page(vma->vm_mm, vma, vmf->address, pte, vmf->orig_pte);
> + set_ptes(vma->vm_mm, start_address, start_pte, pte, nr_pages);
> + vmf->orig_pte = ptep_get(vmf->pte);
> + arch_do_swap_page(vma->vm_mm, vma, start_address, pte, pte);
>
> folio_unlock(folio);
> if (folio != swapcache && swapcache) {
> @@ -4263,7 +4300,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> }
>
> /* No need to invalidate - it was non-present before */
> - update_mmu_cache_range(vmf, vma, vmf->address, vmf->pte, 1);
> + update_mmu_cache_range(vmf, vma, start_address, start_pte, nr_pages);
> unlock:
> if (vmf->pte)
> pte_unmap_unlock(vmf->pte, vmf->ptl);
> --
> 2.34.1
>
Thanks
Barry
* Re: [PATCH 4/4] mm: swap: entirely map large folios found in swapcache
2024-04-02 7:32 ` [PATCH 4/4] mm: swap: entirely map large folios found in swapcache Barry Song
2024-04-07 2:24 ` Barry Song
@ 2024-04-08 7:18 ` Huang, Ying
2024-04-08 7:27 ` Barry Song
1 sibling, 1 reply; 9+ messages in thread
From: Huang, Ying @ 2024-04-08 7:18 UTC (permalink / raw)
To: Barry Song
Cc: akpm, linux-mm, david, willy, ryan.roberts, yosryahmed, hughd,
hannes, surenb, xiang, yuzhao, chrisl, kasong, ziy, baolin.wang,
hanchuanhua, Barry Song
Barry Song <21cnbao@gmail.com> writes:
> From: Chuanhua Han <hanchuanhua@oppo.com>
>
> When a large folio is found in the swapcache, the current implementation
> requires calling do_swap_page() nr_pages times, resulting in nr_pages
> page faults. This patch opts to map the entire large folio at once to
> minimize page faults. Additionally, redundant checks and early exits
> for ARM64 MTE restoring are removed.
For large folios in reclaiming, it makes sense to restore all PTE
mappings to the large folio to reduce the number of page faults.
But for large folios swapped in, I think that it's better to map one PTE
which triggers the page fault only. Because this makes us get the
opportunity to trap the page accesses to the sub-pages of the large
folio that is swapped in ahead (kind of swap readahead). Then we can
decide the order of large folio swapin based on the readahead window
information. That is, we may need to check PageReadahead() to decide
whether to map all PTEs in the future.
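Roughly, a future swap-in path could do something like the sketch below;
this is only an illustration of the idea (names indicative), not a
concrete proposal:

        if (!folio_test_readahead(folio))
                nr_pages = folio_nr_pages(folio); /* map all PTEs at once */
        else
                nr_pages = 1; /* keep trapping accesses to readahead sub-pages */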
--
Best Regards,
Huang, Ying
* Re: [PATCH 4/4] mm: swap: entirely map large folios found in swapcache
2024-04-08 7:18 ` Huang, Ying
@ 2024-04-08 7:27 ` Barry Song
2024-04-08 7:49 ` Huang, Ying
0 siblings, 1 reply; 9+ messages in thread
From: Barry Song @ 2024-04-08 7:27 UTC (permalink / raw)
To: Huang, Ying
Cc: akpm, linux-mm, david, willy, ryan.roberts, yosryahmed, hughd,
hannes, surenb, xiang, yuzhao, chrisl, kasong, ziy, baolin.wang,
hanchuanhua, Barry Song
On Mon, Apr 8, 2024 at 7:20 PM Huang, Ying <ying.huang@intel.com> wrote:
>
> Barry Song <21cnbao@gmail.com> writes:
>
> > From: Chuanhua Han <hanchuanhua@oppo.com>
> >
> > When a large folio is found in the swapcache, the current implementation
> > requires calling do_swap_page() nr_pages times, resulting in nr_pages
> > page faults. This patch opts to map the entire large folio at once to
> > minimize page faults. Additionally, redundant checks and early exits
> > for ARM64 MTE restoring are removed.
>
> For large folios in reclaiming, it makes sense to restore all PTE
> mappings to the large folio to reduce the number of page faults.
>
Indeed, this patch addresses the refault case first, much less controversial
then :-)
> But for large folios swapped in, I think that it's better to map one PTE
> which triggers the page fault only. Because this makes us get the
> opportunity to trap the page accesses to the sub-pages of the large
> folio that is swapped in ahead (kind of swap readahead). Then we can
> decide the order of large folio swapin based on the readahead window
> information. That is, we may need to check PageReadahead() to decide
> whether to map all PTEs in the future.
Another scenario occurs when a process opts to utilize large folios for
swap_readahead. Subsequently, another process encounters the large
folios introduced by the former process. In this case, would it be optimal
to fully map them just like the refault case?
>
> --
> Best Regards,
> Huang, Ying
Thanks
Barry
* Re: [PATCH 4/4] mm: swap: entirely map large folios found in swapcache
2024-04-08 7:27 ` Barry Song
@ 2024-04-08 7:49 ` Huang, Ying
0 siblings, 0 replies; 9+ messages in thread
From: Huang, Ying @ 2024-04-08 7:49 UTC (permalink / raw)
To: Barry Song
Cc: akpm, linux-mm, david, willy, ryan.roberts, yosryahmed, hughd,
hannes, surenb, xiang, yuzhao, chrisl, kasong, ziy, baolin.wang,
hanchuanhua, Barry Song
Barry Song <21cnbao@gmail.com> writes:
> On Mon, Apr 8, 2024 at 7:20 PM Huang, Ying <ying.huang@intel.com> wrote:
>>
>> Barry Song <21cnbao@gmail.com> writes:
>>
>> > From: Chuanhua Han <hanchuanhua@oppo.com>
>> >
>> > When a large folio is found in the swapcache, the current implementation
>> > requires calling do_swap_page() nr_pages times, resulting in nr_pages
>> > page faults. This patch opts to map the entire large folio at once to
>> > minimize page faults. Additionally, redundant checks and early exits
>> > for ARM64 MTE restoring are removed.
>>
>> For large folios in reclaiming, it makes sense to restore all PTE
>> mappings to the large folio to reduce the number of page faults.
>>
>
> Indeed, this patch addresses the refault case first, much less controversial
> then :-)
>
>> But for large folios swapped in, I think that it's better to map one PTE
>> which triggers the page fault only. Because this makes us get the
>> opportunity to trap the page accesses to the sub-pages of the large
>> folio that is swapped in ahead (kind of swap readahead). Then we can
>> decide the order of large folio swapin based on the readahead window
>> information. That is, we may need to check PageReadahead() to decide
>> whether to map all PTEs in the future.
>
> Another scenario occurs when a process opts to utilize large folios for
> swap_readahead. Subsequently, another process encounters the large
> folios introduced by the former process. In this case, would it be optimal
> to fully map them just like the refault case?
We only need to trap the first access to a readahead sub-page. So we
can map PTEs for all sub-pages that don't have PageReadahead() set.
IIUC, the readahead flag is currently per-folio; we may need to change
it to per-sub-page when needed.
--
Best Regards,
Huang, Ying