From: Barry Song <21cnbao@gmail.com>
To: akpm@linux-foundation.org, linux-mm@kvack.org
Cc: david@redhat.com, willy@infradead.org, ryan.roberts@arm.com,
	yosryahmed@google.com, hughd@google.com, hannes@cmpxchg.org,
	surenb@google.com, xiang@kernel.org, yuzhao@google.com,
	ying.huang@intel.com, chrisl@kernel.org, kasong@tencent.com,
	ziy@nvidia.com, baolin.wang@linux.alibaba.com,
	hanchuanhua@oppo.com, Barry Song <v-songbaohua@oppo.com>
Subject: Re: [PATCH 4/4] mm: swap: entirely map large folios found in swapcache
Date: Sun, 7 Apr 2024 14:24:29 +1200
Message-ID: <CAGsJ_4z6-_Kr_oMSfDEO7SG8b0wPFYC5HMoOZYX-rEW54G-ULA@mail.gmail.com>
In-Reply-To: <20240402073237.240995-5-21cnbao@gmail.com>

On Tue, Apr 2, 2024 at 8:33 PM Barry Song <21cnbao@gmail.com> wrote:
>
> From: Chuanhua Han <hanchuanhua@oppo.com>
>
> When a large folio is found in the swapcache, the current implementation
> requires calling do_swap_page() nr_pages times, resulting in nr_pages
> page faults. This patch opts to map the entire large folio at once to
> minimize page faults. Additionally, redundant checks and early exits
> for ARM64 MTE restoring are removed.
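
To put a number on the saving (illustrative only): for a 64KiB folio of
sixteen 4KiB subpages, the old path takes sixteen minor faults, each
running the full do_swap_page() path to install a single PTE, while the
new path takes one fault that installs all sixteen. A minimal sketch of
the difference, not code from the patch itself:

	/*
	 * Illustrative only, sixteen 4KiB subpages:
	 * before: 16 x do_swap_page() -> 16 x set_pte_at(), one PTE each
	 * after:   1 x do_swap_page() -> set_ptes(mm, start_address,
	 *                                         start_pte, pte, 16)
	 */
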
>
> Signed-off-by: Chuanhua Han <hanchuanhua@oppo.com>
> Co-developed-by: Barry Song <v-songbaohua@oppo.com>
> Signed-off-by: Barry Song <v-songbaohua@oppo.com>
> ---
>  mm/memory.c | 61 ++++++++++++++++++++++++++++++++++++++++++-----------
>  1 file changed, 49 insertions(+), 12 deletions(-)
>
> diff --git a/mm/memory.c b/mm/memory.c
> index 0a80e75af22c..5f52db6eb494 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -3941,6 +3941,10 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>         pte_t pte;
>         vm_fault_t ret = 0;
>         void *shadow = NULL;
> +       int nr_pages = 1;
> +       unsigned long start_address = vmf->address;
> +       pte_t *start_pte = vmf->pte;
> +       bool any_swap_shared = false;
>
>         if (!pte_unmap_same(vmf))
>                 goto out;
> @@ -4131,6 +4135,30 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>          */
>         vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address,
>                         &vmf->ptl);
> +
> +       /* We hit large folios in swapcache */
> +       if (start_pte && folio_test_large(folio) && folio_test_swapcache(folio)) {
> +               unsigned long folio_start = vmf->address - folio_page_idx(folio, page) * PAGE_SIZE;
> +               unsigned long folio_end = folio_start + folio_nr_pages(folio) * PAGE_SIZE;
> +               pte_t *folio_pte = vmf->pte - folio_page_idx(folio, page);
> +               int nr = folio_nr_pages(folio);
> +
> +               if (unlikely(folio_start < max(vmf->address & PMD_MASK, vma->vm_start)))
> +                       goto check_pte;
> +               if (unlikely(folio_end > pmd_addr_end(vmf->address, vma->vm_end)))
> +                       goto check_pte;
> +
> +               if (swap_pte_batch(folio_pte, nr, folio->swap, &any_swap_shared) != nr)
> +                       goto check_pte;
> +
> +               start_address = folio_start;
> +               start_pte = folio_pte;
> +               nr_pages = nr;
> +               entry = folio->swap;
> +               page = &folio->page;
> +       }
> +
> +check_pte:
>         if (unlikely(!vmf->pte || !pte_same(ptep_get(vmf->pte), vmf->orig_pte)))
>                 goto out_nomap;
>
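
A worked example of the window checks above may help (numbers are
illustrative: 4KiB pages, a 64KiB folio, nr == 16):

	/*
	 * vmf->address     = 0x7f1234562000 (fault hits subpage 2)
	 * folio_page_idx() = 2
	 * folio_start      = 0x7f1234560000 (address - 2 * PAGE_SIZE)
	 * folio_end        = 0x7f1234570000 (folio_start + 16 * PAGE_SIZE)
	 *
	 * The batch is taken only if [folio_start, folio_end) lies within
	 * both the VMA and the PMD table covering the fault, and
	 * swap_pte_batch() finds 16 PTEs holding consecutive swap entries
	 * starting at folio->swap.  Any failure falls through to check_pte
	 * with nr_pages == 1, i.e. the old single-page behaviour.
	 */
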
> @@ -4184,6 +4212,10 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>                          */
>                         exclusive = false;
>                 }
> +
> +               /* Reuse the whole large folio iff all entries are exclusive */
> +               if (nr_pages > 1 && any_swap_shared)
> +                       exclusive = false;
>         }
>
>         /*
> @@ -4198,12 +4230,14 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>          * We're already holding a reference on the page but haven't mapped it
>          * yet.
>          */
> -       swap_free(entry);
> +       swap_free_nr(entry, nr_pages);
>         if (should_try_to_free_swap(folio, vma, vmf->flags))
>                 folio_free_swap(folio);
>
> -       inc_mm_counter(vma->vm_mm, MM_ANONPAGES);
> -       dec_mm_counter(vma->vm_mm, MM_SWAPENTS);
> +       folio_ref_add(folio, nr_pages - 1);
> +       add_mm_counter(vma->vm_mm, MM_ANONPAGES, nr_pages);
> +       add_mm_counter(vma->vm_mm, MM_SWAPENTS, -nr_pages);
> +
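
The reference arithmetic above follows from the batching (illustrative,
nr_pages == 16): the swapcache lookup already holds one folio reference,
and each of the sixteen PTE mappings installed below needs one, hence
folio_ref_add(folio, nr_pages - 1); the MM_ANONPAGES and MM_SWAPENTS
counters likewise move by nr_pages instead of 1.
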
>         pte = mk_pte(page, vma->vm_page_prot);
>
>         /*
> @@ -4213,33 +4247,36 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>          * exclusivity.
>          */
>         if (!folio_test_ksm(folio) &&
> -           (exclusive || folio_ref_count(folio) == 1)) {
> +           (exclusive || (folio_ref_count(folio) == nr_pages &&
> +                          folio_nr_pages(folio) == nr_pages))) {
>                 if (vmf->flags & FAULT_FLAG_WRITE) {
>                         pte = maybe_mkwrite(pte_mkdirty(pte), vma);
>                         vmf->flags &= ~FAULT_FLAG_WRITE;
>                 }
>                 rmap_flags |= RMAP_EXCLUSIVE;
>         }
> -       flush_icache_page(vma, page);
> +       flush_icache_pages(vma, page, nr_pages);
>         if (pte_swp_soft_dirty(vmf->orig_pte))
>                 pte = pte_mksoft_dirty(pte);
>         if (pte_swp_uffd_wp(vmf->orig_pte))
>                 pte = pte_mkuffd_wp(pte);
> -       vmf->orig_pte = pte;
>
>         /* ksm created a completely new copy */
>         if (unlikely(folio != swapcache && swapcache)) {
> -               folio_add_new_anon_rmap(folio, vma, vmf->address);
> +               folio_add_new_anon_rmap(folio, vma, start_address);
>                 folio_add_lru_vma(folio, vma);
> +       } else if (!folio_test_anon(folio)) {
> +               folio_add_new_anon_rmap(folio, vma, start_address);

The above two lines of code should be removed: since this patchset only
addresses the refault case of large folios, any folio we find in the
swapcache here is already anonymous, so the !folio_test_anon(folio)
branch is dead code. However, as we prepare to address non-refault
large folio swap-in, per David's suggestion in a separate thread, we'll
need to extend a wrapper function, folio_add_shared_new_anon_rmap(), to
accommodate non-exclusive new anonymous folios [1].

[1] https://lore.kernel.org/linux-mm/CAGsJ_4xKTj1PwmJAAZAzAvEN53kze5wSPHb01pVg9LBy80axGA@mail.gmail.com/
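
For illustration, the eventual call site might then look roughly like
the sketch below. folio_add_shared_new_anon_rmap() is hypothetical --
its name and signature are assumptions drawn from the thread in [1] --
while the other calls are existing APIs:

	/* hypothetical sketch, not part of this patch */
	if (!folio_test_anon(folio)) {
		if (exclusive)
			folio_add_new_anon_rmap(folio, vma, start_address);
		else
			/* new anon folio whose swap entries are still shared */
			folio_add_shared_new_anon_rmap(folio, vma,
						       start_address);
	}
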

>         } else {
> -               folio_add_anon_rmap_pte(folio, page, vma, vmf->address,
> -                                       rmap_flags);
> +               folio_add_anon_rmap_ptes(folio, page, nr_pages, vma, start_address,
> +                                        rmap_flags);
>         }
>
>         VM_BUG_ON(!folio_test_anon(folio) ||
>                         (pte_write(pte) && !PageAnonExclusive(page)));
> -       set_pte_at(vma->vm_mm, vmf->address, vmf->pte, pte);
> -       arch_do_swap_page(vma->vm_mm, vma, vmf->address, pte, vmf->orig_pte);
> +       set_ptes(vma->vm_mm, start_address, start_pte, pte, nr_pages);
> +       vmf->orig_pte = ptep_get(vmf->pte);
> +       arch_do_swap_page(vma->vm_mm, vma, start_address, pte, pte);
>
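
For reference, the generic set_ptes() in include/linux/pgtable.h
effectively expands to the loop below, writing nr consecutive PTEs with
the PFN advancing at each step (lazy-MMU and page-table-check
bookkeeping omitted; architectures may override it):

	for (;;) {
		set_pte_at(mm, addr, ptep, pte);
		if (--nr == 0)
			break;
		ptep++;
		addr += PAGE_SIZE;
		pte = pte_next_pfn(pte);
	}
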
>         folio_unlock(folio);
>         if (folio != swapcache && swapcache) {
> @@ -4263,7 +4300,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>         }
>
>         /* No need to invalidate - it was non-present before */
> -       update_mmu_cache_range(vmf, vma, vmf->address, vmf->pte, 1);
> +       update_mmu_cache_range(vmf, vma, start_address, start_pte, nr_pages);
>  unlock:
>         if (vmf->pte)
>                 pte_unmap_unlock(vmf->pte, vmf->ptl);
> --
> 2.34.1
>

Thanks
Barry

