From: Ryan Roberts <ryan.roberts@arm.com>
To: Barry Song <21cnbao@gmail.com>,
	akpm@linux-foundation.org, linux-mm@kvack.org
Cc: baolin.wang@linux.alibaba.com, chrisl@kernel.org,
	david@redhat.com, hanchuanhua@oppo.com, hannes@cmpxchg.org,
	hughd@google.com, kasong@tencent.com, surenb@google.com,
	v-songbaohua@oppo.com, willy@infradead.org, xiang@kernel.org,
	ying.huang@intel.com, yosryahmed@google.com, yuzhao@google.com,
	ziy@nvidia.com, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 4/5] mm: swap: entirely map large folios found in swapcache
Date: Thu, 11 Apr 2024 16:33:55 +0100
Message-ID: <1008d688-757a-4c2d-86bd-793f5e787d30@arm.com>
In-Reply-To: <20240409082631.187483-5-21cnbao@gmail.com>

On 09/04/2024 09:26, Barry Song wrote:
> From: Chuanhua Han <hanchuanhua@oppo.com>
> 
> When a large folio is found in the swapcache, the current implementation
> requires calling do_swap_page() nr_pages times, resulting in nr_pages
> page faults. This patch opts to map the entire large folio at once to
> minimize page faults. Additionally, redundant checks and early exits
> for ARM64 MTE restoring are removed.
> 
> Signed-off-by: Chuanhua Han <hanchuanhua@oppo.com>
> Co-developed-by: Barry Song <v-songbaohua@oppo.com>
> Signed-off-by: Barry Song <v-songbaohua@oppo.com>
> ---
>  mm/memory.c | 64 +++++++++++++++++++++++++++++++++++++++++++----------
>  1 file changed, 52 insertions(+), 12 deletions(-)
> 
> diff --git a/mm/memory.c b/mm/memory.c
> index c4a52e8d740a..9818dc1893c8 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -3947,6 +3947,10 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  	pte_t pte;
>  	vm_fault_t ret = 0;
>  	void *shadow = NULL;
> +	int nr_pages = 1;
> +	unsigned long start_address = vmf->address;
> +	pte_t *start_pte = vmf->pte;

possible bug?: there are code paths that assign to vmf->pte further down in this
function, so couldn't start_pte be stale in some cases? I'd just do the
assignment (all 4 of these variables, in fact) in an else clause below, after any
messing about with vmf->pte is complete - something like the sketch below.
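
Maybe something like this, placed after the pte_offset_map_lock() further down
so vmf->pte is current (untested sketch; "whole_folio_mappable" is just a
stand-in for all of your pmd-range and swap_pte_batch() checks passing):

	if (folio_test_large(folio) && folio_test_swapcache(folio) &&
	    whole_folio_mappable) {
		start_address = folio_start;
		start_pte = folio_ptep;
		nr_pages = nr;
	} else {
		start_address = vmf->address;
		start_pte = vmf->pte;
		nr_pages = 1;
	}

any_swap_shared could get the same treatment (see next comment).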

nit: rename start_pte -> start_ptep ?

> +	bool any_swap_shared = false;

Suggest you defer initialization of this to your "We hit large folios in
swapcache" block below, and init it to:

	any_swap_shared = !pte_swp_exclusive(ptep_get(vmf->pte));

Then the any_shared semantic in swap_pte_batch() can be the same as for
folio_pte_batch().
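
i.e. in your "We hit large folios in swapcache" block below, something like
(untested, and assuming swap_pte_batch() then just ORs into *any_shared rather
than initializing it):

	any_swap_shared = !pte_swp_exclusive(ptep_get(vmf->pte));
	if (!is_swap_pte(folio_pte) || non_swap_entry(pte_to_swp_entry(folio_pte)) ||
	    swap_pte_batch(folio_ptep, nr, folio_pte, &any_swap_shared) != nr)
		goto check_pte;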

>  
>  	if (!pte_unmap_same(vmf))
>  		goto out;
> @@ -4137,6 +4141,35 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  	 */
>  	vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address,
>  			&vmf->ptl);

bug: vmf->pte may be NULL here, and you don't check it until check_pte:, yet you
are using it in this block. It also seems odd to do all the work in the block
below under the PTL but before checking whether the pte has changed. Suggest
moving both checks up here - something like the sketch below.
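
i.e. something like this, reusing the lines from your patch (untested):

	vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address,
			&vmf->ptl);
	if (unlikely(!vmf->pte || !pte_same(ptep_get(vmf->pte), vmf->orig_pte)))
		goto out_nomap;

	/* We hit large folios in swapcache */
	if (folio_test_large(folio) && folio_test_swapcache(folio)) {
		/* ... the batch checks and start_* assignments ... */
	}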

> +
> +	/* We hit large folios in swapcache */
> +	if (start_pte && folio_test_large(folio) && folio_test_swapcache(folio)) {

What's the start_pte check protecting?

> +		int nr = folio_nr_pages(folio);
> +		int idx = folio_page_idx(folio, page);
> +		unsigned long folio_start = vmf->address - idx * PAGE_SIZE;
> +		unsigned long folio_end = folio_start + nr * PAGE_SIZE;
> +		pte_t *folio_ptep;
> +		pte_t folio_pte;
> +
> +		if (unlikely(folio_start < max(vmf->address & PMD_MASK, vma->vm_start)))
> +			goto check_pte;
> +		if (unlikely(folio_end > pmd_addr_end(vmf->address, vma->vm_end)))
> +			goto check_pte;
> +
> +		folio_ptep = vmf->pte - idx;
> +		folio_pte = ptep_get(folio_ptep);
> +		if (!is_swap_pte(folio_pte) || non_swap_entry(pte_to_swp_entry(folio_pte)) ||
> +		    swap_pte_batch(folio_ptep, nr, folio_pte, &any_swap_shared) != nr)
> +			goto check_pte;
> +
> +		start_address = folio_start;
> +		start_pte = folio_ptep;
> +		nr_pages = nr;
> +		entry = folio->swap;
> +		page = &folio->page;
> +	}
> +
> +check_pte:
>  	if (unlikely(!vmf->pte || !pte_same(ptep_get(vmf->pte), vmf->orig_pte)))
>  		goto out_nomap;
>  
> @@ -4190,6 +4223,10 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  			 */
>  			exclusive = false;
>  		}
> +
> +		/* Reuse the whole large folio iff all entries are exclusive */
> +		if (nr_pages > 1 && any_swap_shared)
> +			exclusive = false;

If you init any_shared from the first pte as I suggested, then you could just
set exclusive = !any_shared at the top of this if block, without needing this
separate fixup.
>  	}
>  
>  	/*
> @@ -4204,12 +4241,14 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  	 * We're already holding a reference on the page but haven't mapped it
>  	 * yet.
>  	 */
> -	swap_free(entry);
> +	swap_free_nr(entry, nr_pages);
>  	if (should_try_to_free_swap(folio, vma, vmf->flags))
>  		folio_free_swap(folio);
>  
> -	inc_mm_counter(vma->vm_mm, MM_ANONPAGES);
> -	dec_mm_counter(vma->vm_mm, MM_SWAPENTS);
> +	folio_ref_add(folio, nr_pages - 1);
> +	add_mm_counter(vma->vm_mm, MM_ANONPAGES, nr_pages);
> +	add_mm_counter(vma->vm_mm, MM_SWAPENTS, -nr_pages);
> +
>  	pte = mk_pte(page, vma->vm_page_prot);
>  
>  	/*
> @@ -4219,33 +4258,34 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  	 * exclusivity.
>  	 */
>  	if (!folio_test_ksm(folio) &&
> -	    (exclusive || folio_ref_count(folio) == 1)) {
> +	    (exclusive || (folio_ref_count(folio) == nr_pages &&
> +			   folio_nr_pages(folio) == nr_pages))) {
>  		if (vmf->flags & FAULT_FLAG_WRITE) {
>  			pte = maybe_mkwrite(pte_mkdirty(pte), vma);
>  			vmf->flags &= ~FAULT_FLAG_WRITE;
>  		}
>  		rmap_flags |= RMAP_EXCLUSIVE;
>  	}
> -	flush_icache_page(vma, page);
> +	flush_icache_pages(vma, page, nr_pages);
>  	if (pte_swp_soft_dirty(vmf->orig_pte))
>  		pte = pte_mksoft_dirty(pte);
>  	if (pte_swp_uffd_wp(vmf->orig_pte))
>  		pte = pte_mkuffd_wp(pte);

I'm not sure about all this... you are smearing these SW bits from the faulting
PTE across all the ptes you are mapping. Although I guess actually that's ok
because swap_pte_batch() only returns a batch with all these bits the same?

> -	vmf->orig_pte = pte;

Instead of doing a readback below, perhaps:

	vmf->orig_pte = pte_advance_pfn(pte, nr_pages);

>  
>  	/* ksm created a completely new copy */
>  	if (unlikely(folio != swapcache && swapcache)) {
> -		folio_add_new_anon_rmap(folio, vma, vmf->address);
> +		folio_add_new_anon_rmap(folio, vma, start_address);
>  		folio_add_lru_vma(folio, vma);
>  	} else {
> -		folio_add_anon_rmap_pte(folio, page, vma, vmf->address,
> -					rmap_flags);
> +		folio_add_anon_rmap_ptes(folio, page, nr_pages, vma, start_address,
> +					 rmap_flags);
>  	}
>  
>  	VM_BUG_ON(!folio_test_anon(folio) ||
>  			(pte_write(pte) && !PageAnonExclusive(page)));
> -	set_pte_at(vma->vm_mm, vmf->address, vmf->pte, pte);
> -	arch_do_swap_page(vma->vm_mm, vma, vmf->address, pte, vmf->orig_pte);
> +	set_ptes(vma->vm_mm, start_address, start_pte, pte, nr_pages);
> +	vmf->orig_pte = ptep_get(vmf->pte);
> +	arch_do_swap_page(vma->vm_mm, vma, start_address, pte, pte);
>  
>  	folio_unlock(folio);
>  	if (folio != swapcache && swapcache) {
> @@ -4269,7 +4309,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  	}
>  
>  	/* No need to invalidate - it was non-present before */
> -	update_mmu_cache_range(vmf, vma, vmf->address, vmf->pte, 1);
> +	update_mmu_cache_range(vmf, vma, start_address, start_pte, nr_pages);
>  unlock:
>  	if (vmf->pte)
>  		pte_unmap_unlock(vmf->pte, vmf->ptl);



