From: Kemeng Shi <shikemeng@huaweicloud.com>
To: Kairui Song <kasong@tencent.com>, linux-mm@kvack.org
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Hugh Dickins <hughd@google.com>,
	Baolin Wang <baolin.wang@linux.alibaba.com>,
	Matthew Wilcox <willy@infradead.org>,
	Chris Li <chrisl@kernel.org>, Nhat Pham <nphamcs@gmail.com>,
	Baoquan He <bhe@redhat.com>, Barry Song <baohua@kernel.org>,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH 3/4] mm/shmem, swap: improve mthp swapin process
Date: Wed, 18 Jun 2025 14:27:09 +0800	[thread overview]
Message-ID: <06bf5a2f-6687-dc24-cdb2-408faf413dd4@huaweicloud.com> (raw)
In-Reply-To: <20250617183503.10527-4-ryncsn@gmail.com>



On 6/18/2025 2:35 AM, Kairui Song wrote:
> From: Kairui Song <kasong@tencent.com>
> 
> Tidy up the mTHP swapin workflow. There should be no feature change, but
> this consolidates the mTHP-related checks into one place so they are now
> all wrapped by CONFIG_TRANSPARENT_HUGEPAGE and will be trimmed off by the
> compiler if not needed.
> 
> Signed-off-by: Kairui Song <kasong@tencent.com>
> ---
>  mm/shmem.c | 175 ++++++++++++++++++++++++-----------------------------
>  1 file changed, 78 insertions(+), 97 deletions(-)
> 
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 0ad49e57f736..46dea2fa1b43 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c

...

> @@ -2283,110 +2306,66 @@ static int shmem_swapin_folio(struct inode *inode, pgoff_t index,
>  	/* Look it up and read it in.. */
>  	folio = swap_cache_get_folio(swap, NULL, 0);
>  	if (!folio) {
> -		int nr_pages = 1 << order;
> -		bool fallback_order0 = false;
> -
>  		/* Or update major stats only when swapin succeeds?? */
>  		if (fault_type) {
>  			*fault_type |= VM_FAULT_MAJOR;
>  			count_vm_event(PGMAJFAULT);
>  			count_memcg_event_mm(fault_mm, PGMAJFAULT);
>  		}
> -
> -		/*
> -		 * If uffd is active for the vma, we need per-page fault
> -		 * fidelity to maintain the uffd semantics, then fallback
> -		 * to swapin order-0 folio, as well as for zswap case.
> -		 * Any existing sub folio in the swap cache also blocks
> -		 * mTHP swapin.
> -		 */
> -		if (order > 0 && ((vma && unlikely(userfaultfd_armed(vma))) ||
> -				  !zswap_never_enabled() ||
> -				  non_swapcache_batch(swap, nr_pages) != nr_pages))
> -			fallback_order0 = true;
> -
> -		/* Skip swapcache for synchronous device. */
> -		if (!fallback_order0 && data_race(si->flags & SWP_SYNCHRONOUS_IO)) {
> -			folio = shmem_swap_alloc_folio(inode, vma, index, swap, order, gfp);
> +		/* Try direct mTHP swapin bypassing swap cache and readahead */
> +		if (data_race(si->flags & SWP_SYNCHRONOUS_IO)) {
> +			swap_order = order;
> +			folio = shmem_swapin_direct(inode, vma, index,
> +						    swap, &swap_order, gfp);
>  			if (!IS_ERR(folio)) {
>  				skip_swapcache = true;
>  				goto alloced;
>  			}
> -
> -			/*
> -			 * Fallback to swapin order-0 folio unless the swap entry
> -			 * already exists.
> -			 */
> +			/* Fallback if order > 0 swapin failed with -ENOMEM */
>  			error = PTR_ERR(folio);
>  			folio = NULL;
> -			if (error == -EEXIST)
> +			if (error != -ENOMEM || !swap_order)
>  				goto failed;
>  		}
> -
>  		/*
> -		 * Now swap device can only swap in order 0 folio, then we
> -		 * should split the large swap entry stored in the pagecache
> -		 * if necessary.
> +		 * Try order 0 swapin using swap cache and readahead, it still
> +		 * may return order > 0 folio due to raced swap cache.
>  		 */
> -		split_order = shmem_split_large_entry(inode, index, swap, gfp);
> -		if (split_order < 0) {
> -			error = split_order;
> -			goto failed;
> -		}
> -
> -		/*
> -		 * If the large swap entry has already been split, it is
> -		 * necessary to recalculate the new swap entry based on
> -		 * the old order alignment.
> -		 */
> -		if (split_order > 0) {
> -			pgoff_t offset = index - round_down(index, 1 << split_order);
> -
> -			swap = swp_entry(swp_type(swap), swp_offset(swap) + offset);
> -		}
> -
For the order-0 fallback, we always called shmem_swapin_cluster() before, but we
will call shmem_swap_alloc_folio() now. It seems fine to me; just pointing this
out for others to recheck.
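To make the difference explicit, roughly (this is my reading of the new
shmem_swapin_direct(), whose body is not in this hunk, so treat it as a sketch
rather than a statement of what the patch does):

	/* Old path: the order-0 fallback went through swap cache + readahead. */
	folio = shmem_swapin_cluster(swap, gfp, info, index);

	/*
	 * New path on SWP_SYNCHRONOUS_IO devices: the order-0 retry happens
	 * inside shmem_swapin_direct(), so the fallback allocation is done by
	 * shmem_swap_alloc_folio() and the swap cache is still bypassed.
	 */
	swap_order = order;
	folio = shmem_swapin_direct(inode, vma, index, swap, &swap_order, gfp);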
> -		/* Here we actually start the io */
>  		folio = shmem_swapin_cluster(swap, gfp, info, index);
>  		if (!folio) {
>  			error = -ENOMEM;
>  			goto failed;
>  		}
> -	} else if (order > folio_order(folio)) {
> -		/*
> -		 * Swap readahead may swap in order 0 folios into swapcache
> -		 * asynchronously, while the shmem mapping can still stores
> -		 * large swap entries. In such cases, we should split the
> -		 * large swap entry to prevent possible data corruption.
> -		 */
> -		split_order = shmem_split_large_entry(inode, index, swap, gfp);
> -		if (split_order < 0) {
> -			folio_put(folio);
> -			folio = NULL;
> -			error = split_order;
> -			goto failed;
> -		}
> -
> -		/*
> -		 * If the large swap entry has already been split, it is
> -		 * necessary to recalculate the new swap entry based on
> -		 * the old order alignment.
> -		 */
> -		if (split_order > 0) {
> -			pgoff_t offset = index - round_down(index, 1 << split_order);
> -
> -			swap = swp_entry(swp_type(swap), swp_offset(swap) + offset);
> -		}
> -	} else if (order < folio_order(folio)) {
> -		swap.val = round_down(swp_type(swap), folio_order(folio));
>  	}
> -
>  alloced:
> +	/*
> +	 * We need to split an existing large entry if swapin brought in a
> +	 * smaller folio due to various reasons.
> +	 *
> +	 * It is worth noting there is a special case: if there is a smaller
> +	 * cached folio that covers @swap, but not @index (it only covers
> +	 * the first few sub entries of the large entry, but @index points to
> +	 * later parts), the swap cache lookup will still see this folio,
> +	 * and we need to split the large entry here. Later checks will fail,
> +	 * as it can't satisfy the swap requirement, and we will retry
> +	 * the swapin from the beginning.
> +	 */
> +	swap_order = folio_order(folio);
> +	if (order > swap_order) {
> +		error = shmem_split_swap_entry(inode, index, swap, gfp);
> +		if (error)
> +			goto failed_nolock;
> +	}
> +
> +	index = round_down(index, 1 << swap_order);
> +	swap.val = round_down(swap.val, 1 << swap_order);
> +
If the swap entry's order has been reduced while index and swap.val remain
unchanged, shmem_split_swap_entry() will still split the already-reduced large
swap entry successfully, but index and swap.val will then be incorrect because
swap_order is wrong. We can catch the unexpected order and swap entry in
shmem_add_to_page_cache() and retry the swapin, but this introduces extra cost.
If shmem_split_swap_entry() instead returned the order of the entry it actually
split, and we updated index and swap.val with that returned order, we could
avoid the extra cost for this racy case.
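Something like the following, as a rough illustration only (the changed return
convention for shmem_split_swap_entry() and the min() are my assumption, not
part of this patch):

	swap_order = folio_order(folio);
	if (order > swap_order) {
		/* Assume it now returns the order of the entry it split. */
		split_order = shmem_split_swap_entry(inode, index, swap, gfp);
		if (split_order < 0) {
			error = split_order;
			goto failed_nolock;
		}
		/*
		 * A racing split may have left an entry smaller than the
		 * cached folio; realign with the smaller order so index and
		 * swap.val stay inside the entry we actually found.
		 */
		swap_order = min(swap_order, split_order);
	}
	index = round_down(index, 1 << swap_order);
	swap.val = round_down(swap.val, 1 << swap_order);

That way the stale-order case would be handled here instead of bouncing through
shmem_add_to_page_cache() and a full retry.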


