From: Kairui Song <ryncsn@gmail.com>
To: Hugh Dickins <hughd@google.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Baolin Wang <baolin.wang@linux.alibaba.com>,
	 Baoquan He <bhe@redhat.com>, Barry Song <21cnbao@gmail.com>,
	Chris Li <chrisl@kernel.org>,
	 David Rientjes <rientjes@google.com>,
	Kemeng Shi <shikemeng@huaweicloud.com>,
	 Shakeel Butt <shakeel.butt@linux.dev>,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH mm-new 1/2] mm/shmem: hold shmem_swaplist spinlock (not mutex) much less
Date: Tue, 22 Jul 2025 01:54:47 +0800
Message-ID: <CAMgjq7CtLM0psoHmSguv6SFH3BH+vMOn7CJnjVxnUhoc_chTuA@mail.gmail.com>
In-Reply-To: <87beaec6-a3b0-ce7a-c892-1e1e5bd57aa3@google.com>

On Wed, Jul 16, 2025 at 4:06 PM Hugh Dickins <hughd@google.com> wrote:
>
> A flamegraph (from an MGLRU load) showed shmem_writeout()'s use of the
> global shmem_swaplist_mutex worryingly hot: improvement is long overdue.
>
> 3.1 commit 6922c0c7abd3 ("tmpfs: convert shmem_writepage and enable swap")
> apologized for extending shmem_swaplist_mutex across add_to_swap_cache(),
> and hoped to find another way: yes, there may be lots of work to allocate
> radix tree nodes in there.  Then 6.15 commit b487a2da3575 ("mm, swap:
> simplify folio swap allocation") will have made it worse, by moving
> shmem_writeout()'s swap allocation under that mutex too (but the worrying
> flamegraph was observed even before that change).
>
> There's a useful comment about pagelock no longer protecting from eviction
> once moved to swap cache: but it's good till shmem_delete_from_page_cache()
> replaces page pointer by swap entry, so move the swaplist add between them.
>
> We would much prefer to take the global lock once per inode than once per
> page: given the possible races with shmem_unuse() pruning when !swapped
> (and other tasks racing to swap other pages out or in), try the swaplist
> add whenever swapped was incremented from 0 (but inode may already be on
> the list - only unuse and evict bother to remove it).
>
> This technique is more subtle than it looks (we're avoiding the very lock
> which would make it easy), but works: whereas an unlocked list_empty()
> check runs a risk of the inode being unqueued and left off the swaplist
> forever, swapoff only completing when the page is faulted in or removed.
>
> The need for a sleepable mutex went away in 5.1 commit b56a2d8af914
> ("mm: rid swapoff of quadratic complexity"): a spinlock works better now.
>
> This commit is certain to take shmem_swaplist_mutex out of contention,
> and has been seen to make a practical improvement (but there is likely
> to have been an underlying issue which made its contention so visible).
>
> Signed-off-by: Hugh Dickins <hughd@google.com>
> ---
>  mm/shmem.c | 59 ++++++++++++++++++++++++++++++------------------------
>  1 file changed, 33 insertions(+), 26 deletions(-)

Thanks a lot! I've also seen this issue: we observed the same
contention on 5.4 kernels, and I wasn't sure how to optimize it. This
is very helpful.

> diff --git a/mm/shmem.c b/mm/shmem.c
> index 60247dc48505..33675361031b 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -292,7 +292,7 @@ bool vma_is_shmem(struct vm_area_struct *vma)
>  }
>
>  static LIST_HEAD(shmem_swaplist);
> -static DEFINE_MUTEX(shmem_swaplist_mutex);
> +static DEFINE_SPINLOCK(shmem_swaplist_lock);
>
>  #ifdef CONFIG_TMPFS_QUOTA
>
> @@ -432,10 +432,13 @@ static void shmem_free_inode(struct super_block *sb, size_t freed_ispace)
>   *
>   * But normally   info->alloced == inode->i_mapping->nrpages + info->swapped
>   * So mm freed is info->alloced - (inode->i_mapping->nrpages + info->swapped)
> + *
> + * Return: true if swapped was incremented from 0, for shmem_writeout().
>   */
> -static void shmem_recalc_inode(struct inode *inode, long alloced, long swapped)
> +static bool shmem_recalc_inode(struct inode *inode, long alloced, long swapped)
>  {
>         struct shmem_inode_info *info = SHMEM_I(inode);
> +       bool first_swapped = false;
>         long freed;
>
>         spin_lock(&info->lock);
> @@ -450,8 +453,11 @@ static void shmem_recalc_inode(struct inode *inode, long alloced, long swapped)
>          * to stop a racing shmem_recalc_inode() from thinking that a page has
>          * been freed.  Compensate here, to avoid the need for a followup call.
>          */
> -       if (swapped > 0)
> +       if (swapped > 0) {
> +               if (info->swapped == swapped)
> +                       first_swapped = true;
>                 freed += swapped;
> +       }
>         if (freed > 0)
>                 info->alloced -= freed;
>         spin_unlock(&info->lock);
> @@ -459,6 +465,7 @@ static void shmem_recalc_inode(struct inode *inode, long alloced, long swapped)
>         /* The quota case may block */
>         if (freed > 0)
>                 shmem_inode_unacct_blocks(inode, freed);
> +       return first_swapped;
>  }
>
>  bool shmem_charge(struct inode *inode, long pages)
> @@ -1399,11 +1406,11 @@ static void shmem_evict_inode(struct inode *inode)
>                         /* Wait while shmem_unuse() is scanning this inode... */
>                         wait_var_event(&info->stop_eviction,
>                                        !atomic_read(&info->stop_eviction));
> -                       mutex_lock(&shmem_swaplist_mutex);
> +                       spin_lock(&shmem_swaplist_lock);
>                         /* ...but beware of the race if we peeked too early */
>                         if (!atomic_read(&info->stop_eviction))
>                                 list_del_init(&info->swaplist);
> -                       mutex_unlock(&shmem_swaplist_mutex);
> +                       spin_unlock(&shmem_swaplist_lock);
>                 }
>         }
>
> @@ -1526,7 +1533,7 @@ int shmem_unuse(unsigned int type)
>         if (list_empty(&shmem_swaplist))
>                 return 0;
>
> -       mutex_lock(&shmem_swaplist_mutex);
> +       spin_lock(&shmem_swaplist_lock);
>  start_over:
>         list_for_each_entry_safe(info, next, &shmem_swaplist, swaplist) {
>                 if (!info->swapped) {
> @@ -1540,12 +1547,12 @@ int shmem_unuse(unsigned int type)
>                  * (igrab() would protect from unlink, but not from unmount).
>                  */
>                 atomic_inc(&info->stop_eviction);
> -               mutex_unlock(&shmem_swaplist_mutex);
> +               spin_unlock(&shmem_swaplist_lock);
>
>                 error = shmem_unuse_inode(&info->vfs_inode, type);
>                 cond_resched();
>
> -               mutex_lock(&shmem_swaplist_mutex);
> +               spin_lock(&shmem_swaplist_lock);
>                 if (atomic_dec_and_test(&info->stop_eviction))
>                         wake_up_var(&info->stop_eviction);
>                 if (error)
> @@ -1556,7 +1563,7 @@ int shmem_unuse(unsigned int type)
>                 if (!info->swapped)
>                         list_del_init(&info->swaplist);
>         }
> -       mutex_unlock(&shmem_swaplist_mutex);
> +       spin_unlock(&shmem_swaplist_lock);
>
>         return error;
>  }
> @@ -1646,30 +1653,30 @@ int shmem_writeout(struct folio *folio, struct swap_iocb **plug,
>                 folio_mark_uptodate(folio);
>         }
>
> -       /*
> -        * Add inode to shmem_unuse()'s list of swapped-out inodes,
> -        * if it's not already there.  Do it now before the folio is
> -        * moved to swap cache, when its pagelock no longer protects
> -        * the inode from eviction.  But don't unlock the mutex until
> -        * we've incremented swapped, because shmem_unuse_inode() will
> -        * prune a !swapped inode from the swaplist under this mutex.
> -        */
> -       mutex_lock(&shmem_swaplist_mutex);
> -       if (list_empty(&info->swaplist))
> -               list_add(&info->swaplist, &shmem_swaplist);
> -
>         if (!folio_alloc_swap(folio, __GFP_HIGH | __GFP_NOMEMALLOC | __GFP_NOWARN)) {
> -               shmem_recalc_inode(inode, 0, nr_pages);
> +               bool first_swapped = shmem_recalc_inode(inode, 0, nr_pages);
> +
> +               /*
> +                * Add inode to shmem_unuse()'s list of swapped-out inodes,
> +                * if it's not already there.  Do it now before the folio is
> +                * removed from page cache, when its pagelock no longer
> +                * protects the inode from eviction.  And do it now, after
> +                * we've incremented swapped, because shmem_unuse() will
> +                * prune a !swapped inode from the swaplist.
> +                */
> +               if (first_swapped) {
> +                       spin_lock(&shmem_swaplist_lock);
> +                       if (list_empty(&info->swaplist))
> +                               list_add(&info->swaplist, &shmem_swaplist);
> +                       spin_unlock(&shmem_swaplist_lock);
> +               }
> +
>                 swap_shmem_alloc(folio->swap, nr_pages);
>                 shmem_delete_from_page_cache(folio, swp_to_radix_entry(folio->swap));
>
> -               mutex_unlock(&shmem_swaplist_mutex);
>                 BUG_ON(folio_mapped(folio));
>                 return swap_writeout(folio, plug);
>         }
> -       if (!info->swapped)
> -               list_del_init(&info->swaplist);
> -       mutex_unlock(&shmem_swaplist_mutex);
>         if (nr_pages > 1)
>                 goto try_split;
>  redirty:
> --
> 2.43.0
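
If it helps later readers, here is how I understand the locking change,
restated as a small userspace analogue (only my own sketch using pthread
spinlocks, not code from the patch or the kernel tree): the global list
lock is taken only when an inode's swapped count goes from 0 to nonzero,
so the common per-page writeout path never touches the global lock, and
the list_empty() check is only ever made while holding that lock.

/*
 * Hypothetical userspace analogue of the "first_swapped" pattern.
 * obj stands in for shmem_inode_info, list_lock for shmem_swaplist_lock.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

struct obj {
	pthread_spinlock_t lock;	/* analogue of info->lock */
	long swapped;			/* analogue of info->swapped */
	struct obj *next;		/* analogue of info->swaplist */
	bool on_list;
};

static pthread_spinlock_t list_lock;	/* analogue of shmem_swaplist_lock */
static struct obj *list_head;

/* Returns true only on the 0 -> nonzero transition, like shmem_recalc_inode(). */
static bool account_swapped(struct obj *o, long nr)
{
	bool first;

	pthread_spin_lock(&o->lock);
	o->swapped += nr;
	first = (o->swapped == nr);
	pthread_spin_unlock(&o->lock);
	return first;
}

static void writeout_one(struct obj *o, long nr)
{
	if (account_swapped(o, nr)) {
		/* Global lock only for the first swapped page of this object. */
		pthread_spin_lock(&list_lock);
		if (!o->on_list) {	/* "list_empty" checked under the lock */
			o->next = list_head;
			list_head = o;
			o->on_list = true;
		}
		pthread_spin_unlock(&list_lock);
	}
	/* ...the per-page work continues without the global lock... */
}

int main(void)
{
	struct obj o = { .swapped = 0 };

	pthread_spin_init(&list_lock, PTHREAD_PROCESS_PRIVATE);
	pthread_spin_init(&o.lock, PTHREAD_PROCESS_PRIVATE);
	writeout_one(&o, 1);	/* takes list_lock */
	writeout_one(&o, 1);	/* skips list_lock entirely */
	printf("swapped=%ld on_list=%d\n", o.swapped, o.on_list);
	return 0;
}

The sketch also shows why a spinlock is enough now: the add is a couple
of pointer writes on a rare transition, with no allocation or I/O under
the lock, which matches the reasoning in the commit message.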

Reviewed-by: Kairui Song <kasong@tencent.com>


