From: Muchun Song <songmuchun@bytedance.com>
To: Gang Li <ligang.bdlg@bytedance.com>
Cc: Hugh Dickins <hughd@google.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	 "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>,
	 Linux Memory Management List <linux-mm@kvack.org>,
	LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH v1] shmem: change shrinklist_lock from spinlock to mutex and move iput into it
Date: Tue, 23 Nov 2021 16:07:08 +0800
Message-ID: <CAMZfGtWmpynXNgjBqDzat5JQAQ95Ja1p55AxR6En8AkZ8iXjKQ@mail.gmail.com>
In-Reply-To: <20211122064126.76734-1-ligang.bdlg@bytedance.com>

On Mon, Nov 22, 2021 at 2:41 PM Gang Li <ligang.bdlg@bytedance.com> wrote:
>
> This patch fixes commit 779750d20b93 ("shmem: split huge pages
> beyond i_size under memory pressure").
>
> Calling iput() outside sbinfo->shrinklist_lock lets shmem_evict_inode()
> grab and delete the inode, which breaks the consistency between
> shrinklist_len and shrinklist. Simultaneous deletion of adjacent
> elements in the local list "list" by shmem_unused_huge_shrink() and
> shmem_evict_inode() can also corrupt the list.
>
> iput() must be called either inside the lock or after it, but
> shrinklist_lock is a spinlock, which may not be held across a sleep,
> and iput() may sleep. [1]
>
> Fix it by changing shrinklist_lock from a spinlock to a mutex and
> moving iput() inside the lock.
>
> [1]. Link: http://lkml.kernel.org/r/20170131093141.GA15899@node.shutemov.name
> Fixes: 779750d20b93 ("shmem: split huge pages beyond i_size under memory pressure")
> Signed-off-by: Gang Li <ligang.bdlg@bytedance.com>
> ---
>  include/linux/shmem_fs.h |  2 +-
>  mm/shmem.c               | 16 +++++++---------
>  2 files changed, 8 insertions(+), 10 deletions(-)
>
> diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
> index 166158b6e917..65804fd264d0 100644
> --- a/include/linux/shmem_fs.h
> +++ b/include/linux/shmem_fs.h
> @@ -41,7 +41,7 @@ struct shmem_sb_info {
>         ino_t next_ino;             /* The next per-sb inode number to use */
>         ino_t __percpu *ino_batch;  /* The next per-cpu inode number to use */
>         struct mempolicy *mpol;     /* default memory policy for mappings */
> -       spinlock_t shrinklist_lock;   /* Protects shrinklist */
> +       struct mutex shrinklist_mutex;/* Protects shrinklist */
>         struct list_head shrinklist;  /* List of shinkable inodes */
>         unsigned long shrinklist_len; /* Length of shrinklist */
>  };
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 18f93c2d68f1..2165a28631c5 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -559,7 +559,7 @@ static unsigned long shmem_unused_huge_shrink(struct shmem_sb_info *sbinfo,
>         if (list_empty(&sbinfo->shrinklist))
>                 return SHRINK_STOP;
>
> -       spin_lock(&sbinfo->shrinklist_lock);
> +       mutex_lock(&sbinfo->shrinklist_mutex);
>         list_for_each_safe(pos, next, &sbinfo->shrinklist) {
>                 info = list_entry(pos, struct shmem_inode_info, shrinklist);
>
> @@ -586,7 +586,6 @@ static unsigned long shmem_unused_huge_shrink(struct shmem_sb_info *sbinfo,
>                 if (!--batch)
>                         break;
>         }
> -       spin_unlock(&sbinfo->shrinklist_lock);
>
>         list_for_each_safe(pos, next, &to_remove) {
>                 info = list_entry(pos, struct shmem_inode_info, shrinklist);
> @@ -643,10 +642,9 @@ static unsigned long shmem_unused_huge_shrink(struct shmem_sb_info *sbinfo,
>                 iput(inode);

This could lead to a deadlock: if we are the last user of @inode,
iput() will call shmem_evict_inode(), which will try to acquire the
mutex. Notice that the mutex is already held here.
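
For reference, a minimal sketch of the resulting call chain (function
names are from mm/shmem.c with this patch applied; the trace is
illustrative, not an observed lockup):

  shmem_unused_huge_shrink()
    mutex_lock(&sbinfo->shrinklist_mutex);
    ...
    iput(inode);                     /* we drop the last reference */
      -> iput_final() -> evict() -> shmem_evict_inode()
           mutex_lock(&sbinfo->shrinklist_mutex);   /* already held */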

Thanks.

>         }
>
> -       spin_lock(&sbinfo->shrinklist_lock);
>         list_splice_tail(&list, &sbinfo->shrinklist);
>         sbinfo->shrinklist_len -= removed;
> -       spin_unlock(&sbinfo->shrinklist_lock);
> +       mutex_unlock(&sbinfo->shrinklist_mutex);
>
>         return split;
>  }
> @@ -1137,12 +1135,12 @@ static void shmem_evict_inode(struct inode *inode)
>                 inode->i_size = 0;
>                 shmem_truncate_range(inode, 0, (loff_t)-1);
>                 if (!list_empty(&info->shrinklist)) {
> -                       spin_lock(&sbinfo->shrinklist_lock);
> +                       mutex_lock(&sbinfo->shrinklist_mutex);
>                         if (!list_empty(&info->shrinklist)) {
>                                 list_del_init(&info->shrinklist);
>                                 sbinfo->shrinklist_len--;
>                         }
> -                       spin_unlock(&sbinfo->shrinklist_lock);
> +                       mutex_unlock(&sbinfo->shrinklist_mutex);
>                 }
>                 while (!list_empty(&info->swaplist)) {
>                         /* Wait while shmem_unuse() is scanning this inode... */
> @@ -1954,7 +1952,7 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
>                  * Part of the huge page is beyond i_size: subject
>                  * to shrink under memory pressure.
>                  */
> -               spin_lock(&sbinfo->shrinklist_lock);
> +               mutex_lock(&sbinfo->shrinklist_mutex);
>                 /*
>                  * _careful to defend against unlocked access to
>                  * ->shrink_list in shmem_unused_huge_shrink()
> @@ -1964,7 +1962,7 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
>                                       &sbinfo->shrinklist);
>                         sbinfo->shrinklist_len++;
>                 }
> -               spin_unlock(&sbinfo->shrinklist_lock);
> +               mutex_unlock(&sbinfo->shrinklist_mutex);
>         }
>
>         /*
> @@ -3652,7 +3650,7 @@ static int shmem_fill_super(struct super_block *sb, struct fs_context *fc)
>         raw_spin_lock_init(&sbinfo->stat_lock);
>         if (percpu_counter_init(&sbinfo->used_blocks, 0, GFP_KERNEL))
>                 goto failed;
> -       spin_lock_init(&sbinfo->shrinklist_lock);
> +       mutex_init(&sbinfo->shrinklist_mutex);
>         INIT_LIST_HEAD(&sbinfo->shrinklist);
>
>         sb->s_maxbytes = MAX_LFS_FILESIZE;
> --
> 2.20.1
>


Thread overview: 3+ messages
2021-11-22  6:41 Gang Li
2021-11-23  8:07 ` Muchun Song [this message]
2021-11-25 23:14 ` kernel test robot
