From: Hillf Danton <hdanton@sina.com>
To: Lance Yang <lance.yang@linux.dev>
Cc: Harry Yoo <harry.yoo@oracle.com>,
	akpm@linux-foundation.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org
Subject: Re: [PATCH v2 1/1] mm/hugetlb: fix possible deadlocks in hugetlb VMA unmap paths
Date: Tue, 11 Nov 2025 07:07:43 +0800
Message-ID: <20251110230745.9105-1-hdanton@sina.com>
In-Reply-To: <bfe5a925-69ce-46af-a720-14e1d2fd30b5@linux.dev>

On Tue, 11 Nov 2025 00:39:29 +0800 Lance Yang wrote:
> On 2025/11/10 20:17, Harry Yoo wrote:
> > On Mon, Nov 10, 2025 at 07:15:53PM +0800, Lance Yang wrote:
> >> From: Lance Yang <lance.yang@linux.dev>
> >>
> >> The hugetlb VMA unmap path contains several potential deadlocks, as
> >> reported by syzbot. These deadlocks occur in __hugetlb_zap_begin(),
> >> move_hugetlb_page_tables(), and the retry path of
> >> hugetlb_unmap_file_folio() (affecting remove_inode_hugepages() and
> >> unmap_vmas()), where vma_lock is acquired before i_mmap_lock. This lock
> >> ordering conflicts with other paths like hugetlb_fault(), which establish
> >> the correct dependency as i_mmap_lock -> vma_lock.
> >>
> >> Possible unsafe locking scenario:
> >>
> >> CPU0                                 CPU1
> >> ----                                 ----
> >> lock(&vma_lock->rw_sema);
> >>                                       lock(&i_mmap_lock);
> >>                                       lock(&vma_lock->rw_sema);
> >> lock(&i_mmap_lock);
> >>
> >> Resolve the circular dependencies reported by syzbot across multiple call
> >> chains by reordering the locks in all conflicting paths to consistently
> >> follow the established i_mmap_lock -> vma_lock order.
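
The ABBA pattern in the scenario above, reduced to a minimal sketch with
plain rw_semaphores (illustrative only; sem_a and sem_b are stand-ins for
vma_lock->rw_sema and i_mmap_rwsem, not the actual hugetlb code paths):

#include <linux/rwsem.h>

static DECLARE_RWSEM(sem_a);	/* stands in for vma_lock->rw_sema */
static DECLARE_RWSEM(sem_b);	/* stands in for i_mmap_rwsem */

static void cpu0_path(void)
{
	down_write(&sem_a);	/* holds A ... */
	down_write(&sem_b);	/* ... and waits on B held by CPU1 */
	up_write(&sem_b);
	up_write(&sem_a);
}

static void cpu1_path(void)
{
	down_write(&sem_b);	/* holds B ... */
	down_write(&sem_a);	/* ... and waits on A held by CPU0 */
	up_write(&sem_a);
	up_write(&sem_b);
}

With CONFIG_LOCKDEP=y, running both paths once is enough for lockdep to
report the circular dependency, even when the timing never actually
deadlocks.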
> > 
> > But mm/rmap.c says:
> >> * hugetlbfs PageHuge() take locks in this order:
> >> *   hugetlb_fault_mutex (hugetlbfs specific page fault mutex)
> >> *     vma_lock (hugetlb specific lock for pmd_sharing)
> >> *       mapping->i_mmap_rwsem (also used for hugetlb pmd sharing)
> >> *         folio_lock
> >> */
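
For reference, that documented hierarchy written out as a take/release
sequence (a sketch only; hash, vma, mapping and folio are assumed to be in
scope as in a typical hugetlb fault path):

/* acquire, outermost to innermost */
mutex_lock(&hugetlb_fault_mutex_table[hash]);	/* hugetlb_fault_mutex */
hugetlb_vma_lock_read(vma);			/* vma_lock */
i_mmap_lock_read(mapping);			/* i_mmap_rwsem */
folio_lock(folio);				/* folio_lock */

/* ... fault work ... */

/* release in reverse order */
folio_unlock(folio);
i_mmap_unlock_read(mapping);
hugetlb_vma_unlock_read(vma);
mutex_unlock(&hugetlb_fault_mutex_table[hash]);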
> 
> Thanks! You are right, I was mistaken ...
> 
> > 
> > I think the commit message should explain why the locking order described
> > above is incorrect (or when it became incorrect) and fix the comment?
> 
> I think the locking order documented in mm/rmap.c (vma_lock -> i_mmap_lock)
> is indeed the correct one to follow.
> 
> This fix has it backwards then. I'll rework it to fix the actual violations.
>
Break a leg, ideally after taking a look at commit ffa1e7ada456 ("block:
Make request_queue lockdep splats show up earlier").
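
The trick in that commit, generically: take and drop the two locks once, in
whatever order is decided to be the canonical one, at init time, so lockdep
learns the dependency up front and flags any violator on its first use. A
sketch of the idea (hypothetical helper, not the actual block-layer code):

/*
 * Prime lockdep with the intended ordering: outer is taken
 * before inner everywhere else in the code.
 */
static void prime_lock_order(struct rw_semaphore *outer,
			     struct rw_semaphore *inner)
{
	down_write(outer);
	down_write(inner);	/* lockdep records outer -> inner here */
	up_write(inner);
	up_write(outer);
}

Any later path acquiring the two locks in the reverse order then splats
immediately under CONFIG_LOCKDEP, instead of only when a racing unmap/fault
actually hits the window.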


Thread overview: 7+ messages
2025-11-10 11:15 Lance Yang
2025-11-10 12:17 ` Harry Yoo
2025-11-10 16:39   ` Lance Yang
2025-11-10 23:07     ` Hillf Danton [this message]
2025-11-11  3:20       ` Lance Yang
2025-11-11  3:25         ` Lance Yang
2025-11-10 15:19 ` [syzbot ci] " syzbot ci
