From: Peter Xu <peterx@redhat.com>
To: John Hubbard <jhubbard@nvidia.com>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
Muchun Song <songmuchun@bytedance.com>,
Andrea Arcangeli <aarcange@redhat.com>,
James Houghton <jthoughton@google.com>,
Jann Horn <jannh@google.com>, Rik van Riel <riel@surriel.com>,
Miaohe Lin <linmiaohe@huawei.com>,
Andrew Morton <akpm@linux-foundation.org>,
Mike Kravetz <mike.kravetz@oracle.com>,
David Hildenbrand <david@redhat.com>,
Nadav Amit <nadav.amit@gmail.com>
Subject: Re: [PATCH v2 04/10] mm/hugetlb: Move swap entry handling into vma lock when faulted
Date: Thu, 8 Dec 2022 15:28:57 -0500
Message-ID: <Y5JJCZkUPyZdYjyn@x1n>
In-Reply-To: <86bff55b-d048-1500-cddc-2d53702d7a3b@nvidia.com>
On Wed, Dec 07, 2022 at 03:05:42PM -0800, John Hubbard wrote:
> On 12/7/22 14:43, Peter Xu wrote:
> > Note that here migration_entry_wait_huge() will release it.
> >
> > Sorry, it's definitely not as straightforward, but it's also something for
> > which I couldn't come up with a better solution, because we need the vma
> > lock to protect the spinlock, which is used deep in the migration code
> > path.
> >
> > That's also why I added a detailed comment above, including "The vma lock
> > will be released there", which is there precisely for that reason.
> >
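For anyone following the locking here, below is a minimal, self-contained
model of that handoff, using pthreads rather than the kernel primitives.
Every name in it (fault_mutex, vma_lock, ptl, the model_* functions) is an
illustrative stand-in, not the real hugetlb code:

  #include <pthread.h>
  #include <stdio.h>

  static pthread_mutex_t  fault_mutex = PTHREAD_MUTEX_INITIALIZER;
  static pthread_rwlock_t vma_lock    = PTHREAD_RWLOCK_INITIALIZER;
  static pthread_mutex_t  ptl         = PTHREAD_MUTEX_INITIALIZER;

  /* Entered with vma_lock held for read; releases it before returning. */
  static void model_migration_entry_wait(void)
  {
          /* Safe to take ptl: vma_lock keeps the pgtable (and ptl) alive. */
          pthread_mutex_lock(&ptl);
          /* Handoff complete: the callee drops the caller's vma lock. */
          pthread_rwlock_unlock(&vma_lock);
          /* ... the real code would wait here until migration finishes ... */
          pthread_mutex_unlock(&ptl);
  }

  static void model_fault_path(void)
  {
          pthread_mutex_lock(&fault_mutex);
          pthread_rwlock_rdlock(&vma_lock);

          /* ... found a migration entry ... */

          /* Drop the mutex, but NOT vma_lock; the callee still needs it. */
          pthread_mutex_unlock(&fault_mutex);
          model_migration_entry_wait();   /* returns with all locks dropped */
  }

  int main(void)
  {
          model_fault_path();
          printf("all locks released\n");
          return 0;
  }

The point is only the shape of it: the fault mutex is caller-scoped, while
the vma lock crosses the call boundary and is consumed by the callee.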
>
> Yes, OK,
>
> Reviewed-by: John Hubbard <jhubbard@nvidia.com>
>
> ...and here is some fancy documentation polishing (incremental on top of this
> specific patch) if you would like to fold it in, optional but it makes me
> happier:
>
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 49f73677a418..e3bbd4869f68 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -5809,6 +5809,10 @@ u32 hugetlb_fault_mutex_hash(struct address_space *mapping, pgoff_t idx)
> }
> #endif
> +/*
> + * There are a few special cases in which this function returns while still
> + * holding locks. Those are noted inline.
> + */
This is not true, I think? It always releases all the locks.
> vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
> unsigned long address, unsigned int flags)
> {
> @@ -5851,8 +5855,8 @@ vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
> /* PTE markers should be handled the same way as none pte */
> if (huge_pte_none_mostly(entry))
> /*
> - * hugetlb_no_page will drop vma lock and hugetlb fault
> - * mutex internally, which make us return immediately.
> + * hugetlb_no_page() will release both the vma lock and the
> + * hugetlb fault mutex, so just return directly from that:
I'm probably not going to touch this part because it's not part of the
patch... The rest I can do.
I'll also apply the comment changes elsewhere wherever I don't speak up - in
most cases they all look good to me.
Thanks,
> */
> return hugetlb_no_page(mm, vma, mapping, idx, address, ptep,
> entry, flags);
> @@ -5869,10 +5873,11 @@ vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
> if (!pte_present(entry)) {
> if (unlikely(is_hugetlb_entry_migration(entry))) {
> /*
> - * Release fault lock first because the vma lock is
> - * needed to guard the huge_pte_lockptr() later in
> - * migration_entry_wait_huge(). The vma lock will
> - * be released there.
> + * Release the hugetlb fault lock now, but retain the
> + * vma lock, because it is needed to guard the
> + * huge_pte_lockptr() later in
> + * migration_entry_wait_huge(). The vma lock will be
> + * released there.
> */
> mutex_unlock(&hugetlb_fault_mutex_table[hash]);
> migration_entry_wait_huge(vma, ptep);
> diff --git a/mm/migrate.c b/mm/migrate.c
> index d14f1f3ab073..a31df628b938 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -333,16 +333,18 @@ void migration_entry_wait(struct mm_struct *mm, pmd_t *pmd,
> }
> #ifdef CONFIG_HUGETLB_PAGE
> +
> +/*
> + * The vma read lock must be held upon entry. Holding that lock prevents either
> + * the pte or the ptl from being freed.
> + *
> + * This function will release the vma lock before returning.
> + */
> void __migration_entry_wait_huge(struct vm_area_struct *vma,
> pte_t *ptep, spinlock_t *ptl)
> {
> pte_t pte;
> - /*
> - * The vma read lock must be taken, which will be released before
> - * the function returns. It makes sure the pgtable page (along
> - * with its spin lock) not be freed in parallel.
> - */
> hugetlb_vma_assert_locked(vma);
> spin_lock(ptl);
>
>
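One more aside on the contract documented above ("vma read lock held on
entry, released before return"): here is a toy sketch of how such a handoff
contract can be made self-checking in plain C. The read_held flag plays the
role that lockdep and hugetlb_vma_assert_locked() play in the kernel;
everything else is made up for illustration:

  #include <assert.h>
  #include <pthread.h>

  struct toy_vma {
          pthread_rwlock_t lock;
          int read_held;                  /* bookkeeping for the assertion */
  };

  static void toy_vma_assert_locked(struct toy_vma *vma)
  {
          assert(vma->read_held);         /* catch callers breaking the rule */
  }

  /* Contract: caller holds vma->lock for read; this function releases it. */
  static void toy_entry_wait(struct toy_vma *vma)
  {
          toy_vma_assert_locked(vma);
          /* ... take the ptl here, while the vma lock still pins it ... */
          vma->read_held = 0;
          pthread_rwlock_unlock(&vma->lock);
  }

  int main(void)
  {
          struct toy_vma vma = { .lock = PTHREAD_RWLOCK_INITIALIZER };

          pthread_rwlock_rdlock(&vma.lock);
          vma.read_held = 1;
          toy_entry_wait(&vma);           /* returns with the lock released */
          return 0;
  }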
> thanks,
> --
> John Hubbard
> NVIDIA
>
--
Peter Xu