From: John Hubbard <jhubbard@nvidia.com>
To: David Hildenbrand <david@redhat.com>,
Yang Shi <shy828301@gmail.com>,
peterx@redhat.com, kirill.shutemov@linux.intel.com,
jgg@nvidia.com, hughd@google.com, akpm@linux-foundation.org
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] mm: gup: fix the fast GUP race against THP collapse
Date: Mon, 5 Sep 2022 19:12:31 -0700
Message-ID: <c38494f0-1bc5-61e4-8459-be9160029539@nvidia.com>
In-Reply-To: <a969abc5-1ad0-4073-a1f9-82f0431a0104@redhat.com>
On 9/5/22 00:59, David Hildenbrand wrote:
...
>>> diff --git a/mm/gup.c b/mm/gup.c
>>> index f3fc1f08d90c..4365b2811269 100644
>>> --- a/mm/gup.c
>>> +++ b/mm/gup.c
>>> @@ -2380,8 +2380,9 @@ static void __maybe_unused undo_dev_pagemap(int *nr, int nr_start,
>>> }
>>>
>>> #ifdef CONFIG_ARCH_HAS_PTE_SPECIAL
>>> -static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
>>> - unsigned int flags, struct page **pages, int *nr)
>>> +static int gup_pte_range(pmd_t pmd, pmd_t *pmdp, unsigned long addr,
>>> + unsigned long end, unsigned int flags,
>>> + struct page **pages, int *nr)
>>> {
>>> struct dev_pagemap *pgmap = NULL;
>>> int nr_start = *nr, ret = 0;
>>> @@ -2423,7 +2424,23 @@ static int gup_pte_range(pmd_t pmd, unsigned long addr, unsigned long end,
>>> goto pte_unmap;
>>> }
>>>
>>> - if (unlikely(pte_val(pte) != pte_val(*ptep))) {
>>> + /*
>>> + * THP collapse conceptually does:
>>> + * 1. Clear and flush PMD
>>> + * 2. Check the base page refcount
>>> + * 3. Copy data to huge page
>>> + * 4. Clear PTE
>>> + * 5. Discard the base page
>>> + *
>>> + * So fast GUP may race with THP collapse then pin and
>>> + * return an old page since TLB flush is no longer sufficient
>>> + * to serialize against fast GUP.
>>> + *
>>> + * Check PMD, if it is changed just back off since it
>>> + * means there may be parallel THP collapse.
>>> + */
>>
>> As I mentioned in the other thread, it would be a nice touch to move
>> such discussion into the comment header.
>>
>>> + if (unlikely(pmd_val(pmd) != pmd_val(*pmdp)) ||
>>> + unlikely(pte_val(pte) != pte_val(*ptep))) {
>>
>>
>> That should be READ_ONCE() for the *pmdp and *ptep reads. Because this
>> whole lockless house of cards may fall apart if we try reading the
>> page table values without READ_ONCE().
>
> I came to the conclusion that the implicit memory barrier when grabbing
> a reference on the page is sufficient such that we don't need READ_ONCE
> here.
OK, I believe you're referring to this:
folio = try_grab_folio(page, 1, flags);
just earlier in gup_pte_range(). Yes, that's true... but it's hidden,
which is unfortunate. Maybe a comment could help.
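Something like this, perhaps (just a sketch; the exact wording is
certainly open to discussion):

	/*
	 * The refcount increment in try_grab_folio() above is a fully
	 * ordered atomic RMW, so it acts as a full memory barrier. That
	 * is what makes the plain (non-READ_ONCE()) re-reads of *pmdp
	 * and *ptep below safe: they cannot be reordered before the
	 * grab.
	 */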
>
> If we still intend to change that code, we should fixup all GUP-fast
> functions in a similar way. But again, I don't think we need a change here.
>
It's really rough, having to play this hide-and-seek game of "who did
the memory barrier". And I'm tempted to suggest adding READ_ONCE() to
any and all reads of the page table entries, just to help stay out of
trouble. It's a visual reminder that page table reads are always
lockless reads and are inherently volatile.
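Concretely, that would turn the recheck in the hunk above into
something like this (a sketch only; it also glosses over the
ptep_get_lockless() wrinkle that the initial pte read needs):

	/* Lockless rechecks: GUP-fast holds no page table locks. */
	if (unlikely(pmd_val(pmd) != pmd_val(READ_ONCE(*pmdp))) ||
	    unlikely(pte_val(pte) != pte_val(READ_ONCE(*ptep)))) {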
Of course, I realize that adding extra READ_ONCE() calls is not a good
thing. It might be a performance hit, although, again, these are
volatile reads by nature, so you probably had a membar anyway.
And looking in reverse, there are actually a number of places here where
we could probably get away with *removing* READ_ONCE()!
Overall, I would be inclined to load up on READ_ONCE() calls, yes. But I
sort of expect to be overridden on that, due to potential performance
concerns, and that's reasonable.
At a minimum we should add a few short comments about what memory
barriers are used, and why we don't need a READ_ONCE() or something
stronger when reading the pte.
>
>>> - * After this gup_fast can't run anymore. This also removes
>>> - * any huge TLB entry from the CPU so we won't allow
>>> - * huge and small TLB entries for the same virtual address
>>> - * to avoid the risk of CPU bugs in that area.
>>> + * This removes any huge TLB entry from the CPU so we won't allow
>>> + * huge and small TLB entries for the same virtual address to
>>> + * avoid the risk of CPU bugs in that area.
>>> + *
>>> + * Parallel fast GUP is fine since fast GUP will back off when
>>> + * it detects PMD is changed.
>>> */
>>> _pmd = pmdp_collapse_flush(vma, address, pmd);
>>
>> To follow up on David Hildenbrand's note about this in the nearby thread...
>> I'm also not sure if pmdp_collapse_flush() implies a memory barrier on
>> all arches. It definitely does do an atomic op with a return value on x86,
>> but that's just one arch.
>>
>
> I think a ptep/pmdp clear + TLB flush really has to imply a memory
> barrier, otherwise TLB flushing code might easily mess up with
> surrounding code. But we should better double-check.
Let's document the function as such, once it's verified: "This is a
guaranteed memory barrier".
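Maybe something along these lines, above the implementation (a sketch,
assuming the double-checking confirms the guarantee on all arches):

	/*
	 * pmdp_collapse_flush(): clear the PMD entry and flush any
	 * corresponding TLB entries.
	 *
	 * This is a guaranteed memory barrier: x86 does a fully ordered
	 * atomic op, s390 executes IDTE (which serializes), and arm64
	 * uses DSB for ordering. Callers may rely on this to order the
	 * PMD clear against subsequent accesses, e.g. the refcount
	 * checks during THP collapse.
	 */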
>
> s390x executes an IDTE instruction, which performs serialization (->
> memory barrier). arm64 seems to use DSB instructions to enforce memory
> ordering.
>
thanks,
--
John Hubbard
NVIDIA