linux-mm.kvack.org archive mirror
 help / color / mirror / Atom feed
From: Ryan Roberts <ryan.roberts@arm.com>
To: David Hildenbrand <david@redhat.com>,
	Mark Rutland <mark.rutland@arm.com>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Will Deacon <will@kernel.org>,
	Alexander Shishkin <alexander.shishkin@linux.intel.com>,
	Jiri Olsa <jolsa@kernel.org>, Ian Rogers <irogers@google.com>,
	Adrian Hunter <adrian.hunter@intel.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Muchun Song <muchun.song@linux.dev>
Cc: linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Subject: Re: [RFC PATCH v1 0/4] Reduce cost of ptep_get_lockless on arm64
Date: Mon, 15 Apr 2024 10:28:51 +0100	[thread overview]
Message-ID: <f13d1e4d-1eea-4379-b683-4d736ad99c2c@arm.com> (raw)
In-Reply-To: <70a36403-aefd-4311-b612-84e602465689@redhat.com>

On 12/04/2024 21:16, David Hildenbrand wrote:
>>
>> Yes agreed - 2 types; "lockless walkers that later recheck under PTL" and
>> "lockless walkers that never take the PTL".
>>
>> Detail: the part about disabling interrupts and TLB flush syncing is
>> arch-specifc. That's not how arm64 does it (the hw broadcasts the TLBIs). But
>> you make that clear further down.
> 
> Yes, but disabling interrupts is also required for RCU-freeing of page tables
> such that they can be walked safely. The TLB flush IPI is arch-specific and
> indeed to sync against PTE invalidation (before generic GUP-fast).
> [...]
> 
>>>>
>>>> Could it be this easy? My head is hurting...
>>>
>>> I think what has to happen is:
>>>
>>> (1) pte_get_lockless() must return the same value as ptep_get() as long as there
>>> are no races. No removal/addition of access/dirty bits etc.
>>
>> Today's arm64 ptep_get() guarantees this.
>>
>>>
>>> (2) Lockless page table walkers that later verify under the PTL can handle
>>> serious "garbage PTEs". This is our page fault handler.
>>
>> This isn't really a property of a ptep_get_lockless(); its a statement about a
>> class of users. I agree with the statement.
> 
> Yes. That's a requirement for the user of ptep_get_lockless(), such as page
> fault handlers. Well, mostly "not GUP".
> 
>>
>>>
>>> (3) Lockless page table walkers that cannot verify under PTL cannot handle
>>> arbitrary garbage PTEs. This is GUP-fast. Two options:
>>>
>>> (3a) pte_get_lockless() can atomically read the PTE: We re-check later if the
>>> atomically-read PTE is still unchanged (without PTL). No IPI for TLB flushes
>>> required. This is the common case. HW might concurrently set access/dirty bits,
>>> so we can race with that. But we don't read garbage.
>>
>> Today's arm64 ptep_get() cannot garantee that the access/dirty bits are
>> consistent for contpte ptes. That's the bit that complicates the current
>> ptep_get_lockless() implementation.
>>
>> But the point I was trying to make is that GUP-fast does not actually care about
>> *all* the fields being consistent (e.g. access/dirty). So we could spec
>> pte_get_lockless() to say that "all fields in the returned pte are guarranteed
>> to be self-consistent except for access and dirty information, which may be
>> inconsistent if a racing modification occured".
> 
> We *might* have KVM in the future want to check that a PTE is dirty, such that
> we can only allow dirty PTEs to be writable in a secondary MMU. That's not there
> yet, but one thing I was discussing on the list recently. Burried in:
> 
> https://lkml.kernel.org/r/20240320005024.3216282-1-seanjc@google.com
> 
> We wouldn't care about racing modifications, as long as MMU notifiers will
> properly notify us when the PTE would lose its dirty bits.
> 
> But getting false-positive dirty bits would be problematic.
> 
>>
>> This could mean that the access/dirty state *does* change for a given page while
>> GUP-fast is walking it, but GUP-fast *doesn't* detect that change. I *think*
>> that failing to detect this is benign.
> 
> I mean, HW could just set the dirty/access bit immediately after the check. So
> if HW concurrently sets the bit and we don't observe that change when we
> recheck, I think that would be perfectly fine.

Yes indeed; that's my point - GUP-fast doesn't care about access/dirty (or
soft-dirty or uffd-wp).

But if you don't want to change the ptep_get_lockless() spec to explicitly allow
this (because you have the KVM use case where false-positive dirty is
problematic), then I think we are stuck with ptep_get_lockless() as implemented
for arm64 today.

> 
>>
>> Aside: GUP-fast currently rechecks the pte originally obtained with
>> ptep_get_lockless(), using ptep_get(). Is that correct? ptep_get() must conform
>> to (1), so either it returns the same pte or it returns a different pte or
>> garbage. But that garbage could just happen to be the same as the originally
>> obtained pte. So in that case, it would have a false match. I think this needs
>> to be changed to ptep_get_lockless()?
> 
> I *think* it's fine, because the case where it would make a difference (x86-PAE)
> still requires the TLB flush IPI to sync against PTE changes, and that check
> would likely be wrong in one way or the other. So for x86-pae, that check is
> just moot either way.
> 
> That my theory, at least.
> 
> (but this "let's fake-read atomically although we don't, but let's do like we
> could in some specific circumstances" is really hard to get)
> 
> I was wondering a while ago if we are missing a memory barrier before the checl,
> but I think the one from obtaining the page reference gets the job done (at
> least that's what I remember).
> 



  parent reply	other threads:[~2024-04-15  9:28 UTC|newest]

Thread overview: 42+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2024-02-15 12:17 Ryan Roberts
2024-02-15 12:17 ` [RFC PATCH v1 1/4] mm: Introduce ptep_get_lockless_norecency() Ryan Roberts
     [not found]   ` <7aefa967-43aa-490b-ae0d-7d1455402e89@redhat.com>
2024-03-26 16:39     ` Ryan Roberts
2024-03-27  9:28       ` David Hildenbrand
2024-03-27  9:57         ` Ryan Roberts
2024-03-27 17:02           ` David Hildenbrand
2024-02-15 12:17 ` [RFC PATCH v1 2/4] mm/gup: Use ptep_get_lockless_norecency() Ryan Roberts
2024-03-26 16:30   ` David Hildenbrand
2024-03-26 16:48     ` Ryan Roberts
2024-02-15 12:17 ` [RFC PATCH v1 3/4] mm/memory: Use ptep_get_lockless_norecency() for orig_pte Ryan Roberts
2024-03-26 17:02   ` David Hildenbrand
2024-03-26 17:27     ` Ryan Roberts
2024-03-26 17:38       ` David Hildenbrand
2024-03-26 17:48         ` Ryan Roberts
2024-03-26 17:58           ` David Hildenbrand
2024-03-27  9:51             ` Ryan Roberts
2024-03-27 17:05               ` David Hildenbrand
2024-02-15 12:17 ` [RFC PATCH v1 4/4] arm64/mm: Override ptep_get_lockless_norecency() Ryan Roberts
2024-03-26 16:35   ` David Hildenbrand
2024-03-26 16:17 ` [RFC PATCH v1 0/4] Reduce cost of ptep_get_lockless on arm64 David Hildenbrand
2024-03-26 16:31   ` Ryan Roberts
     [not found]     ` <de143212-49ce-4c30-8bfa-4c0ff613f107@redhat.com>
2024-03-26 16:53       ` Ryan Roberts
2024-03-26 17:04         ` David Hildenbrand
2024-03-26 17:32           ` Ryan Roberts
2024-03-26 17:39             ` David Hildenbrand
2024-03-26 17:51               ` Ryan Roberts
2024-03-27  9:34                 ` David Hildenbrand
2024-03-27 10:01                   ` Ryan Roberts
2024-04-03 12:59                   ` Ryan Roberts
2024-04-08  8:36                     ` David Hildenbrand
2024-04-09 16:35                       ` Ryan Roberts
2024-04-10 20:09                         ` David Hildenbrand
2024-04-11  9:45                           ` Ryan Roberts
     [not found]                             ` <70a36403-aefd-4311-b612-84e602465689@redhat.com>
2024-04-15  9:28                               ` Ryan Roberts [this message]
     [not found]                                 ` <3e50030d-2289-4470-a727-a293baa21618@redhat.com>
2024-04-15 13:30                                   ` Ryan Roberts
     [not found]                                     ` <969dc6c3-2764-4a35-9fa6-7596832fb2a3@redhat.com>
2024-04-15 14:34                                       ` Ryan Roberts
     [not found]                                         ` <11b1c25b-3e20-4acf-9be5-57b508266c5b@redhat.com>
2024-04-15 15:17                                           ` Ryan Roberts
2024-04-15 15:22                                             ` David Hildenbrand
2024-04-15 15:53                                               ` Ryan Roberts
2024-04-15 16:02                                                 ` David Hildenbrand
2024-04-23 10:15                                                   ` Ryan Roberts
2024-04-23 10:18                                                     ` David Hildenbrand

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=f13d1e4d-1eea-4379-b683-4d736ad99c2c@arm.com \
    --to=ryan.roberts@arm.com \
    --cc=adrian.hunter@intel.com \
    --cc=akpm@linux-foundation.org \
    --cc=alexander.shishkin@linux.intel.com \
    --cc=catalin.marinas@arm.com \
    --cc=david@redhat.com \
    --cc=irogers@google.com \
    --cc=jolsa@kernel.org \
    --cc=linux-arm-kernel@lists.infradead.org \
    --cc=linux-kernel@vger.kernel.org \
    --cc=linux-mm@kvack.org \
    --cc=mark.rutland@arm.com \
    --cc=muchun.song@linux.dev \
    --cc=will@kernel.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox