From: Suren Baghdasaryan <surenb@google.com>
To: Vlastimil Babka <vbabka@suse.cz>
Cc: akpm@linux-foundation.org, willy@infradead.org,
	liam.howlett@oracle.com, lorenzo.stoakes@oracle.com,
	mhocko@suse.com, hannes@cmpxchg.org, mjguzik@gmail.com,
	oliver.sang@intel.com, mgorman@techsingularity.net,
	david@redhat.com, peterx@redhat.com, oleg@redhat.com,
	dave@stgolabs.net, paulmck@kernel.org, brauner@kernel.org,
	dhowells@redhat.com, hdanton@sina.com, hughd@google.com,
	minchan@google.com, jannh@google.com, shakeel.butt@linux.dev,
	souravpanda@google.com, pasha.tatashin@soleen.com,
	corbet@lwn.net, linux-doc@vger.kernel.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, kernel-team@android.com
Subject: Re: [PATCH v5 4/6] mm: make vma cache SLAB_TYPESAFE_BY_RCU
Date: Tue, 10 Dec 2024 15:01:29 -0800
Message-ID: <CAJuCfpGrnSTU5ZH0Vt_AXyyFX5vAyknqcOtRsfnh4dbpOeyy-A@mail.gmail.com>
In-Reply-To: <5036d089-0774-4863-88c5-eaaea1265ac7@suse.cz>

On Tue, Dec 10, 2024 at 9:25 AM Vlastimil Babka <vbabka@suse.cz> wrote:
>
> On 12/10/24 18:16, Suren Baghdasaryan wrote:
> > On Tue, Dec 10, 2024 at 8:32 AM Vlastimil Babka <vbabka@suse.cz> wrote:
> >>
> >> On 12/10/24 17:20, Suren Baghdasaryan wrote:
> >> > On Tue, Dec 10, 2024 at 6:21 AM Vlastimil Babka <vbabka@suse.cz> wrote:
> >> >>
> >> >> On 12/6/24 23:52, Suren Baghdasaryan wrote:
> >> >> > To enable SLAB_TYPESAFE_BY_RCU for the vma cache we need to ensure
> >> >> > that object reuse before the RCU grace period is over will be
> >> >> > detected inside lock_vma_under_rcu().
> >> >> > lock_vma_under_rcu() enters an RCU read section, finds the vma at
> >> >> > the given address, locks the vma and checks whether it got detached
> >> >> > or remapped to cover a different address range. These last checks
> >> >> > ensure that the vma was not modified after we found it but before we
> >> >> > locked it.
> >> >> > vma reuse introduces several new possibilities:
> >> >> > 1. vma can be reused after it was found but before it is locked;
> >> >> > 2. vma can be reused and reinitialized (including changing its vm_mm)
> >> >> > while being locked in vma_start_read();
> >> >> > 3. vma can be reused and reinitialized after it was found but before
> >> >> > it is locked, then attached at a new address or to a new mm while
> >> >> > read-locked.
> >> >> > For case #1, the current checks will help detect cases where:
> >> >> > - the vma was reused but not yet added into the tree (detached check);
> >> >> > - the vma was reused at a different address range (address check).
> >> >> > We are missing a check on vm_mm to ensure the reused vma was not
> >> >> > attached to a different mm. This patch adds the missing check.
> >> >> > For case #2, we pass mm to vma_start_read() to prevent access to the
> >> >> > unstable vma->vm_mm. This might lead to vma_start_read() returning a
> >> >> > false locked result, but that is not critical as long as it is rare,
> >> >> > because it will only lead to a retry under mmap_lock.
> >> >> > For case #3, we enforce the order in which the vma->detached flag and
> >> >> > vm_start/vm_end/vm_mm are set and checked. A vma gets attached after
> >> >> > vm_start/vm_end/vm_mm are set, and lock_vma_under_rcu() should check
> >> >> > vma->detached before checking vm_start/vm_end/vm_mm. This ordering is
> >> >> > required because attaching a vma happens without the vma write-lock,
> >> >> > as opposed to detaching, which requires the vma write-lock. This
> >> >> > patch adds the memory barriers inside is_vma_detached() and
> >> >> > vma_mark_attached() needed to order reads and writes to vma->detached
> >> >> > vs vm_start/vm_end/vm_mm.
> >> >> > After these provisions, SLAB_TYPESAFE_BY_RCU is added to
> >> >> > vm_area_cachep. This will facilitate vm_area_struct reuse and will
> >> >> > minimize the number of call_rcu() calls.
> >> >> >
> >> >> > Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> >> >>
> >> >> I'm wondering about the vma freeing path. Consider vma_complete():
> >> >>
> >> >> vma_mark_detached(vp->remove);
> >> >>   vma->detached = true; - plain write
> >> >> vm_area_free(vp->remove);
> >> >>   vma->vm_lock_seq = UINT_MAX; - plain write
> >> >>   kmem_cache_free(vm_area_cachep)
> >> >> ...
> >> >> potential reallocation
> >> >>
> >> >> against:
> >> >>
> >> >> lock_vma_under_rcu()
> >> >> - mas_walk finds a stale vma due to race
> >> >> vma_start_read()
> >> >>   if (READ_ONCE(vma->vm_lock_seq) == READ_ONCE(mm->mm_lock_seq.sequence))
> >> >>   - can be false, the vma was not being locked on the freeing side?
> >> >>   down_read_trylock(&vma->vm_lock.lock) - succeeds, wasn't locked
> >> >>     this is acquire, but was there any release?
> >> >
> >> > Yes, there was a release. I think what you missed is that
> >> > vma_mark_detached(), which is called from vma_complete(), requires the
> >> > VMA to be write-locked (see vma_assert_write_locked() in
> >> > vma_mark_detached()). The rule is that a VMA can be attached without
> >> > write-locking, but only a write-locked VMA can be detached. So, after
> >>
> >> OK, but write unlocking means the mm's seqcount is bumped and becomes
> >> unequal to the vma's vm_lock_seq, right?
> >>
> >> Yet in the example above we happily set it to UINT_MAX and thus
> >> effectively perform a false unlock for vma_start_read()?
> >>
> >> And this is all done before the vma_complete() side would actually reach
> >> mmap_write_unlock(), AFAICS.
> >
> > Ah, you are right. With the possibility of reuse, even a freed VMA
> > should be kept write-locked until it is unlocked by
> > mmap_write_unlock(). I think the fix for this is simply to not reset
> > vma->vm_lock_seq inside vm_area_free(). I'll also need to add a
>
> But even if we don't reset vm_lock_seq to UINT_MAX, then whoever
> reallocated it can proceed and end up doing a vma_start_write() and
> rewrite it there anyway, no?

Actually, I think with a small change we can simplify these locking rules:

static inline void vma_start_write(struct vm_area_struct *vma)
{
        unsigned int mm_lock_seq;

-        if (__is_vma_write_locked(vma, &mm_lock_seq))
-                return;
+        mmap_assert_write_locked(vma->vm_mm);
+        mm_lock_seq = vma->vm_mm->mm_lock_seq.sequence;

        down_write(&vma->vm_lock->lock);
        /*
         * We should use WRITE_ONCE() here because we can have concurrent reads
         * from the early lockless pessimistic check in vma_start_read().
         * We don't really care about the correctness of that early check, but
         * we should use WRITE_ONCE() for cleanliness and to keep KCSAN happy.
         */
        WRITE_ONCE(vma->vm_lock_seq, mm_lock_seq);
        up_write(&vma->vm_lock->lock);
}

This will force vma_start_write() to always write-lock vma->vm_lock
before changing vma->vm_lock_seq. Since vma->vm_lock survives reuse,
other readers and writers will synchronize on it even if the vma got
reused.
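
For reference, the read side that pairs with this would look roughly
like the following, with the case #2 change folded in so that mm is
passed in and we never dereference the possibly-unstable vma->vm_mm.
This is a sketch, not the exact patch code:

static inline bool vma_start_read(struct mm_struct *mm,
                                  struct vm_area_struct *vma)
{
        /*
         * Lockless early check: if the vma is currently write-locked by
         * the holder of this mm's mmap_lock, don't bother taking the
         * rwsem. A false "locked" result here only causes a fallback to
         * taking mmap_lock.
         */
        if (READ_ONCE(vma->vm_lock_seq) == READ_ONCE(mm->mm_lock_seq.sequence))
                return false;

        if (unlikely(down_read_trylock(&vma->vm_lock->lock) == 0))
                return false;

        /*
         * Recheck under vma->vm_lock: because vma_start_write() now
         * always takes the rwsem before updating vm_lock_seq, a reused
         * vma cannot have its vm_lock_seq rewritten while we hold the
         * read lock.
         */
        if (unlikely(vma->vm_lock_seq == READ_ONCE(mm->mm_lock_seq.sequence))) {
                up_read(&vma->vm_lock->lock);
                return false;
        }
        return true;
}

lock_vma_under_rcu() still has to recheck vma->detached, vm_start/vm_end
and vm_mm after this returns true, as described in the patch.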

>
> > comment for vm_lock_seq explaining these requirements.
> > Do you agree that such a change would resolve the issue?
> >
> >>
> >> > vma_mark_detached() and before down_read_trylock(&vma->vm_lock.lock)
> >> > in vma_start_read() the VMA write-lock should have been released by
> >> > mmap_write_unlock() and therefore vma->detached=true should be
> >> > visible to the reader when it executes lock_vma_under_rcu().
> >> >
> >> >>   is_vma_detached() - false negative as the write above didn't propagate
> >> >>     here yet; a read barrier but where is the write barrier?
> >> >>   checks for vma->vm_mm, vm_start, vm_end - nobody reset them yet so false
> >> >>     positive, or they got reset on reallocation but writes didn't propagate
> >> >>
> >> >> Am I missing something that would prevent lock_vma_under_rcu() falsely
> >> >> succeeding here?
> >> >>
> >>
>
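
P.S. To make the case #3 ordering from the commit message concrete,
here is one way the barrier pairing in vma_mark_attached() and
is_vma_detached() could look. This is my sketch using the names from
the commit message; the actual patch may differ in detail:

static inline void vma_mark_attached(struct vm_area_struct *vma)
{
        /*
         * Pairs with smp_rmb() in is_vma_detached(): make sure the
         * writes to vm_start/vm_end/vm_mm are visible before the vma
         * can be observed as attached.
         */
        smp_wmb();
        WRITE_ONCE(vma->detached, false);
}

static inline bool is_vma_detached(struct vm_area_struct *vma)
{
        bool detached = READ_ONCE(vma->detached);

        /*
         * Pairs with smp_wmb() in vma_mark_attached(): if the vma is
         * seen as attached, the later reads of vm_start/vm_end/vm_mm
         * in lock_vma_under_rcu() cannot observe pre-attach values.
         */
        smp_rmb();
        return detached;
}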

