From: Suren Baghdasaryan <surenb@google.com>
To: "Liam R. Howlett" <Liam.Howlett@oracle.com>,
Peter Zijlstra <peterz@infradead.org>,
Suren Baghdasaryan <surenb@google.com>,
akpm@linux-foundation.org, willy@infradead.org,
lorenzo.stoakes@oracle.com, mhocko@suse.com, vbabka@suse.cz,
hannes@cmpxchg.org, mjguzik@gmail.com, oliver.sang@intel.com,
mgorman@techsingularity.net, david@redhat.com,
peterx@redhat.com, oleg@redhat.com, dave@stgolabs.net,
paulmck@kernel.org, brauner@kernel.org, dhowells@redhat.com,
hdanton@sina.com, hughd@google.com, lokeshgidra@google.com,
minchan@google.com, jannh@google.com, shakeel.butt@linux.dev,
souravpanda@google.com, pasha.tatashin@soleen.com,
klarasmodin@gmail.com, corbet@lwn.net,
linux-doc@vger.kernel.org, linux-mm@kvack.org,
linux-kernel@vger.kernel.org, kernel-team@android.com
Subject: Re: [PATCH v6 10/16] mm: replace vm_lock and detached flag with a reference count
Date: Wed, 18 Dec 2024 07:57:17 -0800 [thread overview]
Message-ID: <CAJuCfpHRtuRdf3YTGFTK7oV0mk4Ck-G22-dARKA+ObVwvfxNkg@mail.gmail.com> (raw)
In-Reply-To: <kfltsrry7qjuycyqpe2wune2ejad6kvusm2zixvfbtprbnw2lv@wcafrui6qaa7>
On Wed, Dec 18, 2024 at 7:37 AM Liam R. Howlett <Liam.Howlett@oracle.com> wrote:
>
> * Peter Zijlstra <peterz@infradead.org> [241218 05:06]:
> > On Wed, Dec 18, 2024 at 10:41:04AM +0100, Peter Zijlstra wrote:
> > > On Tue, Dec 17, 2024 at 08:27:46AM -0800, Suren Baghdasaryan wrote:
> > >
> > > > > So I just replied there, and no, I don't think it makes sense. Just put
> > > > > the kmem_cache_free() in vma_refcount_put(), to be done on 0.
> > > >
> > > > That's very appealing indeed and makes things much simpler. The
> > > > problem I see with that is the case when we detach a vma from the tree
> > > > to isolate it, then do some cleanup and only then free it. That's done
> > > > in vms_gather_munmap_vmas() here:
> > > > https://elixir.bootlin.com/linux/v6.12.5/source/mm/vma.c#L1240 and we
> > > > even might reattach detached vmas back:
> > > > https://elixir.bootlin.com/linux/v6.12.5/source/mm/vma.c#L1312. IOW,
> > > > detached state is not final and we can't destroy the object that
> > > > reached this state.
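Just to spell it out, the simpler scheme under discussion would be roughly
the following (a sketch only; it assumes the refcount_t vm_refcnt added by
this series and frees through the existing vm_area_free()):

	static inline void vma_refcount_put(struct vm_area_struct *vma)
	{
		/* Last reference gone: no readers left, safe to free. */
		if (refcount_dec_and_test(&vma->vm_refcnt))
			vm_area_free(vma);
	}

The detach/reattach dance in vms_gather_munmap_vmas() is exactly what makes
this awkward: a vma whose count drops to zero there might still have to be
reattached on the failure path.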
> > >
> > > Urgh, so that's the munmap() path, but arguably when that fails, the
> > > map stays in place.
> > >
> > > I think this means you're marking detached too soon; you should only
> > > mark detached once you reach the point of no return.
> > >
> > > That said, once you've reached the point of no return and are about to
> > > go remove the page-tables, you very much want to ensure a lack of
> > > concurrency.
> > >
> > > So perhaps waiting for out-standing readers at this point isn't crazy.
> > >
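Waiting for the readers could look something like this (very rough sketch;
vm_refcnt follows this series, but the rcuwait and its placement are made-up
names for illustration only):

	/* Point of no return: vma is unreachable via vma_lookup(), now wait
	 * for any readers that still hold a reference to drop it. */
	vma_mark_detached(vma, true);
	rcuwait_wait_event(&vma->vm_mm->vma_writer_wait,
			   refcount_read(&vma->vm_refcnt) == 0,
			   TASK_UNINTERRUPTIBLE);

with the reader side doing rcuwait_wake_up() when it drops the last
reference.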
> > > Also, I'm having a very hard time reading this maple tree stuff :/
> > > Afaict vms_gather_munmap_vmas() only adds the VMAs to be removed to a
> > > second tree, it does not in fact unlink them from the mm yet.
>
> Yes, that's correct. I tried to make this clear with a gather/complete
> naming like other areas of the mm. I hope that helped.
>
> Also, the comments for the function state that's what's going on:
>
> * vms_gather_munmap_vmas() - Put all VMAs within a range into a maple tree
> * for removal at a later date. Handles splitting first and last if necessary
> * and marking the vmas as isolated.
>
> ... might be worth updating with new information.
>
> > >
> > > AFAICT it's vma_iter_clear_gfp() that actually wipes the vmas from the
> > > mm -- and that being able to fail is mind boggling and I suppose is what
> > > gives rise to much of this insanity :/
>
> This is also correct. The maple tree is a b-tree variant that has
> internal nodes. Any write to it, including storing NULLs, updates those
> nodes and may need to allocate. That is the cost of supporting rcu
> lookups; we will use the same or less memory in the end, but we must
> maintain a consistent view of the ranges.
>
> But to put this into perspective, we get 16 nodes per 4k page, most
> writes will use 1 or 3 of these from a kmem_cache, so we are talking
> about a very unlikely possibility. Except when syzbot decides to fail
> random allocations.
>
> We could preallocate for the write, but this section of the code is
> GFP_KERNEL, so we don't. Preallocation is an option to simplify the
> failure path though... which is what you did below.
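For reference, the preallocation pattern with the maple tree API looks
roughly like this (simplified sketch; locking and vma cleanup omitted):

	/* Wipe [start, end) from mm's vma tree, preallocating nodes so the
	 * final store cannot fail. */
	static int clear_range(struct mm_struct *mm, unsigned long start,
			       unsigned long end)
	{
		MA_STATE(mas, &mm->mm_mt, start, end - 1);

		/* Reserve the nodes this write may need while we can still fail. */
		if (mas_preallocate(&mas, NULL, GFP_KERNEL))
			return -ENOMEM;

		/* ... do the work that may still fail, then commit: */
		mas_store_prealloc(&mas, NULL);
		return 0;
	}

(mas_destroy() would drop the reservation if we bail out before storing.)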
>
> > >
> > > Anyway, I would expect remove_vma() to be the one that marks it detached
> > > (it's already unreachable through vma_lookup() at this point) and there
> > > you should wait for concurrent readers to bugger off.
> >
> > Also, I think vma_start_write() in that gather loop is too early; you're
> > not actually going to change the VMA yet -- with the obvious exception of
> > the split cases.
>
> The split needs to start the write on the vma to avoid anyone reading it
> while it's being altered.
>
> >
> > That too should probably come after you've passed all the fail/unwind
> > spots.
>
> Do you mean the split? I'd like to move the split later as well..
> tracking that is a pain and may need an extra vma for when one vma is
> split twice before removing the middle part.
>
> Actually, I think we need to allocate two (or at least one) vmas in this
> case and just pass one through to unmap (written only to the mas_detach
> tree?). It would be nice to find a way to NOT need to do that even.. I
> had tried to use a vma on the stack years ago, which didn't work out.
>
> >
> > Something like so perhaps? (yeah, I know, I wrecked a bunch)
> >
> > diff --git a/mm/vma.c b/mm/vma.c
> > index 8e31b7e25aeb..45d43adcbb36 100644
> > --- a/mm/vma.c
> > +++ b/mm/vma.c
> > @@ -1173,6 +1173,11 @@ static void vms_complete_munmap_vmas(struct vma_munmap_struct *vms,
> > struct vm_area_struct *vma;
> > struct mm_struct *mm;
> >
>
> mas_set(mas_detach, 0);
>
> > + mas_for_each(mas_detach, vma, ULONG_MAX) {
> > + vma_start_write(vma);
> > + vma_mark_detached(vma, true);
> > + }
> > +
> > mm = current->mm;
> > mm->map_count -= vms->vma_count;
> > mm->locked_vm -= vms->locked_vm;
> > @@ -1219,9 +1224,6 @@ static void reattach_vmas(struct ma_state *mas_detach)
> > struct vm_area_struct *vma;
> >
>
> > mas_set(mas_detach, 0);
> Drop the mas_set here.
>
> > - mas_for_each(mas_detach, vma, ULONG_MAX)
> > - vma_mark_detached(vma, false);
> > -
> > __mt_destroy(mas_detach->tree);
> > }
> >
> > @@ -1289,13 +1291,11 @@ static int vms_gather_munmap_vmas(struct vma_munmap_struct *vms,
> > if (error)
> > goto end_split_failed;
> > }
> > - vma_start_write(next);
> > mas_set(mas_detach, vms->vma_count++);
> > error = mas_store_gfp(mas_detach, next, GFP_KERNEL);
> > if (error)
> > goto munmap_gather_failed;
> >
> > - vma_mark_detached(next, true);
> > nrpages = vma_pages(next);
> >
> > vms->nr_pages += nrpages;
> > @@ -1431,14 +1431,17 @@ int do_vmi_align_munmap(struct vma_iterator *vmi, struct vm_area_struct *vma,
> > struct vma_munmap_struct vms;
> > int error;
> >
>
> The preallocation needs to know the range being stored so it can work
> out what the write will entail.
>
> vma_iter_config(vmi, start, end);
>
> > + error = mas_preallocate(vmi->mas);
>
> We haven't needed a vma iterator helper that preallocates for storing a
> NULL, but we can add one for this.
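Presumably that would be a thin wrapper next to the existing
vma_iter_prealloc(), e.g. (hypothetical name, assuming vma_iter_config()
has already set the range):

	static inline int vma_iter_prealloc_null(struct vma_iterator *vmi)
	{
		/* Reserve nodes for clearing (storing NULL over) the range. */
		return mas_preallocate(&vmi->mas, NULL, GFP_KERNEL);
	}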
>
> > + if (error)
> > + goto gather_failed;
> > +
> > init_vma_munmap(&vms, vmi, vma, start, end, uf, unlock);
> > error = vms_gather_munmap_vmas(&vms, &mas_detach);
> > if (error)
> > goto gather_failed;
> >
>
> Drop this stuff.
> > error = vma_iter_clear_gfp(vmi, start, end, GFP_KERNEL);
> > - if (error)
> > - goto clear_tree_failed;
> > + VM_WARN_ON(error);
>
> Do this instead
> vma_iter_config(vmi, start, end);
> vma_iter_clear(vmi);
Thanks for the input, Liam. Let me try to make a patch from these
suggestions and see where we end up and what might blow up.
>
> >
> > /* Point of no return */
> > vms_complete_munmap_vmas(&vms, &mas_detach);
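FWIW, if I'm reading the combined suggestions right, do_vmi_align_munmap()
would end up looking roughly like this (untested sketch;
vma_iter_prealloc_null() being the hypothetical helper from above):

	/* Reserve tree nodes for the final clear while we can still fail. */
	vma_iter_config(vmi, start, end);
	error = vma_iter_prealloc_null(vmi);
	if (error)
		goto gather_failed;

	init_vma_munmap(&vms, vmi, vma, start, end, uf, unlock);
	error = vms_gather_munmap_vmas(&vms, &mas_detach);
	if (error)
		goto gather_failed;	/* would also need to drop the reservation */

	/* Point of no return: wipe the range; vms_complete_munmap_vmas()
	 * then write-locks and marks the vmas detached before freeing. */
	vma_iter_config(vmi, start, end);
	vma_iter_clear(vmi);
	vms_complete_munmap_vmas(&vms, &mas_detach);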