From: Suren Baghdasaryan <surenb@google.com>
To: "Liam R. Howlett" <Liam.Howlett@oracle.com>,
Suren Baghdasaryan <surenb@google.com>,
akpm@linux-foundation.org, michel@lespinasse.org,
jglisse@google.com, mhocko@suse.com, vbabka@suse.cz,
hannes@cmpxchg.org, mgorman@techsingularity.net,
dave@stgolabs.net, willy@infradead.org, peterz@infradead.org,
ldufour@linux.ibm.com, paulmck@kernel.org, mingo@redhat.com,
will@kernel.org, luto@kernel.org, songliubraving@fb.com,
peterx@redhat.com, david@redhat.com, dhowells@redhat.com,
hughd@google.com, bigeasy@linutronix.de,
kent.overstreet@linux.dev, punit.agrawal@bytedance.com,
lstoakes@gmail.com, peterjung1337@gmail.com,
rientjes@google.com, chriscli@google.com,
axelrasmussen@google.com, joelaf@google.com, minchan@google.com,
rppt@kernel.org, jannh@google.com, shakeelb@google.com,
tatashin@google.com, edumazet@google.com, gthelen@google.com,
gurua@google.com, arjunroy@google.com, soheil@google.com,
leewalsh@google.com, posk@google.com,
michalechner92@googlemail.com, linux-mm@kvack.org,
linux-arm-kernel@lists.infradead.org,
linuxppc-dev@lists.ozlabs.org, x86@kernel.org,
linux-kernel@vger.kernel.org, kernel-team@android.com
Subject: Re: [PATCH v3 23/35] mm/mmap: prevent pagefault handler from racing with mmu_notifier registration
Date: Thu, 23 Feb 2023 12:29:58 -0800 [thread overview]
Message-ID: <CAJuCfpEywUsBxKW5DCHCa_XK45ewhnULia75zoZ9ehW9nsYAMA@mail.gmail.com> (raw)
In-Reply-To: <20230223200616.kfnwwpuzuwq5hr7j@revolver>
On Thu, Feb 23, 2023 at 12:06 PM Liam R. Howlett
<Liam.Howlett@oracle.com> wrote:
>
> * Suren Baghdasaryan <surenb@google.com> [230216 00:18]:
> > Page fault handlers might need to fire MMU notifications while a new
> > notifier is being registered. Modify mm_take_all_locks to write-lock all
> > VMAs and prevent this race with page fault handlers that would hold VMA
> > locks. VMAs are locked before i_mmap_rwsem and anon_vma to keep the same
> > locking order as in page fault handlers.
> >
> > Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> > ---
> > mm/mmap.c | 9 +++++++++
> > 1 file changed, 9 insertions(+)
> >
> > diff --git a/mm/mmap.c b/mm/mmap.c
> > index 00f8c5798936..801608726be8 100644
> > --- a/mm/mmap.c
> > +++ b/mm/mmap.c
> > @@ -3501,6 +3501,7 @@ static void vm_lock_mapping(struct mm_struct *mm, struct address_space *mapping)
> > * of mm/rmap.c:
> > * - all hugetlbfs_i_mmap_rwsem_key locks (aka mapping->i_mmap_rwsem for
> > * hugetlb mapping);
> > + * - all vmas marked locked
> > * - all i_mmap_rwsem locks;
> > * - all anon_vma->rwseml
> > *
> > @@ -3523,6 +3524,13 @@ int mm_take_all_locks(struct mm_struct *mm)
> >
> > mutex_lock(&mm_all_locks_mutex);
> >
> > + mas_for_each(&mas, vma, ULONG_MAX) {
> > + if (signal_pending(current))
> > + goto out_unlock;
> > + vma_start_write(vma);
> > + }
> > +
> > + mas_set(&mas, 0);
> > mas_for_each(&mas, vma, ULONG_MAX) {
> > if (signal_pending(current))
> > goto out_unlock;
>
> Do we need a vma_end_write_all(mm) in the out_unlock unrolling?
We can't really do that because some VMAs might have been write-locked
before mm_take_all_locks() was called, and vma_end_write_all() would
drop those pre-existing locks as well. So, we have to wait until the
mmap write lock is dropped and vma_end_write_all() is called from
there. Getting a signal while executing mm_take_all_locks() is
probably uncommon enough that this won't pose a performance issue.
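To make the above concrete, here is a simplified userspace model of the
per-VMA write-lock scheme (the names vma_start_write(), vma_end_write_all(),
vm_lock_seq and mm_lock_seq mirror the patch series, but the model itself is
illustrative, not the kernel code): a VMA counts as write-locked while its
vm_lock_seq matches the owning mm's mm_lock_seq, and vma_end_write_all()
releases every mark at once by bumping mm_lock_seq.

```c
#include <stdbool.h>

/* Toy stand-ins for mm_struct and vm_area_struct. */
struct toy_mm {
	int mm_lock_seq;	/* bumped when the mmap write lock is dropped */
};

struct toy_vma {
	struct toy_mm *mm;
	int vm_lock_seq;	/* == mm->mm_lock_seq while write-locked */
};

/* Mark the VMA write-locked by copying the current mm sequence number. */
static void vma_start_write(struct toy_vma *vma)
{
	vma->vm_lock_seq = vma->mm->mm_lock_seq;
}

static bool vma_is_write_locked(const struct toy_vma *vma)
{
	return vma->vm_lock_seq == vma->mm->mm_lock_seq;
}

/*
 * Release ALL write-locked VMAs of this mm at once. The per-VMA mark
 * carries no record of who set it, so there is no way to release only
 * the VMAs that mm_take_all_locks() itself locked: any VMA write-locked
 * earlier under the same mmap write lock is released here too. That is
 * why the signal-pending unrolling cannot undo the marks early and must
 * leave the release to the eventual mmap write-unlock.
 */
static void vma_end_write_all(struct toy_mm *mm)
{
	mm->mm_lock_seq++;
}
```

In this model, "unlocking" is a single integer increment on the mm, which is
exactly what makes a selective unlock in the out_unlock path impossible.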
>
> Also, does this need to honour the strict locking order that we have to
> add an entire new loop? This function is...suboptimal today, but if we
> could get away with not looping through every VMA for a 4th time, that
> would be nice.
That's what I used to do until Jann pointed out the locking order
requirement for avoiding deadlocks here:
https://lore.kernel.org/all/CAG48ez3EAai=1ghnCMF6xcgUebQRm-u2xhwcpYsfP9=r=oVXig@mail.gmail.com/
>
> > @@ -3612,6 +3620,7 @@ void mm_drop_all_locks(struct mm_struct *mm)
> > if (vma->vm_file && vma->vm_file->f_mapping)
> > vm_unlock_mapping(vma->vm_file->f_mapping);
> > }
> > + vma_end_write_all(mm);
> >
> > mutex_unlock(&mm_all_locks_mutex);
> > }
> > --
> > 2.39.1
> >
>