From: Suren Baghdasaryan <surenb@google.com>
To: Peter Xu <peterx@redhat.com>
Cc: David Hildenbrand <david@redhat.com>,
akpm@linux-foundation.org, jirislaby@kernel.org,
jacobly.alt@gmail.com, holger@applied-asynchrony.com,
hdegoede@redhat.com, michel@lespinasse.org, jglisse@google.com,
mhocko@suse.com, vbabka@suse.cz, hannes@cmpxchg.org,
mgorman@techsingularity.net, dave@stgolabs.net,
willy@infradead.org, liam.howlett@oracle.com,
peterz@infradead.org, ldufour@linux.ibm.com, paulmck@kernel.org,
mingo@redhat.com, will@kernel.org, luto@kernel.org,
songliubraving@fb.com, dhowells@redhat.com, hughd@google.com,
bigeasy@linutronix.de, kent.overstreet@linux.dev,
punit.agrawal@bytedance.com, lstoakes@gmail.com,
peterjung1337@gmail.com, rientjes@google.com,
chriscli@google.com, axelrasmussen@google.com,
joelaf@google.com, minchan@google.com, rppt@kernel.org,
jannh@google.com, shakeelb@google.com, tatashin@google.com,
edumazet@google.com, gthelen@google.com, linux-mm@kvack.org,
linux-kernel@vger.kernel.org, stable@vger.kernel.org
Subject: Re: [PATCH v3 2/2] mm: disable CONFIG_PER_VMA_LOCK until its fixed
Date: Wed, 5 Jul 2023 13:33:26 -0700
Message-ID: <CAJuCfpGHRfK1ZC3YmF1caKHiR7hD73goOXLKQubFLuOgzCr0dg@mail.gmail.com>
In-Reply-To: <ZKXRsQC8ufiebDGu@x1n>

On Wed, Jul 5, 2023 at 1:25 PM Peter Xu <peterx@redhat.com> wrote:
>
> On Wed, Jul 05, 2023 at 10:22:27AM -0700, Suren Baghdasaryan wrote:
> > On Wed, Jul 5, 2023 at 10:16 AM David Hildenbrand <david@redhat.com> wrote:
> > >
> > > On 05.07.23 19:12, Suren Baghdasaryan wrote:
> > > > A memory corruption was reported in [1] with bisection pointing to the
> > > > patch [2] enabling per-VMA locks for x86.
> > > > Disable per-VMA locks config to prevent this issue while the problem is
> > > > being investigated. This is expected to be a temporary measure.
> > > >
> > > > [1] https://bugzilla.kernel.org/show_bug.cgi?id=217624
> > > > [2] https://lore.kernel.org/all/20230227173632.3292573-30-surenb@google.com
> > > >
> > > > Reported-by: Jiri Slaby <jirislaby@kernel.org>
> > > > Closes: https://lore.kernel.org/all/dbdef34c-3a07-5951-e1ae-e9c6e3cdf51b@kernel.org/
> > > > Reported-by: Jacob Young <jacobly.alt@gmail.com>
> > > > Closes: https://bugzilla.kernel.org/show_bug.cgi?id=217624
> > > > Fixes: 0bff0aaea03e ("x86/mm: try VMA lock-based page fault handling first")
> > > > Cc: stable@vger.kernel.org
> > > > Signed-off-by: Suren Baghdasaryan <surenb@google.com>
> > > > ---
> > > > mm/Kconfig | 3 ++-
> > > > 1 file changed, 2 insertions(+), 1 deletion(-)
> > > >
> > > > diff --git a/mm/Kconfig b/mm/Kconfig
> > > > index 09130434e30d..0abc6c71dd89 100644
> > > > --- a/mm/Kconfig
> > > > +++ b/mm/Kconfig
> > > > @@ -1224,8 +1224,9 @@ config ARCH_SUPPORTS_PER_VMA_LOCK
> > > > def_bool n
> > > >
> > > > config PER_VMA_LOCK
> > > > - def_bool y
> > > > + bool "Enable per-vma locking during page fault handling."
> > > > depends on ARCH_SUPPORTS_PER_VMA_LOCK && MMU && SMP
> > > > + depends on BROKEN
> > > > help
> > > > Allow per-vma locking during page fault handling.
> > > >
> > > Do we have any testing results (that don't reveal other issues :) ) for
> > > patch #1? Not sure if we really want to mark it broken if patch #1 fixes
> > > the issue.
> >
> > I tested the fix using the only reproducer provided in the reports,
> > plus kernel compilation and my fork stress test. All looked good and
> > stable, but I don't know whether the other reports were hitting the
> > same issue or something different.
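
(For illustration, a minimal sketch of this kind of fork stress test.
This is a hypothetical example, not the actual reproducer from the
reports: one thread keeps faulting in anonymous pages while the main
thread forks children that verify the buffer contents.)

    /* Hypothetical fork/page-fault stress sketch (illustrative only,
     * not the reproducer attached to the bug report).
     * Build with: gcc -O2 -pthread stress.c
     * One thread keeps writing a known value into anonymous memory,
     * so page faults race with fork()'s dup_mmap(); each child then
     * checks that every byte still holds that value. */
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define LEN (64UL * 1024 * 1024)

    static volatile unsigned char *buf;

    static void *fault_thread(void *arg)
    {
            for (;;)                        /* keep touching random pages */
                    buf[rand() % LEN] = 1;
            return NULL;
    }

    int main(void)
    {
            pthread_t t;

            buf = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (buf == MAP_FAILED)
                    return 1;
            memset((void *)buf, 1, LEN);    /* 1 is the only value ever written */
            pthread_create(&t, NULL, fault_thread, NULL);

            for (int i = 0; i < 1000; i++) {
                    pid_t pid = fork();
                    if (pid == 0) {
                            /* child: any byte that is not 1 is corruption */
                            for (size_t j = 0; j < LEN; j++)
                                    if (buf[j] != 1) {
                                            fprintf(stderr, "corruption at %zu\n", j);
                                            _exit(1);
                                    }
                            _exit(0);
                    }
                    waitpid(pid, NULL, 0);
            }
            return 0;
    }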
>
> The commit log seems slightly confusing. It mostly says the bug is still
> not solved, but I assume patch 1 is the current "fix"; it's just not clear
> whether there are any other potential issues?
>
> According to the stable tree rules:
>
> - It must fix a problem that causes a build error (but not for things
> marked CONFIG_BROKEN), an oops, a hang, data corruption, a real
> security issue, or some "oh, that's not good" issue. In short, something
> critical.
>
> I think it means the vma lock will never be fixed in 6.4, and it can't be
> (because after this patch it'll be BROKEN, this patch is Cc'ed to stable,
> and we can't fix BROKEN things in the stable trees).
I was hoping we could re-enable VMA locks in 6.4 once we get more
confirmation that the problem is gone. Is that not possible once the
BROKEN dependency is merged?
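
(For reference on the BROKEN idiom: BROKEN is defined in init/Kconfig
with no prompt and no default, so it is always =n, which makes any
option that depends on it impossible to enable; re-enabling means a
follow-up patch that simply drops the dependency again. Roughly:)

    config BROKEN
            bool

    config PER_VMA_LOCK
            bool "Enable per-vma locking during page fault handling."
            depends on ARCH_SUPPORTS_PER_VMA_LOCK && MMU && SMP
            depends on BROKEN      # never satisfiable, so the option stays off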
>
> I see no problem with that at all, just making sure this is what you wanted...
>
> There will still be an attempt at a final fix, am I right? IIRC, allowing
> page faults during fork() is one of the major goals of the vma lock.
I think we can further optimize the locking rules here (see the discussion
in https://lore.kernel.org/all/20230703182150.2193578-1-surenb@google.com/),
but for now we want the simplest and most effective way to fix the
memory corruption problem.
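
(For context, patch 1 takes roughly this approach: write-lock every
parent VMA in dup_mmap() so that lock-based page faults cannot run
concurrently with fork() copying that VMA. A simplified, non-buildable
sketch of the relevant loop in kernel/fork.c:)

    /* Simplified sketch of dup_mmap() with patch 1 applied: before a
     * parent VMA is duplicated, it is write-locked, forcing concurrent
     * faults on it to fall back to the mmap_lock path. */
    for_each_vma(vmi, mpnt) {
            vma_start_write(mpnt);          /* the fix from patch 1 */
            tmp = vm_area_dup(mpnt);        /* copy VMA into the child */
            /* ... remainder of the existing copy logic ... */
    }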
Thanks,
Suren.
>
> Thanks,
>
> --
> Peter Xu
>
Thread overview: 27+ messages
2023-07-05 17:12 [PATCH v3 0/2] Avoid memory corruption caused by per-VMA locks Suren Baghdasaryan
2023-07-05 17:12 ` [PATCH v3 1/2] fork: lock VMAs of the parent process when forking Suren Baghdasaryan
2023-07-05 17:14 ` David Hildenbrand
2023-07-05 17:23 ` Suren Baghdasaryan
2023-07-05 23:06 ` Liam R. Howlett
2023-07-06 0:20 ` Suren Baghdasaryan
2023-07-06 0:32 ` Liam R. Howlett
2023-07-06 0:42 ` Suren Baghdasaryan
2023-07-05 17:12 ` [PATCH v3 2/2] mm: disable CONFIG_PER_VMA_LOCK until its fixed Suren Baghdasaryan
2023-07-05 17:15 ` David Hildenbrand
2023-07-05 17:22 ` Suren Baghdasaryan
2023-07-05 17:24 ` David Hildenbrand
2023-07-05 18:09 ` Suren Baghdasaryan
2023-07-05 18:14 ` Suren Baghdasaryan
2023-07-05 20:25 ` Peter Xu
2023-07-05 20:33 ` Suren Baghdasaryan [this message]
2023-07-06 0:24 ` Andrew Morton
2023-07-06 0:30 ` Suren Baghdasaryan
2023-07-06 0:32 ` Suren Baghdasaryan
2023-07-06 0:44 ` Andrew Morton
2023-07-06 0:49 ` Suren Baghdasaryan
2023-07-06 1:16 ` Suren Baghdasaryan
2023-07-05 20:37 ` David Hildenbrand
2023-07-05 21:09 ` Suren Baghdasaryan
2023-07-05 21:27 ` Matthew Wilcox
2023-07-05 21:54 ` Suren Baghdasaryan
2023-07-05 21:55 ` Peter Xu