From: Nadav Amit <nadav.amit@gmail.com>
To: Andy Lutomirski <luto@kernel.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>,
linux-mm <linux-mm@kvack.org>, Peter Xu <peterx@redhat.com>,
lkml <linux-kernel@vger.kernel.org>,
Pavel Emelyanov <xemul@openvz.org>,
Mike Kravetz <mike.kravetz@oracle.com>,
Mike Rapoport <rppt@linux.vnet.ibm.com>,
stable <stable@vger.kernel.org>, Minchan Kim <minchan@kernel.org>,
Yu Zhao <yuzhao@google.com>, Will Deacon <will@kernel.org>,
Peter Zijlstra <peterz@infradead.org>
Subject: Re: [PATCH] mm/userfaultfd: fix memory corruption due to writeprotect
Date: Tue, 22 Dec 2020 12:58:18 -0800
Message-ID: <719DF2CD-A0BC-4B67-9FBA-A9E0A98AA45E@gmail.com>
In-Reply-To: <CALCETrXLH7vPep-h4fBFSft1YEkyZQo_7W2uh017rHKYT=Occw@mail.gmail.com>
> On Dec 22, 2020, at 12:34 PM, Andy Lutomirski <luto@kernel.org> wrote:
>
> On Sat, Dec 19, 2020 at 2:06 PM Nadav Amit <nadav.amit@gmail.com> wrote:
>>> [ I have in mind another solution, such as keeping in each page-table a
>>> “table-generation” which is the mm-generation at the time of the change,
>>> and only flush if “table-generation”==“mm-generation”, but it requires
>>> some thought on how to avoid adding new memory barriers. ]
>>>
>>> IOW: I think the change that you suggest is insufficient, and a proper
>>> solution is too intrusive for “stable".
>>>
>>> As for performance, I can add another patch later to remove the TLB flush
>>> that is unnecessarily performed during change_protection_range() that does
>>> permission promotion. I know that your concern is about the “protect” case
>>> but I cannot think of a good immediate solution that avoids taking mmap_lock
>>> for write.
>>>
>>> Thoughts?
>>
>> On second thought (i.e., I don’t know what I was thinking), doing so -
>> checking mm_tlb_flush_pending() on every potentially dangerous PTE read
>> and flushing if needed - can lead to a huge number of TLB flushes and
>> shootdowns, as the counter might be elevated for a considerable amount of
>> time.
>
> I've lost track as to whether we still think that this particular
> problem is really a problem,

If by “problem” you mean whether there is a correctness issue with
userfaultfd and soft-dirty deferred flushes under mmap_read_lock() - yes,
there is a problem, and I reproduced these failures on upstream.

If by “problem” you mean performance - I do not know.

> but could we perhaps make the
> tlb_flush_pending field be per-ptl instead of per-mm? Depending on
> how it gets used, it could plausibly be done without atomics or
> expensive barriers by using PTL to protect the field.
>
> FWIW, x86 has a mm generation counter, and I don't think it would be
> totally crazy to find a way to expose an mm generation to core code.
> I don't think we'd want to expose the specific data structures that
> x86 uses to track it -- they're very tailored to the oddities of x86
> TLB management. x86 also doesn't currently have any global concept of
> which mm generation is guaranteed to have been propagated to all CPUs
> -- we track the generation in the pagetables and, per cpu, the
> generation that we know that CPU has seen. x86 could offer a function
> "ensure that all CPUs catch up to mm generation G and don't return
> until this happens" and its relative "have all CPUs caught up to mm
> generation G", but these would need to look at data from multiple CPUs
> and would probably be too expensive on very large systems to use in
> normal page faults unless we were to cache the results somewhere.
> Making a nice cache for this is surely doable, but maybe more
> complexity than we'd want.

I had somewhat similar ideas - saving the generation in each page-table's
struct page - which would allow us to (1) extend pte_same() to detect
interim changes that were reverted (RO->RW->RO) and (2) track per-PTE
pending flushes.

Obviously, I cannot do that as part of this fix. Moreover, it seems to me
that it would require a memory barrier after updating the PTEs and before
reading the current generation (which would be saved per page-table). I
have tried to think of schemes that would use the per-CPU generation
instead, but so far I have not managed, nor had the time, to figure one out.
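
To make that a bit more concrete, here is a rough sketch of the shape I have
in mind (illustrative only, not a proposed patch: the table_gen and mm_gen
fields and both helpers below are made up for this example):

/*
 * Sketch only: 'table_gen' (a new field in the page-table page's struct
 * page) and 'mm_gen' (a new mm-wide counter bumped on flush-deferring
 * changes) do not exist today; the names are invented for illustration.
 */
static inline void set_pte_track_gen(struct mm_struct *mm,
				     struct page *pt_page,
				     pte_t *ptep, pte_t pteval)
{
	set_pte(ptep, pteval);
	/*
	 * The barrier I am worried about: the PTE update must be ordered
	 * before sampling the mm generation that is recorded for this
	 * page-table, or a concurrent flusher could miss the change.
	 */
	smp_mb();
	WRITE_ONCE(pt_page->table_gen, atomic64_read(&mm->mm_gen));
}

/* Flush only if this page-table was changed in the current generation. */
static inline bool pt_needs_flush(struct mm_struct *mm, struct page *pt_page)
{
	return READ_ONCE(pt_page->table_gen) == atomic64_read(&mm->mm_gen);
}

The open question is whether the same thing can be built on top of the
per-CPU generation that x86 already tracks, without adding a barrier like
the one above to the PTE-update path.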