From: Jeff Moyer <jmoyer@redhat.com>
To: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Jia He <justin.he@arm.com>,
Catalin Marinas <catalin.marinas@arm.com>,
"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>,
linux-mm@kvack.org
Subject: Re: bug: data corruption introduced by commit 83d116c53058 ("mm: fix double page fault on arm64 if PTE_AF is cleared")
Date: Wed, 12 Feb 2020 09:22:03 -0500
Message-ID: <x491rqz3myc.fsf@segfault.boston.devel.redhat.com>
In-Reply-To: <20200211224038.4u6au5jwki7lofpq@box> (Kirill A. Shutemov's message of "Wed, 12 Feb 2020 01:40:38 +0300")
"Kirill A. Shutemov" <kirill@shutemov.name> writes:
> On Tue, Feb 11, 2020 at 11:27:36AM -0500, Jeff Moyer wrote:
>> > The real solution would be to retry __copy_from_user_inatomic() under ptl
>> > if the first attempt fails. I expect it to be ugly.
>>
>> So long as it's correct. :)
>
> The first attempt at the real solution is below.
>
> Yeah, this is ugly. Any suggestion on clearing up this mess is welcome.
>
> Jeff, could you give it a try?

Yes, that patch appears to fix the problem. I wonder if we could remove
the clear_page completely, though. I'd rather see the program segfault
than operate on bad data. What do you think?
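
Something like the sketch below is roughly what I had in mind for the
final fallback in your patch, i.e. once even the retry under the PTL has
failed.  Completely untested, and I haven't thought through whether
failing the CoW here could just re-trigger the fault forever if the
mapping really is unreadable:

		/*
		 * Even the retry under the PTL failed, so the user
		 * mapping is gone or unreadable.  Fail the copy and
		 * let the fault be retried rather than handing back
		 * a zero-filled page.
		 */
warn:
		WARN_ON_ONCE(1);
		ret = false;
		goto pte_unlock;
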
-Jeff
>
> diff --git a/mm/memory.c b/mm/memory.c
> index 0bccc622e482..e8bfdf0d9d1d 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -2257,7 +2257,7 @@ static inline bool cow_user_page(struct page *dst, struct page *src,
>  	bool ret;
>  	void *kaddr;
>  	void __user *uaddr;
> -	bool force_mkyoung;
> +	bool locked = false;
>  	struct vm_area_struct *vma = vmf->vma;
>  	struct mm_struct *mm = vma->vm_mm;
>  	unsigned long addr = vmf->address;
> @@ -2282,11 +2282,11 @@ static inline bool cow_user_page(struct page *dst, struct page *src,
>  	 * On architectures with software "accessed" bits, we would
>  	 * take a double page fault, so mark it accessed here.
>  	 */
> -	force_mkyoung = arch_faults_on_old_pte() && !pte_young(vmf->orig_pte);
> -	if (force_mkyoung) {
> +	if (arch_faults_on_old_pte() && !pte_young(vmf->orig_pte)) {
>  		pte_t entry;
>
>  		vmf->pte = pte_offset_map_lock(mm, vmf->pmd, addr, &vmf->ptl);
> +		locked = true;
>  		if (!likely(pte_same(*vmf->pte, vmf->orig_pte))) {
>  			/*
>  			 * Other thread has already handled the fault
> @@ -2310,18 +2310,37 @@ static inline bool cow_user_page(struct page *dst, struct page *src,
>  	 * zeroes.
>  	 */
>  	if (__copy_from_user_inatomic(kaddr, uaddr, PAGE_SIZE)) {
> +		if (locked)
> +			goto warn;
> +
> +		/* Re-validate under PTL if the page is still mapped */
> +		vmf->pte = pte_offset_map_lock(mm, vmf->pmd, addr, &vmf->ptl);
> +		locked = true;
> +		if (!likely(pte_same(*vmf->pte, vmf->orig_pte))) {
> +			/* The PTE changed under us. Retry page fault. */
> +			ret = false;
> +			goto pte_unlock;
> +		}
> +
>  		/*
> -		 * Give a warn in case there can be some obscure
> -		 * use-case
> +		 * The same page can be mapped back since last copy attempt.
> +		 * Try to copy again under PTL.
>  		 */
> -		WARN_ON_ONCE(1);
> -		clear_page(kaddr);
> +		if (__copy_from_user_inatomic(kaddr, uaddr, PAGE_SIZE)) {
> +			/*
> +			 * Give a warn in case there can be some obscure
> +			 * use-case
> +			 */
> +warn:
> +			WARN_ON_ONCE(1);
> +			clear_page(kaddr);
> +		}
>  	}
>
>  	ret = true;
>
>  pte_unlock:
> -	if (force_mkyoung)
> +	if (locked)
>  		pte_unmap_unlock(vmf->pte, vmf->ptl);
>  	kunmap_atomic(kaddr);
>  	flush_dcache_page(dst);