Date: Tue, 21 Jun 2022 12:38:37 -0400
From: Peter Xu
To: Nadav Amit
Cc: linux-mm@kvack.org, Nadav Amit, Mike Kravetz, Hugh Dickins,
 Andrew Morton, Axel Rasmussen, David Hildenbrand, Mike Rapoport
Subject: Re: [RFC PATCH v2 3/5] userfaultfd: introduce write-likely mode for copy/wp operations
References: <20220619233449.181323-1-namit@vmware.com>
 <20220619233449.181323-4-namit@vmware.com>
In-Reply-To: <20220619233449.181323-4-namit@vmware.com>

Hi, Nadav,

On Sun, Jun 19, 2022 at 04:34:47PM -0700, Nadav Amit wrote:
> From: Nadav Amit
>
> Commit 9ae0f87d009ca ("mm/shmem: unconditionally set pte dirty in
> mfill_atomic_install_pte") has set PTEs as dirty as its title indicates.
> However, setting read-only PTEs as dirty can have several undesired
> implications.
>
> First, setting read-only PTEs as dirty can cause these PTEs to become
> writable during the mprotect() syscall. See change_pte_range():
>
> 	/* Avoid taking write faults for known dirty pages */
> 	if (dirty_accountable && pte_dirty(ptent) &&
> 			(pte_soft_dirty(ptent) ||
> 			 !(vma->vm_flags & VM_SOFTDIRTY))) {
> 		ptent = pte_mkwrite(ptent);
> 	}

IMHO this is not really the direct reason to add a feature that lets the
user specify whether the dirty bit should be set for the UFFDIO_COPY/...
ioctls, because IIUC what's really missing is the pte_uffd_wp() check in
change_pte_range() that I should have added in the shmem+hugetlb uffd-wp
series but missed.. But since this is fixed by David's patch that
optimizes mprotect() altogether, which checks pte_uffd_wp() (and afaict
that's only needed after the shmem+hugetlb patchset, not before), I
think we're safe now on all the branches.

So IMHO we don't need to mention this, as it's kind of misleading.
It'll be welcomed if you want to recover the pte_dirty behavior in
mfill_atomic_install_pte(), but probably this is not the right patch
for it?
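To illustrate the missing check I mentioned (just a sketch of the idea,
not David's actual patch): the dirty fast path quoted above would need
to keep its hands off uffd-wp protected ptes, something like:

	/*
	 * Sketch only: never upgrade a uffd-wp protected pte to
	 * writable here, no matter whether the dirty bit is set.
	 */
	if (dirty_accountable && pte_dirty(ptent) &&
	    !pte_uffd_wp(ptent) &&
	    (pte_soft_dirty(ptent) ||
	     !(vma->vm_flags & VM_SOFTDIRTY))) {
		ptent = pte_mkwrite(ptent);
	}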
> Second, unmapping read-only dirty PTEs often prevents TLB flush
> batching. See try_to_unmap_one():
>
> 	/*
> 	 * Page is dirty. Flush the TLB if a writable entry
> 	 * potentially exists to avoid CPU writes after IO
> 	 * starts and then write it out here.
> 	 */
> 	try_to_unmap_flush_dirty();
>
> Similarly, batching TLB flushes might be prevented in zap_pte_range():
>
> 	if (!PageAnon(page)) {
> 		if (pte_dirty(ptent)) {
> 			force_flush = 1;
> 			set_page_dirty(page);
> 		}
> 	...

I still have the same pure question here (which I asked in the private
reply to you) on why we'd like only pte_dirty() rather than pte_write()
&& pte_dirty() here. I'll restate what I had in the private email,
which I think should be the right thing to do here..

	if (!PageAnon(page)) {
		if (pte_dirty(ptent)) {
			set_page_dirty(page);
			if (pte_write(ptent))
				force_flush = 1;
		}
	}

I also mentioned the other example: doing mprotect(PROT_READ) upon a
dirty pte can also create a pte with dirty=1 and write=0, which afaict
is the same condition we have here with uffd. So it's at least not a
problem only for uffd, and I still think that if we treat this tlb
flush issue as a perf bug we should consider fixing up the tlb flush
code instead, because otherwise "mprotect(PROT_READ) upon a dirty pte"
will also be able to hit it.

Meanwhile, I have the same comment as before: this is not helping any
reviewer understand why we need to grant the user the ability to
conditionally set the dirty bit from the uABI, so I think it's better
to drop this paragraph too for this patch alone.

> In general, setting a PTE as dirty for read-only entries seems like it
> might be dangerous. It should be noted that the Dirty-COW
> vulnerability mitigation also relies on the dirty bit being set only
> after COW (although it does not appear to apply to userfaultfd).
>
> To summarize, setting the dirty bit for read-only PTEs is dangerous.
> But even if we only consider writable pages, always setting the dirty
> bit or always leaving it clear does not seem to be the best policy.
> Leaving the bit clear introduces overhead on the first write-access to
> set the bit. Setting the bit for pages that are eventually not written
> to can require more TLB flushes.

And IMHO only this paragraph is the real and proper reasoning for this
patch..

> Let the userfaultfd users control whether PTEs are marked as dirty or
> clean. Introduce UFFDIO_COPY_MODE_WRITE_LIKELY and
> UFFDIO_WRITEPROTECT_MODE_WRITE_LIKELY to enable userspace to indicate
> whether pages are likely to be written to, and to set the dirty-bit if
> they are.
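As a side note for other reviewers, usage-wise this is what the new
uABI boils down to. A minimal sketch, assuming the v2 flag names here
land as-is (uffd_copy_write_likely() and its arguments are placeholders
for a real handler's state):

	#include <err.h>
	#include <stddef.h>
	#include <sys/ioctl.h>
	#include <linux/userfaultfd.h>

	/*
	 * Resolve a missing-page fault with UFFDIO_COPY while hinting
	 * that the page will be read and written soon, so the kernel
	 * may pre-set the young and dirty bits for us.
	 */
	static void uffd_copy_write_likely(int uffd, unsigned long dst,
					   unsigned long src, size_t len)
	{
		struct uffdio_copy copy = {
			.dst = dst,
			.src = src,
			.len = len,
			.mode = UFFDIO_COPY_MODE_ACCESS_LIKELY |
				UFFDIO_COPY_MODE_WRITE_LIKELY,
		};

		if (ioctl(uffd, UFFDIO_COPY, &copy) == -1)
			err(1, "UFFDIO_COPY");
	}

Completely untested, but hopefully it shows the intent: a handler that
knows the faulting thread is about to write the page would pass
WRITE_LIKELY, while an opportunistic background precopy would not.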
> > Cc: Mike Kravetz > Cc: Hugh Dickins > Cc: Andrew Morton > Cc: Axel Rasmussen > Cc: Peter Xu > Cc: David Hildenbrand > Cc: Mike Rapoport > Signed-off-by: Nadav Amit > --- > fs/userfaultfd.c | 22 ++++++++++++++-------- > include/linux/userfaultfd_k.h | 1 + > include/uapi/linux/userfaultfd.h | 27 +++++++++++++++++++-------- > mm/hugetlb.c | 3 +++ > mm/shmem.c | 3 +++ > mm/userfaultfd.c | 11 +++++++++-- > 6 files changed, 49 insertions(+), 18 deletions(-) > > diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c > index 35a8c4347c54..a56983b594d5 100644 > --- a/fs/userfaultfd.c > +++ b/fs/userfaultfd.c > @@ -1700,7 +1700,7 @@ static int userfaultfd_copy(struct userfaultfd_ctx *ctx, > struct uffdio_copy uffdio_copy; > struct uffdio_copy __user *user_uffdio_copy; > struct userfaultfd_wake_range range; > - bool mode_wp, mode_access_likely; > + bool mode_wp, mode_access_likely, mode_write_likely; > uffd_flags_t uffd_flags; > > user_uffdio_copy = (struct uffdio_copy __user *) arg; > @@ -1727,14 +1727,17 @@ static int userfaultfd_copy(struct userfaultfd_ctx *ctx, > if (uffdio_copy.src + uffdio_copy.len <= uffdio_copy.src) > goto out; > if (uffdio_copy.mode & ~(UFFDIO_COPY_MODE_DONTWAKE|UFFDIO_COPY_MODE_WP| > - UFFDIO_COPY_MODE_ACCESS_LIKELY)) > + UFFDIO_COPY_MODE_ACCESS_LIKELY| > + UFFDIO_COPY_MODE_WRITE_LIKELY)) > goto out; > > mode_wp = uffdio_copy.mode & UFFDIO_COPY_MODE_WP; > mode_access_likely = uffdio_copy.mode & UFFDIO_COPY_MODE_ACCESS_LIKELY; > + mode_write_likely = uffdio_copy.mode & UFFDIO_COPY_MODE_WRITE_LIKELY; > > uffd_flags = (mode_wp ? UFFD_FLAGS_WP : 0) | > - (mode_access_likely ? UFFD_FLAGS_ACCESS_LIKELY : 0); > + (mode_access_likely ? UFFD_FLAGS_ACCESS_LIKELY : 0) | > + (mode_write_likely ? UFFD_FLAGS_WRITE_LIKELY : 0); > > if (mmget_not_zero(ctx->mm)) { > ret = mcopy_atomic(ctx->mm, uffdio_copy.dst, uffdio_copy.src, > @@ -1819,7 +1822,7 @@ static int userfaultfd_writeprotect(struct userfaultfd_ctx *ctx, > struct uffdio_writeprotect uffdio_wp; > struct uffdio_writeprotect __user *user_uffdio_wp; > struct userfaultfd_wake_range range; > - bool mode_wp, mode_dontwake, mode_access_likely; > + bool mode_wp, mode_dontwake, mode_access_likely, mode_write_likely; > uffd_flags_t uffd_flags; > > if (atomic_read(&ctx->mmap_changing)) > @@ -1838,18 +1841,21 @@ static int userfaultfd_writeprotect(struct userfaultfd_ctx *ctx, > > if (uffdio_wp.mode & ~(UFFDIO_WRITEPROTECT_MODE_DONTWAKE | > UFFDIO_WRITEPROTECT_MODE_WP | > - UFFDIO_WRITEPROTECT_MODE_ACCESS_LIKELY)) > + UFFDIO_WRITEPROTECT_MODE_ACCESS_LIKELY | > + UFFDIO_WRITEPROTECT_MODE_WRITE_LIKELY)) > return -EINVAL; > > mode_wp = uffdio_wp.mode & UFFDIO_WRITEPROTECT_MODE_WP; > mode_dontwake = uffdio_wp.mode & UFFDIO_WRITEPROTECT_MODE_DONTWAKE; > mode_access_likely = uffdio_wp.mode & UFFDIO_WRITEPROTECT_MODE_ACCESS_LIKELY; > + mode_write_likely = uffdio_wp.mode & UFFDIO_WRITEPROTECT_MODE_WRITE_LIKELY; > > if (mode_wp && mode_dontwake) > return -EINVAL; > > uffd_flags = (mode_wp ? UFFD_FLAGS_WP : 0) | > - (mode_access_likely ? UFFD_FLAGS_ACCESS_LIKELY : 0); > + (mode_access_likely ? UFFD_FLAGS_ACCESS_LIKELY : 0) | > + (mode_write_likely ? 
UFFD_FLAGS_WRITE_LIKELY : 0);
> 
>  	if (mmget_not_zero(ctx->mm)) {
>  		ret = mwriteprotect_range(ctx->mm, uffdio_wp.range.start,
> @@ -1902,10 +1908,10 @@ static int userfaultfd_continue(struct userfaultfd_ctx *ctx, unsigned long arg)
>  	    uffdio_continue.range.start) {
>  		goto out;
>  	}
> -	if (uffdio_continue.mode & ~UFFDIO_CONTINUE_MODE_DONTWAKE)
> +	if (uffdio_continue.mode & ~(UFFDIO_CONTINUE_MODE_DONTWAKE))
>  		goto out;
>  
> -	uffd_flags = UFFD_FLAGS_ACCESS_LIKELY;
> +	uffd_flags = UFFD_FLAGS_ACCESS_LIKELY | UFFD_FLAGS_WRITE_LIKELY;

Setting dirty by default for CONTINUE may make some sense (unlike
young), at least to keep the old behavior of the code where the pte
dirty bit was unconditionally set. But here's another thought: a real
CONTINUE user modifies the page cache elsewhere, so logically PageDirty
should already be set elsewhere anyway; in most cases the userapp
updates the page cache through another mapping before installing
pgtables for this mapping. Then this dirty bit is not required, it
seems.

If we're going to export this to the uABI, I'm wondering whether it'd
be nicer to apply neither the young nor the dirty bit for CONTINUE,
because fundamentally losing the dirty bit doesn't sound risky, and
meanwhile the user app should have the best knowledge of what to do
(whether the page was requested, or it's just being pre-faulted in the
background). Axel may have some better thoughts.

> 
>  	if (mmget_not_zero(ctx->mm)) {
>  		ret = mcopy_continue(ctx->mm, uffdio_continue.range.start,
> diff --git a/include/linux/userfaultfd_k.h b/include/linux/userfaultfd_k.h
> index e6ac165ec044..261a3fa750d0 100644
> --- a/include/linux/userfaultfd_k.h
> +++ b/include/linux/userfaultfd_k.h
> @@ -59,6 +59,7 @@ typedef unsigned int __bitwise uffd_flags_t;
>  
>  #define UFFD_FLAGS_WP			((__force uffd_flags_t)BIT(0))
>  #define UFFD_FLAGS_ACCESS_LIKELY	((__force uffd_flags_t)BIT(1))
> +#define UFFD_FLAGS_WRITE_LIKELY		((__force uffd_flags_t)BIT(2))
>  
>  extern int mfill_atomic_install_pte(struct mm_struct *dst_mm, pmd_t *dst_pmd,
>  				    struct vm_area_struct *dst_vma,
> diff --git a/include/uapi/linux/userfaultfd.h b/include/uapi/linux/userfaultfd.h
> index d9c8ce9ba777..6ad93a13282e 100644
> --- a/include/uapi/linux/userfaultfd.h
> +++ b/include/uapi/linux/userfaultfd.h
> @@ -267,12 +267,20 @@ struct uffdio_copy {
>  	 */
>  	__s64 copy;
>  	/*
> -	 * UFFDIO_COPY_MODE_ACCESS_LIKELY will set the mapped page as young.
> -	 * This can reduce the time that the first access to the page takes.
> -	 * Yet, if set opportunistically to memory that is not used, it might
> -	 * extend the time before the unused memory pages are reclaimed.
> +	 * UFFDIO_COPY_MODE_ACCESS_LIKELY indicates that the memory is likely to
> +	 * be accessed in the near future, in contrast to memory that is
> +	 * opportunistically copied and might not be accessed. The kernel will
> +	 * act accordingly, for instance by setting the access-bit in the PTE to
> +	 * reduce the access time to the page.
> +	 *
> +	 * UFFDIO_COPY_MODE_WRITE_LIKELY indicates that the memory is likely to
> +	 * be written to. The kernel will act accordingly, for instance by
> +	 * setting the dirty-bit in the PTE to reduce the write time to the
> +	 * page. This flag will be silently ignored if UFFDIO_COPY_MODE_WP is
> +	 * set.
>  	 */
> -#define UFFDIO_COPY_MODE_ACCESS_LIKELY		((__u64)1<<3)
> +#define UFFDIO_COPY_MODE_ACCESS_LIKELY		((__u64)1<<2)
> +#define UFFDIO_COPY_MODE_WRITE_LIKELY		((__u64)1<<3)
>  };
>  
>  struct uffdio_zeropage {
> @@ -297,9 +305,11 @@ struct uffdio_writeprotect {
>  	 * UFFDIO_WRITEPROTECT_MODE_DONTWAKE: set the flag to avoid waking up
>  	 * any wait thread after the operation succeeds.
>  	 *
> -	 * UFFDIO_WRITEPROTECT_MODE_ACCESS_LIKELY: set the flag to mark the modified
> -	 * memory as young, which can reduce the time that the first access
> -	 * to the page takes.
> +	 * UFFDIO_WRITEPROTECT_MODE_ACCESS_LIKELY: set the flag to indicate the memory
> +	 * is likely to be accessed in the near future.
> +	 *
> +	 * UFFDIO_WRITEPROTECT_MODE_WRITE_LIKELY: set the flag to indicate that the
> +	 * memory is likely to be written to in the near future.
>  	 *
>  	 * NOTE: Write protecting a region (WP=1) is unrelated to page faults,
>  	 * therefore DONTWAKE flag is meaningless with WP=1. Removing write
> @@ -309,6 +319,7 @@ struct uffdio_writeprotect {
>  #define UFFDIO_WRITEPROTECT_MODE_WP		((__u64)1<<0)
>  #define UFFDIO_WRITEPROTECT_MODE_DONTWAKE	((__u64)1<<1)
>  #define UFFDIO_WRITEPROTECT_MODE_ACCESS_LIKELY	((__u64)1<<2)
> +#define UFFDIO_WRITEPROTECT_MODE_WRITE_LIKELY	((__u64)1<<3)
>  	__u64 mode;
>  };
>  
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 2beff8a4bf7c..46814fc7762f 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -5962,6 +5962,9 @@ int hugetlb_mcopy_atomic_pte(struct mm_struct *dst_mm,
>  		*pagep = NULL;
>  	}
>  
> +	/* The PTE is not marked as dirty unconditionally */
> +	SetPageDirty(page);
> +
>  	/*
>  	 * The memory barrier inside __SetPageUptodate makes sure that
>  	 * preceding stores to the page contents become visible before
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 89c775275bae..7488cd186c32 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -2404,6 +2404,9 @@ int shmem_mfill_atomic_pte(struct mm_struct *dst_mm,
>  	VM_BUG_ON(PageSwapBacked(page));
>  	__SetPageLocked(page);
>  	__SetPageSwapBacked(page);
> +
> +	/* The PTE is not marked as dirty unconditionally */
> +	SetPageDirty(page);
>  	__SetPageUptodate(page);
>  
>  	ret = -EFAULT;
> diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
> index 140c8d3e946e..3172158d8faa 100644
> --- a/mm/userfaultfd.c
> +++ b/mm/userfaultfd.c
> @@ -70,7 +70,6 @@ int mfill_atomic_install_pte(struct mm_struct *dst_mm, pmd_t *dst_pmd,
>  	pgoff_t offset, max_off;
>  
>  	_dst_pte = mk_pte(page, dst_vma->vm_page_prot);
> -	_dst_pte = pte_mkdirty(_dst_pte);
>  	if (page_in_cache && !vm_shared)
>  		writable = false;
>  
> @@ -85,13 +84,18 @@ int mfill_atomic_install_pte(struct mm_struct *dst_mm, pmd_t *dst_pmd,
>  
>  	if (writable)
>  		_dst_pte = pte_mkwrite(_dst_pte);
> -	else
> +	else {
>  		/*
>  		 * We need this to make sure write bit removed; as mk_pte()
>  		 * could return a pte with write bit set.
>  		 */
>  		_dst_pte = pte_wrprotect(_dst_pte);
>  
> +		/* Marking RO entries as dirty can mess with other code */
> +		if (uffd_flags & UFFD_FLAGS_WRITE_LIKELY)
> +			_dst_pte = pte_mkdirty(_dst_pte);

Hmm.. what about the "writable=true" ones? If I read it right, after
this change a writable pte will never have the dirty bit set here even
when UFFD_FLAGS_WRITE_LIKELY is requested, since pte_mkdirty() is now
only called in the else (write-protected) branch.

> +	}
> +
>  	if (uffd_flags & UFFD_FLAGS_ACCESS_LIKELY)
>  		_dst_pte = pte_mkyoung(_dst_pte);
>  
> @@ -180,6 +184,9 @@ static int mcopy_atomic_pte(struct mm_struct *dst_mm,
>  		*pagep = NULL;
>  	}
>  
> +	/* The PTE is not marked as dirty unconditionally */
> +	SetPageDirty(page);
> +
>  	/*
>  	 * The memory barrier inside __SetPageUptodate makes sure that
>  	 * preceding stores to the page contents become visible before
> -- 
> 2.25.1
> 
> 

-- 
Peter Xu