From: Peter Xu <peterx@redhat.com>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org
Cc: Nadav Amit, Miaohe Lin, Mike Rapoport, Andrea Arcangeli,
	Hugh Dickins, peterx@redhat.com, Jerome Glisse, Mike Kravetz,
	Jason Gunthorpe, Matthew Wilcox, Andrew Morton, Axel Rasmussen,
	"Kirill A . Shutemov"
Subject: [PATCH v2 17/24] hugetlb/userfaultfd: Take care of UFFDIO_COPY_MODE_WP
Date: Tue, 27 Apr 2021 12:13:10 -0400
Message-Id: <20210427161317.50682-18-peterx@redhat.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210427161317.50682-1-peterx@redhat.com>
References: <20210427161317.50682-1-peterx@redhat.com>

First, pass the wp_copy variable throughout the stack into
hugetlb_mcopy_atomic_pte(). Then apply the UFFD_WP bit if
UFFDIO_COPY_MODE_WP is specified with UFFDIO_COPY. Introduce
huge_pte_mkuffd_wp() for it.

Hugetlb pages are only managed by hugetlbfs, so we're safe even without
setting the dirty bit in the huge pte if the page is installed as
read-only. However, we'd better still keep the dirty bit set for a
read-only UFFDIO_COPY pte (when the UFFDIO_COPY_MODE_WP bit is set), not
only to match what we do with shmem, but also because the page does
contain dirty data that the kernel just copied from userspace.
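For context, here is a minimal userspace sketch (not part of the patch)
of the path this enables: resolving a missing fault on a
userfaultfd-registered hugetlb range with UFFDIO_COPY while asking the
kernel to install the page write-protected via UFFDIO_COPY_MODE_WP. The
uffd descriptor and the copy_page_wp() helper are illustrative, and
error handling is elided:

#include <sys/ioctl.h>
#include <linux/userfaultfd.h>

/*
 * Resolve a missing hugetlb fault at "dst" from the staging buffer at
 * "src", keeping the new pte write-protected.  "uffd" is assumed to
 * have been registered over the range with MISSING and WP modes.
 */
static int copy_page_wp(int uffd, unsigned long dst, unsigned long src,
			unsigned long hpage_size)
{
	struct uffdio_copy copy = {
		.dst = dst,		/* fault address, hugepage aligned */
		.src = src,		/* source buffer in userspace */
		.len = hpage_size,
		.mode = UFFDIO_COPY_MODE_WP,	/* install pte read-only */
	};

	return ioctl(uffd, UFFDIO_COPY, &copy) ? -1 : 0;
}

The intent is that a later write to the range traps as a uffd-wp fault,
which userspace then resolves with UFFDIO_WRITEPROTECT; that part is
wired up elsewhere in this series.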
Signed-off-by: Peter Xu <peterx@redhat.com>
---
 include/asm-generic/hugetlb.h |  5 +++++
 include/linux/hugetlb.h       |  6 ++++--
 mm/hugetlb.c                  | 22 +++++++++++++++++-----
 mm/userfaultfd.c              | 12 ++++++++----
 4 files changed, 34 insertions(+), 11 deletions(-)

diff --git a/include/asm-generic/hugetlb.h b/include/asm-generic/hugetlb.h
index 8e1e6244a89da..548212eccbd61 100644
--- a/include/asm-generic/hugetlb.h
+++ b/include/asm-generic/hugetlb.h
@@ -27,6 +27,11 @@ static inline pte_t huge_pte_mkdirty(pte_t pte)
 	return pte_mkdirty(pte);
 }
 
+static inline pte_t huge_pte_mkuffd_wp(pte_t pte)
+{
+	return pte_mkuffd_wp(pte);
+}
+
 static inline pte_t huge_pte_modify(pte_t pte, pgprot_t newprot)
 {
 	return pte_modify(pte, newprot);
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index eb134a75cad41..e38077918330f 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -138,7 +138,8 @@ int hugetlb_mcopy_atomic_pte(struct mm_struct *dst_mm, pte_t *dst_pte,
 				unsigned long dst_addr,
 				unsigned long src_addr,
 				enum mcopy_atomic_mode mode,
-				struct page **pagep);
+				struct page **pagep,
+				bool wp_copy);
 #endif /* CONFIG_USERFAULTFD */
 bool hugetlb_reserve_pages(struct inode *inode, long from, long to,
 						struct vm_area_struct *vma,
@@ -318,7 +319,8 @@ static inline int hugetlb_mcopy_atomic_pte(struct mm_struct *dst_mm,
 						unsigned long dst_addr,
 						unsigned long src_addr,
 						enum mcopy_atomic_mode mode,
-						struct page **pagep)
+						struct page **pagep,
+						bool wp_copy)
 {
 	BUG();
 	return 0;
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 8e234ee9a15e2..20ee8fdf6507d 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -4884,7 +4884,8 @@ int hugetlb_mcopy_atomic_pte(struct mm_struct *dst_mm,
 			    unsigned long dst_addr,
 			    unsigned long src_addr,
 			    enum mcopy_atomic_mode mode,
-			    struct page **pagep)
+			    struct page **pagep,
+			    bool wp_copy)
 {
 	bool is_continue = (mode == MCOPY_ATOMIC_CONTINUE);
 	struct address_space *mapping;
@@ -4981,17 +4982,28 @@ int hugetlb_mcopy_atomic_pte(struct mm_struct *dst_mm,
 		hugepage_add_new_anon_rmap(page, dst_vma, dst_addr);
 	}
 
-	/* For CONTINUE on a non-shared VMA, don't set VM_WRITE for CoW. */
-	if (is_continue && !vm_shared)
+	/*
+	 * For either: (1) CONTINUE on a non-shared VMA, or (2) UFFDIO_COPY
+	 * with wp flag set, don't set pte write bit.
+	 */
+	if (wp_copy || (is_continue && !vm_shared))
 		writable = 0;
 	else
 		writable = dst_vma->vm_flags & VM_WRITE;
 
 	_dst_pte = make_huge_pte(dst_vma, page, writable);
-	if (writable)
-		_dst_pte = huge_pte_mkdirty(_dst_pte);
+	/*
+	 * Always mark UFFDIO_COPY page dirty; note that this may not be
+	 * extremely important for hugetlbfs for now since swapping is not
+	 * supported, but we should still be clear in that this page cannot be
+	 * thrown away at will, even if write bit not set.
+	 */
+	_dst_pte = huge_pte_mkdirty(_dst_pte);
 	_dst_pte = pte_mkyoung(_dst_pte);
 
+	if (wp_copy)
+		_dst_pte = huge_pte_mkuffd_wp(_dst_pte);
+
 	set_huge_pte_at(dst_mm, dst_addr, dst_pte, _dst_pte);
 
 	(void)huge_ptep_set_access_flags(dst_vma, dst_addr, dst_pte, _dst_pte,
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index 7adaebe222b8e..4f716838f1fdb 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -207,7 +207,8 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
 					      unsigned long dst_start,
 					      unsigned long src_start,
 					      unsigned long len,
-					      enum mcopy_atomic_mode mode)
+					      enum mcopy_atomic_mode mode,
+					      bool wp_copy)
 {
 	int vm_alloc_shared = dst_vma->vm_flags & VM_SHARED;
 	int vm_shared = dst_vma->vm_flags & VM_SHARED;
@@ -304,7 +305,8 @@ static __always_inline ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
 		}
 
 		err = hugetlb_mcopy_atomic_pte(dst_mm, dst_pte, dst_vma,
-					       dst_addr, src_addr, mode, &page);
+					       dst_addr, src_addr, mode, &page,
+					       wp_copy);
 
 		mutex_unlock(&hugetlb_fault_mutex_table[hash]);
 		i_mmap_unlock_read(mapping);
@@ -406,7 +408,8 @@ extern ssize_t __mcopy_atomic_hugetlb(struct mm_struct *dst_mm,
 				      unsigned long dst_start,
 				      unsigned long src_start,
 				      unsigned long len,
-				      enum mcopy_atomic_mode mode);
+				      enum mcopy_atomic_mode mode,
+				      bool wp_copy);
 #endif /* CONFIG_HUGETLB_PAGE */
 
 static __always_inline ssize_t mfill_atomic_pte(struct mm_struct *dst_mm,
@@ -526,7 +529,8 @@ static __always_inline ssize_t __mcopy_atomic(struct mm_struct *dst_mm,
 	 */
 	if (is_vm_hugetlb_page(dst_vma))
 		return __mcopy_atomic_hugetlb(dst_mm, dst_vma, dst_start,
-					      src_start, len, mcopy_mode);
+					      src_start, len, mcopy_mode,
+					      wp_copy);
 
 	if (!vma_is_anonymous(dst_vma) && !vma_is_shmem(dst_vma))
 		goto out_unlock;
-- 
2.26.2