Date: Sun, 25 Apr 2021 22:08:42 -0400
From: Peter Xu
To: Mike Kravetz
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Mike Rapoport,
 Nadav Amit, Jerome Glisse, Hugh Dickins, Andrea Arcangeli,
 Andrew Morton, "Kirill A . Shutemov", Axel Rasmussen, Matthew Wilcox
Subject: Re: [PATCH 19/23] hugetlb/userfaultfd: Handle uffd-wp special pte in hugetlb pf handler
Message-ID: <20210426020842.GB85002@xz-x1>
References: <20210323004912.35132-1-peterx@redhat.com>
 <20210323005049.35862-1-peterx@redhat.com>
 <3178f1ff-f8da-7fdd-68ef-8c35972ca2e1@oracle.com>
In-Reply-To: <3178f1ff-f8da-7fdd-68ef-8c35972ca2e1@oracle.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8

On Thu, Apr 22, 2021 at 03:45:39PM -0700, Mike Kravetz wrote:
> On 3/22/21 5:50 PM, Peter Xu wrote:
> > Teach the hugetlb page fault code to understand uffd-wp special pte.
> > For example, when seeing such a pte we need to convert any write fault
> > into a read one (which is fake - we'll retry the write later if so).
> > Meanwhile, for handle_userfault() we'll need to make sure we must wait
> > for the special swap pte too just like a none pte.
> >
> > Note that we also need to teach UFFDIO_COPY about this special pte
> > across the code path so that we can safely install a new page at this
> > special pte as long as we know it's a stall entry.
> >
> > Signed-off-by: Peter Xu
> > ---
> >  fs/userfaultfd.c |  5 ++++-
> >  mm/hugetlb.c     | 34 +++++++++++++++++++++++++++-------
> >  mm/userfaultfd.c |  5 ++++-
> >  3 files changed, 35 insertions(+), 9 deletions(-)
> >
> > diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
> > index 72956f9cc892..f6fa34f58c37 100644
> > --- a/fs/userfaultfd.c
> > +++ b/fs/userfaultfd.c
> > @@ -245,8 +245,11 @@ static inline bool userfaultfd_huge_must_wait(struct userfaultfd_ctx *ctx,
> >  	/*
> >  	 * Lockless access: we're in a wait_event so it's ok if it
> >  	 * changes under us.
> > +	 *
> > +	 * Regarding uffd-wp special case, please refer to comments in
> > +	 * userfaultfd_must_wait().
> >  	 */
> > -	if (huge_pte_none(pte))
> > +	if (huge_pte_none(pte) || pte_swp_uffd_wp_special(pte))
> >  		ret = true;
> >  	if (!huge_pte_write(pte) && (reason & VM_UFFD_WP))
> >  		ret = true;
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index 64e424b03774..448ef745d5ee 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -4369,7 +4369,8 @@ static inline vm_fault_t hugetlb_handle_userfault(struct vm_area_struct *vma,
> >  static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
> >  			struct vm_area_struct *vma,
> >  			struct address_space *mapping, pgoff_t idx,
> > -			unsigned long address, pte_t *ptep, unsigned int flags)
> > +			unsigned long address, pte_t *ptep,
> > +			pte_t old_pte, unsigned int flags)
> >  {
> >  	struct hstate *h = hstate_vma(vma);
> >  	vm_fault_t ret = VM_FAULT_SIGBUS;
> > @@ -4493,7 +4494,7 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
> >
> >  	ptl = huge_pte_lock(h, mm, ptep);
> >  	ret = 0;
> > -	if (!huge_pte_none(huge_ptep_get(ptep)))
> > +	if (!pte_same(huge_ptep_get(ptep), old_pte))
> >  		goto backout;
> >
> >  	if (anon_rmap) {
> > @@ -4503,6 +4504,11 @@ static vm_fault_t hugetlb_no_page(struct mm_struct *mm,
> >  		page_dup_rmap(page, true);
> >  	new_pte = make_huge_pte(vma, page, ((vma->vm_flags & VM_WRITE)
> >  				&& (vma->vm_flags & VM_SHARED)));
> > +	if (unlikely(flags & FAULT_FLAG_UFFD_WP)) {
> > +		WARN_ON_ONCE(flags & FAULT_FLAG_WRITE);
> > +		/* We should have the write bit cleared already, but be safe */
> > +		new_pte = huge_pte_wrprotect(huge_pte_mkuffd_wp(new_pte));
> > +	}
> >  	set_huge_pte_at(mm, haddr, ptep, new_pte);
> >
> >  	hugetlb_count_add(pages_per_huge_page(h), mm);
> > @@ -4584,9 +4590,16 @@ vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
> >  	if (unlikely(is_hugetlb_entry_migration(entry))) {
> >  		migration_entry_wait_huge(vma, mm, ptep);
> >  		return 0;
> > -	} else if (unlikely(is_hugetlb_entry_hwpoisoned(entry)))
> > +	} else if (unlikely(is_hugetlb_entry_hwpoisoned(entry))) {
> >  		return VM_FAULT_HWPOISON_LARGE |
> >  			VM_FAULT_SET_HINDEX(hstate_index(h));
> > +	} else if (unlikely(is_swap_special_pte(entry))) {
> > +		/* Must be a uffd-wp special swap pte */
> > +		WARN_ON_ONCE(!pte_swp_uffd_wp_special(entry));
> > +		flags |= FAULT_FLAG_UFFD_WP;
> > +		/* Emulate a read fault */
> > +		flags &= ~FAULT_FLAG_WRITE;
> > +	}
>
> The comment above this if/else block points out that we hold no locks
> and are only checking conditions that would cause a quick return. Yet,
> this new code is potentially modifying flags. Pretty sure we can race
> and have the entry change.
>
> Not sure of all the side effects of emulating a read if the changed entry
> is not a uffd-wp special swap pte and we emulate a read when we should not.
>
> Perhaps we should just put this check and modification of flags after
> taking the fault mutex and before the change below?

That's a great point. Even the WARN_ON_ONCE could trigger if the pte got
modified in parallel, so this is definitely broken. Yes, I'd better do that
with the pgtable lock held; hugetlb_no_page() should mostly be the only
function that handles this special case.

Then maybe I don't need to emulate the READ fault at all: just checking
pte_swp_uffd_wp_special() with the lock held and then doing the wrprotect
properly should suffice. Maybe that's even true for shmem; I'll think more
about it.

Thanks!

--
Peter Xu