From: Peter Xu
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Cc: "Kirill A . Shutemov", Jerome Glisse, Mike Kravetz, Matthew Wilcox, Andrew Morton, Axel Rasmussen, Hugh Dickins, peterx@redhat.com, Nadav Amit, Andrea Arcangeli, Mike Rapoport
Subject: [PATCH 11/23] shmem/userfaultfd: Allow wr-protect none pte for file-backed mem
Date: Mon, 22 Mar 2021 20:49:00 -0400
Message-Id: <20210323004912.35132-12-peterx@redhat.com>
In-Reply-To: <20210323004912.35132-1-peterx@redhat.com>
References: <20210323004912.35132-1-peterx@redhat.com>

File-backed memory differs from anonymous memory in that even if the pte
is missing, the data could still reside either in the file or in the
page/swap cache.  So when wr-protecting a pte, we need to consider none
ptes too.

We do that by installing the uffd-wp special swap pte as a marker.  Then
when there's a future write to the pte, the fault handler will take the
special path to first fault in the page as read-only, then report to the
userfaultfd server with the wr-protect message.

On the other hand, when unprotecting a page, it's also possible that the
pte got unmapped but replaced by the special uffd-wp marker.  Then we'll
need to be able to recover the uffd-wp special swap pte into a none pte,
so that the next access to the page will fault in correctly as usual via
the fault handler, rather than sending a uffd-wp message.

Special care needs to be taken throughout the change_protection_range()
process.  Since we now allow the user to wr-protect a none pte, we need
to be able to pre-populate the page table entries if we see
!anonymous && MM_CP_UFFD_WP requests; otherwise change_protection_range()
will always skip when the pgtable entry does not exist.

Note that this patch only covers the small pages (pte level), not the
transparent huge pages yet.  But it will be the base for THPs too.

Signed-off-by: Peter Xu
---
 mm/mprotect.c | 48 ++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 48 insertions(+)

diff --git a/mm/mprotect.c b/mm/mprotect.c
index b3def0a102bf..6b63e3544b47 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -29,6 +29,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -176,6 +177,32 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 				set_pte_at(vma->vm_mm, addr, pte, newpte);
 				pages++;
 			}
+		} else if (unlikely(is_swap_special_pte(oldpte))) {
+			if (uffd_wp_resolve && !vma_is_anonymous(vma) &&
+			    pte_swp_uffd_wp_special(oldpte)) {
+				/*
+				 * This is uffd-wp special pte and we'd like to
+				 * unprotect it.  What we need to do is simply
+				 * recover the pte into a none pte; the next
+				 * page fault will fault in the page.
+				 */
+				pte_clear(vma->vm_mm, addr, pte);
+				pages++;
+			}
+		} else {
+			/* It must be an none page, or what else?.. */
+			WARN_ON_ONCE(!pte_none(oldpte));
+			if (unlikely(uffd_wp && !vma_is_anonymous(vma))) {
+				/*
+				 * For file-backed mem, we need to be able to
+				 * wr-protect even for a none pte!  Because
+				 * even if the pte is null, the page/swap cache
+				 * could exist.
+				 */
+				set_pte_at(vma->vm_mm, addr, pte,
+					   pte_swp_mkuffd_wp_special(vma));
+				pages++;
+			}
 		}
 	} while (pte++, addr += PAGE_SIZE, addr != end);
 	arch_leave_lazy_mmu_mode();
@@ -209,6 +236,25 @@ static inline int pmd_none_or_clear_bad_unless_trans_huge(pmd_t *pmd)
 	return 0;
 }
 
+/*
+ * File-backed vma allows uffd wr-protect upon none ptes, because even if pte
+ * is missing, page/swap cache could exist.  When that happens, the wr-protect
+ * information will be stored in the page table entries with the marker (e.g.,
+ * PTE_SWP_UFFD_WP_SPECIAL).  Prepare for that by always populating the page
+ * tables to pte level, so that we'll install the markers in change_pte_range()
+ * where necessary.
+ *
+ * Note that we only need to do this in pmd level, because if pmd does not
+ * exist, it means the whole range covered by the pmd entry (of a pud) does not
+ * contain any valid data but all zeros.  Then nothing to wr-protect.
+ */
+#define change_protection_prepare(vma, pmd, addr, cp_flags)		\
+	do {								\
+		if (unlikely((cp_flags & MM_CP_UFFD_WP) && pmd_none(*pmd) && \
+			     !vma_is_anonymous(vma)))			\
+			WARN_ON_ONCE(pte_alloc(vma->vm_mm, pmd));	\
+	} while (0)
+
 static inline unsigned long change_pmd_range(struct vm_area_struct *vma,
 		pud_t *pud, unsigned long addr, unsigned long end,
 		pgprot_t newprot, unsigned long cp_flags)
@@ -227,6 +273,8 @@ static inline unsigned long change_pmd_range(struct vm_area_struct *vma,
 
 		next = pmd_addr_end(addr, end);
 
+		change_protection_prepare(vma, pmd, addr, cp_flags);
+
 		/*
 		 * Automatic NUMA balancing walks the tables with mmap_lock
 		 * held for read. It's possible a parallel update to occur
-- 
2.26.2