Date: Mon, 31 May 2021 17:44:02 -0700
From: Andrew Morton
To: Peter Collingbourne
Cc: Kostya Kortchinsky, Evgenii Stepanov, Andrea Arcangeli, Peter Xu, linux-mm@kvack.org
Subject: Re: [PATCH v4] mm: improve mprotect(R|W) efficiency on pages referenced once
Message-Id: <20210531174402.1208042b55c9fc6c538569da@linux-foundation.org>
In-Reply-To: <20210527190453.1259020-1-pcc@google.com>
References: <20210527190453.1259020-1-pcc@google.com>

On Thu, 27 May 2021 12:04:53 -0700 Peter Collingbourne wrote:

> In the Scudo memory allocator [1] we would like to be able to
> detect use-after-free vulnerabilities involving large allocations
> by issuing mprotect(PROT_NONE) on the memory region used for the
> allocation when it is deallocated.
> Later on, after the memory
> region has been "quarantined" for a sufficient period of time we
> would like to be able to use it for another allocation by issuing
> mprotect(PROT_READ|PROT_WRITE).
>
> ...
>
> --- a/mm/mprotect.c
> +++ b/mm/mprotect.c
> @@ -35,6 +35,29 @@
>
>  #include "internal.h"
>
> +static bool may_avoid_write_fault(pte_t pte, struct vm_area_struct *vma,
> +				  unsigned long cp_flags)

Some comments would be nice, and the function is ideally structured to
explain each test: "why" we're testing these things, not "what" we're
testing.

> +{ /* here */
> +	if (!(cp_flags & MM_CP_DIRTY_ACCT)) {
> +		if (!(vma_is_anonymous(vma) && (vma->vm_flags & VM_WRITE)))
> +			return false;
> +
> +		if (page_count(pte_page(pte)) != 1)
> +			return false;
> +	}
> +	/* and here */
> +	if (!pte_dirty(pte))
> +		return false;
> +	/* and here */
> +	if (!pte_soft_dirty(pte) && (vma->vm_flags & VM_SOFTDIRTY))
> +		return false; /* and here */
> +	if (pte_uffd_wp(pte))
> +		return false;
> +
> +	return true;
> +}
> +
>  static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
>  		unsigned long addr, unsigned long end, pgprot_t newprot,
>  		unsigned long cp_flags)
> @@ -43,7 +66,6 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
>  	spinlock_t *ptl;
>  	unsigned long pages = 0;
>  	int target_node = NUMA_NO_NODE;
> -	bool dirty_accountable = cp_flags & MM_CP_DIRTY_ACCT;
>  	bool prot_numa = cp_flags & MM_CP_PROT_NUMA;
>  	bool uffd_wp = cp_flags & MM_CP_UFFD_WP;
>  	bool uffd_wp_resolve = cp_flags & MM_CP_UFFD_WP_RESOLVE;
> @@ -132,11 +154,8 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
>  		}
>
>  		/* Avoid taking write faults for known dirty pages */

And this comment could be moved to may_avoid_write_fault()'s
explanation.
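Something along these lines, say — the comment wording below is only a
sketch of the "why" for each test, not text from the patch:

```
static bool may_avoid_write_fault(pte_t pte, struct vm_area_struct *vma,
				  unsigned long cp_flags)
{
	/*
	 * Without MM_CP_DIRTY_ACCT the caller isn't tracking dirtying via
	 * write faults, so mapping the pte writable up front is only safe
	 * when nobody else can observe the change: a writable anonymous
	 * page mapped exactly once.  Doing this for a shared or CoW page
	 * would bypass copy-on-write.
	 */
	if (!(cp_flags & MM_CP_DIRTY_ACCT)) {
		if (!(vma_is_anonymous(vma) && (vma->vm_flags & VM_WRITE)))
			return false;

		if (page_count(pte_page(pte)) != 1)
			return false;
	}

	/*
	 * A clean pte still needs the write fault so that the page gets
	 * marked dirty before it is actually written.
	 */
	if (!pte_dirty(pte))
		return false;

	/*
	 * Soft-dirty tracking relies on the write fault to record the
	 * next write; keep the pte write-protected until the soft-dirty
	 * bit is set.
	 */
	if (!pte_soft_dirty(pte) && (vma->vm_flags & VM_SOFTDIRTY))
		return false;

	/*
	 * userfaultfd write-protected ptes must keep faulting so the
	 * monitor observes the write.
	 */
	if (pte_uffd_wp(pte))
		return false;

	return true;
}
```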
> -		if (dirty_accountable && pte_dirty(ptent) &&
> -			(pte_soft_dirty(ptent) ||
> -			 !(vma->vm_flags & VM_SOFTDIRTY))) {
> +		if (may_avoid_write_fault(ptent, vma, cp_flags))
>  			ptent = pte_mkwrite(ptent);
> -		}
>  		ptep_modify_prot_commit(vma, addr, pte, oldpte, ptent);
>  		pages++;
>  	} else if (is_swap_pte(oldpte)) {
> --
> 2.32.0.rc0.204.g9fa02ecfa5-goog