Subject: Re: [PATCH v17 08/26] x86/mm: Introduce _PAGE_COW
To: Borislav Petkov
Cc: x86@kernel.org, "H. Peter Anvin", Thomas Gleixner, Ingo Molnar,
 linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
 linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org,
 Arnd Bergmann, Andy Lutomirski, Balbir Singh, Cyrill Gorcunov,
 Dave Hansen, Eugene Syromiatnikov, Florian Weimer, "H.J. Lu",
 Jann Horn, Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit,
 Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
 "Ravi V. Shankar", Vedvyas Shanbhogue, Dave Martin, Weijiang Yang,
 Pengfei Xu
References: <20201229213053.16395-1-yu-cheng.yu@intel.com> <20201229213053.16395-9-yu-cheng.yu@intel.com> <20210121184405.GE32060@zn.tnic>
From: "Yu, Yu-cheng"
Date: Thu, 21 Jan 2021 12:16:23 -0800
In-Reply-To: <20210121184405.GE32060@zn.tnic>

On 1/21/2021 10:44 AM, Borislav Petkov wrote:
> On Tue, Dec 29, 2020 at 01:30:35PM -0800, Yu-cheng Yu wrote:

[...]

>> @@ -343,6 +349,16 @@ static inline pte_t pte_mkold(pte_t pte)
>>
>>  static inline pte_t pte_wrprotect(pte_t pte)
>>  {
>> +	/*
>> +	 * Blindly clearing _PAGE_RW might accidentally create
>> +	 * a shadow stack PTE (RW=0, Dirty=1).  Move the hardware
>> +	 * dirty value to the software bit.
>> +	 */
>> +	if (cpu_feature_enabled(X86_FEATURE_SHSTK)) {
>> +		pte.pte |= (pte.pte & _PAGE_DIRTY) >> _PAGE_BIT_DIRTY << _PAGE_BIT_COW;
>
> Why the unreadable shifting when you can simply do:
>
> 	if (pte.pte & _PAGE_DIRTY)
> 		pte.pte |= _PAGE_COW;
>
> ?

It clears _PAGE_DIRTY and sets _PAGE_COW.  That is,

	if (pte.pte & _PAGE_DIRTY) {
		pte.pte &= ~_PAGE_DIRTY;
		pte.pte |= _PAGE_COW;
	}

So, shifting makes the resulting code more efficient.

>> @@ -434,16 +469,40 @@ static inline pmd_t pmd_mkold(pmd_t pmd)
>>
>>  static inline pmd_t pmd_mkclean(pmd_t pmd)
>>  {
>> -	return pmd_clear_flags(pmd, _PAGE_DIRTY);
>> +	return pmd_clear_flags(pmd, _PAGE_DIRTY_BITS);
>>  }
>>
>>  static inline pmd_t pmd_wrprotect(pmd_t pmd)
>>  {
>> +	/*
>> +	 * Blindly clearing _PAGE_RW might accidentally create
>> +	 * a shadow stack PMD (RW=0, Dirty=1).
>> +	 * Move the hardware dirty value to the software bit.
>> +	 */
>> +	if (cpu_feature_enabled(X86_FEATURE_SHSTK)) {
>> +		pmdval_t v = native_pmd_val(pmd);
>> +
>> +		v |= (v & _PAGE_DIRTY) >> _PAGE_BIT_DIRTY << _PAGE_BIT_COW;
>
> As above.
>
>> @@ -488,17 +554,35 @@ static inline pud_t pud_mkold(pud_t pud)
>>
>>  static inline pud_t pud_mkclean(pud_t pud)
>>  {
>> -	return pud_clear_flags(pud, _PAGE_DIRTY);
>> +	return pud_clear_flags(pud, _PAGE_DIRTY_BITS);
>>  }
>>
>>  static inline pud_t pud_wrprotect(pud_t pud)
>>  {
>> +	/*
>> +	 * Blindly clearing _PAGE_RW might accidentally create
>> +	 * a shadow stack PUD (RW=0, Dirty=1).  Move the hardware
>> +	 * dirty value to the software bit.
>> +	 */
>> +	if (cpu_feature_enabled(X86_FEATURE_SHSTK)) {
>> +		pudval_t v = native_pud_val(pud);
>> +
>> +		v |= (v & _PAGE_DIRTY) >> _PAGE_BIT_DIRTY << _PAGE_BIT_COW;
>
> Ditto.
>
>> @@ -1131,6 +1222,12 @@ extern int pmdp_clear_flush_young(struct vm_area_struct *vma,
>>  #define pmd_write pmd_write
>>  static inline int pmd_write(pmd_t pmd)
>>  {
>> +	/*
>> +	 * If _PAGE_DIRTY is set, then the PMD must either have _PAGE_RW or
>> +	 * be a shadow stack PMD, which is logically writable.
>> +	 */
>> +	if (cpu_feature_enabled(X86_FEATURE_SHSTK))
>> +		return pmd_flags(pmd) & (_PAGE_RW | _PAGE_DIRTY);
>
> else
>
>>  	return pmd_flags(pmd) & _PAGE_RW;
>>  }
>>