Date: Thu, 21 Jan 2021 19:44:05 +0100
From: Borislav Petkov
To: Yu-cheng Yu
Cc: x86@kernel.org, "H. Peter Anvin", Thomas Gleixner, Ingo Molnar,
	linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
	linux-mm@kvack.org, linux-arch@vger.kernel.org,
	linux-api@vger.kernel.org, Arnd Bergmann, Andy Lutomirski,
	Balbir Singh, Cyrill Gorcunov, Dave Hansen, Eugene Syromiatnikov,
	Florian Weimer, "H.J. Lu", Jann Horn, Jonathan Corbet, Kees Cook,
	Mike Kravetz, Nadav Amit, Oleg Nesterov, Pavel Machek,
	Peter Zijlstra, Randy Dunlap, "Ravi V. Shankar",
	Vedvyas Shanbhogue, Dave Martin, Weijiang Yang, Pengfei Xu
Subject: Re: [PATCH v17 08/26] x86/mm: Introduce _PAGE_COW
Message-ID: <20210121184405.GE32060@zn.tnic>
References: <20201229213053.16395-1-yu-cheng.yu@intel.com> <20201229213053.16395-9-yu-cheng.yu@intel.com>
In-Reply-To: <20201229213053.16395-9-yu-cheng.yu@intel.com>

On Tue, Dec 29, 2020 at 01:30:35PM -0800, Yu-cheng Yu wrote:
> @@ -182,6 +182,12 @@ static inline int pud_young(pud_t pud)
>
>  static inline int pte_write(pte_t pte)
>  {
> +	/*
> +	 * If _PAGE_DIRTY is set, the PTE must either have _PAGE_RW or be
> +	 * a shadow stack PTE, which is logically writable.
> +	 */
> +	if (cpu_feature_enabled(X86_FEATURE_SHSTK))
> +		return pte_flags(pte) & (_PAGE_RW | _PAGE_DIRTY);
>  	return pte_flags(pte) & _PAGE_RW;

	if (cpu_feature_enabled(X86_FEATURE_SHSTK))
		return pte_flags(pte) & (_PAGE_RW | _PAGE_DIRTY);
	else
		return pte_flags(pte) & _PAGE_RW;

The else makes it balanced and easier to read.

> @@ -333,7 +339,7 @@ static inline pte_t pte_clear_uffd_wp(pte_t pte)
>
>  static inline pte_t pte_mkclean(pte_t pte)
>  {
> -	return pte_clear_flags(pte, _PAGE_DIRTY);
> +	return pte_clear_flags(pte, _PAGE_DIRTY_BITS);
>  }
>
>  static inline pte_t pte_mkold(pte_t pte)
> @@ -343,6 +349,16 @@ static inline pte_t pte_mkold(pte_t pte)
>
>  static inline pte_t pte_wrprotect(pte_t pte)
>  {
> +	/*
> +	 * Blindly clearing _PAGE_RW might accidentally create
> +	 * a shadow stack PTE (RW=0, Dirty=1). Move the hardware
> +	 * dirty value to the software bit.
> +	 */
> +	if (cpu_feature_enabled(X86_FEATURE_SHSTK)) {
> +		pte.pte |= (pte.pte & _PAGE_DIRTY) >> _PAGE_BIT_DIRTY << _PAGE_BIT_COW;

Why the unreadable shifting when you can simply do:

	if (pte.pte & _PAGE_DIRTY)
		pte.pte |= _PAGE_COW;

?

> @@ -434,16 +469,40 @@ static inline pmd_t pmd_mkold(pmd_t pmd)
>
>  static inline pmd_t pmd_mkclean(pmd_t pmd)
>  {
> -	return pmd_clear_flags(pmd, _PAGE_DIRTY);
> +	return pmd_clear_flags(pmd, _PAGE_DIRTY_BITS);
>  }
>
>  static inline pmd_t pmd_wrprotect(pmd_t pmd)
>  {
> +	/*
> +	 * Blindly clearing _PAGE_RW might accidentally create
> +	 * a shadow stack PMD (RW=0, Dirty=1). Move the hardware
> +	 * dirty value to the software bit.
> +	 */
> +	if (cpu_feature_enabled(X86_FEATURE_SHSTK)) {
> +		pmdval_t v = native_pmd_val(pmd);
> +
> +		v |= (v & _PAGE_DIRTY) >> _PAGE_BIT_DIRTY << _PAGE_BIT_COW;

As above.

> @@ -488,17 +554,35 @@ static inline pud_t pud_mkold(pud_t pud)
>
>  static inline pud_t pud_mkclean(pud_t pud)
>  {
> -	return pud_clear_flags(pud, _PAGE_DIRTY);
> +	return pud_clear_flags(pud, _PAGE_DIRTY_BITS);
>  }
>
>  static inline pud_t pud_wrprotect(pud_t pud)
>  {
> +	/*
> +	 * Blindly clearing _PAGE_RW might accidentally create
> +	 * a shadow stack PUD (RW=0, Dirty=1). Move the hardware
> +	 * dirty value to the software bit.
> +	 */
> +	if (cpu_feature_enabled(X86_FEATURE_SHSTK)) {
> +		pudval_t v = native_pud_val(pud);
> +
> +		v |= (v & _PAGE_DIRTY) >> _PAGE_BIT_DIRTY << _PAGE_BIT_COW;

Ditto.

> @@ -1131,6 +1222,12 @@ extern int pmdp_clear_flush_young(struct vm_area_struct *vma,
>
>  #define pmd_write pmd_write
>  static inline int pmd_write(pmd_t pmd)
>  {
> +	/*
> +	 * If _PAGE_DIRTY is set, then the PMD must either have _PAGE_RW or
> +	 * be a shadow stack PMD, which is logically writable.
> +	 */
> +	if (cpu_feature_enabled(X86_FEATURE_SHSTK))
> +		return pmd_flags(pmd) & (_PAGE_RW | _PAGE_DIRTY);

	else

>  	return pmd_flags(pmd) & _PAGE_RW;
>  }

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette