From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 3 Oct 2022 17:17:39 +0300
From: "Kirill A . Shutemov"
To: Rick Edgecombe
Cc: x86@kernel.org, "H . Peter Anvin", Thomas Gleixner, Ingo Molnar,
	linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org,
	linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org,
	Arnd Bergmann, Andy Lutomirski, Balbir Singh, Borislav Petkov,
	Cyrill Gorcunov, Dave Hansen, Eugene Syromiatnikov, Florian Weimer,
	"H . J . Lu", Jann Horn, Jonathan Corbet, Kees Cook, Mike Kravetz,
	Nadav Amit, Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap,
	"Ravi V . Shankar", Weijiang Yang, joao.moreira@intel.com, John Allen,
	kcc@google.com, eranian@google.com, rppt@kernel.org,
	jamorris@linux.microsoft.com, dethoma@microsoft.com, Yu-cheng Yu,
	Christoph Hellwig
Subject: Re: [PATCH v2 08/39] x86/mm: Remove _PAGE_DIRTY from kernel RO pages
Message-ID: <20221003141739.qdgdgfr67cycadgs@box.shutemov.name>
References: <20220929222936.14584-1-rick.p.edgecombe@intel.com>
 <20220929222936.14584-9-rick.p.edgecombe@intel.com>
In-Reply-To: <20220929222936.14584-9-rick.p.edgecombe@intel.com>

On Thu, Sep 29, 2022 at 03:29:05PM -0700, Rick Edgecombe wrote:
> From: Yu-cheng Yu
> 
> Processors sometimes directly create Write=0,Dirty=1 PTEs. These PTEs are
> created by software. One such case is that kernel read-only pages are
> historically set up as Dirty.
> 
> New processors that support Shadow Stack regard Write=0,Dirty=1 PTEs as
> shadow stack pages. When CR4.CET=1 and IA32_S_CET.SH_STK_EN=1, some
> instructions can write to such supervisor memory. The kernel does not set
> IA32_S_CET.SH_STK_EN, but to reduce ambiguity between shadow stack and
> regular Write=0 pages, removed Dirty=1 from any kernel Write=0 PTEs.
> 
> Signed-off-by: Yu-cheng Yu
> Co-developed-by: Rick Edgecombe
> Signed-off-by: Rick Edgecombe
> Cc: "H. Peter Anvin"
> Cc: Kees Cook
> Cc: Thomas Gleixner
> Cc: Dave Hansen
> Cc: Christoph Hellwig
> Cc: Andy Lutomirski
> Cc: Ingo Molnar
> Cc: Borislav Petkov
> Cc: Peter Zijlstra
> 
> ---
> 
> v2:
>  - Normalize PTE bit descriptions between patches
> 
>  arch/x86/include/asm/pgtable_types.h | 6 +++---
>  arch/x86/mm/pat/set_memory.c | 2 +-
>  2 files changed, 4 insertions(+), 4 deletions(-)
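To spell out the ambiguity the changelog is talking about, here is a
stand-alone sketch (userspace C, with the relevant x86 PTE bit values
hard-coded rather than taken from pgtable_types.h; looks_like_shadow_stack()
is a made-up name for illustration, not a kernel helper):

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define _PAGE_PRESENT 0x001ULL  /* _PAGE_BIT_PRESENT = 0 */
#define _PAGE_RW      0x002ULL  /* _PAGE_BIT_RW      = 1 */
#define _PAGE_DIRTY   0x040ULL  /* _PAGE_BIT_DIRTY   = 6 */

/* A shadow-stack-capable CPU reads Write=0,Dirty=1 as "shadow stack page". */
static bool looks_like_shadow_stack(uint64_t pte)
{
        return (pte & _PAGE_PRESENT) && !(pte & _PAGE_RW) && (pte & _PAGE_DIRTY);
}

int main(void)
{
        /* only the bits of __PAGE_KERNEL_RO that matter here */
        uint64_t ro_old = _PAGE_PRESENT | _PAGE_DIRTY;  /* before the patch */
        uint64_t ro_new = _PAGE_PRESENT;                /* after the patch  */

        printf("old RO: %s\n", looks_like_shadow_stack(ro_old) ? "ambiguous" : "plain RO");
        printf("new RO: %s\n", looks_like_shadow_stack(ro_new) ? "ambiguous" : "plain RO");
        return 0;
}

With the old encoding a read-only kernel mapping carries the same
Write=0,Dirty=1 combination as a shadow stack page; after the patch the two
cannot be confused by looking at the bits alone.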
> diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
> index aa174fed3a71..ff82237e7b6b 100644
> --- a/arch/x86/include/asm/pgtable_types.h
> +++ b/arch/x86/include/asm/pgtable_types.h
> @@ -192,10 +192,10 @@ enum page_cache_mode {
>  #define _KERNPG_TABLE (__PP|__RW| 0|___A| 0|___D| 0| 0| _ENC)
>  #define _PAGE_TABLE_NOENC (__PP|__RW|_USR|___A| 0|___D| 0| 0)
>  #define _PAGE_TABLE (__PP|__RW|_USR|___A| 0|___D| 0| 0| _ENC)
> -#define __PAGE_KERNEL_RO (__PP| 0| 0|___A|__NX|___D| 0|___G)
> -#define __PAGE_KERNEL_ROX (__PP| 0| 0|___A| 0|___D| 0|___G)
> +#define __PAGE_KERNEL_RO (__PP| 0| 0|___A|__NX| 0| 0|___G)
> +#define __PAGE_KERNEL_ROX (__PP| 0| 0|___A| 0| 0| 0|___G)
>  #define __PAGE_KERNEL_NOCACHE (__PP|__RW| 0|___A|__NX|___D| 0|___G| __NC)
> -#define __PAGE_KERNEL_VVAR (__PP| 0|_USR|___A|__NX|___D| 0|___G)
> +#define __PAGE_KERNEL_VVAR (__PP| 0|_USR|___A|__NX| 0| 0|___G)
>  #define __PAGE_KERNEL_LARGE (__PP|__RW| 0|___A|__NX|___D|_PSE|___G)
>  #define __PAGE_KERNEL_LARGE_EXEC (__PP|__RW| 0|___A| 0|___D|_PSE|___G)
>  #define __PAGE_KERNEL_WP (__PP|__RW| 0|___A|__NX|___D| 0|___G| __WP)
> diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
> index 1abd5438f126..ed9193b469ba 100644
> --- a/arch/x86/mm/pat/set_memory.c
> +++ b/arch/x86/mm/pat/set_memory.c
> @@ -1977,7 +1977,7 @@ int set_memory_nx(unsigned long addr, int numpages)
> 
>  int set_memory_ro(unsigned long addr, int numpages)
>  {
> -	return change_page_attr_clear(&addr, numpages, __pgprot(_PAGE_RW), 0);
> +	return change_page_attr_clear(&addr, numpages, __pgprot(_PAGE_RW | _PAGE_DIRTY), 0);
>  }

Hm. Do we also need to modify the *_wrprotect() helpers to clear the dirty
bit? I guess not (at least not without a lot of auditing), as we would risk
losing the dirty bit on page cache pages. But why is it safe? Do we only
care about kernel PTEs here? Are userspace Write=0,Dirty=1 PTEs handled as
before?
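Roughly what I mean, as a stand-alone sketch (userspace C with the PTE bit
values hard-coded; wrprotect_keep_dirty() and wrprotect_clear_dirty() are
made-up names for illustration, not the real pte helpers):

#include <stdio.h>
#include <stdint.h>

#define _PAGE_PRESENT 0x001ULL
#define _PAGE_RW      0x002ULL
#define _PAGE_DIRTY   0x040ULL

/* What wrprotect effectively does today: clear Write, keep Dirty. */
static uint64_t wrprotect_keep_dirty(uint64_t pte)
{
        return pte & ~_PAGE_RW;
}

/* Hypothetical variant that also clears Dirty. */
static uint64_t wrprotect_clear_dirty(uint64_t pte)
{
        return pte & ~(_PAGE_RW | _PAGE_DIRTY);
}

int main(void)
{
        /* a writable, dirty mapping of a page cache page */
        uint64_t pte = _PAGE_PRESENT | _PAGE_RW | _PAGE_DIRTY;

        printf("keep dirty:  dirty=%d\n", !!(wrprotect_keep_dirty(pte) & _PAGE_DIRTY));
        printf("clear dirty: dirty=%d\n", !!(wrprotect_clear_dirty(pte) & _PAGE_DIRTY));
        return 0;
}

The second variant drops the bit that tells the MM the page was written
through that mapping and still needs writeback, which is why clearing Dirty
in the generic wrprotect paths would need a careful audit.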
>  int set_memory_rw(unsigned long addr, int numpages)
> -- 
> 2.17.1
> 

-- 
  Kiryl Shutsemau / Kirill A. Shutemov