From: Rick Edgecombe <rick.p.edgecombe@intel.com>
To: x86@kernel.org, "H. Peter Anvin", Thomas Gleixner, Ingo Molnar, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-mm@kvack.org, linux-arch@vger.kernel.org, linux-api@vger.kernel.org, Arnd Bergmann, Andy Lutomirski, Balbir Singh, Borislav Petkov, Cyrill Gorcunov, Dave Hansen, Eugene Syromiatnikov, Florian Weimer, "H. J. Lu", Jann Horn, Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit, Oleg Nesterov, Pavel Machek, Peter Zijlstra, Randy Dunlap, Weijiang Yang, "Kirill A. Shutemov", John Allen, kcc@google.com, eranian@google.com, rppt@kernel.org, jamorris@linux.microsoft.com, dethoma@microsoft.com, akpm@linux-foundation.org, Andrew.Cooper3@citrix.com, christina.schimpe@intel.com, david@redhat.com, debug@rivosinc.com, szabolcs.nagy@arm.com, torvalds@linux-foundation.org, broonie@kernel.org
Cc: rick.p.edgecombe@intel.com, Yu-cheng Yu, Pengfei Xu
Subject: [PATCH v9 13/42] x86/mm: Remove _PAGE_DIRTY from kernel RO pages
Date: Mon, 12 Jun 2023 17:10:39 -0700
Message-Id: <20230613001108.3040476-14-rick.p.edgecombe@intel.com>
In-Reply-To: <20230613001108.3040476-1-rick.p.edgecombe@intel.com>
References: <20230613001108.3040476-1-rick.p.edgecombe@intel.com>
New processors that support Shadow Stack regard Write=0,Dirty=1 PTEs as
shadow stack pages.

In normal cases, it can be helpful to create Write=1 PTEs as also Dirty=1
if HW dirty tracking is not needed, because if the Dirty bit is not
already set the CPU has to set Dirty=1 when the memory gets written to.
This creates additional work for the CPU.
So traditional wisdom was to simply set the Dirty bit whenever you didn't
care about it. However, it was never really very helpful for read-only
kernel memory.

When CR4.CET=1 and IA32_S_CET.SH_STK_EN=1, some instructions can write to
such supervisor memory. The kernel does not set IA32_S_CET.SH_STK_EN, so
avoiding kernel Write=0,Dirty=1 memory is not strictly needed for any
functional reason. But having Write=0,Dirty=1 kernel memory doesn't have
any functional benefit either, so to reduce ambiguity between shadow stack
and regular Write=0 pages, remove Dirty=1 from any kernel Write=0 PTEs.

Co-developed-by: Yu-cheng Yu
Signed-off-by: Yu-cheng Yu
Signed-off-by: Rick Edgecombe
Reviewed-by: Borislav Petkov (AMD)
Reviewed-by: Kees Cook
Acked-by: Mike Rapoport (IBM)
Tested-by: Pengfei Xu
Tested-by: John Allen
Tested-by: Kees Cook
---
 arch/x86/include/asm/pgtable_types.h | 8 +++++---
 arch/x86/mm/pat/set_memory.c         | 4 ++--
 2 files changed, 7 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index ee6f8e57e115..26f07d6d5758 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -222,10 +222,12 @@ enum page_cache_mode {
 #define _PAGE_TABLE_NOENC	 (__PP|__RW|_USR|___A|   0|___D|   0|   0)
 #define _PAGE_TABLE		 (__PP|__RW|_USR|___A|   0|___D|   0|   0| _ENC)
 
-#define __PAGE_KERNEL_RO	 (__PP|   0|   0|___A|__NX|___D|   0|___G)
-#define __PAGE_KERNEL_ROX	 (__PP|   0|   0|___A|   0|___D|   0|___G)
+#define __PAGE_KERNEL_RO	 (__PP|   0|   0|___A|__NX|   0|   0|___G)
+#define __PAGE_KERNEL_ROX	 (__PP|   0|   0|___A|   0|   0|   0|___G)
+#define __PAGE_KERNEL		 (__PP|__RW|   0|___A|__NX|___D|   0|___G)
+#define __PAGE_KERNEL_EXEC	 (__PP|__RW|   0|___A|   0|___D|   0|___G)
 #define __PAGE_KERNEL_NOCACHE	 (__PP|__RW|   0|___A|__NX|___D|   0|___G| __NC)
-#define __PAGE_KERNEL_VVAR	 (__PP|   0|_USR|___A|__NX|___D|   0|___G)
+#define __PAGE_KERNEL_VVAR	 (__PP|   0|_USR|___A|__NX|   0|   0|___G)
 #define __PAGE_KERNEL_LARGE	 (__PP|__RW|   0|___A|__NX|___D|_PSE|___G)
 #define __PAGE_KERNEL_LARGE_EXEC (__PP|__RW|   0|___A|   0|___D|_PSE|___G)
 #define __PAGE_KERNEL_WP	 (__PP|__RW|   0|___A|__NX|___D|   0|___G| __WP)
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index 7159cf787613..fc627acfe40e 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -2073,12 +2073,12 @@ int set_memory_nx(unsigned long addr, int numpages)
 
 int set_memory_ro(unsigned long addr, int numpages)
 {
-	return change_page_attr_clear(&addr, numpages, __pgprot(_PAGE_RW), 0);
+	return change_page_attr_clear(&addr, numpages, __pgprot(_PAGE_RW | _PAGE_DIRTY), 0);
 }
 
 int set_memory_rox(unsigned long addr, int numpages)
 {
-	pgprot_t clr = __pgprot(_PAGE_RW);
+	pgprot_t clr = __pgprot(_PAGE_RW | _PAGE_DIRTY);
 
 	if (__supported_pte_mask & _PAGE_NX)
 		clr.pgprot |= _PAGE_NX;
-- 
2.34.1