From: Andy Lutomirski <luto@kernel.org>
To: Andrew Morton, Linux-MM
Cc: Nicholas Piggin, Anton Blanchard, Benjamin Herrenschmidt,
	Paul Mackerras, Randy Dunlap, linux-arch, x86@kernel.org,
	Rik van Riel, Dave Hansen, Peter Zijlstra, Nadav Amit,
	Mathieu Desnoyers, Andy Lutomirski
Subject: [PATCH 17/23] x86/mm: Make use/unuse_temporary_mm() non-static
Date: Sat, 8 Jan 2022 08:44:02 -0800

This prepares use_temporary_mm() and unuse_temporary_mm() for use
outside of the alternative machinery.  The code is unchanged.
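As an illustration of what the export enables, a later caller outside of
alternative.c could use the API roughly as sketched below. This sketch is
not part of the patch: example_poke(), its parameters, and the assumption
that the caller already owns a temporary mm with vaddr mapped writably are
all hypothetical.

/*
 * Hypothetical caller (illustration only, not in this patch).  Writes
 * through a mapping that exists only in a private temporary mm, so the
 * mapping is never visible to other CPUs and tearing it down needs no
 * TLB shootdown.  Requires <asm/mmu_context.h>.
 */
static void example_poke(struct mm_struct *temp_mm, unsigned long *vaddr,
			 unsigned long val)
{
	temp_mm_state_t prev;
	unsigned long flags;

	/* The temporary mm may only be loaded with IRQs disabled. */
	local_irq_save(flags);
	prev = use_temporary_mm(temp_mm);

	/* This store resolves through temp_mm's page tables. */
	*vaddr = val;

	unuse_temporary_mm(prev);
	local_irq_restore(flags);
}

This is roughly the pattern __text_poke() already follows with poking_mm;
making the helpers non-static lets such callers live outside alternative.c.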
Signed-off-by: Andy Lutomirski <luto@kernel.org>
---
 arch/x86/include/asm/mmu_context.h |  7 ++++
 arch/x86/kernel/alternative.c      | 65 +-----------------------------
 arch/x86/mm/tlb.c                  | 60 ++++++++++++++++++++++++++++
 3 files changed, 68 insertions(+), 64 deletions(-)

diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
index 27516046117a..2ca4fc4a8a0a 100644
--- a/arch/x86/include/asm/mmu_context.h
+++ b/arch/x86/include/asm/mmu_context.h
@@ -220,4 +220,11 @@ unsigned long __get_current_cr3_fast(void);
 
 #include <asm-generic/mmu_context.h>
 
+typedef struct {
+	struct mm_struct *mm;
+} temp_mm_state_t;
+
+extern temp_mm_state_t use_temporary_mm(struct mm_struct *mm);
+extern void unuse_temporary_mm(temp_mm_state_t prev_state);
+
 #endif /* _ASM_X86_MMU_CONTEXT_H */
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index b47cd22b2eb1..af4c37e177ff 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -29,6 +29,7 @@
 #include <asm/io.h>
 #include <asm/fixmap.h>
 #include <asm/paravirt.h>
+#include <asm/mmu_context.h>
 
 int __read_mostly alternatives_patched;
 
@@ -706,70 +707,6 @@ void __init_or_module text_poke_early(void *addr, const void *opcode,
 	}
 }
 
-typedef struct {
-	struct mm_struct *mm;
-} temp_mm_state_t;
-
-/*
- * Using a temporary mm allows to set temporary mappings that are not accessible
- * by other CPUs. Such mappings are needed to perform sensitive memory writes
- * that override the kernel memory protections (e.g., W^X), without exposing the
- * temporary page-table mappings that are required for these write operations to
- * other CPUs. Using a temporary mm also allows to avoid TLB shootdowns when the
- * mapping is torn down.
- *
- * Context: The temporary mm needs to be used exclusively by a single core. To
- *          harden security IRQs must be disabled while the temporary mm is
- *          loaded, thereby preventing interrupt handler bugs from overriding
- *          the kernel memory protection.
- */
-static inline temp_mm_state_t use_temporary_mm(struct mm_struct *mm)
-{
-	temp_mm_state_t temp_state;
-
-	lockdep_assert_irqs_disabled();
-
-	/*
-	 * Make sure not to be in TLB lazy mode, as otherwise we'll end up
-	 * with a stale address space WITHOUT being in lazy mode after
-	 * restoring the previous mm.
-	 */
-	if (this_cpu_read(cpu_tlbstate_shared.is_lazy))
-		leave_mm(smp_processor_id());
-
-	temp_state.mm = this_cpu_read(cpu_tlbstate.loaded_mm);
-	switch_mm_irqs_off(NULL, mm, current);
-
-	/*
-	 * If breakpoints are enabled, disable them while the temporary mm is
-	 * used. Userspace might set up watchpoints on addresses that are used
-	 * in the temporary mm, which would lead to wrong signals being sent or
-	 * crashes.
-	 *
-	 * Note that breakpoints are not disabled selectively, which also causes
-	 * kernel breakpoints (e.g., perf's) to be disabled. This might be
-	 * undesirable, but still seems reasonable as the code that runs in the
-	 * temporary mm should be short.
-	 */
-	if (hw_breakpoint_active())
-		hw_breakpoint_disable();
-
-	return temp_state;
-}
-
-static inline void unuse_temporary_mm(temp_mm_state_t prev_state)
-{
-	lockdep_assert_irqs_disabled();
-	switch_mm_irqs_off(NULL, prev_state.mm, current);
-
-	/*
-	 * Restore the breakpoints if they were disabled before the temporary mm
-	 * was loaded.
-	 */
-	if (hw_breakpoint_active())
-		hw_breakpoint_restore();
-}
-
 __ro_after_init struct mm_struct *poking_mm;
 __ro_after_init unsigned long poking_addr;
 
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 74b7a615bc15..4e371f30e2ab 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -702,6 +702,66 @@ void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
 	this_cpu_write(cpu_tlbstate_shared.is_lazy, true);
 }
 
+/*
+ * Using a temporary mm allows to set temporary mappings that are not accessible
+ * by other CPUs. Such mappings are needed to perform sensitive memory writes
+ * that override the kernel memory protections (e.g., W^X), without exposing the
+ * temporary page-table mappings that are required for these write operations to
+ * other CPUs. Using a temporary mm also allows to avoid TLB shootdowns when the
+ * mapping is torn down.
+ *
+ * Context: The temporary mm needs to be used exclusively by a single core. To
+ *          harden security IRQs must be disabled while the temporary mm is
+ *          loaded, thereby preventing interrupt handler bugs from overriding
+ *          the kernel memory protection.
+ */
+temp_mm_state_t use_temporary_mm(struct mm_struct *mm)
+{
+	temp_mm_state_t temp_state;
+
+	lockdep_assert_irqs_disabled();
+
+	/*
+	 * Make sure not to be in TLB lazy mode, as otherwise we'll end up
+	 * with a stale address space WITHOUT being in lazy mode after
+	 * restoring the previous mm.
+	 */
+	if (this_cpu_read(cpu_tlbstate_shared.is_lazy))
+		leave_mm(smp_processor_id());
+
+	temp_state.mm = this_cpu_read(cpu_tlbstate.loaded_mm);
+	switch_mm_irqs_off(NULL, mm, current);
+
+	/*
+	 * If breakpoints are enabled, disable them while the temporary mm is
+	 * used. Userspace might set up watchpoints on addresses that are used
+	 * in the temporary mm, which would lead to wrong signals being sent or
+	 * crashes.
+	 *
+	 * Note that breakpoints are not disabled selectively, which also causes
+	 * kernel breakpoints (e.g., perf's) to be disabled. This might be
+	 * undesirable, but still seems reasonable as the code that runs in the
+	 * temporary mm should be short.
+	 */
+	if (hw_breakpoint_active())
+		hw_breakpoint_disable();
+
+	return temp_state;
+}
+
+void unuse_temporary_mm(temp_mm_state_t prev_state)
+{
+	lockdep_assert_irqs_disabled();
+	switch_mm_irqs_off(NULL, prev_state.mm, current);
+
+	/*
+	 * Restore the breakpoints if they were disabled before the temporary mm
+	 * was loaded.
+	 */
+	if (hw_breakpoint_active())
+		hw_breakpoint_restore();
+}
+
 /*
  * Call this when reinitializing a CPU.  It fixes the following potential
  * problems:
-- 
2.33.1