From: Andy Lutomirski
To: Andrew Morton, Linux-MM
Cc: Nicholas Piggin, Anton Blanchard, Benjamin Herrenschmidt,
	Paul Mackerras, Randy Dunlap, linux-arch, x86@kernel.org,
	Rik van Riel, Dave Hansen, Peter Zijlstra, Nadav Amit,
	Mathieu Desnoyers, Andy Lutomirski, Michael Ellerman,
	linuxppc-dev@lists.ozlabs.org
Subject: [PATCH 06/23] powerpc/membarrier: Remove special barrier on mm switch
Date: Sat, 8 Jan 2022 08:43:51 -0800

powerpc did the following on some, but not all, paths through
switch_mm_irqs_off():

	/*
	 * Only need the full barrier when switching between processes.
	 * Barrier when switching from kernel to userspace is not
	 * required here, given that it is implied by mmdrop(). Barrier
	 * when switching from userspace to kernel is not needed after
	 * store to rq->curr.
	 */
	if (likely(!(atomic_read(&next->membarrier_state) &
		     (MEMBARRIER_STATE_PRIVATE_EXPEDITED |
		      MEMBARRIER_STATE_GLOBAL_EXPEDITED)) || !prev))
		return;

This is puzzling: if !prev, then one might expect that we are switching
from kernel to user, not user to kernel, which is inconsistent with the
comment. But this is all nonsense, because the one and only caller would
never have prev == NULL and would, in fact, OOPS if prev == NULL.

In any event, this code is unnecessary, since the new generic
membarrier_finish_switch_mm() provides the same barrier without arch
help.
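For context, a minimal sketch of what such a generic helper amounts to
(the actual implementation is added earlier in this series; the sketch
below only reuses the membarrier_state bits and smp_mb() already quoted
above, so treat it as illustrative rather than the exact code):

	static inline void membarrier_finish_switch_mm(struct mm_struct *mm)
	{
		/*
		 * The membarrier system call requires a full memory barrier
		 * after storing to rq->curr, before going back to user-space.
		 */
		if (atomic_read(&mm->membarrier_state) &
		    (MEMBARRIER_STATE_PRIVATE_EXPEDITED |
		     MEMBARRIER_STATE_GLOBAL_EXPEDITED))
			smp_mb();
	}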
arch/powerpc/include/asm/membarrier.h remains as an empty header,
because a later patch in this series will add code to it.

Cc: Michael Ellerman
Cc: Benjamin Herrenschmidt
Cc: Paul Mackerras
Cc: linuxppc-dev@lists.ozlabs.org
Cc: Nicholas Piggin
Cc: Mathieu Desnoyers
Cc: Peter Zijlstra
Signed-off-by: Andy Lutomirski
---
 arch/powerpc/include/asm/membarrier.h | 24 ------------------------
 arch/powerpc/mm/mmu_context.c         |  3 ---
 2 files changed, 27 deletions(-)

diff --git a/arch/powerpc/include/asm/membarrier.h b/arch/powerpc/include/asm/membarrier.h
index de7f79157918..b90766e95bd1 100644
--- a/arch/powerpc/include/asm/membarrier.h
+++ b/arch/powerpc/include/asm/membarrier.h
@@ -1,28 +1,4 @@
 #ifndef _ASM_POWERPC_MEMBARRIER_H
 #define _ASM_POWERPC_MEMBARRIER_H
 
-static inline void membarrier_arch_switch_mm(struct mm_struct *prev,
-					     struct mm_struct *next,
-					     struct task_struct *tsk)
-{
-	/*
-	 * Only need the full barrier when switching between processes.
-	 * Barrier when switching from kernel to userspace is not
-	 * required here, given that it is implied by mmdrop(). Barrier
-	 * when switching from userspace to kernel is not needed after
-	 * store to rq->curr.
-	 */
-	if (IS_ENABLED(CONFIG_SMP) &&
-	    likely(!(atomic_read(&next->membarrier_state) &
-		     (MEMBARRIER_STATE_PRIVATE_EXPEDITED |
-		      MEMBARRIER_STATE_GLOBAL_EXPEDITED)) || !prev))
-		return;
-
-	/*
-	 * The membarrier system call requires a full memory barrier
-	 * after storing to rq->curr, before going back to user-space.
-	 */
-	smp_mb();
-}
-
 #endif /* _ASM_POWERPC_MEMBARRIER_H */
diff --git a/arch/powerpc/mm/mmu_context.c b/arch/powerpc/mm/mmu_context.c
index 74246536b832..5f2daa6b0497 100644
--- a/arch/powerpc/mm/mmu_context.c
+++ b/arch/powerpc/mm/mmu_context.c
@@ -84,7 +84,4 @@ void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
 		asm volatile ("dssall");
 
-	if (!new_on_cpu)
-		membarrier_arch_switch_mm(prev, next, tsk);
-
 	/*
 	 * The actual HW switching method differs between the various

-- 
2.33.1