From: Sasha Levin
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: Peter Zijlstra, Will Deacon, Linus Torvalds, Sasha Levin,
    aneesh.kumar@linux.ibm.com, npiggin@gmail.com,
    linux-arch@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH AUTOSEL 5.18 06/10] mmu_gather: Let there be one tlb_{start,end}_vma() implementation
Date: Mon, 1 Aug 2022 15:02:18 -0400
Message-Id: <20220801190222.3818378-6-sashal@kernel.org>
In-Reply-To: <20220801190222.3818378-1-sashal@kernel.org>
References: <20220801190222.3818378-1-sashal@kernel.org>
MIME-Version: 1.0

From: Peter Zijlstra

[ Upstream commit 18ba064e42df3661e196ab58a23931fc732a420b ]

Now that architectures are no longer allowed to override
tlb_{start,end}_vma(), re-arrange the code so that there is only one
implementation of each of these functions. This makes it much simpler
to figure out what they actually do.
Signed-off-by: Peter Zijlstra (Intel)
Acked-by: Will Deacon
Signed-off-by: Linus Torvalds
Signed-off-by: Sasha Levin
---
 include/asm-generic/tlb.h | 15 ++-------------
 1 file changed, 2 insertions(+), 13 deletions(-)

diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index eee6f7763a39..11ad549b5014 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -334,8 +334,8 @@ static inline void __tlb_reset_range(struct mmu_gather *tlb)
 
 #ifdef CONFIG_MMU_GATHER_NO_RANGE
 
-#if defined(tlb_flush) || defined(tlb_start_vma) || defined(tlb_end_vma)
-#error MMU_GATHER_NO_RANGE relies on default tlb_flush(), tlb_start_vma() and tlb_end_vma()
+#if defined(tlb_flush)
+#error MMU_GATHER_NO_RANGE relies on default tlb_flush()
 #endif
 
 /*
@@ -355,17 +355,10 @@ static inline void tlb_flush(struct mmu_gather *tlb)
 static inline void
 tlb_update_vma_flags(struct mmu_gather *tlb, struct vm_area_struct *vma) { }
 
-#define tlb_end_vma tlb_end_vma
-static inline void tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vma) { }
-
 #else /* CONFIG_MMU_GATHER_NO_RANGE */
 
 #ifndef tlb_flush
 
-#if defined(tlb_start_vma) || defined(tlb_end_vma)
-#error Default tlb_flush() relies on default tlb_start_vma() and tlb_end_vma()
-#endif
-
 /*
  * When an architecture does not provide its own tlb_flush() implementation
  * but does have a reasonably efficient flush_vma_range() implementation
@@ -486,7 +479,6 @@ static inline unsigned long tlb_get_unmap_size(struct mmu_gather *tlb)
  * case where we're doing a full MM flush. When we're doing a munmap,
  * the vmas are adjusted to only cover the region to be torn down.
  */
-#ifndef tlb_start_vma
 static inline void tlb_start_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
 {
 	if (tlb->fullmm)
@@ -495,9 +487,7 @@ static inline void tlb_start_vma(struct mmu_gather *tlb, struct vm_area_struct *
 	tlb_update_vma_flags(tlb, vma);
 	flush_cache_range(vma, vma->vm_start, vma->vm_end);
 }
-#endif
 
-#ifndef tlb_end_vma
 static inline void tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
 {
 	if (tlb->fullmm)
@@ -511,7 +501,6 @@ static inline void tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vm
 	 */
 	tlb_flush_mmu_tlbonly(tlb);
 }
-#endif
 
 /*
  * tlb_flush_{pte|pmd|pud|p4d}_range() adjust the tlb->start and tlb->end,
-- 
2.35.1