From: Sasha Levin
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: Peter Zijlstra, Will Deacon, Linus Torvalds, Sasha Levin,
 aneesh.kumar@linux.ibm.com, npiggin@gmail.com,
 linux-arch@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH AUTOSEL 5.4 3/6] mmu_gather: Let there be one tlb_{start,end}_vma() implementation
Date: Mon, 1 Aug 2022 15:03:14 -0400
Message-Id: <20220801190317.3819520-3-sashal@kernel.org>
X-Mailer: git-send-email 2.35.1
In-Reply-To: <20220801190317.3819520-1-sashal@kernel.org>
References: <20220801190317.3819520-1-sashal@kernel.org>
MIME-Version: 1.0
X-stable: review
Content-Transfer-Encoding: 8bit

From: Peter Zijlstra

[ Upstream commit 18ba064e42df3661e196ab58a23931fc732a420b ]

Now that architectures are no longer allowed to override
tlb_{start,end}_vma(), rearrange the code so that there is only one
implementation of each of these functions. This makes it much easier
to figure out what they actually do.
Signed-off-by: Peter Zijlstra (Intel)
Acked-by: Will Deacon
Signed-off-by: Linus Torvalds
Signed-off-by: Sasha Levin
---
 include/asm-generic/tlb.h | 15 ++-------------
 1 file changed, 2 insertions(+), 13 deletions(-)

diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index 268674c1d568..fe05a8562c52 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -321,8 +321,8 @@ static inline void __tlb_reset_range(struct mmu_gather *tlb)
 
 #ifdef CONFIG_MMU_GATHER_NO_RANGE
 
-#if defined(tlb_flush) || defined(tlb_start_vma) || defined(tlb_end_vma)
-#error MMU_GATHER_NO_RANGE relies on default tlb_flush(), tlb_start_vma() and tlb_end_vma()
+#if defined(tlb_flush)
+#error MMU_GATHER_NO_RANGE relies on default tlb_flush()
 #endif
 
 /*
@@ -342,17 +342,10 @@ static inline void tlb_flush(struct mmu_gather *tlb)
 static inline void
 tlb_update_vma_flags(struct mmu_gather *tlb, struct vm_area_struct *vma) { }
 
-#define tlb_end_vma tlb_end_vma
-static inline void tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vma) { }
-
 #else /* CONFIG_MMU_GATHER_NO_RANGE */
 
 #ifndef tlb_flush
 
-#if defined(tlb_start_vma) || defined(tlb_end_vma)
-#error Default tlb_flush() relies on default tlb_start_vma() and tlb_end_vma()
-#endif
-
 /*
  * When an architecture does not provide its own tlb_flush() implementation
  * but does have a reasonably efficient flush_vma_range() implementation
@@ -468,7 +461,6 @@ static inline unsigned long tlb_get_unmap_size(struct mmu_gather *tlb)
  * case where we're doing a full MM flush. When we're doing a munmap,
  * the vmas are adjusted to only cover the region to be torn down.
  */
-#ifndef tlb_start_vma
 static inline void tlb_start_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
 {
 	if (tlb->fullmm)
@@ -477,9 +469,7 @@ static inline void tlb_start_vma(struct mmu_gather *tlb, struct vm_area_struct *
 	tlb_update_vma_flags(tlb, vma);
 	flush_cache_range(vma, vma->vm_start, vma->vm_end);
 }
-#endif
 
-#ifndef tlb_end_vma
 static inline void tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
 {
 	if (tlb->fullmm)
@@ -493,7 +483,6 @@ static inline void tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vm
 	 */
 	tlb_flush_mmu_tlbonly(tlb);
 }
-#endif
 
 /*
  * tlb_flush_{pte|pmd|pud|p4d}_range() adjust the tlb->start and tlb->end,
-- 
2.35.1