From mboxrd@z Thu Jan 1 00:00:00 1970
From: Sasha Levin
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: Peter Zijlstra, Will Deacon, Linus Torvalds, Sasha Levin,
	aneesh.kumar@linux.ibm.com, npiggin@gmail.com,
	linux-arch@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH AUTOSEL 5.10 4/7] mmu_gather: Let there be one tlb_{start,end}_vma() implementation
Date: Mon, 1 Aug 2022 15:02:58 -0400
Message-Id: <20220801190301.3819065-4-sashal@kernel.org>
X-Mailer: git-send-email 2.35.1
In-Reply-To: <20220801190301.3819065-1-sashal@kernel.org>
References: <20220801190301.3819065-1-sashal@kernel.org>
MIME-Version: 1.0
X-stable: review
X-Patchwork-Hint: Ignore
Content-Transfer-Encoding: 8bit
From: Peter Zijlstra

[ Upstream commit 18ba064e42df3661e196ab58a23931fc732a420b ]

Now that architectures are no longer allowed to override
tlb_{start,end}_vma(), rearrange the code so that there is only one
implementation of each of these functions. This makes it much simpler
to figure out what they actually do.
Signed-off-by: Peter Zijlstra (Intel)
Acked-by: Will Deacon
Signed-off-by: Linus Torvalds
Signed-off-by: Sasha Levin
---
 include/asm-generic/tlb.h | 15 ++-------------
 1 file changed, 2 insertions(+), 13 deletions(-)

diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index a0c4b99d2899..a8112510522b 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -332,8 +332,8 @@ static inline void __tlb_reset_range(struct mmu_gather *tlb)
 
 #ifdef CONFIG_MMU_GATHER_NO_RANGE
 
-#if defined(tlb_flush) || defined(tlb_start_vma) || defined(tlb_end_vma)
-#error MMU_GATHER_NO_RANGE relies on default tlb_flush(), tlb_start_vma() and tlb_end_vma()
+#if defined(tlb_flush)
+#error MMU_GATHER_NO_RANGE relies on default tlb_flush()
 #endif
 
 /*
@@ -353,17 +353,10 @@ static inline void tlb_flush(struct mmu_gather *tlb)
 static inline void
 tlb_update_vma_flags(struct mmu_gather *tlb, struct vm_area_struct *vma) { }
 
-#define tlb_end_vma tlb_end_vma
-static inline void tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vma) { }
-
 #else /* CONFIG_MMU_GATHER_NO_RANGE */
 
 #ifndef tlb_flush
 
-#if defined(tlb_start_vma) || defined(tlb_end_vma)
-#error Default tlb_flush() relies on default tlb_start_vma() and tlb_end_vma()
-#endif
-
 /*
  * When an architecture does not provide its own tlb_flush() implementation
  * but does have a reasonably efficient flush_vma_range() implementation
@@ -484,7 +477,6 @@ static inline unsigned long tlb_get_unmap_size(struct mmu_gather *tlb)
  * case where we're doing a full MM flush. When we're doing a munmap,
  * the vmas are adjusted to only cover the region to be torn down.
  */
-#ifndef tlb_start_vma
 static inline void tlb_start_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
 {
 	if (tlb->fullmm)
@@ -493,9 +485,7 @@ static inline void tlb_start_vma(struct mmu_gather *tlb, struct vm_area_struct *
 	tlb_update_vma_flags(tlb, vma);
 	flush_cache_range(vma, vma->vm_start, vma->vm_end);
 }
-#endif
 
-#ifndef tlb_end_vma
 static inline void tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
 {
 	if (tlb->fullmm)
@@ -509,7 +499,6 @@ static inline void tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vm
 	 */
 	tlb_flush_mmu_tlbonly(tlb);
 }
-#endif
 
 /*
  * tlb_flush_{pte|pmd|pud|p4d}_range() adjust the tlb->start and tlb->end,
-- 
2.35.1