Date: Fri, 8 Jul 2022 14:32:49 +0100
From: Will Deacon <will@kernel.org>
To: Peter Zijlstra
Cc: Jann Horn, Linus Torvalds, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    Dave Airlie, Daniel Vetter, Andrew Morton, Guo Ren, David Miller
Subject: Re: [PATCH 3/4] mmu_gather: Let there be one tlb_{start,end}_vma() implementation
Message-ID: <20220708133248.GD5989@willie-the-truck>
References: <20220708071802.751003711@infradead.org> <20220708071834.084532973@infradead.org>
In-Reply-To: <20220708071834.084532973@infradead.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
User-Agent: Mutt/1.10.1 (2018-07-13)

On Fri, Jul 08, 2022 at 09:18:05AM +0200, Peter Zijlstra wrote:
> Now that architectures are no longer allowed to override
> tlb_{start,end}_vma() re-arrange code so that there is only one
> implementation for each of these functions.
> 
> This much simplifies trying to figure out what they actually do.
> 
> Signed-off-by: Peter Zijlstra (Intel)
> ---
>  include/asm-generic/tlb.h | 15 ++-------------
>  1 file changed, 2 insertions(+), 13 deletions(-)
> 
> --- a/include/asm-generic/tlb.h
> +++ b/include/asm-generic/tlb.h
> @@ -346,8 +346,8 @@ static inline void __tlb_reset_range(str
>  
>  #ifdef CONFIG_MMU_GATHER_NO_RANGE
>  
> -#if defined(tlb_flush) || defined(tlb_start_vma) || defined(tlb_end_vma)
> -#error MMU_GATHER_NO_RANGE relies on default tlb_flush(), tlb_start_vma() and tlb_end_vma()
> +#if defined(tlb_flush)
> +#error MMU_GATHER_NO_RANGE relies on default tlb_flush()
>  #endif
>  
>  /*
> @@ -367,17 +367,10 @@ static inline void tlb_flush(struct mmu_
>  static inline void
>  tlb_update_vma_flags(struct mmu_gather *tlb, struct vm_area_struct *vma) { }
>  
> -#define tlb_end_vma tlb_end_vma
> -static inline void tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vma) { }
> -
>  #else /* CONFIG_MMU_GATHER_NO_RANGE */
>  
>  #ifndef tlb_flush
>  
> -#if defined(tlb_start_vma) || defined(tlb_end_vma)
> -#error Default tlb_flush() relies on default tlb_start_vma() and tlb_end_vma()
> -#endif
> -
>  /*
>   * When an architecture does not provide its own tlb_flush() implementation
>   * but does have a reasonably efficient flush_vma_range() implementation
> @@ -498,7 +491,6 @@ static inline unsigned long tlb_get_unma
>   * case where we're doing a full MM flush. When we're doing a munmap,
>   * the vmas are adjusted to only cover the region to be torn down.
>   */
> -#ifndef tlb_start_vma
>  static inline void tlb_start_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
>  {
>  	if (tlb->fullmm)
> @@ -509,9 +501,7 @@ static inline void tlb_start_vma(struct
>  	flush_cache_range(vma, vma->vm_start, vma->vm_end);
>  #endif
>  }
> -#endif
>  
> -#ifndef tlb_end_vma
>  static inline void tlb_end_vma(struct mmu_gather *tlb, struct vm_area_struct *vma)
>  {
>  	if (tlb->fullmm || IS_ENABLED(CONFIG_MMU_GATHER_MERGE_VMAS))
> @@ -525,7 +515,6 @@ static inline void tlb_end_vma(struct mm
>  	 */
>  	tlb_flush_mmu_tlbonly(tlb);
>  }
> -#endif

Much nicer:

Acked-by: Will Deacon <will@kernel.org>

Will