From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Thu, 10 Dec 2020 16:44:26 -0700
From: Yu Zhao
To: Will Deacon
Cc: linux-kernel@vger.kernel.org, kernel-team@android.com, Minchan Kim,
	Peter Zijlstra, Thomas Gleixner, Linus Torvalds, Vlastimil Babka,
	Mohamed Alzayat, "Aneesh Kumar K.V", linux-mm@kvack.org
Subject: Re: [PATCH v2 3/6] tlb: mmu_gather: Introduce tlb_gather_mmu_fullmm()
References: <20201210121110.10094-1-will@kernel.org>
	<20201210121110.10094-4-will@kernel.org>
In-Reply-To: <20201210121110.10094-4-will@kernel.org>

On Thu, Dec 10, 2020 at 12:11:07PM +0000, Will Deacon wrote:
> Passing the range '0, -1' to tlb_gather_mmu() sets the 'fullmm' flag,
> which indicates that the mm_struct being operated on is going away. In
> this case, some architectures (such as arm64) can elide TLB invalidation
> by ensuring that the TLB tag (ASID) associated with this mm is not
> immediately reclaimed. Although this behaviour is documented in
> asm-generic/tlb.h, it's subtle and easily missed.
>
> Introduce tlb_gather_mmu_fullmm() to make it clearer that this is for the
> entire mm and WARN() if tlb_gather_mmu() is called with the 'fullmm'
> address range.
>
> Signed-off-by: Will Deacon
> ---
>  include/asm-generic/tlb.h |  6 ++++--
>  include/linux/mm_types.h  |  1 +
>  mm/mmap.c                 |  2 +-
>  mm/mmu_gather.c           | 16 ++++++++++++++--
>  4 files changed, 20 insertions(+), 5 deletions(-)
>
> diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
> index 6661ee1cff47..2c68a545ffa7 100644
> --- a/include/asm-generic/tlb.h
> +++ b/include/asm-generic/tlb.h
> @@ -46,7 +46,9 @@
>   *
>   * The mmu_gather API consists of:
>   *
> - *  - tlb_gather_mmu() / tlb_finish_mmu(); start and finish a mmu_gather
> + *  - tlb_gather_mmu() / tlb_gather_mmu_fullmm() / tlb_finish_mmu()
> + *
> + *    start and finish a mmu_gather
>   *
>   *    Finish in particular will issue a (final) TLB invalidate and free
>   *    all (remaining) queued pages.
> @@ -91,7 +93,7 @@
>   *
>   * - mmu_gather::fullmm
>   *
> - *   A flag set by tlb_gather_mmu() to indicate we're going to free
> + *   A flag set by tlb_gather_mmu_fullmm() to indicate we're going to free
>   *   the entire mm; this allows a number of optimizations.
>   *
>   *   - We can ignore tlb_{start,end}_vma(); because we don't
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index 7b90058a62be..42231729affe 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -585,6 +585,7 @@ static inline cpumask_t *mm_cpumask(struct mm_struct *mm)
>  struct mmu_gather;
>  extern void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
>  		unsigned long start, unsigned long end);
> +extern void tlb_gather_mmu_fullmm(struct mmu_gather *tlb, struct mm_struct *mm);
>  extern void tlb_finish_mmu(struct mmu_gather *tlb);
>
>  static inline void init_tlb_flush_pending(struct mm_struct *mm)
> diff --git a/mm/mmap.c b/mm/mmap.c
> index a3e5854cd01e..cdd3dae6547c 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -3214,7 +3214,7 @@ void exit_mmap(struct mm_struct *mm)
>
>  	lru_add_drain();
>  	flush_cache_mm(mm);
> -	tlb_gather_mmu(&tlb, mm, 0, -1);
> +	tlb_gather_mmu_fullmm(&tlb, mm);
>  	/* update_hiwater_rss(mm) here? but nobody should be looking */
>  	/* Use -1 here to ensure all VMAs in the mm are unmapped */
>  	unmap_vmas(&tlb, vma, 0, -1);
> diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
> index b0be5a7aa08f..5f5e45d9eb50 100644
> --- a/mm/mmu_gather.c
> +++ b/mm/mmu_gather.c
> @@ -261,8 +261,8 @@ void tlb_flush_mmu(struct mmu_gather *tlb)
>   * respectively when @mm is without users and we're going to destroy
>   * the full address space (exit/execve).
>   */
> -void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
> -		    unsigned long start, unsigned long end)
> +static void __tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
> +			     unsigned long start, unsigned long end)
>  {
>  	tlb->mm = mm;
>
> @@ -287,6 +287,18 @@ void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
>  	inc_tlb_flush_pending(tlb->mm);
>  }
>
> +void tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
> +		    unsigned long start, unsigned long end)
> +{
> +	WARN_ON(!(start | (end + 1)));	/* Use _fullmm() instead */
> +	__tlb_gather_mmu(tlb, mm, start, end);
> +}
> +
> +void tlb_gather_mmu_fullmm(struct mmu_gather *tlb, struct mm_struct *mm)
> +{
> +	__tlb_gather_mmu(tlb, mm, 0, -1);
> +}
> +

IMO, there is no point in adding the wrappers, given that you remove the
WARN_ON in the next patch. But if you prefer to keep them, they should at
least be moved to the header so they can be inlined (untested sketch
below).

Consider this whole series

Reviewed-by: Yu Zhao
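
To be concrete, something along these lines is what I have in mind --
completely untested, and it assumes __tlb_gather_mmu() is made
non-static and declared in the header; the placement is only
illustrative:

/*
 * Untested sketch: move the thin wrappers into a header (e.g.
 * asm-generic/tlb.h) so the compiler can inline them and constant-fold
 * the (0, -1) fullmm range at each call site. Assumes the out-of-line
 * __tlb_gather_mmu() in mm/mmu_gather.c is exported for this to build.
 */
extern void __tlb_gather_mmu(struct mmu_gather *tlb, struct mm_struct *mm,
			     unsigned long start, unsigned long end);

static inline void tlb_gather_mmu(struct mmu_gather *tlb,
				  struct mm_struct *mm,
				  unsigned long start, unsigned long end)
{
	/* Warns when passed the (0, -1) fullmm range, as in this patch. */
	WARN_ON(!(start | (end + 1)));	/* Use _fullmm() instead */
	__tlb_gather_mmu(tlb, mm, start, end);
}

static inline void tlb_gather_mmu_fullmm(struct mmu_gather *tlb,
					 struct mm_struct *mm)
{
	__tlb_gather_mmu(tlb, mm, 0, -1);
}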