Date: Fri, 24 Aug 2018 13:39:53 +0200
From: Peter Zijlstra
To: Will Deacon
Cc: Linus Torvalds, Benjamin Herrenschmidt, Nick Piggin,
 Andrew Lutomirski, the arch/x86 maintainers, Borislav Petkov,
 Rik van Riel, Jann Horn, Adin Scannell, Dave Hansen,
 Linux Kernel Mailing List, linux-mm, David Miller,
 Martin Schwidefsky, Michael Ellerman
Subject: Re: [PATCH 3/4] mm/tlb, x86/mm: Support invalidating TLB caches for
 RCU_TABLE_FREE
Message-ID: <20180824113953.GL24142@hirez.programming.kicks-ass.net>
In-Reply-To: <20180824113214.GK24142@hirez.programming.kicks-ass.net>
References: <20180822153012.173508681@infradead.org>
 <20180822154046.823850812@infradead.org>
 <20180822155527.GF24124@hirez.programming.kicks-ass.net>
 <20180823134525.5f12b0d3@roar.ozlabs.ibm.com>
 <776104d4c8e4fc680004d69e3a4c2594b638b6d1.camel@au1.ibm.com>
 <20180823133958.GA1496@brain-police>
 <20180824084717.GK24124@hirez.programming.kicks-ass.net>
 <20180824113214.GK24142@hirez.programming.kicks-ass.net>

On Fri, Aug 24, 2018 at 01:32:14PM +0200, Peter Zijlstra wrote:
> On Fri, Aug 24, 2018 at 10:47:17AM +0200, Peter Zijlstra wrote:
> > On Thu, Aug 23, 2018 at 02:39:59PM +0100, Will Deacon wrote:
> > > The only problem with this approach is that we've lost track of the granule
> > > size by the point we get to the tlb_flush(), so we can't adjust the stride of
> > > the TLB invalidations for huge mappings, which actually works nicely in the
> > > synchronous case (e.g. we perform a single invalidation for a 2MB mapping,
> > > rather than iterating over it at a 4k granule).
> > >
> > > One thing we could do is switch to synchronous mode if we detect a change in
> > > granule (i.e. treat it like a batch failure).
> >
> > We could use tlb_start_vma() to track that, I think. Shouldn't be too
> > hard.
>
> Hurm.. look at commit:
>
>   e77b0852b551 ("mm/mmu_gather: track page size with mmu gather and force flush if page size change")

Ah, good, it seems that already got cleaned up a lot. But it all moved
into the power code.. blergh.
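
[Editor's note: for readers following along, the pattern the referenced
commit describes (track the page size of the entries being batched, and
force the pending batch to flush when the granule changes) can be sketched
as below. This is a minimal, self-contained illustration in userspace C,
not the kernel's actual mmu_gather code; all demo_* names, the BATCH_MAX
constant, and the printf stand-in for the real TLB invalidation are
hypothetical.]

/*
 * Sketch of page-size-aware TLB batching: a change in page size is
 * treated like a batch failure and flushes the pending entries first.
 */
#include <stdio.h>

#define BATCH_MAX 8

struct demo_mmu_gather {
	unsigned long pages[BATCH_MAX];	/* addresses batched for freeing */
	unsigned int nr;		/* entries currently batched */
	unsigned long page_size;	/* granule of current batch; 0 = unset */
};

/* Issue the (pretend) TLB invalidation and drop the batched pages. */
static void demo_flush(struct demo_mmu_gather *tlb)
{
	if (!tlb->nr)
		return;
	printf("flush: %u page(s) at %lu-byte granule\n",
	       tlb->nr, tlb->page_size);
	tlb->nr = 0;
	tlb->page_size = 0;	/* next batch may use a different granule */
}

/* Queue one page; flush first if the granule changed or the batch is full. */
static void demo_remove_page(struct demo_mmu_gather *tlb,
			     unsigned long addr, unsigned long page_size)
{
	if (tlb->page_size && tlb->page_size != page_size)
		demo_flush(tlb);	/* granule changed: treat as batch failure */
	if (tlb->nr == BATCH_MAX)
		demo_flush(tlb);	/* batch full */
	tlb->page_size = page_size;
	tlb->pages[tlb->nr++] = addr;
}

int main(void)
{
	struct demo_mmu_gather tlb = { .nr = 0, .page_size = 0 };

	demo_remove_page(&tlb, 0x1000, 4096);		/* 4k entries batch up */
	demo_remove_page(&tlb, 0x2000, 4096);
	demo_remove_page(&tlb, 0x200000, 2UL << 20);	/* 2M entry forces a flush */
	demo_flush(&tlb);				/* final flush */
	return 0;
}

[Because the granule is re-established per batch, each invalidation can use
the right stride (one operation for a 2MB mapping instead of iterating at
4k), which is the property Will Deacon's message says is lost when the
batch mixes page sizes.]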