Subject: Re: [RFC 03/10] x86/mm: Make the batched unmap TLB flush API more generic
From: Dave Hansen
Date: Mon, 8 May 2017 08:34:58 -0700
To: Andy Lutomirski, X86 ML
Cc: "linux-kernel@vger.kernel.org", Borislav Petkov, Linus Torvalds, Andrew Morton, Mel Gorman, "linux-mm@kvack.org", Rik van Riel, Nadav Amit, Michal Hocko, Sasha Levin
In-Reply-To: <983c5ee661d8fe8a70c596c4e77076d11ce3f80a.1494160201.git.luto@kernel.org>

On 05/07/2017 05:38 AM, Andy Lutomirski wrote:
> diff --git a/mm/rmap.c b/mm/rmap.c
> index f6838015810f..2e568c82f477 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -579,25 +579,12 @@ void page_unlock_anon_vma_read(struct anon_vma *anon_vma)
>  void try_to_unmap_flush(void)
>  {
>  	struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
> -	int cpu;
>
>  	if (!tlb_ubc->flush_required)
>  		return;
>
> -	cpu = get_cpu();
> -
> -	if (cpumask_test_cpu(cpu, &tlb_ubc->cpumask)) {
> -		count_vm_tlb_event(NR_TLB_LOCAL_FLUSH_ALL);
> -		local_flush_tlb();
> -		trace_tlb_flush(TLB_LOCAL_SHOOTDOWN, TLB_FLUSH_ALL);
> -	}
> -
> -	if (cpumask_any_but(&tlb_ubc->cpumask, cpu) < nr_cpu_ids)
> -		flush_tlb_others(&tlb_ubc->cpumask, NULL, 0, TLB_FLUSH_ALL);
> -	cpumask_clear(&tlb_ubc->cpumask);
>  	tlb_ubc->flush_required = false;
>  	tlb_ubc->writable = false;
> -	put_cpu();
>  }
>
>  /* Flush iff there are potentially writable TLB entries that can race with IO */
> @@ -613,7 +600,7 @@ static void set_tlb_ubc_flush_pending(struct mm_struct *mm, bool writable)
>  {
>  	struct tlbflush_unmap_batch *tlb_ubc = &current->tlb_ubc;
>
> -	cpumask_or(&tlb_ubc->cpumask, &tlb_ubc->cpumask, mm_cpumask(mm));
> +	arch_tlbbatch_add_mm(&tlb_ubc->arch, mm);
>  	tlb_ubc->flush_required = true;
>
>  	/*

Looking at this patch in isolation, how can this be safe? It removes TLB flushes from the generic code. Do other patches in the series fix this up?

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to majordomo@kvack.org. For more info on Linux MM, see: http://www.linux-mm.org/ .