From: Andy Lutomirski
Date: Tue, 14 Jun 2016 20:09:32 -0700
Subject: Re: [PATCH] Linux VM workaround for Knights Landing A/D leak
To: Nadav Amit
Cc: Dave Hansen, Lukasz Anaczkowski, LKML, "linux-mm@kvack.org",
    Thomas Gleixner, Ingo Molnar, Andi Kleen, "Kirill A. Shutemov",
    Michal Hocko, Andrew Morton, "H. Peter Anvin",
    harish.srinivasappa@intel.com, lukasz.odzioba@intel.com

On Tue, Jun 14, 2016 at 7:44 PM, Nadav Amit wrote:
> Andy Lutomirski wrote:
>
>> On Tue, Jun 14, 2016 at 7:35 PM, Nadav Amit wrote:
>>> Andy Lutomirski wrote:
>>>
>>>> On Tue, Jun 14, 2016 at 2:37 PM, Dave Hansen wrote:
>>>>> On 06/14/2016 01:16 PM, Nadav Amit wrote:
>>>>>> Dave Hansen wrote:
>>>>>>
>>>>>>> On 06/14/2016 09:47 AM, Nadav Amit wrote:
>>>>>>>> Lukasz Anaczkowski wrote:
>>>>>>>>
>>>>>>>>>> From: Andi Kleen
>>>>>>>>>> +void fix_pte_leak(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
>>>>>>>>>> +{
>>>>>>>> Here there should be a call to smp_mb__after_atomic() to synchronize with
>>>>>>>> switch_mm. I submitted a similar patch, which is still pending (hint).
>>>>>>>>
>>>>>>>>>> +        if (cpumask_any_but(mm_cpumask(mm), smp_processor_id()) < nr_cpu_ids) {
>>>>>>>>>> +                trace_tlb_flush(TLB_LOCAL_SHOOTDOWN, TLB_FLUSH_ALL);
>>>>>>>>>> +                flush_tlb_others(mm_cpumask(mm), mm, addr,
>>>>>>>>>> +                                 addr + PAGE_SIZE);
>>>>>>>>>> +                mb();
>>>>>>>>>> +                set_pte(ptep, __pte(0));
>>>>>>>>>> +        }
>>>>>>>>>> +}
>>>>>>>
>>>>>>> Shouldn't that barrier be incorporated in the TLB flush code itself and
>>>>>>> not every single caller (like this code is)?
>>>>>>>
>>>>>>> It is insane to require individual TLB flushers to be concerned with the
>>>>>>> barriers.
>>>>>>
>>>>>> IMHO it is best to use existing flushing interfaces instead of creating
>>>>>> new ones.
>>>>>
>>>>> Yeah, or make these things a _little_ harder to get wrong.  That little
>>>>> snippet above isn't so crazy that we should be depending on open-coded
>>>>> barriers to get it right.
>>>>>
>>>>> Should we just add a barrier to mm_cpumask() itself?  That should stop
>>>>> the race.  Or maybe we need a new primitive like:
>>>>>
>>>>> /*
>>>>>  * Call this if a full barrier has been executed since the last
>>>>>  * pagetable modification operation.
>>>>>  */
>>>>> static int __other_cpus_need_tlb_flush(struct mm_struct *mm)
>>>>> {
>>>>>         /* cpumask_any_but() returns >= nr_cpu_ids if no cpus set. */
>>>>>         return cpumask_any_but(mm_cpumask(mm), smp_processor_id()) <
>>>>>                 nr_cpu_ids;
>>>>> }
>>>>>
>>>>> static int other_cpus_need_tlb_flush(struct mm_struct *mm)
>>>>> {
>>>>>         /*
>>>>>          * Synchronizes with switch_mm. Makes sure that we do not
>>>>>          * observe a bit having been cleared in mm_cpumask() before
>>>>>          * the other processor has seen our pagetable update.  See
>>>>>          * switch_mm().
>>>>>          */
>>>>>         smp_mb__after_atomic();
>>>>>
>>>>>         return __other_cpus_need_tlb_flush(mm);
>>>>> }
>>>>>
>>>>> We should be able to deploy other_cpus_need_tlb_flush() in most of the
>>>>> cases where we are doing "cpumask_any_but(mm_cpumask(mm),
>>>>> smp_processor_id()) < nr_cpu_ids".
>>>>
>>>> IMO this is a bit nuts.  smp_mb__after_atomic() doesn't do anything on
>>>> x86.  And, even if it did, why should the flush code assume that the
>>>> previous store was atomic?
>>>>
>>>> What's the issue being fixed / worked around here?
>>>
>>> It is a compiler barrier, which prevents the decision on whether a
>>> remote TLB shootdown is required from being made before the PTE is set.
>>>
>>> I agree that PTEs may not be written atomically in certain cases
>>> (although I am unaware of such cases, except on the full-mm flush).
>>
>> How about plain set_pte?  It's atomic (an aligned word-sized write), but
>> it's not atomic in the _after_atomic sense.
>
> Can you point me to a place where set_pte is used before a TLB
> invalidation/shootdown, excluding this patch and the fullmm case?
>
> I am not claiming there is no such case, but I am unaware of one.
> PTEs are cleared on SMP using xchg, and similarly the dirty bit
> is cleared with an atomic operation.

Hmm, you may be right.  I still think this is all disgusting, but I
don't have any better ideas.

--Andy

--
Andy Lutomirski
AMA Capital Management, LLC
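
The ordering the thread is arguing about is the classic store-buffering
pattern: the flusher stores to the PTE and then reads mm_cpumask(), while a
CPU entering the mm in switch_mm() sets its cpumask bit and then walks the
page tables; without a full barrier on each side between the store and the
following read, both CPUs can miss each other's update, the IPI is skipped,
and a stale translation survives.  The sketch below is a rough userspace
model of that pattern in C11 atomics and pthreads, added here purely as an
illustration: pte, cpu1_in_mm, flusher and switcher are invented stand-ins
for the kernel objects, not kernel interfaces, and it models only one
concrete instance of the race, not the exact kernel protocol.

/*
 * Userspace model of the ordering discussed above (not kernel code).
 * flusher: clears a fake PTE, then checks whether "CPU 1" is in the fake
 *          mm_cpumask (models fix_pte_leak()'s caller plus the cpumask check).
 * switcher: sets its cpumask bit, then reads the fake PTE (models a CPU
 *           entering the mm in switch_mm() and then walking the page tables).
 * With a full fence on each side between the store and the load, the bad
 * outcome "flusher misses the bit (skips the IPI) while the switcher still
 * sees the old PTE" cannot occur.  Build with: cc -O2 -pthread model.c
 */
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int pte;         /* 1 = old mapping present, 0 = cleared       */
static atomic_int cpu1_in_mm;  /* models CPU 1's bit in mm_cpumask(mm)       */
static atomic_int go;          /* start flag so both threads race each round */

static int seen_cpumask;       /* what the flusher saw in "mm_cpumask"       */
static int seen_pte;           /* what the switching CPU saw in the "PTE"    */

static void *flusher(void *arg)             /* models the unmap path on CPU 0 */
{
        (void)arg;
        while (!atomic_load(&go))
                ;
        atomic_store_explicit(&pte, 0, memory_order_relaxed);  /* the PTE update */
        atomic_thread_fence(memory_order_seq_cst);  /* the barrier Nadav asks for
                                                       before the cpumask check  */
        seen_cpumask = atomic_load_explicit(&cpu1_in_mm, memory_order_relaxed);
        return NULL;
}

static void *switcher(void *arg)    /* models switch_mm() on CPU 1 entering the mm */
{
        (void)arg;
        while (!atomic_load(&go))
                ;
        atomic_store_explicit(&cpu1_in_mm, 1, memory_order_relaxed); /* set cpumask bit */
        atomic_thread_fence(memory_order_seq_cst);  /* models the ordering switch_mm()
                                                       relies on (atomic cpumask update,
                                                       serializing CR3 write)            */
        seen_pte = atomic_load_explicit(&pte, memory_order_relaxed);  /* page walk */
        return NULL;
}

int main(void)
{
        for (int i = 0; i < 20000; i++) {
                pthread_t t0, t1;

                atomic_store(&pte, 1);
                atomic_store(&cpu1_in_mm, 0);
                atomic_store(&go, 0);
                pthread_create(&t0, NULL, flusher, NULL);
                pthread_create(&t1, NULL, switcher, NULL);
                atomic_store(&go, 1);
                pthread_join(t0, NULL);
                pthread_join(t1, NULL);

                /*
                 * Forbidden outcome: the flusher would skip the IPI while the
                 * switching CPU never saw the cleared PTE and could cache a
                 * stale translation.  With both fences this never fires.
                 */
                assert(!(seen_cpumask == 0 && seen_pte == 1));
        }
        printf("no skip-IPI + stale-PTE outcome observed\n");
        return 0;
}

Dropping either fence (while keeping the stores and loads relaxed) allows the
asserted outcome under the C11 memory model, and the analogous reordering is
observable on real x86 because a later load can be satisfied ahead of an
earlier store still sitting in the store buffer; that is the two-sided
requirement the other_cpus_need_tlb_flush() helper above tries to encapsulate.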