From: Yu Xu <xuyu@linux.alibaba.com>
Subject: Re: [patch 01/15] mm/memory.c: avoid access flag update TLB flush for retried page fault
Date: Tue, 28 Jul 2020 18:07:44 +0800
To: Catalin Marinas, Will Deacon
Cc: Linus Torvalds, Yang Shi, Andrew Morton, Johannes Weiner, Hillf Danton, Hugh Dickins, Josef Bacik, Kirill A. Shutemov, Linux-MM, mm-commits@vger.kernel.org, Matthew Wilcox
Message-ID: <1f04a55e-ebea-beb2-a2d8-7bddbd296ba8@linux.alibaba.com>
In-Reply-To: <20200728093910.GB706@gaia>

On 7/28/20 5:39 PM, Catalin Marinas wrote:
> On Tue, Jul 28, 2020 at 10:22:20AM +0100, Will Deacon wrote:
>> On Sat, Jul 25, 2020
>> at 04:58:43PM +0100, Catalin Marinas wrote:
>>> On Fri, Jul 24, 2020 at 06:29:43PM -0700, Linus Torvalds wrote:
>>>> For any architecture that guarantees that a page fault will always
>>>> flush the old TLB entry for this kind of situation, that
>>>> flush_tlb_fix_spurious_fault() thing can be a no-op.
>>>>
>>>> So that's why on x86, we just do
>>>>
>>>>   #define flush_tlb_fix_spurious_fault(vma, address) do { } while (0)
>>>>
>>>> and have no issues.
>>>>
>>>> Note that it does *not* need to do any cross-CPU flushing or anything
>>>> like that. So it's actually wrong (I think) to have that default
>>>> fallback for
>>>>
>>>>   #define flush_tlb_fix_spurious_fault(vma, address) flush_tlb_page(vma, address)
>>>>
>>>> because flush_tlb_page() is the serious "do cross CPU etc".
>>>>
>>>> Does the arm64 flush_tlb_page() perhaps do the whole expensive
>>>> cross-CPU thing rather than the much cheaper "just local invalidate"
>>>> version?
>>>
>>> I think it makes sense to have a local-only
>>> flush_tlb_fix_spurious_fault(), but with ptep_set_access_flags() updated
>>> to still issue the full broadcast TLBI. In addition, I added a minor
>>> optimisation to avoid the TLB flush if the old pte was not accessible.
>>> In a read-access fault case (followed by mkyoung), the TLB wouldn't have
>>> cached a non-accessible pte (not sure it makes much difference to Yang's
>>> case). Anyway, from ARMv8.1 onwards, the hardware handles the access
>>> flag automatically.
>>>
>>> I'm not sure the first dsb(nshst) below is of much use in this case. If
>>> we got a spurious fault, the write to the pte happened on a different
>>> CPU (IIUC, we shouldn't return to user with updated ptes without a TLB
>>> flush on the same CPU). Anyway, we can refine this if it solves Yang's
>>> performance regression.
>>>
>>> -------------8<-----------------------
>>> diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
>>> index d493174415db..d1401cbad7d4 100644
>>> --- a/arch/arm64/include/asm/tlbflush.h
>>> +++ b/arch/arm64/include/asm/tlbflush.h
>>> @@ -268,6 +268,20 @@ static inline void flush_tlb_page(struct vm_area_struct *vma,
>>>  	dsb(ish);
>>>  }
>>>  
>>> +static inline void local_flush_tlb_page(struct vm_area_struct *vma,
>>> +					unsigned long uaddr)
>>> +{
>>> +	unsigned long addr = __TLBI_VADDR(uaddr, ASID(vma->vm_mm));
>>> +
>>> +	dsb(nshst);
>>> +	__tlbi(vale1, addr);
>>> +	__tlbi_user(vale1, addr);
>>> +	dsb(nsh);
>>> +}
>>> +
>>> +#define flush_tlb_fix_spurious_fault(vma, address)	\
>>> +	local_flush_tlb_page(vma, address)
>>
>> Why can't we just have flush_tlb_fix_spurious_fault() be a NOP on arm64?
>
> Possibly, as long as any other optimisations only defer the TLB flushing
> for a relatively short time (the fault is transient, it will get a
> broadcast TLBI eventually).
>
> Either way, it's worth benchmarking the above patch but with
> flush_tlb_fix_spurious_fault() a no-op (we still need flush_tlb_page()
> in ptep_set_access_flags()). Xu, Yang, could you please give it a try?

If I understand correctly, this should do as well in the will-it-scale
page_fault3 testcase as the patches from Linus and Yang, since both of
those avoid calling flush_tlb_fix_spurious_fault() in the
FAULT_FLAG_TRIED case.

Shouldn't we be concerned about data integrity if
flush_tlb_fix_spurious_fault() is made a no-op on arm64?

Thanks,
Yu

>
>> Given that the architecture prohibits the TLB from caching invalid entries,
>> then software access/dirty is fine without additional flushing.
>
> The access fault is fine, the TLB has not cached the entry. For a dirty
> fault, however, the TLB could cache a read-only mapping, so it does need
> flushing. Question is, do we make the pte dirty anywhere without a
> subsequent (broadcast) TLBI?
>
>> The only problematic case I can think of is on the invalid->valid
>> (i.e. map) path, where we elide the expensive DSB instruction because
>> (a) most CPUs have a walker that can snoop the store buffer and (b)
>> even if they don't, the store buffer tends to drain by the time we get
>> back to userspace. Even if that was a problem,
>> flush_tlb_fix_spurious_fault() wouldn't be the right hook, since the
>> DSB must occur on the CPU that did the pte update.
>
> I guess the best a CPU can do is attempt the page table walk again, in
> the hope that the write buffer on the other CPU eventually drains.
>