From mboxrd@z Thu Jan 1 00:00:00 1970
From: Yicong Yang <yangyicong@huawei.com>
To: Barry Song <21cnbao@gmail.com>
Cc: Catalin Marinas, Barry Song, Nadav Amit, Mel Gorman
Subject: Re: [RESEND PATCH v9 2/2] arm64: support batched/deferred tlb shootdown during page reclamation/migration
References: <20230518065934.12877-1-yangyicong@huawei.com> <20230518065934.12877-3-yangyicong@huawei.com> <2f593850-797c-5422-2c80-ce214fac02bb@huawei.com>
Message-ID: <124b7798-94ae-ebfc-bbe5-21ebaaa02760@huawei.com>
Date: Wed, 5 Jul 2023 18:24:12 +0800

On 2023/7/5 16:43, Barry Song wrote:
> On Tue, Jul 4, 2023 at 10:36 PM Yicong Yang wrote:
>>
>> On 2023/6/30 1:26, Catalin Marinas wrote:
>>> On Thu, Jun 29, 2023 at 05:31:36PM +0100, Catalin Marinas wrote:
>>>> On Thu, May 18, 2023 at 02:59:34PM +0800, Yicong Yang wrote:
>>>>> From: Barry Song
>>>>>
>>>>> On x86, batched and deferred TLB shootdown has led to a 90%
>>>>> performance increase in TLB shootdown. On arm64, the hardware can
>>>>> do TLB shootdown without a software IPI, but the synchronous TLBI
>>>>> is still quite expensive.
>>>> [...]
>>>>>  .../features/vm/TLB/arch-support.txt |  2 +-
>>>>>  arch/arm64/Kconfig                   |  1 +
>>>>>  arch/arm64/include/asm/tlbbatch.h    | 12 ++++
>>>>>  arch/arm64/include/asm/tlbflush.h    | 33 ++++++++-
>>>>>  arch/arm64/mm/flush.c                | 69 +++++++++++++++++++
>>>>>  arch/x86/include/asm/tlbflush.h      |  5 +-
>>>>>  include/linux/mm_types_task.h        |  4 +-
>>>>>  mm/rmap.c                            | 12 ++--
>>>>
>>>> First of all, this patch needs to be split into some preparatory
>>>> patches introducing/renaming functions with no functional change for
>>>> x86. Once done, you can add the arm64-only changes.
>>>>
>>
>> Got it, will try to split this patch as suggested.
>>
>>>> Now, on the implementation, I had some comments on v7 but we didn't
>>>> get to a conclusion and the thread eventually died:
>>>>
>>>> https://lore.kernel.org/linux-mm/Y7cToj5mWd1ZbMyQ@arm.com/
>>>>
>>>> I know I said a command line argument is better than Kconfig or some
>>>> random number-of-CPUs heuristic, but it would be even better if we
>>>> don't bother with any of them and just make this always on.
>>
>> OK, will make this always on.
>>
>>>> Barry had some comments around mprotect() being racy and that's why
>>>> we have flush_tlb_batched_pending(), but I don't think it's needed
>>>> (or, for arm64, it can be a DSB since this patch issues the TLBIs but
>>>> without the DVM Sync). So we need to clarify this (see Barry's last
>>>> email on the above thread) before attempting new versions of this
>>>> patchset. With flush_tlb_batched_pending() removed (or a DSB), I have
>>>> a suspicion such an implementation would be faster on any SoC
>>>> irrespective of the number of CPUs.
>>>
>>> I think I got the need for flush_tlb_batched_pending(). If
>>> try_to_unmap() marks the pte !present and we have a pending TLBI,
>>> change_pte_range() will skip the TLB maintenance altogether since it
>>> did not change the pte. So we could be left with stale TLB entries
>>> after mprotect() before TTU does the batch flushing.
>>>
>
> Good catch.
> This could also be true for MADV_DONTNEED: if we run MADV_DONTNEED on
> this area after try_to_unmap(), the pte is not present, so we don't do
> anything with this pte in zap_pte_range() afterwards.
>
>>> We can have an arch-specific flush_tlb_batched_pending() that can be
>>> a DSB only on arm64 and a full mm flush on x86.
>>>
>>
>> We need to do a flush/DSB in flush_tlb_batched_pending() only in the
>> race condition, so we first check whether there is a pending batched
>> flush and, if so, do the TLB flush. The pending check is common; the
>> difference among the arches is how to flush the TLB within
>> flush_tlb_batched_pending(); on arm64 it should only be a DSB.
>>
>> As we only need to maintain the TLBs already pending in the batched
>> flush, does it make sense to only handle those TLBs in
>> flush_tlb_batched_pending()? Then we can use arch_tlbbatch_flush()
>> rather than flush_tlb_mm() in flush_tlb_batched_pending() and no
>> arch-specific function is needed.
>
> As we have issued no-sync TLBIs on those pending addresses, our
> hardware has already "recorded" what should be flushed for the specific
> mm, so a DSB alone will flush them correctly. Right?
>

Yes, that's right. I was thinking of something like below.
arch_tlbbatch_flush() will only be a DSB on arm64, so this matches what
Catalin wants. But since you pointed out that this may be incorrect on
x86, we'd better have an arch-specific implementation of
flush_tlb_batched_pending() as suggested.

diff --git a/mm/rmap.c b/mm/rmap.c
index 9699c6011b0e..afa3571503a0 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -717,7 +717,7 @@ void flush_tlb_batched_pending(struct mm_struct *mm)
 	int flushed = batch >> TLB_FLUSH_BATCH_FLUSHED_SHIFT;
 
 	if (pending != flushed) {
-		flush_tlb_mm(mm);
+		arch_tlbbatch_flush(&current->tlb_ubc.arch);
 		/*
 		 * If the new TLB flushing is pending during flushing, leave
 		 * mm->tlb_flush_batched as is, to avoid losing flushing.
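
For reference, a rough sketch of the arch-specific split discussed above
is below. The hook name arch_flush_tlb_batched_pending() and the file
placement are illustrative assumptions only, not something taken from
the posted series:

/* arch/arm64/include/asm/tlbflush.h -- illustrative sketch only */
static inline void arch_flush_tlb_batched_pending(struct mm_struct *mm)
{
	/*
	 * The deferred TLBIs for this mm have already been issued without
	 * a DVM sync, so waiting for them to complete only needs a DSB.
	 */
	dsb(ish);
}

/* arch/x86/include/asm/tlbflush.h -- illustrative sketch only */
static inline void arch_flush_tlb_batched_pending(struct mm_struct *mm)
{
	/* Keep the full mm flush on x86, as suggested above. */
	flush_tlb_mm(mm);
}

flush_tlb_batched_pending() in mm/rmap.c would then keep its existing
pending/flushed check and call arch_flush_tlb_batched_pending(mm) in
place of the direct flush_tlb_mm(mm) call.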