From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 24 Oct 2023 21:48:42 +0800
Subject: Re: [PATCH] arm64: mm: drop tlb flush operation when clearing the access bit
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Baolin Wang
Content-Type: text/plain; charset="UTF-8"; format=flowed
On 2023/10/24 20:56, Baolin Wang wrote:
> Now ptep_clear_flush_young() is only called by folio_referenced() to
> check if the folio was referenced, and it currently performs a tlb
> flush on the ARM64 architecture. However, the tlb flush can be
> expensive on ARM64 servers, especially on systems with a large number
> of CPUs.
>
> Similar to the x86 architecture, the comments below apply equally to
> ARM64.
> So we can drop the tlb flush operation in ptep_clear_flush_young()
> on the ARM64 architecture to improve performance.
> "
> /* Clearing the accessed bit without a TLB flush
>  * doesn't cause data corruption. [ It could cause incorrect
>  * page aging and the (mistaken) reclaim of hot pages, but the
>  * chance of that should be relatively low. ]
>  *
>  * So as a performance optimization don't flush the TLB when
>  * clearing the accessed bit, it will eventually be flushed by
>  * a context switch or a VM operation anyway. [ In the rare
>  * event of it not getting flushed for a long time the delay
>  * shouldn't really matter because there's no real memory
>  * pressure for swapout to react to. ]
>  */
> "
> Running thpscale shows some obvious improvements in compaction
> latency with this patch:
>
>                                 base             patched
> Amean  fault-both-1    1093.19 (  0.00%)   1084.57 *  0.79%*
> Amean  fault-both-3    2566.22 (  0.00%)   2228.45 * 13.16%*
> Amean  fault-both-5    3591.22 (  0.00%)   3146.73 * 12.38%*
> Amean  fault-both-7    4157.26 (  0.00%)   4113.67 *  1.05%*
> Amean  fault-both-12   6184.79 (  0.00%)   5218.70 * 15.62%*
> Amean  fault-both-18   9103.70 (  0.00%)   7739.71 * 14.98%*
> Amean  fault-both-24  12341.73 (  0.00%)  10684.23 * 13.43%*
> Amean  fault-both-30  15519.00 (  0.00%)  13695.14 * 11.75%*
> Amean  fault-both-32  16189.15 (  0.00%)  14365.73 * 11.26%*
>
>                        base    patched
> Duration User        167.78     161.03
> Duration System     1836.66    1673.01
> Duration Elapsed    2074.58    2059.75
>
> Barry Song submitted a similar patch [1] before, which replaces
> ptep_clear_flush_young_notify() with ptep_clear_young_notify() in
> folio_referenced_one(). However, I'm not sure whether removing the
> tlb flush operation is applicable to every architecture in the
> kernel, so dropping the tlb flush only for ARM64 seems a sensible
> change.
At least x86/s390/riscv/powerpc already do it. Also, I think we could
change pmdp_clear_flush_young_notify() too, since it is the same case
as ptep_clear_flush_young_notify().

>
> Note: I am okay with both approaches. If someone can help to ensure
> that all architectures do not need the tlb flush when clearing the
> accessed bit, then I also think Barry's patch is better (I hope Barry
> can resend his patch).
>
> [1] https://lore.kernel.org/lkml/20220617070555.344368-1-21cnbao@gmail.com/
>
> Signed-off-by: Baolin Wang
> ---
>  arch/arm64/include/asm/pgtable.h | 31 ++++++++++++++++---------------
>  1 file changed, 16 insertions(+), 15 deletions(-)
>
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index 0bd18de9fd97..2979d796ba9d 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -905,21 +905,22 @@ static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
>  static inline int ptep_clear_flush_young(struct vm_area_struct *vma,
>  					 unsigned long address, pte_t *ptep)
>  {
> -	int young = ptep_test_and_clear_young(vma, address, ptep);
> -
> -	if (young) {
> -		/*
> -		 * We can elide the trailing DSB here since the worst that can
> -		 * happen is that a CPU continues to use the young entry in its
> -		 * TLB and we mistakenly reclaim the associated page. The
> -		 * window for such an event is bounded by the next
> -		 * context-switch, which provides a DSB to complete the TLB
> -		 * invalidation.
> -		 */
> -		flush_tlb_page_nosync(vma, address);
> -	}
> -
> -	return young;
> +	/*
> +	 * This comment is borrowed from x86, but applies equally to ARM64:
> +	 *
> +	 * Clearing the accessed bit without a TLB flush doesn't cause
> +	 * data corruption. [ It could cause incorrect page aging and
> +	 * the (mistaken) reclaim of hot pages, but the chance of that
> +	 * should be relatively low. ]
> +	 *
> +	 * So as a performance optimization don't flush the TLB when
> +	 * clearing the accessed bit, it will eventually be flushed by
> +	 * a context switch or a VM operation anyway. [ In the rare
> +	 * event of it not getting flushed for a long time the delay
> +	 * shouldn't really matter because there's no real memory
> +	 * pressure for swapout to react to. ]
> +	 */
> +	return ptep_test_and_clear_young(vma, address, ptep);
>  }
>
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE