From: Barry Song <21cnbao@gmail.com>
Date: Wed, 25 Oct 2023 07:31:50 +0800
Subject: Re: [PATCH] arm64: mm: drop tlb flush operation when clearing the access bit
To: Baolin Wang
Cc: catalin.marinas@arm.com, will@kernel.org, akpm@linux-foundation.org,
 v-songbaohua@oppo.com, yuzhao@google.com, linux-mm@kvack.org,
 linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
On Wed, Oct 25, 2023 at 7:16 AM Barry Song <21cnbao@gmail.com> wrote:
>
> On Tue, Oct 24, 2023 at 8:57 PM Baolin Wang wrote:
> >
> > Now ptep_clear_flush_young() is only called by folio_referenced() to
> > check if the folio was referenced, and it currently performs a tlb flush
> > on the ARM64 architecture. However, the tlb flush can be expensive on
> > ARM64 servers, especially on systems with a large number of CPUs.
> >
> > Similar to the x86 architecture, the comments below apply equally to
> > the ARM64 architecture, so we can drop the tlb flush operation in
> > ptep_clear_flush_young() on ARM64 to improve performance.
> > "
> > /* Clearing the accessed bit without a TLB flush
> >  * doesn't cause data corruption. [ It could cause incorrect
> >  * page aging and the (mistaken) reclaim of hot pages, but the
> >  * chance of that should be relatively low. ]
> >  *
> >  * So as a performance optimization don't flush the TLB when
> >  * clearing the accessed bit, it will eventually be flushed by
> >  * a context switch or a VM operation anyway. [ In the rare
> >  * event of it not getting flushed for a long time the delay
> >  * shouldn't really matter because there's no real memory
> >  * pressure for swapout to react to. ]
> >  */
> > "
> > Running thpscale shows some obvious improvements in compaction
> > latency with this patch:
> >                                      base       patched
> > Amean     fault-both-1      1093.19 (   0.00%)     1084.57 *   0.79%*
> > Amean     fault-both-3      2566.22 (   0.00%)     2228.45 *  13.16%*
> > Amean     fault-both-5      3591.22 (   0.00%)     3146.73 *  12.38%*
> > Amean     fault-both-7      4157.26 (   0.00%)     4113.67 *   1.05%*
> > Amean     fault-both-12     6184.79 (   0.00%)     5218.70 *  15.62%*
> > Amean     fault-both-18     9103.70 (   0.00%)     7739.71 *  14.98%*
> > Amean     fault-both-24    12341.73 (   0.00%)    10684.23 *  13.43%*
> > Amean     fault-both-30    15519.00 (   0.00%)    13695.14 *  11.75%*
> > Amean     fault-both-32    16189.15 (   0.00%)    14365.73 *  11.26%*
> >
> >                       base     patched
> > Duration User       167.78      161.03
> > Duration System    1836.66     1673.01
> > Duration Elapsed   2074.58     2059.75
> >
> > Barry Song submitted a similar patch [1] before, which replaces
> > ptep_clear_flush_young_notify() with ptep_clear_young_notify() in
> > folio_referenced_one(). However, I'm not sure whether removing the
> > tlb flush operation is applicable to every architecture in the kernel,
> > so dropping the tlb flush only for ARM64 seems a sensible change.
> >
> > Note: I am okay with both approaches. If someone can help ensure that
> > all architectures do not need the tlb flush when clearing the accessed
> > bit, then I also think Barry's patch is better (I hope Barry can resend
> > his patch).
> >
> > Thanks!
>
> ptep_clear_flush_young(), with "flush" in its name, clearly says it needs
> a flush. But on arm64 all other code which needs a flush calls other
> variants, for example __flush_tlb_page_nosync():
>
> static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
>                                              struct mm_struct *mm, unsigned long uaddr)
> {
>         __flush_tlb_page_nosync(mm, uaddr);
> }
>
> So it seems folio_referenced() is the only remaining user of
> ptep_clear_flush_young(). That fact makes Baolin's patch look safe now.
>
> But this function still has "flush" in its name, so one day someone might
> call it with the understanding that it will flush the tlb when actually
> it won't. That is a bad smell in the code.
>
> I guess one side effect of not flushing the tlb while clearing the access
> flag is that the hardware won't see the cleared flag in the tlb, so it
> might not set the bit in memory when the page is accessed *again*, even
> though the bit has already been cleared in memory (but not in the tlb).
>
> The next time someone reads the access flag in memory via
> folio_referenced(), he/she will see the page as cold, because the
> hardware lost its chance to set the bit again: it found the tlb already
> holding a true access flag.
>
> But anyway, the tlb is so small that it will soon be flushed by context
> switches and other running code, so it seems we don't actually require
> the access flag to be instantly updated. The time gap in which the access
> flag might miss a new set by hardware seems too short to really affect
> the accuracy of page reclamation, while the cost of the flush is large.
>
> Between (A) a constant flush cost and (B) a very, very occasionally
> reclaimed hot page, B might be the correct choice.
Plus, I doubt B is really going to happen. After a page is promoted to the
head of the lru list or to a new generation, it needs a long time to slide
back to the inactive list tail or to the candidate generation of mglru.
That time should be long enough for the tlb to be flushed. If the page is
really hot, the hardware will get a second, third, fourth etc. opportunity
to set the access flag during the long period in which the page moves back
to the tail, since a really hot page will be accessed multiple times.

>
> > [1] https://lore.kernel.org/lkml/20220617070555.344368-1-21cnbao@gmail.com/
> > Signed-off-by: Baolin Wang
> > ---
> >  arch/arm64/include/asm/pgtable.h | 31 ++++++++++++++++---------------
> >  1 file changed, 16 insertions(+), 15 deletions(-)
> >
> > diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> > index 0bd18de9fd97..2979d796ba9d 100644
> > --- a/arch/arm64/include/asm/pgtable.h
> > +++ b/arch/arm64/include/asm/pgtable.h
> > @@ -905,21 +905,22 @@ static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
> >  static inline int ptep_clear_flush_young(struct vm_area_struct *vma,
> >                                           unsigned long address, pte_t *ptep)
> >  {
> > -       int young = ptep_test_and_clear_young(vma, address, ptep);
> > -
> > -       if (young) {
> > -               /*
> > -                * We can elide the trailing DSB here since the worst that can
> > -                * happen is that a CPU continues to use the young entry in its
> > -                * TLB and we mistakenly reclaim the associated page. The
> > -                * window for such an event is bounded by the next
> > -                * context-switch, which provides a DSB to complete the TLB
> > -                * invalidation.
> > -                */
> > -               flush_tlb_page_nosync(vma, address);
> > -       }
> > -
> > -       return young;
> > +       /*
> > +        * This comment is borrowed from x86, but applies equally to ARM64:
> > +        *
> > +        * Clearing the accessed bit without a TLB flush doesn't cause
> > +        * data corruption. [ It could cause incorrect page aging and
> > +        * the (mistaken) reclaim of hot pages, but the chance of that
> > +        * should be relatively low. ]
> > +        *
> > +        * So as a performance optimization don't flush the TLB when
> > +        * clearing the accessed bit, it will eventually be flushed by
> > +        * a context switch or a VM operation anyway. [ In the rare
> > +        * event of it not getting flushed for a long time the delay
> > +        * shouldn't really matter because there's no real memory
> > +        * pressure for swapout to react to. ]
> > +        */
> > +       return ptep_test_and_clear_young(vma, address, ptep);
> >  }
> >
> >  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> > --
> > 2.39.3
> >
>
> Thanks
> Barry