From: Barry Song <21cnbao@gmail.com>
Date: Thu, 26 Oct 2023 13:54:19 +0800
Subject: Re: [PATCH] arm64: mm: drop tlb flush operation when clearing the access bit
To: Anshuman Khandual
Cc: Baolin Wang, catalin.marinas@arm.com, will@kernel.org, akpm@linux-foundation.org, v-songbaohua@oppo.com, yuzhao@google.com, linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
In-Reply-To: <2f55f62b-cae2-4eee-8572-1b662a170880@arm.com>
On Thu, Oct 26, 2023 at 12:55 PM Anshuman Khandual wrote:
>
>
> On 10/24/23 18:26, Baolin Wang wrote:
> > Now ptep_clear_flush_young() is only called by folio_referenced() to
> > check if the folio was referenced, and now it will call a tlb flush on
> > the ARM64 architecture.
> > However the TLB flush can be expensive on ARM64
> > servers, especially on systems with a large number of CPUs.
>
> A TLB flush would be expensive on *any* platform with a large number of
> CPUs ?
>
> >
> > Similar to the x86 architecture, the comments below apply equally to
> > the ARM64 architecture, so we can drop the TLB flush operation in
> > ptep_clear_flush_young() on ARM64 to improve performance.
> > "
> > /*
> >  * Clearing the accessed bit without a TLB flush
> >  * doesn't cause data corruption. [ It could cause incorrect
> >  * page aging and the (mistaken) reclaim of hot pages, but the
> >  * chance of that should be relatively low. ]
> >  *
> >  * So as a performance optimization don't flush the TLB when
> >  * clearing the accessed bit, it will eventually be flushed by
> >  * a context switch or a VM operation anyway. [ In the rare
> >  * event of it not getting flushed for a long time the delay
> >  * shouldn't really matter because there's no real memory
> >  * pressure for swapout to react to. ]
> >  */
>
> If this always holds, it sounds generic enough for all platforms, so why
> only x86 and arm64 ?
>
> > "
> > Running thpscale shows some obvious improvements in compaction
> > latency with this patch:
> >
> >                                      base              patched
> > Amean fault-both-1     1093.19 (  0.00%)   1084.57 *  0.79%*
> > Amean fault-both-3     2566.22 (  0.00%)   2228.45 * 13.16%*
> > Amean fault-both-5     3591.22 (  0.00%)   3146.73 * 12.38%*
> > Amean fault-both-7     4157.26 (  0.00%)   4113.67 *  1.05%*
> > Amean fault-both-12    6184.79 (  0.00%)   5218.70 * 15.62%*
> > Amean fault-both-18    9103.70 (  0.00%)   7739.71 * 14.98%*
> > Amean fault-both-24   12341.73 (  0.00%)  10684.23 * 13.43%*
> > Amean fault-both-30   15519.00 (  0.00%)  13695.14 * 11.75%*
> > Amean fault-both-32   16189.15 (  0.00%)  14365.73 * 11.26%*
> >
> >                          base    patched
> > Duration User          167.78     161.03
> > Duration System       1836.66    1673.01
> > Duration Elapsed      2074.58    2059.75
>
> Could you please point to the test repo you are running ?
> >
> > Barry Song submitted a similar patch [1] before, which replaces
> > ptep_clear_flush_young_notify() with ptep_clear_young_notify() in
> > folio_referenced_one(). However, I'm not sure whether removing the
> > TLB flush operation is applicable to every architecture in the kernel,
> > so dropping the TLB flush only for ARM64 seems a sensible change.
>
> The reasoning provided here sounds generic when true, hence there seems
> to be no justification for keeping it limited to just arm64 and x86.
> Also, what about pmdp_clear_flush_young_notify() when THP is enabled?
> Should that also skip the TLB flush after clearing the access bit?
> Although arm64 does not enable __HAVE_ARCH_PMDP_CLEAR_YOUNG_FLUSH, it
> instead depends on the generic pmdp_clear_flush_young(), which also does
> a TLB flush via flush_pmd_tlb_range() while clearing the access bit.
>
> >
> > Note: I am okay with both approaches. If someone can help ensure that
> > all architectures do not need the TLB flush when clearing the accessed
> > bit, then I also think Barry's patch is better (I hope Barry can resend
> > his patch).
>
> This paragraph belongs after the '---' below and is not part of the
> commit message.
> >
> > [1] https://lore.kernel.org/lkml/20220617070555.344368-1-21cnbao@gmail.com/
> > Signed-off-by: Baolin Wang
> > ---
> >  arch/arm64/include/asm/pgtable.h | 31 ++++++++++++++++---------------
> >  1 file changed, 16 insertions(+), 15 deletions(-)
> >
> > diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> > index 0bd18de9fd97..2979d796ba9d 100644
> > --- a/arch/arm64/include/asm/pgtable.h
> > +++ b/arch/arm64/include/asm/pgtable.h
> > @@ -905,21 +905,22 @@ static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
> >  static inline int ptep_clear_flush_young(struct vm_area_struct *vma,
> >                                           unsigned long address, pte_t *ptep)
> >  {
> > -       int young = ptep_test_and_clear_young(vma, address, ptep);
> > -
> > -       if (young) {
> > -               /*
> > -                * We can elide the trailing DSB here since the worst that can
> > -                * happen is that a CPU continues to use the young entry in its
> > -                * TLB and we mistakenly reclaim the associated page. The
> > -                * window for such an event is bounded by the next
> > -                * context-switch, which provides a DSB to complete the TLB
> > -                * invalidation.
> > -                */
> > -               flush_tlb_page_nosync(vma, address);
> > -       }
> > -
> > -       return young;
> > +       /*
> > +        * This comment is borrowed from x86, but applies equally to ARM64:
> > +        *
> > +        * Clearing the accessed bit without a TLB flush doesn't cause
> > +        * data corruption. [ It could cause incorrect page aging and
> > +        * the (mistaken) reclaim of hot pages, but the chance of that
> > +        * should be relatively low. ]
> > +        *
> > +        * So as a performance optimization don't flush the TLB when
> > +        * clearing the accessed bit, it will eventually be flushed by
> > +        * a context switch or a VM operation anyway. [ In the rare
> > +        * event of it not getting flushed for a long time the delay
> > +        * shouldn't really matter because there's no real memory
> > +        * pressure for swapout to react to. ]
> > +        */
> > +       return ptep_test_and_clear_young(vma, address, ptep);
> >  }
> >
> >  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>
> There are three distinct concerns here
>
> 1) What are the chances of this misleading the existing hot page reclaim
>    process
> 2) How do secondary MMUs such as the SMMU adapt to the change in mappings
>    without a flush
> 3) Could this break the architecture rule requiring a TLB flush after the
>    access bit is cleared on a page table entry

In terms of all of the above concerns, though 2 is different, being an
issue between the CPU and a non-CPU observer, I feel the kernel has in
fact already dropped the TLB flush, at least for MGLRU; there is no
flush in lru_gen_look_around():

static bool folio_referenced_one(struct folio *folio,
                struct vm_area_struct *vma, unsigned long address, void *arg)
{
        ...
        if (pvmw.pte) {
                if (lru_gen_enabled() &&
                    pte_young(ptep_get(pvmw.pte))) {
                        lru_gen_look_around(&pvmw);
                        referenced++;
                }

                if (ptep_clear_flush_young_notify(vma, address,
                                        pvmw.pte))
                        referenced++;
        }

        return true;
}

and the same is true in walk_pte_range() of vmscan. Linux has been
surviving with all of the above concerns for a while, believe it or
not :-)

Thanks
Barry