From: Barry Song <21cnbao@gmail.com>
Date: Wed, 25 Oct 2023 09:44:41 +0800
Subject: Re: [PATCH] arm64: mm: drop tlb flush operation when clearing the access bit
To: Alistair Popple
Cc: Baolin Wang, catalin.marinas@arm.com, will@kernel.org, akpm@linux-foundation.org, v-songbaohua@oppo.com, yuzhao@google.com, linux-mm@kvack.org, linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
In-Reply-To: <87y1frqz2u.fsf@nvdebian.thelocal>
References: <87y1frqz2u.fsf@nvdebian.thelocal>

On Wed, Oct 25, 2023 at 9:18 AM Alistair Popple wrote:
>
>
> Barry Song <21cnbao@gmail.com> writes:
>
> > On Wed, Oct 25, 2023 at 7:16 AM Barry Song <21cnbao@gmail.com> wrote:
> >>
> >> On Tue, Oct 24, 2023 at 8:57 PM Baolin Wang wrote:
> >> >
> >> > Now ptep_clear_flush_young() is only called by folio_referenced() to
> >> > check if the folio was referenced, and now it will call a tlb flush on
> >> > the ARM64 architecture. However, the tlb flush can be expensive on ARM64
> >> > servers, especially on systems with a large number of CPUs.
> >> >
> >> > Similar to the x86 architecture, the comments below apply equally to
> >> > the ARM64 architecture, so we can drop the tlb flush operation in
> >> > ptep_clear_flush_young() on ARM64 to improve performance.
> >> > "
> >> > /* Clearing the accessed bit without a TLB flush
> >> >  * doesn't cause data corruption. [ It could cause incorrect
> >> >  * page aging and the (mistaken) reclaim of hot pages, but the
> >> >  * chance of that should be relatively low. ]
> >> >  *
> >> >  * So as a performance optimization don't flush the TLB when
> >> >  * clearing the accessed bit, it will eventually be flushed by
> >> >  * a context switch or a VM operation anyway. [ In the rare
> >> >  * event of it not getting flushed for a long time the delay
> >> >  * shouldn't really matter because there's no real memory
> >> >  * pressure for swapout to react to. ]
> >> >  */
> >> > "
> >> > Running thpscale shows some obvious improvements in compaction
> >> > latency with this patch:
> >> >
> >> >                                      base             patched
> >> > Amean  fault-both-1     1093.19 (  0.00%)   1084.57 *  0.79%*
> >> > Amean  fault-both-3     2566.22 (  0.00%)   2228.45 * 13.16%*
> >> > Amean  fault-both-5     3591.22 (  0.00%)   3146.73 * 12.38%*
> >> > Amean  fault-both-7     4157.26 (  0.00%)   4113.67 *  1.05%*
> >> > Amean  fault-both-12    6184.79 (  0.00%)   5218.70 * 15.62%*
> >> > Amean  fault-both-18    9103.70 (  0.00%)   7739.71 * 14.98%*
> >> > Amean  fault-both-24   12341.73 (  0.00%)  10684.23 * 13.43%*
> >> > Amean  fault-both-30   15519.00 (  0.00%)  13695.14 * 11.75%*
> >> > Amean  fault-both-32   16189.15 (  0.00%)  14365.73 * 11.26%*
> >> >
> >> >                            base     patched
> >> > Duration User            167.78      161.03
> >> > Duration System         1836.66     1673.01
> >> > Duration Elapsed        2074.58     2059.75
> >> >
> >> > Barry Song submitted a similar patch [1] before, which replaces
> >> > ptep_clear_flush_young_notify() with ptep_clear_young_notify() in
> >> > folio_referenced_one(). However, I'm not sure whether removing the tlb
> >> > flush operation is applicable to every architecture in the kernel, so
> >> > dropping the tlb flush for ARM64 only seems a sensible change.
> >> >
> >> > Note: I am okay with both approaches. If someone can help to ensure
> >> > that no architecture needs the tlb flush when clearing the accessed
> >> > bit, then I also think Barry's patch is better (I hope Barry can
> >> > resend his patch).
> >> >
> >>
> >> Thanks!
> >>
> >> ptep_clear_flush_young(), with "flush" in its name, clearly says it
> >> needs a flush. But as it happens, on arm64 all other code that needs a
> >> flush calls other variants, for example __flush_tlb_page_nosync():
> >>
> >> static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
> >>                                              struct mm_struct *mm, unsigned long uaddr)
> >> {
> >>         __flush_tlb_page_nosync(mm, uaddr);
> >> }
> >>
> >> So it seems folio_referenced() is the only remaining user of
> >> ptep_clear_flush_young(). That fact makes Baolin's patch look safe now.
> >>
> >> But this function still has "flush" in its name, so one day someone
> >> might call it expecting it to flush the tlb when it actually won't.
> >> That is a bad smell in the code.
> >>
> >> I guess one side effect of not flushing the tlb while clearing the
> >> access flag is that the hardware won't see the cleared flag in the tlb,
> >> so when the page is accessed *again* it might not set the bit in
> >> memory, even though the bit has already been cleared in memory (but not
> >> in the tlb).
> >>
> >> The next time someone reads the access flag in memory via
> >> folio_referenced(), they will see the page as cold, because the
> >> hardware lost a chance to set the bit again: it found the tlb already
> >> holding a true access flag.
> >>
> >> But anyway, the tlb is so small that it will soon be flushed by context
> >> switches and other running code, so it seems we don't actually require
> >> the access flag to be updated instantly. The time window in which the
> >> access flag might miss a new set by the hardware seems too short to
> >> really affect the accuracy of page reclamation, while its cost is
> >> large.
> >>
> >> Between (A) a constant flush cost and (B) a very occasionally reclaimed
> >> hot page, B might be the correct choice.
> >
> > Plus, I doubt B is really going to happen, as after a page is promoted
> > to the head of the lru list or to a new generation, it takes a long time
> > to slide back to the inactive list tail or to the candidate generation
> > of mglru. That time should be large enough for the tlb to be flushed. If
> > the page is really hot, the hardware will get a second, third, fourth
> > etc. opportunity to set the access flag during the long time in which
> > the page moves back to the tail, since a really hot page will be
> > accessed multiple times.
>
> This might not be true if you have external hardware sharing the page
> tables with software through either HMM or hardware supported ATS
> though.
>
> In those cases I think it's much more likely hardware can still be
> accessing the page even after a context switch on the CPU say. So those
> pages will tend to get reclaimed even though hardware is still actively
> using them, which would be quite expensive and I guess could lead to
> thrashing as each page is reclaimed and then immediately faulted back
> in.

I am not quite sure I got your point. Does the external hardware sharing
the CPU's page table have the ability to set the access flag in page table
entries by itself? If yes, I don't see how our approach will hurt, as
folio_referenced() can notify the hardware driver and the driver can flush
its own tlb. If no, I don't see a problem either: since the external
hardware can't set access flags, we have always ignored its references and
only tracked the CPU's accesses, even in the current mainline code, so we
are not getting worse.

Or can the external hardware also see the CPU's TLB? Or is the CPU's tlb
flush also broadcast to the external hardware, so that the hardware sees
the cleared access flag and can then set the access flag in the page table
when it accesses the page? If that is the case, I feel what you said is
true.

> Of course TLB flushes are equally (perhaps even more) expensive for this
> kind of external HW so reducing them would still be beneficial. I wonder
> if there's some way they could be deferred until the page is moved to
> the inactive list say?
>
> >>
> >> > [1] https://lore.kernel.org/lkml/20220617070555.344368-1-21cnbao@gmail.com/
> >> > Signed-off-by: Baolin Wang
> >> > ---
> >> >  arch/arm64/include/asm/pgtable.h | 31 ++++++++++++++++--------------
> >> >  1 file changed, 16 insertions(+), 15 deletions(-)
> >> >
> >> > diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> >> > index 0bd18de9fd97..2979d796ba9d 100644
> >> > --- a/arch/arm64/include/asm/pgtable.h
> >> > +++ b/arch/arm64/include/asm/pgtable.h
> >> > @@ -905,21 +905,22 @@ static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
> >> >  static inline int ptep_clear_flush_young(struct vm_area_struct *vma,
> >> >                                           unsigned long address, pte_t *ptep)
> >> >  {
> >> > -       int young = ptep_test_and_clear_young(vma, address, ptep);
> >> > -
> >> > -       if (young) {
> >> > -               /*
> >> > -                * We can elide the trailing DSB here since the worst that can
> >> > -                * happen is that a CPU continues to use the young entry in its
> >> > -                * TLB and we mistakenly reclaim the associated page. The
> >> > -                * window for such an event is bounded by the next
> >> > -                * context-switch, which provides a DSB to complete the TLB
> >> > -                * invalidation.
> >> > -                */
> >> > -               flush_tlb_page_nosync(vma, address);
> >> > -       }
> >> > -
> >> > -       return young;
> >> > +       /*
> >> > +        * This comment is borrowed from x86, but applies equally to ARM64:
> >> > +        *
> >> > +        * Clearing the accessed bit without a TLB flush doesn't cause
> >> > +        * data corruption. [ It could cause incorrect page aging and
> >> > +        * the (mistaken) reclaim of hot pages, but the chance of that
> >> > +        * should be relatively low. ]
> >> > +        *
> >> > +        * So as a performance optimization don't flush the TLB when
> >> > +        * clearing the accessed bit, it will eventually be flushed by
> >> > +        * a context switch or a VM operation anyway. [ In the rare
> >> > +        * event of it not getting flushed for a long time the delay
> >> > +        * shouldn't really matter because there's no real memory
> >> > +        * pressure for swapout to react to. ]
> >> > +        */
> >> > +       return ptep_test_and_clear_young(vma, address, ptep);
> >> >  }
> >> >
> >> >  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> >> > --
> >> > 2.39.3
> >> >
> >>
> >> Thanks
> >> Barry
>

Thanks
Barry