From: Yu Zhao
Date: Wed, 26 Jul 2023 21:28:52 -0600
Subject: Re: [RFC PATCH v2 2/4] madvise: Use notify-able API to clear and flush page table entries
To: Yin Fengwei
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, akpm@linux-foundation.org, minchan@kernel.org, willy@infradead.org, david@redhat.com, ryan.roberts@arm.com, shy828301@gmail.com
In-Reply-To: <0843fb4d-ab0b-2766-281c-ef32b6031dd7@intel.com>
References: <20230721094043.2506691-1-fengwei.yin@intel.com> <20230721094043.2506691-3-fengwei.yin@intel.com> <05bc90b6-4954-b945-f0d8-373f565c1248@intel.com> <0843fb4d-ab0b-2766-281c-ef32b6031dd7@intel.com>

On Wed, Jul 26, 2023 at 12:21 AM Yin Fengwei wrote:
>
>
> On 7/26/23 13:40, Yu Zhao wrote:
> > On Tue, Jul 25, 2023 at 10:44 PM Yin Fengwei wrote:
> >>
> >>
> >> On 7/26/23 11:26, Yu Zhao wrote:
> >>> On Tue, Jul 25, 2023 at 8:49 PM Yin Fengwei wrote:
> >>>>
> >>>>
> >>>> On 7/25/23 13:55, Yu Zhao wrote:
> >>>>> On Fri, Jul 21, 2023 at 3:41 AM Yin Fengwei wrote:
> >>>>>>
> >>>>>> Currently, in function madvise_cold_or_pageout_pte_range(), the
> >>>>>> young bit of pte/pmd is cleared without notifying the subscriber.
> >>>>>>
> >>>>>> Use the notify-able API to make sure the subscriber is signaled about
> >>>>>> the young bit clearing.
> >>>>>>
> >>>>>> Signed-off-by: Yin Fengwei
> >>>>>> ---
> >>>>>>  mm/madvise.c | 18 ++----------------
> >>>>>>  1 file changed, 2 insertions(+), 16 deletions(-)
> >>>>>>
> >>>>>> diff --git a/mm/madvise.c b/mm/madvise.c
> >>>>>> index f12933ebcc24..b236e201a738 100644
> >>>>>> --- a/mm/madvise.c
> >>>>>> +++ b/mm/madvise.c
> >>>>>> @@ -403,14 +403,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
> >>>>>>                         return 0;
> >>>>>>                 }
> >>>>>>
> >>>>>> -               if (pmd_young(orig_pmd)) {
> >>>>>> -                       pmdp_invalidate(vma, addr, pmd);
> >>>>>> -                       orig_pmd = pmd_mkold(orig_pmd);
> >>>>>> -
> >>>>>> -                       set_pmd_at(mm, addr, pmd, orig_pmd);
> >>>>>> -                       tlb_remove_pmd_tlb_entry(tlb, pmd, addr);
> >>>>>> -               }
> >>>>>> -
> >>>>>> +               pmdp_clear_flush_young_notify(vma, addr, pmd);
> >>>>>>                 folio_clear_referenced(folio);
> >>>>>>                 folio_test_clear_young(folio);
> >>>>>>                 if (folio_test_active(folio))
> >>>>>> @@ -496,14 +489,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
> >>>>>>
> >>>>>>                 VM_BUG_ON_FOLIO(folio_test_large(folio), folio);
> >>>>>>
> >>>>>> -               if (pte_young(ptent)) {
> >>>>>> -                       ptent = ptep_get_and_clear_full(mm, addr, pte,
> >>>>>> -                                                       tlb->fullmm);
> >>>>>> -                       ptent = pte_mkold(ptent);
> >>>>>> -                       set_pte_at(mm, addr, pte, ptent);
> >>>>>> -                       tlb_remove_tlb_entry(tlb, pte, addr);
> >>>>>> -               }
> >>>>>> -
> >>>>>> +                       ptep_clear_flush_young_notify(vma, addr, pte);
> >>>>>
> >>>>> These two places are tricky.
> >>>>>
> >>>>> I agree there is a problem here, i.e., we are not consulting the mmu
> >>>>> notifier. In fact, we do pageout on VMs on ChromeOS, and it's been a
> >>>>> known problem to me for a while (not a high priority one).
> >>>>>
> >>>>> tlb_remove_tlb_entry() is a batched flush; ptep_clear_flush_young() is
> >>>>> not. But, on x86, we might see a performance improvement since
> >>>>> ptep_clear_flush_young() doesn't flush the TLB at all. On ARM, there might
> >>>>> be regressions though.
> >>>>>
> >>>>> I'd go with ptep_clear_young_notify(), but IIRC, Minchan mentioned he
> >>>>> prefers flush. So I'll let him chime in.
> >>>> I am OK with either way, even though the no-flush way here is more efficient
> >>>> for arm64. Let's wait for Minchan's comment.
> >>>
> >>> Yes, and I don't think there would be any "negative" consequences
> >>> without tlb flushes when clearing the A-bit.
> >>>
> >>>>> If we do end up with ptep_clear_young_notify(), please remove
> >>>>> mmu_gather -- it should have been done in this patch.
> >>>>
> >>>> I suppose "remove mmu_gather" means to trigger the TLB flush operation in
> >>>> a batched way to make sure no stale data stays in the TLB for a long time
> >>>> on the arm64 platform.
> >>>
> >>> In madvise_cold_or_pageout_pte_range(), we only need struct
> >>> mmu_gather *tlb because of tlb_remove_pmd_tlb_entry(), i.e., flushing
> >>> the TLB after clearing the A-bit.
> >>> There is no correctness issue, e.g., potential
> >>> data corruption, involved there.
> >>
> >> From https://lore.kernel.org/lkml/20181029105515.GD14127@arm.com/,
> >> the reason that arm64 didn't drop the whole TLB flush in ptep_clear_flush_young()
> >> is to prevent stale data in the TLB. I suppose there is no correctness issue
> >> there either.
> >>
> >> So why is keeping stale data in the TLB in madvise_cold_or_pageout_pte_range() fine?
> >
> > Sorry, I'm not sure I understand your question here.
> Oh. Sorry for the confusion. I will explain my understanding and
> question in detail.
>
> >
> > In this patch, you removed tlb_remove_tlb_entry(), so we don't need
> > struct mmu_gather *tlb any more.
> Yes. You are right.
>
> >
> > If you are asking why I prefer ptep_clear_young_notify() (no flush),
> > which also doesn't need tlb_remove_tlb_entry(), then the answer is
> > that the TLB size doesn't scale like DRAM does: the gap has been
> > growing exponentially. So there is no way the TLB can hold stale entries
> > long enough to cause a measurable effect on the A-bit. This isn't a
> > conjecture -- it's been proven conversely: we encountered bugs (almost
> > every year) caused by missing TLB flushes and resulting in data
> > corruption. They were never easy to reproduce, meaning stale entries
> > never stayed long in the TLB.
>
> When I read https://lore.kernel.org/lkml/20181029105515.GD14127@arm.com/,
> my understanding is that arm64 still keeps something in ptep_clear_flush_young().
> The reason is to finish the TLB flush by the next context switch, to make sure
> no stale entries in the TLB survive across the next context switch.
>
> Should we still keep the same behavior (no stale entries in the TLB across the
> next context switch) for madvise_cold_or_pageout_pte_range()?
>
> So two versions work for me (I assume we should keep the same behavior):
>    1. using the xxx_flush_xxx() version, but with a possible regression on arm64.
>    2. using the non-flush version, but doing a batched TLB flush.

I see. I don't think we need to flush at all, i.e., the no-flush
version *without* a batched TLB flush.

So far nobody can actually prove that flushing the TLB while clearing the
A-bit is beneficial, not even in theory :)
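
For illustration, a minimal sketch of what the no-flush variant could look
like at the two call sites, assuming pmdp_clear_young_notify() and
ptep_clear_young_notify() from include/linux/mmu_notifier.h replace the
flush-and-notify calls and the mmu_gather plumbing is dropped:

    /*
     * Sketch only, not the posted patch: the two call sites in
     * madvise_cold_or_pageout_pte_range() with the no-flush variant.
     * These helpers clear the A-bit and call the clear_young mmu
     * notifier without flushing the TLB, so struct mmu_gather and the
     * tlb_remove_*_tlb_entry() calls are no longer needed here.
     */

    /* pmd-mapped THP path (first hunk above) */
    pmdp_clear_young_notify(vma, addr, pmd);
    folio_clear_referenced(folio);
    folio_test_clear_young(folio);

    /* pte path (second hunk above) */
    ptep_clear_young_notify(vma, addr, pte);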