From mboxrd@z Thu Jan  1 00:00:00 1970
From: Yu Zhao <yuzhao@google.com>
Date: Fri, 7 Jul 2023 20:06:49 -0600
Subject: Re: [PATCH v2 0/5] Avoid building lrugen page table walk code
To: Aneesh Kumar K V
Cc: linux-mm@kvack.org, akpm@linux-foundation.org
References: <20230706062044.816068-1-aneesh.kumar@linux.ibm.com> <47066176-bd93-55dd-c2fa-002299d9e034@linux.ibm.com>
In-Reply-To: <47066176-bd93-55dd-c2fa-002299d9e034@linux.ibm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
On Fri, Jul 7, 2023 at 7:24 AM Aneesh Kumar K V wrote:
>
> On 7/7/23 1:27 PM, Yu Zhao wrote:
> > On Thu, Jul 6, 2023 at 12:21 AM Aneesh Kumar K.V
> > wrote:
> >>
> >> This patchset avoids building changes added by commit bd74fdaea146 ("mm:
> >> multi-gen LRU: support page table walks") on platforms that don't support
> >> hardware atomic updates of access bits.
> >>
> >> Aneesh Kumar K.V (5):
> >>   mm/mglru: Create a new helper iterate_mm_list_walk
> >>   mm/mglru: Move Bloom filter code around
> >>   mm/mglru: Move code around to make future patch easy
> >>   mm/mglru: move iterate_mm_list_walk Helper
> >>   mm/mglru: Don't build multi-gen LRU page table walk code on
> >>     architecture not supported
> >>
> >>  arch/Kconfig               |   3 +
> >>  arch/arm64/Kconfig         |   1 +
> >>  arch/x86/Kconfig           |   1 +
> >>  include/linux/memcontrol.h |   2 +-
> >>  include/linux/mm_types.h   |  10 +-
> >>  include/linux/mmzone.h     |  12 +-
> >>  kernel/fork.c              |   2 +-
> >>  mm/memcontrol.c            |   2 +-
> >>  mm/vmscan.c                | 955 +++++++++++++++++++------------------
> >>  9 files changed, 528 insertions(+), 460 deletions(-)
> >
> > 1. There is no need for a new Kconfig -- the condition is simply
> > defined(CONFIG_LRU_GEN) && !defined(arch_has_hw_pte_young)
> >
> > 2. The best practice to disable static functions is not by macros but:
> >
> > static int const_cond(void)
> > {
> >         return 1;
> > }
> >
> > int main(void)
> > {
> >         int a = const_cond();
> >
> >         if (a)
> >                 return 0;
> >
> >         /* the compiler doesn't generate code for static funcs below */
> >         static_func_1();
> >         ...
> >         static_func_N();
> >
> > LTO also optimizes external functions. But not everyone uses it. So we
> > still need macros for them, and of course data structures.
> >
> > 3. In 4/5, you have:
> >
> > @@ -461,6 +461,7 @@ enum {
> >  struct lru_gen_mm_state {
> >         /* set to max_seq after each iteration */
> >         unsigned long seq;
> > +#ifdef CONFIG_LRU_TASK_PAGE_AGING
> >         /* where the current iteration continues after */
> >         struct list_head *head;
> >         /* where the last iteration ended before */
> > @@ -469,6 +470,11 @@ struct lru_gen_mm_state {
> >         unsigned long *filters[NR_BLOOM_FILTERS];
> >         /* the mm stats for debugging */
> >         unsigned long stats[NR_HIST_GENS][NR_MM_STATS];
> > +#else
> > +       /* protect the seq update above */
> > +       /* May be we can use lruvec->lock? */
> > +       spinlock_t lock;
> > +#endif
> > };
> >
> > The answer is yes, and not only that, we don't need lru_gen_mm_state at all.
> >
> > I'm attaching a patch that fixes all above. If you want to post it,
> > please feel free -- fully test it please, since I didn't. Otherwise I
> > can ask TJ to help make this work for you.
> >
> > $ git diff --stat
> >  include/linux/memcontrol.h |   2 +-
> >  include/linux/mm_types.h   |  12 +-
> >  include/linux/mmzone.h     |   2 +
> >  kernel/bounds.c            |   6 +-
> >  kernel/fork.c              |   2 +-
> >  mm/vmscan.c                | 169 +++++++++++++++++++--------
> >  6 files changed, 137 insertions(+), 56 deletions(-)
> >
> > On x86:
> >
> > $ ./scripts/bloat-o-meter mm/vmscan.o.old mm/vmscan.o
> > add/remove: 24/34 grow/shrink: 2/7 up/down: 966/-8716 (-7750)
> > Function                       old     new   delta
> > ...
> > should_skip_vma                206       -    -206
> > get_pte_pfn                    261       -    -261
> > lru_gen_add_mm                 323       -    -323
> > lru_gen_seq_show              1710    1370    -340
> > lru_gen_del_mm                 432       -    -432
> > reset_batch_size               572       -    -572
> > try_to_inc_max_seq            2947    1635   -1312
> > walk_pmd_range_locked         1508       -   -1508
> > walk_pud_range                3238       -   -3238
> > Total: Before=99449, After=91699, chg -7.79%
> >
> > $ objdump -S mm/vmscan.o | grep -A 20 ":"
> > 000000000000a350 :
> > {
> >     a350: e8 00 00 00 00       call   a355
> >     a355: 55                   push   %rbp
> >     a356: 48 89 e5             mov    %rsp,%rbp
> >     a359: 41 57                push   %r15
> >     a35b: 41 56                push   %r14
> >     a35d: 41 55                push   %r13
> >     a35f: 41 54                push   %r12
> >     a361: 53                   push   %rbx
> >     a362: 48 83 ec 70          sub    $0x70,%rsp
> >     a366: 41 89 d4             mov    %edx,%r12d
> >     a369: 49 89 f6             mov    %rsi,%r14
> >     a36c: 49 89 ff             mov    %rdi,%r15
> >             spin_lock_irq(&lruvec->lru_lock);
> >     a36f: 48 8d 5f 50          lea    0x50(%rdi),%rbx
> >     a373: 48 89 df             mov    %rbx,%rdi
> >     a376: e8 00 00 00 00       call   a37b
> >             success = max_seq == lrugen->max_seq;
> >     a37b: 49 8b 87 88 00 00 00 mov    0x88(%r15),%rax
> >     a382: 4c 39 f0             cmp    %r14,%rax
>
> For the below diff:
>
> @@ -4497,14 +4547,16 @@ static bool try_to_inc_max_seq(struct lruvec *lruvec, unsigned long max_seq,
>         struct lru_gen_mm_walk *walk;
>         struct mm_struct *mm = NULL;
>         struct lru_gen_folio *lrugen = &lruvec->lrugen;
> +       struct lru_gen_mm_state *mm_state = get_mm_state(lruvec);
>
>         VM_WARN_ON_ONCE(max_seq > READ_ONCE(lrugen->max_seq));
>
> +       if (!mm_state)
> +               return inc_max_seq(lruvec, max_seq, can_swap, force_scan);
> +
>         /* see the comment in iterate_mm_list() */
> -       if (max_seq <= READ_ONCE(lruvec->mm_state.seq)) {
> -               success = false;
> -               goto done;
> -       }
> +       if (max_seq <= READ_ONCE(mm_state->seq))
> +               return false;
>
>         /*
>          * If the hardware doesn't automatically set the accessed bit, fallback
> @@ -4534,8 +4586,10 @@ static bool try_to_inc_max_seq(struct lruvec *lruvec, unsigned long max_seq,
>                 walk_mm(lruvec, mm, walk);
>         } while (mm);
>  done:
> -       if (success)
> -               inc_max_seq(lruvec, can_swap, force_scan);
> +       if (success) {
> +               success = inc_max_seq(lruvec, max_seq, can_swap, force_scan);
> +               WARN_ON_ONCE(!success);
> +       }
>
>         return success;
> }
>
> We did discuss a possible race that can happen if we allow multiple callers to hit inc_max_seq at the same time:
> inc_max_seq drops the lru_lock and restarts the loop at the previous value of type. I.e., if we want to do the above,
> we might also need the below?

Yes, you are right. In fact, there is an existing bug here: even
though max_seq can't change without the patch, min_seq still can, and
if it does, the initial condition for inc_min_seq() to keep looping,
i.e., nr_gens at max, doesn't hold anymore.

        for (type = ANON_AND_FILE - 1; type >= 0; type--) {
                if (get_nr_gens(lruvec, type) != MAX_NR_GENS)
                        continue;

This only affects the debugfs interface (force_scan=true), which is
probably why it was never reported.
> modified   mm/vmscan.c
> @@ -4368,6 +4368,7 @@ void inc_max_seq(struct lruvec *lruvec, bool can_swap, bool force_scan)
>         int type, zone;
>         struct lru_gen_struct *lrugen = &lruvec->lrugen;
>
> +retry:
>         spin_lock_irq(&lruvec->lru_lock);
>
>         VM_WARN_ON_ONCE(!seq_is_valid(lruvec));
> @@ -4381,7 +4382,7 @@ void inc_max_seq(struct lruvec *lruvec, bool can_swap, bool force_scan)
>                 while (!inc_min_seq(lruvec, type, can_swap)) {
>                         spin_unlock_irq(&lruvec->lru_lock);
>                         cond_resched();
> -                       spin_lock_irq(&lruvec->lru_lock);
> +                       goto retry;
>                 }
>         }
>
> I also found that allowing only one CPU to increment the max_seq value, and making other requests
> with the same max_seq return false, is also useful in performance runs. I.e., we need an equivalent of this?
>
> +       if (max_seq <= READ_ONCE(mm_state->seq))
> +               return false;

Yes, but the condition should be "<" -- ">" is a bug, which was
asserted in try_to_inc_max_seq().