From: Kairui Song <ryncsn@gmail.com>
Date: Mon, 15 Jan 2024 01:42:12 +0800
Subject: Re: [PATCH v2 1/3] mm, lru_gen: batch update counters on aging
To: Wei Xu
Cc: linux-mm@kvack.org, Andrew Morton, Yu Zhao, Chris Li,
 Matthew Wilcox, linux-kernel@vger.kernel.org, Greg Thelen
References: <20240111183321.19984-1-ryncsn@gmail.com>
 <20240111183321.19984-2-ryncsn@gmail.com>

On Sat, Jan 13, 2024 at 5:01 AM Wei Xu wrote:
>
> On Thu, Jan 11, 2024 at 10:33 AM Kairui Song wrote:
> >
> > From: Kairui Song
> >
> > When lru_gen is aging, it updates mm counters page by page, which
> > causes higher overhead if aging happens frequently or a lot of
> > pages in one generation are being moved. Optimize this by doing
> > the counter updates in batches.
> >
> > Although most __mod_*_state helpers have their own caches, the
> > overhead is still observable.
> >
> > Tested in a 4G memcg on an EPYC 7K62 with:
> >
> >   memcached -u nobody -m 16384 -s /tmp/memcached.socket \
> >     -a 0766 -t 16 -B binary &
> >
> >   memtier_benchmark -S /tmp/memcached.socket \
> >     -P memcache_binary -n allkeys \
> >     --key-minimum=1 --key-maximum=16000000 -d 1024 \
> >     --ratio=1:0 --key-pattern=P:P -c 2 -t 16 --pipeline 8 -x 6
> >
> > Average result of 18 test runs:
> >
> > Before: 44017.78 Ops/sec
> > After:  44687.08 Ops/sec (+1.5%)
>
> I see the same performance numbers quoted in all 3 patches. How much
> performance improvement does this particular patch provide (and the
> same for the other 2 patches)? If, as the cover letter says, most of
> the performance benefit comes from patch 3 (prefetching), can we
> just take that patch alone and avoid the extra complexity?

Hi Wei,

Indeed, these are two different optimization techniques. I can
reorder the series so that prefetching comes first, since it should
be more acceptable on its own, and the other optimizations can come
later. I will also add standalone numbers for the improvement from
the batched updates.

> > Signed-off-by: Kairui Song
> > ---
> >  mm/vmscan.c | 64 +++++++++++++++++++++++++++++++++++++++++++++--------
> >  1 file changed, 55 insertions(+), 9 deletions(-)
> >
> > diff --git a/mm/vmscan.c b/mm/vmscan.c
> > index 4f9c854ce6cc..185d53607c7e 100644
> > --- a/mm/vmscan.c
> > +++ b/mm/vmscan.c
> > @@ -3113,9 +3113,47 @@ static int folio_update_gen(struct folio *folio, int gen)
> >         return ((old_flags & LRU_GEN_MASK) >> LRU_GEN_PGOFF) - 1;
> >  }
> >
> > +/*
> > + * Update LRU gen in batch for each lru_gen LRU list. The batch is limited to
> > + * one gen / type / zone level LRU. The batch is applied after scanning of one
> > + * LRU list has finished or been aborted.
> > + */
> > +struct gen_update_batch {
> > +       int delta[MAX_NR_GENS];
> > +};
> > +
> > +static void lru_gen_update_batch(struct lruvec *lruvec, int type, int zone,
> > +                                struct gen_update_batch *batch)
> > +{
> > +       int gen;
> > +       int promoted = 0;
> > +       struct lru_gen_folio *lrugen = &lruvec->lrugen;
> > +       enum lru_list lru = type ? LRU_INACTIVE_FILE : LRU_INACTIVE_ANON;
> > +
> > +       for (gen = 0; gen < MAX_NR_GENS; gen++) {
> > +               int delta = batch->delta[gen];
> > +
> > +               if (!delta)
> > +                       continue;
> > +
> > +               WRITE_ONCE(lrugen->nr_pages[gen][type][zone],
> > +                          lrugen->nr_pages[gen][type][zone] + delta);
> > +
> > +               if (lru_gen_is_active(lruvec, gen))
> > +                       promoted += delta;
> > +       }
> > +
> > +       if (promoted) {
> > +               __update_lru_size(lruvec, lru, zone, -promoted);
> > +               __update_lru_size(lruvec, lru + LRU_ACTIVE, zone, promoted);
> > +       }
> > +}
> > +
> >  /* protect pages accessed multiple times through file descriptors */
> > -static int folio_inc_gen(struct lruvec *lruvec, struct folio *folio, bool reclaiming)
> > +static int folio_inc_gen(struct lruvec *lruvec, struct folio *folio,
> > +                        bool reclaiming, struct gen_update_batch *batch)
> >  {
> > +       int delta = folio_nr_pages(folio);
> >         int type = folio_is_file_lru(folio);
> >         struct lru_gen_folio *lrugen = &lruvec->lrugen;
> >         int new_gen, old_gen = lru_gen_from_seq(lrugen->min_seq[type]);
> > @@ -3138,7 +3176,8 @@ static int folio_inc_gen(struct lruvec *lruvec, struct folio *folio, bool reclai
> >                 new_flags |= BIT(PG_reclaim);
> >         } while (!try_cmpxchg(&folio->flags, &old_flags, new_flags));
> >
> > -       lru_gen_update_size(lruvec, folio, old_gen, new_gen);
> > +       batch->delta[old_gen] -= delta;
> > +       batch->delta[new_gen] += delta;
> >
> >         return new_gen;
> >  }
> > @@ -3672,6 +3711,7 @@ static bool inc_min_seq(struct lruvec *lruvec, int type, bool can_swap)
> >  {
> >         int zone;
> >         int remaining = MAX_LRU_BATCH;
> > +       struct gen_update_batch batch = { };
>
> Can this batch variable be moved away from the stack? We (Google)
> use a much larger value for MAX_NR_GENS internally. This large stack
> allocation from "struct gen_update_batch batch" can significantly
> increase the risk of stack overflow for our use cases.
>

Thanks for the info. Do you have any suggestions about where we
should put the batch info? I thought about merging it into
lru_gen_mm_walk, but that depends on kzalloc and is not usable on the
slow allocation path, so the overhead could outweigh the benefit in
many cases. I am not sure whether something like a preallocated
per-CPU cache could avoid all of these issues.
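
To make the per-CPU idea concrete, a rough and untested sketch is
below. Only struct gen_update_batch and lru_gen_update_batch() come
from the patch above; the helper names lru_gen_get_batch() and
lru_gen_put_batch() are hypothetical:

	/*
	 * Sketch only: one preallocated batch per CPU instead of an
	 * on-stack struct gen_update_batch.
	 */
	#include <linux/percpu.h>
	#include <linux/string.h>

	static DEFINE_PER_CPU(struct gen_update_batch, lru_gen_batch);

	/* Grab this CPU's batch; get_cpu_ptr() disables preemption. */
	static struct gen_update_batch *lru_gen_get_batch(void)
	{
		struct gen_update_batch *batch = get_cpu_ptr(&lru_gen_batch);

		/* Start from a clean slate for this LRU list scan. */
		memset(batch->delta, 0, sizeof(batch->delta));
		return batch;
	}

	/* Flush accumulated deltas and release the per-CPU reference. */
	static void lru_gen_put_batch(struct lruvec *lruvec, int type, int zone,
				      struct gen_update_batch *batch)
	{
		lru_gen_update_batch(lruvec, type, zone, batch);
		put_cpu_ptr(&lru_gen_batch);
	}

A caller such as inc_min_seq() could then replace the on-stack
variable with something like:

	struct gen_update_batch *batch = lru_gen_get_batch();

	/* ... scan, calling folio_inc_gen(lruvec, folio, false, batch) ... */

	lru_gen_put_batch(lruvec, type, zone, batch);

This keeps the MAX_NR_GENS-sized array off the stack no matter how
large MAX_NR_GENS is. If the aging paths that would use it already
run with the lruvec spinlock held, preemption is disabled there
anyway, so the get_cpu_ptr()/put_cpu_ptr() pair would mostly document
the requirement rather than add cost.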