From: Barry Song <21cnbao@gmail.com>
Date: Tue, 19 Apr 2022 16:25:46 +1200
Subject: Re: [PATCH v10 06/14] mm: multi-gen LRU: minimal implementation
To: Yu Zhao
Cc: Stephen Rothwell, Linux-MM, Andi Kleen, Andrew Morton, Aneesh Kumar,
 Catalin Marinas, Dave Hansen, Hillf Danton, Jens Axboe, Jesse Barnes,
 Johannes Weiner, Jonathan Corbet, Linus Torvalds, Matthew Wilcox,
 Mel Gorman, Michael Larabel, Michal Hocko, Mike Rapoport, Rik van Riel,
 Vlastimil Babka, Will Deacon, Ying Huang, LAK, Linux Doc Mailing List,
 LKML, Kernel Page Reclaim v2, x86, Brian Geffon, Jan Alexander Steffens,
 Oleksandr Natalenko, Steven Barrett, Suleiman Souhlal, Daniel Byrne,
 Donald Carr, Holger Hoffstätte, Konstantin Kharlamov, Shuang Zhai,
 Sofia Trinh, Vaibhav Jain
References: <20220407031525.2368067-1-yuzhao@google.com> <20220407031525.2368067-7-yuzhao@google.com>
On Tue, Apr 19, 2022 at 12:54 PM Yu Zhao wrote:
>
> On Mon, Apr 18, 2022 at 3:58 AM Barry Song <21cnbao@gmail.com> wrote:
> >
> > On Thu, Apr 7, 2022 at 3:16 PM Yu Zhao wrote:
> > >
> > > To avoid confusion, the terms "promotion" and "demotion" will be
> > > applied to the multi-gen LRU, as a new convention; the terms
> > > "activation" and "deactivation" will be applied to the active/inactive
> > > LRU, as usual.
> > >
> > > The aging produces young generations. Given an lruvec, it increments
> > > max_seq when max_seq-min_seq+1 approaches MIN_NR_GENS. The aging
> > > promotes hot pages to the youngest generation when it finds them
> > > accessed through page tables; the demotion of cold pages happens
> > > consequently when it increments max_seq. The aging has the complexity
> > > O(nr_hot_pages), since it is only interested in hot pages. Promotion
> > > in the aging path does not require any LRU list operations, only the
> > > updates of the gen counter and lrugen->nr_pages[]; demotion, unless as
> > > the result of the increment of max_seq, requires LRU list operations,
> > > e.g., lru_deactivate_fn().
> > >
> > > The eviction consumes old generations. Given an lruvec, it increments
> > > min_seq when the lists indexed by min_seq%MAX_NR_GENS become empty. A
> > > feedback loop modeled after the PID controller monitors refaults over
> > > anon and file types and decides which type to evict when both types
> > > are available from the same generation.
> > >
> > > Each generation is divided into multiple tiers. Tiers represent
> > > different ranges of numbers of accesses through file descriptors. A
> > > page accessed N times through file descriptors is in tier
> > > order_base_2(N). Tiers do not have dedicated lrugen->lists[], only
> > > bits in folio->flags. In contrast to moving across generations, which
> > > requires the LRU lock, moving across tiers only involves operations on
> > > folio->flags. The feedback loop also monitors refaults over all tiers
> > > and decides when to protect pages in which tiers (N>1), using the
> > > first tier (N=0,1) as a baseline. The first tier contains single-use
> > > unmapped clean pages, which are most likely the best choices. The
> > > eviction moves a page to the next generation, i.e., min_seq+1, if the
> > > feedback loop decides so. This approach has the following advantages:
> > > 1. It removes the cost of activation in the buffered access path by
> > >    inferring whether pages accessed multiple times through file
> > >    descriptors are statistically hot and thus worth protecting in the
> > >    eviction path.
> > > 2. It takes pages accessed through page tables into account and avoids
> > >    overprotecting pages accessed multiple times through file
> > >    descriptors. (Pages accessed through page tables are in the first
> > >    tier, since N=0.)
> > > 3. More tiers provide better protection for pages accessed more than
> > >    twice through file descriptors, when under heavy buffered I/O
> > >    workloads.
> >
> > Hi Yu,
> >
> > As I told you before, I tried to change the current LRU (not MGLRU) by
> > only promoting unmapped file pages to the head of the inactive list
> > rather than the head of the active list on their second access:
> > https://lore.kernel.org/lkml/CAGsJ_4y=TkCGoWWtWSAptW4RDFUEBeYXwfwu=fUFvV4Sa4VA4A@mail.gmail.com/
> > I have already seen some very good results in testing from the decrease
> > in CPU consumption of kswapd and direct reclaim.
>
> Glad to hear. I suspected you'd see some good results with that change :)
>
> > In mglru, it seems "twice" isn't a concern at all; an unmapped file
> > page accessed twice is not much different from one accessed once, as
> > you only begin to increase refs from the third access:
>
> refs are *additional* accesses:
> PG_referenced: N=1
> PG_referenced+PG_workingset: N=2
> PG_referenced+PG_workingset+refs: N=3,4,5
>
> When N=2, order_base_2(N)=1. So pages accessed twice are in the second
> tier. Therefore they are "different".
>
> More details [1]:
>
> +/*
> + * Each generation is divided into multiple tiers. Tiers represent different
> + * ranges of numbers of accesses through file descriptors. A page accessed N
> + * times through file descriptors is in tier order_base_2(N). A page in the
> + * first tier (N=0,1) is marked by PG_referenced unless it was faulted in
> + * through page tables or read ahead. A page in any other tier (N>1) is marked
> + * by PG_referenced and PG_workingset.
> + *
> + * In contrast to moving across generations which requires the LRU lock, moving
> + * across tiers only requires operations on folio->flags and therefore has a
> + * negligible cost in the buffered access path. In the eviction path,
> + * comparisons of refaulted/(evicted+protected) from the first tier and the
> + * rest infer whether pages accessed multiple times through file descriptors
> + * are statistically hot and thus worth protecting.
> + *
> + * MAX_NR_TIERS is set to 4 so that the multi-gen LRU can support twice the
> + * number of categories of the active/inactive LRU when keeping track of
> + * accesses through file descriptors. It requires MAX_NR_TIERS-2 additional
> + * bits in folio->flags.
> + */
> +#define MAX_NR_TIERS 4U
>
> [1] https://lore.kernel.org/linux-mm/20220407031525.2368067-7-yuzhao@google.com/
>
> > +static void folio_inc_refs(struct folio *folio)
> > +{
> > +	unsigned long refs;
> > +	unsigned long old_flags, new_flags;
> > +
> > +	if (folio_test_unevictable(folio))
> > +		return;
> > +
> > +	/* see the comment on MAX_NR_TIERS */
> > +	do {
> > +		new_flags = old_flags = READ_ONCE(folio->flags);
> > +
> > +		if (!(new_flags & BIT(PG_referenced))) {
> > +			new_flags |= BIT(PG_referenced);
> > +			continue;
> > +		}
> > +
> > +		if (!(new_flags & BIT(PG_workingset))) {
> > +			new_flags |= BIT(PG_workingset);
> > +			continue;
> > +		}
> > +
> > +		refs = new_flags & LRU_REFS_MASK;
> > +		refs = min(refs + BIT(LRU_REFS_PGOFF), LRU_REFS_MASK);
> > +
> > +		new_flags &= ~LRU_REFS_MASK;
> > +		new_flags |= refs;
> > +	} while (new_flags != old_flags &&
> > +		 cmpxchg(&folio->flags, old_flags, new_flags) != old_flags);
> > +}
> >
> > So my question is: what makes you so confident that "twice" doesn't need
> > any special treatment, while the vanilla kernel promotes this kind of
> > page to the head of the active list instead? I am asking this because I
> > am considering reclaiming unmapped file pages which are only accessed
> > twice when they get to the tail of the inactive list.
>
> Per above, pages accessed twice are in their own tier. Hope this clarifies it.

Yep, I found the trick here; the "+1" is the magic behind the code, haha.

+static int folio_lru_tier(struct folio *folio)
+{
+	int refs;
+	unsigned long flags = READ_ONCE(folio->flags);
+
+	refs = (flags & LRU_REFS_FLAGS) == LRU_REFS_FLAGS ?
+	       ((flags & LRU_REFS_MASK) >> LRU_REFS_PGOFF) + 1 : 0;
+
+	return lru_tier_from_refs(refs);
+}
+

TBH, this might need some comments; otherwise it is easy to misunderstand
that protection only begins from the 3rd access :-)

Thanks
barry