From mboxrd@z Thu Jan 1 00:00:00 1970
From: Yosry Ahmed <yosryahmed@google.com>
Date: Thu, 9 Mar 2023 01:39:02 -0800
Subject: Re: [PATCH v2 2/3] mm: vmscan: refactor updating reclaimed pages in reclaim_state
To: Alexander Viro, "Darrick J. Wong", Christoph Lameter, David Rientjes, Joonsoo Kim, Vlastimil Babka, Roman Gushchin, Hyeonggon Yoo <42.hyeyoo@gmail.com>, "Matthew Wilcox (Oracle)", Miaohe Lin, David Hildenbrand, Johannes Weiner, Peter Xu, NeilBrown, Shakeel Butt, Michal Hocko, Yu Zhao
Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org, linux-xfs@vger.kernel.org, linux-mm@kvack.org
In-Reply-To: <20230309093109.3039327-3-yosryahmed@google.com>
References: <20230309093109.3039327-1-yosryahmed@google.com> <20230309093109.3039327-3-yosryahmed@google.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"

On Thu, Mar 9, 2023 at 1:31 AM Yosry Ahmed <yosryahmed@google.com> wrote:
>
> During reclaim, we keep track of pages reclaimed by means other than
> LRU-based reclaim through scan_control->reclaim_state->reclaimed_slab,
> which we stash a pointer to in current task_struct.
>
> However, we keep track of more than just reclaimed slab pages through
> this. We also use it for clean file pages dropped through pruned inodes,
> and xfs buffer pages freed. Rename reclaimed_slab to reclaimed, and add
> a helper function that wraps updating it through current, so that future
> changes to this logic are contained within mm/vmscan.c.
>
> Signed-off-by: Yosry Ahmed <yosryahmed@google.com>
> ---
>  fs/inode.c           |  3 +--
>  fs/xfs/xfs_buf.c     |  3 +--
>  include/linux/swap.h |  5 ++++-
>  mm/slab.c            |  3 +--
>  mm/slob.c            |  6 ++----
>  mm/slub.c            |  5 ++---
>  mm/vmscan.c          | 36 ++++++++++++++++++++++++++++++------
>  7 files changed, 41 insertions(+), 20 deletions(-)
>
> diff --git a/fs/inode.c b/fs/inode.c
> index 4558dc2f1355..e60fcc41faf1 100644
> --- a/fs/inode.c
> +++ b/fs/inode.c
> @@ -864,8 +864,7 @@ static enum lru_status inode_lru_isolate(struct list_head *item,
>  				__count_vm_events(KSWAPD_INODESTEAL, reap);
>  			else
>  				__count_vm_events(PGINODESTEAL, reap);
> -			if (current->reclaim_state)
> -				current->reclaim_state->reclaimed_slab += reap;
> +			mm_account_reclaimed_pages(reap);
>  		}
>  		iput(inode);
>  		spin_lock(lru_lock);
> diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
> index 54c774af6e1c..060079f1e966 100644
> --- a/fs/xfs/xfs_buf.c
> +++ b/fs/xfs/xfs_buf.c
> @@ -286,8 +286,7 @@ xfs_buf_free_pages(
>  		if (bp->b_pages[i])
>  			__free_page(bp->b_pages[i]);
>  	}
> -	if (current->reclaim_state)
> -		current->reclaim_state->reclaimed_slab += bp->b_page_count;
> +	report_freed_pages(bp->b_page_count);

Ugh, I missed updating this one to mm_account_reclaimed_pages(). This
fixup needs to be squashed here. I will include it in v3 if a respin
is needed; otherwise I hope Andrew can squash it in.
diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
index 060079f1e966..15d1e5a7c2d3 100644
--- a/fs/xfs/xfs_buf.c
+++ b/fs/xfs/xfs_buf.c
@@ -286,7 +286,7 @@ xfs_buf_free_pages(
 		if (bp->b_pages[i])
 			__free_page(bp->b_pages[i]);
 	}
-	report_freed_pages(bp->b_page_count);
+	mm_account_reclaimed_pages(bp->b_page_count);

 	if (bp->b_pages != bp->b_page_array)
 		kmem_free(bp->b_pages);

>
>
>  	if (bp->b_pages != bp->b_page_array)
>  		kmem_free(bp->b_pages);
> diff --git a/include/linux/swap.h b/include/linux/swap.h
> index 209a425739a9..589ea2731931 100644
> --- a/include/linux/swap.h
> +++ b/include/linux/swap.h
> @@ -153,13 +153,16 @@ union swap_header {
>   * memory reclaim
>   */
>  struct reclaim_state {
> -	unsigned long reclaimed_slab;
> +	/* pages reclaimed outside of LRU-based reclaim */
> +	unsigned long reclaimed;
>  #ifdef CONFIG_LRU_GEN
>  	/* per-thread mm walk data */
>  	struct lru_gen_mm_walk *mm_walk;
>  #endif
>  };
>
> +void mm_account_reclaimed_pages(unsigned long pages);
> +
>  #ifdef __KERNEL__
>
>  struct address_space;
> diff --git a/mm/slab.c b/mm/slab.c
> index dabc2a671fc6..64bf1de817b2 100644
> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -1392,8 +1392,7 @@ static void kmem_freepages(struct kmem_cache *cachep, struct slab *slab)
>  	smp_wmb();
>  	__folio_clear_slab(folio);
>
> -	if (current->reclaim_state)
> -		current->reclaim_state->reclaimed_slab += 1 << order;
> +	mm_account_reclaimed_pages(1 << order);
>  	unaccount_slab(slab, order, cachep);
>  	__free_pages(&folio->page, order);
>  }
> diff --git a/mm/slob.c b/mm/slob.c
> index fe567fcfa3a3..79cc8680c973 100644
> --- a/mm/slob.c
> +++ b/mm/slob.c
> @@ -61,7 +61,7 @@
>  #include
>
>  #include
> -#include <linux/swap.h>	/* struct reclaim_state */
> +#include <linux/swap.h>	/* mm_account_reclaimed_pages() */
>  #include
>  #include
>  #include
> @@ -211,9 +211,7 @@ static void slob_free_pages(void *b, int order)
>  {
>  	struct page *sp = virt_to_page(b);
>
> -	if (current->reclaim_state)
> -		current->reclaim_state->reclaimed_slab += 1 << order;
> -
> +	mm_account_reclaimed_pages(1 << order);
>  	mod_node_page_state(page_pgdat(sp), NR_SLAB_UNRECLAIMABLE_B,
>  			    -(PAGE_SIZE << order));
>  	__free_pages(sp, order);
> diff --git a/mm/slub.c b/mm/slub.c
> index 39327e98fce3..7aa30eef8235 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -11,7 +11,7 @@
>   */
>
>  #include
> -#include <linux/swap.h>	/* struct reclaim_state */
> +#include <linux/swap.h>	/* mm_account_reclaimed_pages() */
>  #include
>  #include
>  #include
> @@ -2063,8 +2063,7 @@ static void __free_slab(struct kmem_cache *s, struct slab *slab)
>  	/* Make the mapping reset visible before clearing the flag */
>  	smp_wmb();
>  	__folio_clear_slab(folio);
> -	if (current->reclaim_state)
> -		current->reclaim_state->reclaimed_slab += pages;
> +	mm_account_reclaimed_pages(pages);
>  	unaccount_slab(slab, order, s);
>  	__free_pages(&folio->page, order);
>  }
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index fef7d1c0f82b..a3e38851b34a 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -511,6 +511,34 @@ static void set_task_reclaim_state(struct task_struct *task,
>  	task->reclaim_state = rs;
>  }
>
> +/*
> + * mm_account_reclaimed_pages(): account reclaimed pages outside of LRU-based
> + * reclaim
> + * @pages: number of pages reclaimed
> + *
> + * If the current process is undergoing a reclaim operation, increment the
> + * number of reclaimed pages by @pages.
> + */
> +void mm_account_reclaimed_pages(unsigned long pages)
> +{
> +	if (current->reclaim_state)
> +		current->reclaim_state->reclaimed += pages;
> +}
> +EXPORT_SYMBOL(mm_account_reclaimed_pages);
> +
> +/*
> + * flush_reclaim_state(): add pages reclaimed outside of LRU-based reclaim to
> + * scan_control->nr_reclaimed.
> + */
> +static void flush_reclaim_state(struct scan_control *sc,
> +				struct reclaim_state *rs)
> +{
> +	if (rs) {
> +		sc->nr_reclaimed += rs->reclaimed;
> +		rs->reclaimed = 0;
> +	}
> +}
> +
>  static long xchg_nr_deferred(struct shrinker *shrinker,
>  			     struct shrink_control *sc)
>  {
> @@ -5346,8 +5374,7 @@ static int shrink_one(struct lruvec *lruvec, struct scan_control *sc)
>  		vmpressure(sc->gfp_mask, memcg, false, sc->nr_scanned - scanned,
>  			   sc->nr_reclaimed - reclaimed);
>
> -	sc->nr_reclaimed += current->reclaim_state->reclaimed_slab;
> -	current->reclaim_state->reclaimed_slab = 0;
> +	flush_reclaim_state(sc, current->reclaim_state);
>
>  	return success ? MEMCG_LRU_YOUNG : 0;
>  }
> @@ -6472,10 +6499,7 @@ static void shrink_node(pg_data_t *pgdat, struct scan_control *sc)
>
>  	shrink_node_memcgs(pgdat, sc);
>
> -	if (reclaim_state) {
> -		sc->nr_reclaimed += reclaim_state->reclaimed_slab;
> -		reclaim_state->reclaimed_slab = 0;
> -	}
> +	flush_reclaim_state(sc, reclaim_state);
>
>  	/* Record the subtree's reclaim efficiency */
>  	if (!sc->proactive)
> --
> 2.40.0.rc0.216.gc4246ad0f0-goog
>
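
For anyone skimming the thread who wants to see the accounting flow end to
end, here is a minimal userspace model of the pattern the patch introduces.
It is only a sketch: the task_struct, scan_control, and "current" below are
simplified stand-ins invented for illustration, and only the reclaim_state
plumbing mirrors the patch; the example counts (2 and 4 pages) are likewise
made up.

#include <stdio.h>

struct reclaim_state {
	unsigned long reclaimed;	/* pages freed outside LRU-based reclaim */
};

/* Toy stand-ins for the kernel structures; not the real definitions. */
struct task_struct {
	struct reclaim_state *reclaim_state;
};

struct scan_control {
	unsigned long nr_reclaimed;
};

static struct task_struct current_task;
#define current (&current_task)

/* Mirrors mm_account_reclaimed_pages(): a no-op unless reclaim is running. */
static void mm_account_reclaimed_pages(unsigned long pages)
{
	if (current->reclaim_state)
		current->reclaim_state->reclaimed += pages;
}

/* Mirrors flush_reclaim_state(): fold the per-task counter into the total. */
static void flush_reclaim_state(struct scan_control *sc,
				struct reclaim_state *rs)
{
	if (rs) {
		sc->nr_reclaimed += rs->reclaimed;
		rs->reclaimed = 0;
	}
}

int main(void)
{
	struct reclaim_state rs = { .reclaimed = 0 };
	struct scan_control sc = { .nr_reclaimed = 0 };

	/* reclaim begins: the set_task_reclaim_state() equivalent */
	current->reclaim_state = &rs;

	/* e.g. inode pruning drops 2 file pages, a slab shrinker frees 4 */
	mm_account_reclaimed_pages(2);
	mm_account_reclaimed_pages(4);

	/* shrink_node() folds the out-of-LRU pages into the reclaim total */
	flush_reclaim_state(&sc, current->reclaim_state);
	printf("nr_reclaimed = %lu\n", sc.nr_reclaimed);	/* prints 6 */

	/* reclaim ends */
	current->reclaim_state = NULL;
	return 0;
}

The point of the helper, as the commit message says, is that the
current->reclaim_state NULL check lives in one place in mm/vmscan.c, so
callers such as inode pruning, the slab allocators, and xfs buffer freeing
no longer poke at reclaim internals directly.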