From: Klara Modin <klarasmodin@gmail.com>
To: Boris Burkov <boris@bur.io>
Cc: akpm@linux-foundation.org, linux-btrfs@vger.kernel.org,
linux-mm@kvack.org, linux-fsdevel@vger.kernel.org,
kernel-team@fb.com, shakeel.butt@linux.dev, wqu@suse.com,
willy@infradead.org, mhocko@kernel.org, muchun.song@linux.dev,
roman.gushchin@linux.dev, hannes@cmpxchg.org
Subject: Re: [PATCH v3 1/4] mm/filemap: add AS_UNCHARGED
Date: Thu, 21 Aug 2025 01:15:06 +0200
Message-ID: <rhnvd3ohg3hludr4auyhezfiut3qdbdzxdfggpzehtmojxsym2@kfczkgovrtg3>
In-Reply-To: <20250820225222.GA4100662@zen.localdomain>
Hi,
On 2025-08-20 15:52:22 -0700, Boris Burkov wrote:
> On Thu, Aug 21, 2025 at 12:06:42AM +0200, Klara Modin wrote:
> > Hi,
> >
> > On 2025-08-18 17:36:53 -0700, Boris Burkov wrote:
> > > Btrfs currently tracks its metadata pages in the page cache, using a
> > > fake inode (fs_info->btree_inode) with offsets corresponding to where
> > > the metadata is stored in the filesystem's full logical address space.
> > >
> > > A consequence of this is that when btrfs uses filemap_add_folio(), this
> > > usage is charged to the cgroup of whichever task happens to be running
> > > at the time. These folios don't belong to any particular user cgroup, so
> > > I don't think it makes much sense for them to be charged in that way.
> > > Some negative consequences as a result:
> > > - A task can be holding some important btrfs locks, then need to look up
> > > some metadata and go into reclaim, extending the duration it holds
> > > that lock for, and unfairly pushing its own reclaim pain onto other
> > > cgroups.
> > > - If that cgroup goes into reclaim, it might reclaim these folios that a
> > > different, non-reclaiming cgroup might need soon. This is naturally
> > > offset by LRU reclaim, but still.
> > >
> > > A very similar proposal to use the root cgroup was previously made by
> > > Qu, where he eventually proposed the idea of setting it per
> > > address_space. This makes good sense for the btrfs use case, as the
> > > uncharged behavior should apply to all use of the address_space, not
> > > just selected allocations. I.e., if someone adds another filemap_add_folio()
> > > call using btrfs's btree_inode, we would almost certainly want the
> > > uncharged behavior.
> > >
> > > Link: https://lore.kernel.org/linux-mm/b5fef5372ae454a7b6da4f2f75c427aeab6a07d6.1727498749.git.wqu@suse.com/
> > > Suggested-by: Qu Wenruo <wqu@suse.com>
> > > Acked-by: Shakeel Butt <shakeel.butt@linux.dev>
> > > Tested-by: syzbot@syzkaller.appspotmail.com
> > > Signed-off-by: Boris Burkov <boris@bur.io>
> >
> > I bisected the following null-dereference to 3f31e0d9912d ("btrfs: set
> > AS_UNCHARGED on the btree_inode") in mm-new but I believe it's a result of
> > this patch:
> >
...
> >
> > This means that not all folios will have a memcg attached, even when
> > memcg is enabled. In lru_gen_eviction(), mem_cgroup_id() is called
> > without a NULL check, which then leads to the null dereference.
> >
> > The following diff resolves the issue for me:
> >
> > diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> > index fae105a9cb46..c70e789201fc 100644
> > --- a/include/linux/memcontrol.h
> > +++ b/include/linux/memcontrol.h
> > @@ -809,7 +809,7 @@ void mem_cgroup_scan_tasks(struct mem_cgroup *memcg,
> >
> > static inline unsigned short mem_cgroup_id(struct mem_cgroup *memcg)
> > {
> > - if (mem_cgroup_disabled())
> > + if (mem_cgroup_disabled() || !memcg)
> > return 0;
> >
> > return memcg->id.id;
> >
> > However, it's mentioned in folio_memcg() that it can return NULL so this
> > might be an existing bug which this patch just makes more obvious.
> >
> > There's also workingset_eviction() which instead gets the memcg from
> > lruvec. Doing that in lru_gen_eviction() also resolves the issue for me:
> >
> > diff --git a/mm/workingset.c b/mm/workingset.c
> > index 68a76a91111f..e805eadf0ec7 100644
> > --- a/mm/workingset.c
> > +++ b/mm/workingset.c
> > @@ -243,6 +243,7 @@ static void *lru_gen_eviction(struct folio *folio)
> > int tier = lru_tier_from_refs(refs, workingset);
> > struct mem_cgroup *memcg = folio_memcg(folio);
> > struct pglist_data *pgdat = folio_pgdat(folio);
> > + int memcgid;
> >
> > BUILD_BUG_ON(LRU_GEN_WIDTH + LRU_REFS_WIDTH > BITS_PER_LONG - EVICTION_SHIFT);
> >
> > @@ -254,7 +255,9 @@ static void *lru_gen_eviction(struct folio *folio)
> > hist = lru_hist_from_seq(min_seq);
> > atomic_long_add(delta, &lrugen->evicted[hist][type][tier]);
> >
> > - return pack_shadow(mem_cgroup_id(memcg), pgdat, token, workingset);
> > + memcgid = mem_cgroup_id(lruvec_memcg(lruvec));
> > +
> > + return pack_shadow(memcgid, pgdat, token, workingset);
> > }
> >
> > /*
> >
> > I don't really know what I'm doing here, though.
>
> Me neither, clearly :)
>
> Thanks so much for the report and fix! I fear there might be some other
> paths that try to get memcg from lruvec or folio or whatever without
> checking it. I feel like in this exact case, I would want to go to the
> first sign of trouble and fix it at lruvec_memcg(). But then who knows
> what else we've missed.
>
> May I ask what you were running to trigger this? My fstests run (clearly
> not exercising enough interesting memory paths) did not hit it.
>
> This does make me wonder if the superior approach to the original patch
> isn't just to go back to the very first thing Qu did and account these
> to the root cgroup rather than do the whole uncharged thing.
>
> Boris
>
> >
> > Regards,
> > Klara Modin
For me it's easiest to trigger when cloning a large repository, e.g. the
kernel or gcc, with a low-ish amount of RAM (maybe 1-4 GiB), i.e. under
memory pressure. Also:
CONFIG_LRU_GEN=y
CONFIG_LRU_GEN_ENABLED=y
Shakeel:
I think I'll wait a little before submitting a patch to see if there are
any more comments.
Regards,
Klara Modin
Thread overview: 22+ messages
2025-08-19 0:36 [PATCH v3 0/4] introduce uncharged file mapped folios Boris Burkov
2025-08-19 0:36 ` [PATCH v3 1/4] mm/filemap: add AS_UNCHARGED Boris Burkov
2025-08-19 2:46 ` Matthew Wilcox
2025-08-19 3:57 ` Boris Burkov
2025-08-20 22:06 ` Klara Modin
2025-08-20 22:22 ` Shakeel Butt
2025-08-20 22:52 ` Boris Burkov
2025-08-20 23:15 ` Klara Modin [this message]
2025-08-20 23:53 ` Shakeel Butt
2025-08-21 19:37 ` Shakeel Butt
2025-08-19 0:36 ` [PATCH v3 2/4] mm: add vmstat for cgroup uncharged pages Boris Burkov
2025-08-19 2:50 ` Matthew Wilcox
2025-08-19 4:05 ` Boris Burkov
2025-08-19 15:53 ` Shakeel Butt
2025-08-19 23:46 ` Matthew Wilcox
2025-08-20 1:25 ` Shakeel Butt
2025-08-20 13:19 ` Matthew Wilcox
2025-08-20 16:21 ` Shakeel Butt
2025-08-19 0:36 ` [PATCH v3 3/4] btrfs: set AS_UNCHARGED on the btree_inode Boris Burkov
2025-08-19 0:36 ` [PATCH v3 4/4] memcg: remove warning from folio_lruvec Boris Burkov
2025-08-19 2:41 ` Matthew Wilcox
2025-08-19 5:20 ` Andrew Morton