From: Dan Williams <dan.j.williams@intel.com>
To: David Hildenbrand <david@redhat.com>,
Alison Schofield <alison.schofield@intel.com>,
Alistair Popple <apopple@nvidia.com>
Cc: <linux-mm@kvack.org>, <nvdimm@lists.linux.dev>
Subject: Re: [BUG Report] 6.15-rc1 RIP: 0010:__lruvec_stat_mod_folio+0x7e/0x250
Date: Wed, 9 Apr 2025 13:08:05 -0700
Message-ID: <67f6d3a52f77e_71fe294f0@dwillia2-xfh.jf.intel.com.notmuch>
In-Reply-To: <89c869fe-6552-4c7b-ae32-f8179628cade@redhat.com>
David Hildenbrand wrote:
[..]
> > Maybe there is something missing in ZONE_DEVICE freeing/splitting code
> > of large folios, where we should do the same, to make sure that all
> > page->memcg_data is actually 0?
> >
> > I assume so. Let me dig.
> >
>
> I suspect this should do the trick:
>
> diff --git a/fs/dax.c b/fs/dax.c
> index af5045b0f476e..8dffffef70d21 100644
> --- a/fs/dax.c
> +++ b/fs/dax.c
> @@ -397,6 +397,10 @@ static inline unsigned long dax_folio_put(struct folio *folio)
> if (!order)
> return 0;
>
> +#ifdef NR_PAGES_IN_LARGE_FOLIO
> + folio->_nr_pages = 0;
> +#endif
I assume this new fs/dax.c instance of this pattern motivates a
folio_set_nr_pages() helper to hide the ifdef?
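Something like this, perhaps (untested sketch, and folio_set_nr_pages()
is just a name I am proposing, it does not exist today):

static inline void folio_set_nr_pages(struct folio *folio, long nr_pages)
{
	/*
	 * Configs without an explicit _nr_pages field derive the count
	 * from the folio order, so this is a nop there.
	 */
#ifdef NR_PAGES_IN_LARGE_FOLIO
	folio->_nr_pages = nr_pages;
#endif
}

...then the hunk above collapses to a bare folio_set_nr_pages(folio, 0)
with no ifdef at the call site.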
It is concerning that fs/dax.c misses common expectations like this,
but I think that is the nature of bypassing the page allocator to get
folios.
However, this raises the question of whether fixing it here is sufficient
ZONE_DEVICE folio cases. I did not immediately find a place where other
ZONE_DEVICE users might be calling prep_compound_page() and leaving
stale tail page metadata lying around. Alistair?
> +
> for (i = 0; i < (1UL << order); i++) {
> struct dev_pagemap *pgmap = page_pgmap(&folio->page);
> struct page *page = folio_page(folio, i);
>
>
> Alternatively (in the style of fa23a338de93aa03eb0b6146a0440f5762309f85)
>
> diff --git a/fs/dax.c b/fs/dax.c
> index af5045b0f476e..a1e354b748522 100644
> --- a/fs/dax.c
> +++ b/fs/dax.c
> @@ -412,6 +412,9 @@ static inline unsigned long dax_folio_put(struct folio *folio)
> */
> new_folio->pgmap = pgmap;
> new_folio->share = 0;
> +#ifdef CONFIG_MEMCG
> + new_folio->memcg_data = 0;
> +#endif
This looks correct, but I like the first option because I would never
expect a dax-page to need to worry about being part of a memcg.
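For anyone following along, my (possibly wrong) read of why memcg_data
is non-zero here at all: _nr_pages now overlays memcg_data in the first
tail page, so after the split the stale page count reads back as memcg
state on the former tail page. I.e. the loop ends up as below (untested,
and I am guessing at the lines the diff context elides):

	for (i = 0; i < (1UL << order); i++) {
		struct dev_pagemap *pgmap = page_pgmap(&folio->page);
		struct page *page = folio_page(folio, i);
		struct folio *new_folio = (struct folio *)page;

		/* ...head/compound-page teardown elided... */

		new_folio->pgmap = pgmap;
		new_folio->share = 0;
#ifdef CONFIG_MEMCG
		/*
		 * Assumption: the stale value is the old folio's
		 * _nr_pages (overlaying memcg_data in the first tail
		 * page), not a real memcg charge.
		 */
		new_folio->memcg_data = 0;
#endif
		WARN_ON_ONCE(folio_ref_count(new_folio));
	}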
> WARN_ON_ONCE(folio_ref_count(new_folio));
> }
>
>
>
> --
> Cheers,
>
> David / dhildenb
Thanks for the help, David!