From: Dave Chinner <david@fromorbit.com>
To: Matthew Wilcox <willy@infradead.org>
Cc: Zhaoyang Huang <huangzhaoyang@gmail.com>,
"zhaoyang.huang" <zhaoyang.huang@unisoc.com>,
Andrew Morton <akpm@linux-foundation.org>,
linux-mm@kvack.org, linux-kernel@vger.kernel.org,
ke.wang@unisoc.com, steve.kang@unisoc.com,
baocong.liu@unisoc.com, linux-fsdevel@vger.kernel.org
Subject: Re: [RFC PATCH] mm: move xa forward when run across zombie page
Date: Fri, 28 Oct 2022 15:05:54 +1100 [thread overview]
Message-ID: <20221028040554.GU2703033@dread.disaster.area> (raw)
In-Reply-To: <Y1lZ9Rm87GpFRM/Q@casper.infradead.org>
On Wed, Oct 26, 2022 at 05:01:57PM +0100, Matthew Wilcox wrote:
> On Thu, Oct 20, 2022 at 10:52:14PM +0100, Matthew Wilcox wrote:
> > But I think the tests you've done refute that theory. I'm all out of
> > ideas at the moment.
>
> I have a new idea. In page_cache_delete_batch(), we don't set the
> order of the entry before calling xas_store(). That means we can end
> up in a situation where we have an order-2 folio in the page cache,
> delete it and end up with a NULL pointer at (say) index 20 and sibling
> entries at indices 21-23. We can come along (potentially much later)
> and put an order-0 folio back at index 20. Now all of indices 20-23
> point to the index-20, order-0 folio. Worse, the xarray node can be
> freed with the sibling entries still intact and then be reallocated by
> an entirely different xarray.
>
> I don't know if this is going to fix the problem you're seeing. I can't
> quite draw a line from this situation to your symptoms. I came across
> it while auditing all the places which set folio->mapping to NULL.
> I did notice a mis-ordering; all the other places first remove the folio
> from the xarray before setting folio->mapping to NULL, but I have a hard
> time connecting that to your symptoms either.
>
> diff --git a/include/linux/xarray.h b/include/linux/xarray.h
> index 44dd6d6e01bc..cc1fd1f849a7 100644
> --- a/include/linux/xarray.h
> +++ b/include/linux/xarray.h
> @@ -1617,6 +1617,12 @@ static inline void xas_advance(struct xa_state *xas, unsigned long index)
> xas->xa_offset = (index >> shift) & XA_CHUNK_MASK;
> }
>
> +static inline void xas_adjust_order(struct xa_state *xas, unsigned int order)
> +{
> + xas->xa_shift = order - (order % XA_CHUNK_SHIFT);
> + xas->xa_sibs = (1 << (order % XA_CHUNK_SHIFT)) - 1;
> +}
> +
> /**
> * xas_set_order() - Set up XArray operation state for a multislot entry.
> * @xas: XArray operation state.
> @@ -1628,8 +1634,7 @@ static inline void xas_set_order(struct xa_state *xas, unsigned long index,
> {
> #ifdef CONFIG_XARRAY_MULTI
> xas->xa_index = order < BITS_PER_LONG ? (index >> order) << order : 0;
> - xas->xa_shift = order - (order % XA_CHUNK_SHIFT);
> - xas->xa_sibs = (1 << (order % XA_CHUNK_SHIFT)) - 1;
> + xas_adjust_order(xas, order);
> xas->xa_node = XAS_RESTART;
> #else
> BUG_ON(order > 0);
> diff --git a/mm/filemap.c b/mm/filemap.c
> index 08341616ae7a..6e3f486131e4 100644
> --- a/mm/filemap.c
> +++ b/mm/filemap.c
> @@ -305,11 +305,13 @@ static void page_cache_delete_batch(struct address_space *mapping,
>
> WARN_ON_ONCE(!folio_test_locked(folio));
>
> + if (!folio_test_hugetlb(folio))
> + xas_adjust_order(&xas, folio_order(folio));
> + xas_store(&xas, NULL);
> folio->mapping = NULL;
> /* Leave folio->index set: truncation lookup relies on it */
>
> i++;
> - xas_store(&xas, NULL);
> total_pages += folio_nr_pages(folio);
> }
> mapping->nrpages -= total_pages;
Nope, that ain't it. I think I've got the data corruption fix sorted
now (at least, g/270 isn't assert failing on stray delalloc extents
anymore), so if that's the case, I can spend some time actively
trying to track this down....
-Dave.
--
Dave Chinner
david@fromorbit.com