From: Matthew Wilcox <willy@infradead.org>
To: Viacheslav Dubeyko <slava@dubeyko.com>
Cc: Linux FS Devel <linux-fsdevel@vger.kernel.org>,
linux-mm@kvack.org, Hugh Dickins <hughd@google.com>,
"Kirill A. Shutemov" <kirill.shutemov@linux.intel.com>
Subject: Re: Issue with 8K folio size in __filemap_get_folio()
Date: Sun, 3 Dec 2023 23:12:02 +0000
Message-ID: <ZW0LQptvuFT9R4bw@casper.infradead.org>
In-Reply-To: <ZWzy3bLEmbaMr//d@casper.infradead.org>
On Sun, Dec 03, 2023 at 09:27:57PM +0000, Matthew Wilcox wrote:
> I was talking with Darrick on Friday and he convinced me that this is
> something we're going to need to fix sooner rather than later for the
> benefit of devices with block size 8kB. So it's definitely on my todo
> list, but I haven't investigated in any detail yet.
OK, here's my initial analysis of just not putting order-1 folios
on the deferred split list. folio->_deferred_list is only used in
mm/huge_memory.c, which makes this a nice simple analysis.
- folio_prep_large_rmappable() initialises the list_head. No problem,
just don't do that for order-1 folios.
- split_huge_page_to_list() will remove the folio from the split queue.
No problem, just don't do that.
- folio_undo_large_rmappable() removes it from the list if it's
on the list. Again, no problem, don't do that for order-1 folios.
- deferred_split_scan() walks the list; it will never find any order-1
folios there.
- deferred_split_folio() will add the folio to the list. Returning early
for order-1 folios will avoid adding them to the list. But what
consequences will that have? Ah. There's only one caller of
deferred_split_folio() and it's in page_remove_rmap() ... and it's
only called for anon folios anyway. (A sketch of these guards is below.)
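
Here's an untested sketch of what I mean, against mm/huge_memory.c.
The function bodies (folio_set_large_rmappable() etc.) are from my
reading of the tree, so treat it as illustrative rather than a patch;
split_huge_page_to_list() would need a similar order check around its
list_del() of _deferred_list:

/*
 * Untested sketch against mm/huge_memory.c.  Order-1 folios have no
 * second tail page to hold _deferred_list, so skip every touch of the
 * deferred split list at order 1.
 */
void folio_prep_large_rmappable(struct folio *folio)
{
	/* Order-1 folios have no _deferred_list; don't initialise it */
	if (folio_order(folio) > 1)
		INIT_LIST_HEAD(&folio->_deferred_list);
	folio_set_large_rmappable(folio);
}

void folio_undo_large_rmappable(struct folio *folio)
{
	/* Order-1 folios were never added to the deferred split list */
	if (folio_order(folio) <= 1)
		return;
	/* ... existing split_queue locking and list_del() unchanged ... */
}

void deferred_split_folio(struct folio *folio)
{
	/* An order-1 folio has no _deferred_list; nothing to defer */
	if (folio_order(folio) <= 1)
		return;
	/* ... existing code unchanged ... */
}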
So it looks like we can support order-1 folios in the page cache without
any change in behaviour since file-backed folios were never added to
the deferred split list.
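
For context, once that's in place I'd expect a filesystem with an 8kB
block size to be able to ask __filemap_get_folio() for an order-1 folio
in the obvious way. Purely illustrative; get_8k_folio() is a made-up
helper and the FGP flags / gfp mask are whatever the filesystem already
uses:

#include <linux/pagemap.h>	/* __filemap_get_folio(), fgf_set_order() */
#include <linux/sizes.h>	/* SZ_8K */

/* Illustrative only: ask the page cache for an 8kB (order-1) folio. */
static struct folio *get_8k_folio(struct address_space *mapping,
				  pgoff_t index)
{
	/* Filesystems normally do this once when setting up the inode */
	mapping_set_large_folios(mapping);

	/*
	 * fgf_set_order() takes a size in bytes; SZ_8K maps to order 1 on
	 * a 4kB PAGE_SIZE system.  If index isn't 2-aligned, the page
	 * cache falls back to a smaller order.  Returns ERR_PTR on error.
	 */
	return __filemap_get_folio(mapping, index,
				   FGP_LOCK | FGP_CREAT | fgf_set_order(SZ_8K),
				   mapping_gfp_mask(mapping));
}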
Now, is this the right direction? Is it a bug that we never called
deferred_split_folio() for pagecache folios? I would defer to Hugh
or Kirill on this. Ccs added.