From: Ira Weiny <ira.weiny@intel.com>
To: Matthew Wilcox <willy@infradead.org>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>,
Thomas Gleixner <tglx@linutronix.de>,
"Fabio M. De Francesco" <fmdefrancesco@gmail.com>,
"Luis Chamberlain" <mcgrof@kernel.org>,
<linux-fsdevel@vger.kernel.org>, <linux-mm@kvack.org>
Subject: Re: folio_map
Date: Wed, 17 Aug 2022 13:23:41 -0700
Message-ID: <Yv1OTWPVooKJivsL@iweiny-desk3>
In-Reply-To: <Yv1DzKKzkDjwVuKV@casper.infradead.org>
On Wed, Aug 17, 2022 at 08:38:52PM +0100, Matthew Wilcox wrote:
> On Wed, Aug 17, 2022 at 01:29:35PM +0300, Kirill A. Shutemov wrote:
> > On Tue, Aug 16, 2022 at 07:08:22PM +0100, Matthew Wilcox wrote:
> > > Some of you will already know all this, but I'll go into a certain amount
> > > of detail for the peanut gallery.
> > >
> > > One of the problems that people want to solve with multi-page folios
> > > is supporting filesystem block sizes > PAGE_SIZE. Such filesystems
> > > already exist; you can happily create a 64kB block size filesystem on
> > > a PPC/ARM/... today, then fail to mount it on an x86 machine.
> > >
> > > kmap_local_folio() only lets you map a single page from a folio.
> > > This works for the majority of cases (eg ->write_begin() works on a
> > > per-page basis *anyway*, so we can just map a single page from the folio).
> > > But this is somewhat hampering for ext2_get_page(), used for directory
> > > handling. A directory record may cross a page boundary (because it
> > > wasn't a page boundary on the machine which created the filesystem),
> > > and juggling two pages being mapped at once is tricky with the stack
> > > model for kmap_local.
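
[ For the peanut gallery: juggling two mappings with the current per-page
  API looks roughly like the sketch below.  kmap_local_folio()/kunmap_local()
  are the real interfaces; the wrapper function, its name, and the size
  assumptions are made up purely for illustration.  The point is the
  reverse (stack) unmap order. ]

	/*
	 * Sketch only: read a record that straddles a page boundary with
	 * the existing per-page API.  Assumes the record crosses exactly
	 * one page boundary within the folio.
	 */
	static void read_straddling_record(struct folio *folio, size_t pos,
					   void *rec, size_t len)
	{
		size_t in_first = PAGE_SIZE - offset_in_page(pos);
		void *first = kmap_local_folio(folio, pos);
		void *second = kmap_local_folio(folio, pos + in_first);

		memcpy(rec, first, in_first);
		memcpy(rec + in_first, second, len - in_first);

		kunmap_local(second);	/* stack model: unmap in reverse order */
		kunmap_local(first);
	}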
> > >
> > > I don't particularly want to invest heavily in optimising for HIGHMEM.
> > > The number of machines which will use multi-page folios and HIGHMEM is
> > > not going to be large, one hopes, as 64-bit kernels are far more common.
> > > I'm happy for 32-bit to be slow, as long as it works.
> > >
> > > For these reasons, I'm proposing the logical equivalent of this:
> > >
> > > +void *folio_map_local(struct folio *folio)
> > > +{
> > > +	if (!IS_ENABLED(CONFIG_HIGHMEM))
> > > +		return folio_address(folio);
> > > +	if (!folio_test_large(folio))
> > > +		return kmap_local_page(&folio->page);
> > > +	return vmap_folio(folio);
> > > +}
> > > +
> > > +void folio_unmap_local(const void *addr)
> > > +{
> > > +	if (!IS_ENABLED(CONFIG_HIGHMEM))
> > > +		return;
> > > +	if (is_vmalloc_addr(addr))
> > > +		vunmap(addr);
> > > +	else
> > > +		kunmap_local(addr);
> > > +}
> > >
> > > (where vmap_folio() is a new function that works a lot like vmap(),
> > > chunks of this get moved out-of-line, etc, etc., but this concept)
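
[ For concreteness, one naive way a vmap_folio() could be built on top of
  the existing vmap(): build a temporary page array and hand it over.  This
  is my sketch, not necessarily what is actually intended; a real version
  would presumably avoid the allocation and be placed out of line. ]

	static void *vmap_folio(struct folio *folio)
	{
		long i, nr = folio_nr_pages(folio);
		struct page **pages;
		void *addr;

		pages = kmalloc_array(nr, sizeof(*pages), GFP_KERNEL);
		if (!pages)
			return NULL;
		for (i = 0; i < nr; i++)
			pages[i] = folio_page(folio, i);
		addr = vmap(pages, nr, VM_MAP, PAGE_KERNEL);
		kfree(pages);
		return addr;
	}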
> >
> > So it aims at replacing kmap_local_page(), but for folios, right?
> > The kmap_local_page() interface can be used from any context, but the vmap
> > helpers might_sleep(). How do we reconcile this?
>
> I'm not proposing getting rid of kmap_local_folio(). That should still
> exist and work for users who need to use it in atomic context. Indeed,
> I'm intending to put a note in the doc for folio_map_local() suggesting
> that users may prefer to use kmap_local_folio(). Good idea to put a
> might_sleep() in folio_map_local() though.
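
[ Presumably that just means the earlier sketch grows an annotation at the
  top, i.e. something like the following (illustrative only): ]

	void *folio_map_local(struct folio *folio)
	{
		/*
		 * vmap_folio() can sleep; placing the check first catches
		 * atomic-context callers even on !HIGHMEM configurations.
		 */
		might_sleep();

		if (!IS_ENABLED(CONFIG_HIGHMEM))
			return folio_address(folio);
		if (!folio_test_large(folio))
			return kmap_local_page(&folio->page);
		return vmap_folio(folio);
	}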

There is also a semantic mismatch WRT the unmapping order: kunmap_local()
requires strict stack (LIFO) ordering, while vunmap() has no such requirement,
so folio_unmap_local() follows different rules depending on which path mapped
the folio. But I think Kirill brings up a bigger issue.

How many folios do you think will need to be mapped at a time? And is there
any practical limit on their size? Are 64k blocks a reasonable upper bound
until highmem can be deprecated completely?

I say this because I'm not sure that mapping a full 64kB block would have to
fail. These mappings are transitory. How often will a filesystem be mapping
more than two folios at once? In our conversions, most of the time two pages
are mapped at once: source and destination.
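
A copy path like the sketch below never has more than two mappings live at
once. This is illustrative only: it assumes the folio_map_local()/
folio_unmap_local() proposed above, assumes the vmap path can return NULL,
and unmaps in reverse order to stay safe for the kmap_local case.

	static int copy_folio_start(struct folio *dst, struct folio *src,
				    size_t len)
	{
		void *daddr, *saddr;
		int ret = 0;

		daddr = folio_map_local(dst);
		if (!daddr)
			return -ENOMEM;
		saddr = folio_map_local(src);
		if (!saddr) {
			ret = -ENOMEM;
			goto out;
		}

		memcpy(daddr, saddr, len);	/* caller ensures len fits both folios */

		folio_unmap_local(saddr);	/* reverse order of mapping */
	out:
		folio_unmap_local(daddr);
		return ret;
	}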

That said, to help ensure that mapping a full folio never fails, we could
increase the number of simultaneous mappings supported by kmap_local_page().
At first I was not a fan, but the cost would only be paid on HIGHMEM systems,
and since we are not optimizing for such systems I'm not sure I see a downside
to increasing the limit to 32 or even 64. I'm also inclined to believe that
HIGHMEM systems have smaller core counts, so I don't think this is likely to
multiply the wasted per-CPU space much.

Would doubling the support within kmap_local_page() be enough?

A final idea would be to hide the increase behind a 'support large block size
filesystems' config option that depends on HIGHMEM. But I'm really not sure
that is even needed.
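
Purely to make the shape of that idea concrete (every name below is invented
for illustration; the real constant and any Kconfig symbol would differ):

	/* Hypothetical sketch only -- not actual kernel code or real symbol names. */
	#ifdef CONFIG_HIGHMEM_LARGE_BLOCK_FS	/* invented config option */
	#define KMAP_LOCAL_MAX_SLOTS	32	/* room for a 64k folio plus a few extra maps */
	#else
	#define KMAP_LOCAL_MAX_SLOTS	16	/* smaller default for everyone else */
	#endif
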
Ira