From: Matthew Wilcox <willy@infradead.org>
To: "Christoph Lameter (Ampere)" <cl@linux.com>
Cc: linux-mm@kvack.org
Subject: Re: Project: Improving the PCP allocator
Date: Wed, 24 Jan 2024 21:03:44 +0000 [thread overview]
Message-ID: <ZbF7MFW_0ePkq0_Q@casper.infradead.org> (raw)
In-Reply-To: <a68be899-ea3c-1f01-8ea2-20095d8f39e9@linux.com>
On Wed, Jan 24, 2024 at 11:18:24AM -0800, Christoph Lameter (Ampere) wrote:
> On Mon, 22 Jan 2024, Matthew Wilcox wrote:
>
> > When we have memdescs, allocating a folio from the buddy is a two step
> > process. First we allocate the struct folio from slab, then we ask the
> > buddy allocator for 2^n pages, each of which gets its memdesc set to
> > point to this folio. It'll be similar for other memory descriptors,
> > but let's keep it simple and just talk about folios for now.
>
> I need to catch up on memdescs. One of the key issues may be fragmentation
> that occurs during alloc / free of folios of different sizes.
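
To make that two step allocation concrete, here is a purely hypothetical
sketch -- the folio_cache slab, the page->memdesc field and MEMDESC_FOLIO
don't exist in the tree; the names are invented for illustration:

struct folio *folio_alloc_memdesc(gfp_t gfp, unsigned int order)
{
	struct folio *folio;
	struct page *page;
	unsigned long i;

	/* Step 1: the descriptor comes from slab, not from the buddy. */
	folio = kmem_cache_alloc(folio_cache, gfp);
	if (!folio)
		return NULL;

	/* Step 2: ask the buddy for 2^order pages ... */
	page = alloc_pages(gfp, order);
	if (!page) {
		kmem_cache_free(folio_cache, folio);
		return NULL;
	}

	/* ... and point each page's memdesc at the folio. */
	for (i = 0; i < (1UL << order); i++)
		page[i].memdesc = (unsigned long)folio | MEMDESC_FOLIO;

	return folio;
}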
A lot of what we have now is opportunistic. We'll use larger allocations
if they're readily available, and if not we'll fall back (and also kick
kswapd to try to free up some memory). This is fine for the current
purposes, but may be less fine for the people who want to support large
LBA devices. I don't think it'll be a problem, though: they should be
able to allocate enough sufficiently large folios just by evicting page
cache memory that comes from the same device (which is, by definition,
large enough).
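
Simplified down to its shape (this isn't the actual page cache code,
just the pattern it follows):

struct folio *alloc_folio_opportunistic(gfp_t gfp, unsigned int order)
{
	struct folio *folio;

	/*
	 * Try the large order first, but don't try very hard; a failed
	 * attempt still wakes kswapd (via __GFP_KSWAPD_RECLAIM in the
	 * normal GFP flags), so background reclaim can make the next
	 * large allocation more likely to succeed.
	 */
	while (order > 0) {
		folio = folio_alloc(gfp | __GFP_NORETRY | __GFP_NOWARN, order);
		if (folio)
			return folio;
		order--;		/* fall back to a smaller size */
	}

	/* Order-0 is the allocation of last resort. */
	return folio_alloc(gfp, 0);
}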
> Maybe we could use an approach similar to what the slab allocator uses to
> defrag. Allocate larger folios/pages and then break out sub
> folios/sizes/components until the page is full and recycle any frees of
> components in that page before going to the next large page.
It's certainly something we could do, but then we're back to setting
up the compound page again, and the idea was to avoid doing that.
So really this is a competing idea, not a complementary idea.
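
For comparison, the carve-out scheme would look something like the
following (struct carve_block and the helper are invented for
illustration); note the prep_compound_page() call, which is exactly the
work memdescs are trying to get rid of:

struct carve_block {
	struct page	*base;		/* one large buddy allocation */
	unsigned long	nr_pages;
	unsigned long	next;		/* first unused page in the block */
};

struct folio *carve_out_folio(struct carve_block *block, unsigned int order)
{
	struct page *page;

	/* Block exhausted: the caller has to fetch a fresh large block. */
	if (block->next + (1UL << order) > block->nr_pages)
		return NULL;

	page = block->base + block->next;
	block->next += 1UL << order;

	/* Still have to build a compound page for each sub-folio. */
	if (order)
		prep_compound_page(page, order);

	return page_folio(page);
}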
Thread overview: 4+ messages
2024-01-22 16:01 Matthew Wilcox
2024-01-24 19:18 ` Christoph Lameter (Ampere)
2024-01-24 21:03 ` Matthew Wilcox [this message]
2025-03-12 8:53 ` Sharing the (failed) experimental results of the idea "Keeping compound pages as compound pages in the PCP" Harry Yoo