From: "Christoph Lameter (Ampere)" <cl@linux.com>
To: Matthew Wilcox <willy@infradead.org>
Cc: linux-mm@kvack.org
Subject: Re: Project: Improving the PCP allocator
Date: Wed, 24 Jan 2024 11:18:24 -0800 (PST)
Message-ID: <a68be899-ea3c-1f01-8ea2-20095d8f39e9@linux.com>
In-Reply-To: <Za6RXtSE_TSdrRm_@casper.infradead.org>
On Mon, 22 Jan 2024, Matthew Wilcox wrote:
> When we have memdescs, allocating a folio from the buddy is a two step
> process. First we allocate the struct folio from slab, then we ask the
> buddy allocator for 2^n pages, each of which gets its memdesc set to
> point to this folio. It'll be similar for other memory descriptors,
> but let's keep it simple and just talk about folios for now.
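If I read this right, the flow would be roughly something like the sketch
below (folio_cachep, page->memdesc and MEMDESC_TYPE_FOLIO are made-up names,
just to check my understanding; kmem_cache_alloc()/alloc_pages() are the
usual APIs):

/*
 * Hypothetical sketch of the two-step allocation described above.
 */
static struct folio *folio_alloc_via_memdesc(gfp_t gfp, unsigned int order)
{
	struct folio *folio;
	struct page *page;
	unsigned long i;

	/* Step 1: the descriptor itself comes from slab. */
	folio = kmem_cache_alloc(folio_cachep, gfp);
	if (!folio)
		return NULL;

	/* Step 2: 2^order pages from the buddy allocator. */
	page = alloc_pages(gfp, order);
	if (!page) {
		kmem_cache_free(folio_cachep, folio);
		return NULL;
	}

	/* Each page's memdesc points back at the one struct folio. */
	for (i = 0; i < (1UL << order); i++)
		page[i].memdesc = (unsigned long)folio | MEMDESC_TYPE_FOLIO;

	folio->first_page = page;	/* hypothetical back-pointer */
	return folio;
}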
I still need to catch up on the memdesc details. One of the key issues may be
the fragmentation that occurs during alloc/free of folios of different sizes.
Maybe we could use an approach similar to the one the slab allocator uses to
defragment: allocate larger folios/pages, break out sub-folios of the
requested sizes until the large page is used up, and recycle any frees of
those components within that page before moving on to the next large page.
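Very roughly what I have in mind (all names made up):

/*
 * Hand-wavy sketch: carve smaller allocations out of one per-cpu
 * huge frame until it is exhausted, instead of handing out scattered
 * pages from long per-order free lists.
 */
struct pcp_frame {
	struct page *base;		/* start of the huge frame */
	unsigned long next_offset;	/* first never-used subpage index */
	unsigned long free_subpages;	/* subpages freed back into this frame */
	struct list_head list;		/* links into the partial list below */
};

static struct page *pcp_frame_carve(struct pcp_frame *frame, unsigned int order)
{
	unsigned long nr = 1UL << order;
	unsigned long start = ALIGN(frame->next_offset, nr);

	if (start + nr > PCP_FRAME_SUBPAGES)
		return NULL;	/* frame exhausted, move on to the next one */

	frame->next_offset = start + nr;
	return frame->base + start;
}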
With that we would end up with a list of per-cpu huge pages that the page
allocator serves from, similar to the per-cpu partial lists in SLUB.
Once a huge page is used up, the page allocator moves on to a huge page that
has already seen a lot of recent frees of smaller fragments. So something
like a partial list could also exist in the page allocator, basically sorted
by the available space within each huge page.
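So the per-cpu side could look vaguely like SLUB's partial handling (again
made up, building on the sketch above; pcp_alloc_new_frame() is hypothetical):

/*
 * Made-up sketch of a "partial frame" list for the page allocator,
 * kept roughly sorted by free space so refills come from the frame
 * that is most likely to be fully reassembled soon.
 */
struct pcp_frames {
	struct pcp_frame *active;	/* frame we are currently carving from */
	struct list_head partial;	/* frames with frees, most-free first */
	unsigned int nr_partial;
};

static struct pcp_frame *pcp_next_frame(struct pcp_frames *pcp)
{
	struct pcp_frame *frame;

	/* Prefer a frame that has already seen a lot of frees ... */
	frame = list_first_entry_or_null(&pcp->partial, struct pcp_frame, list);
	if (frame) {
		list_del(&frame->list);
		pcp->nr_partial--;
		return frame;
	}

	/* ... and only then ask the buddy for a fresh huge frame. */
	return pcp_alloc_new_frame();
}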
There is the additional issue that different sizes have to be broken out of
the same huge page, so it may not be as easy as in the SLUB allocator, where
each slab holds objects of a single size.
Basically this is a move from SLAB-style object management (caching long
lists of small objects without regard to locality, which increases
fragmentation) to a combination of spatial considerations and lists of large
frames. I think this is necessary in order to keep memory as defragmented as
possible.
> I think this could be a huge saving. Consider allocating an order-9 PMD
> sized THP. Today we initialise compound_head in each of the 511 tail
> pages. Since a page is 64 bytes, we touch 32kB of memory! That's 2/3 of
> my CPU's L1 D$, so it's just pushed out a good chunk of my working set.
> And it's all dirty, so it has to get written back.
Right.
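For reference, the loop that costs us today looks roughly like this
(simplified, not the literal mm/page_alloc.c code):

/*
 * Simplified view of today's compound page setup: every one of the
 * 2^order - 1 tail pages is written to, so an order-9 THP dirties
 * 511 * sizeof(struct page) = 511 * 64 bytes, i.e. ~32KB of struct
 * pages, just to record who the head page is.
 */
static void prep_compound_page_sketch(struct page *head, unsigned int order)
{
	unsigned long i, nr = 1UL << order;

	for (i = 1; i < nr; i++)
		set_compound_head(head + i, head);	/* dirties a cacheline per tail */

	/* plus the head page setup: order, mapcount, refcount, ... */
}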
> We still need to distinguish between specifically folios (which
> need the folio_prep_large_rmappable() call on allocation and
> folio_undo_large_rmappable() on free) and other compound allocations which
> do not need or want this, but that's touching one/two extra cachelines,
> not 511.
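i.e. something like the following on the allocation side (the wrapper and the
rmappable flag are made up, just to illustrate where the distinction lands):

static struct folio *folio_alloc_maybe_rmappable(gfp_t gfp, unsigned int order,
						 bool rmappable)
{
	struct folio *folio = folio_alloc(gfp, order);

	/* Only rmappable folios pay for the extra one/two cachelines. */
	if (folio && rmappable)
		folio_prep_large_rmappable(folio);

	return folio;
}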
> Do we have a volunteer?
Maybe. I have to think about this, but since I got my hands dirty on the PCP
logic years ago, I may qualify. I need to get my head around the details and
see where this could go.