From: Byungchul Park <byungchul@sk.com>
To: Mina Almasry <almasrymina@google.com>
Cc: Jesper Dangaard Brouer <hawk@kernel.org>,
netdev <netdev@vger.kernel.org>,
willy@infradead.org, ilias.apalodimas@linaro.org,
kernel_team@skhynix.com, 42.hyeyoo@gmail.com, linux-mm@kvack.org
Subject: Re: [RFC] shrinking struct page (part of page pool)
Date: Wed, 16 Apr 2025 14:24:48 +0900 [thread overview]
Message-ID: <20250416052448.GB39145@system.software.com> (raw)
In-Reply-To: <CAHS8izO_9gXzj2sUubyNSQjp-a3h_332pQNRPBtW6bLOXS-XoA@mail.gmail.com>
On Tue, Apr 15, 2025 at 08:39:47AM -0700, Mina Almasry wrote:
> On Sun, Apr 13, 2025 at 6:36 PM Byungchul Park <byungchul@sk.com> wrote:
> >
> > Hi guys,
> >
> > I'm looking at the network's page pool code to help with Matthew
> > Wilcox's 'shrinking struct page' project. See the following link:
> >
> > https://kernelnewbies.org/MatthewWilcox/Memdescs/Path
> >
> > My first goal is to remove fields for page pool from struct page like:
> >
>
> Remove them, but put them where? The page above specifies "Split the
We need to introduce a new struct, e.g. struct bump, that serves as the
new descriptor in place of struct page, similar to net_iov, overlaying
struct page for now.
> pagepool bump allocator out of struct page, as has been done for, eg,
> slab and ptdesc.", but I'm not familiar what happened with slab and
> ptdesc. Are these fields moving to a different location? Or being
They move to the newly introduced struct, e.g. struct bump, which
temporarily overlays struct page.
> somehow removed entirely?
And then we can remove the fields from struct page.
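To make the overlay idea concrete, here is a rough userspace sketch.
The struct layouts below are simplified stand-ins I made up for
illustration, not the actual kernel definitions; the point is that an
overlay only works if every field lands at the same offset as in
struct page, which net_iov already enforces with static asserts:

```c
#include <stddef.h>

/* Simplified stand-in for struct page's page_pool fields (hypothetical
 * layout, not the real kernel struct). */
struct fake_page {
	unsigned long flags;
	unsigned long pp_magic;
	void *pp;
	unsigned long _pp_mapping_pad;
	unsigned long dma_addr;
	long pp_ref_count;
};

/* The proposed descriptor that would overlay struct page for now. */
struct bump {
	unsigned long _page_flags;
	unsigned long bump_magic;
	void *bump_pp;
	unsigned long _pp_mapping_pad;
	unsigned long dma_addr;
	long bump_ref_count;
};

/* The overlay is only safe if each field sits at the same offset. */
#define BUMP_OFFSET_ASSERT(pg, bp)					\
	_Static_assert(offsetof(struct fake_page, pg) ==		\
		       offsetof(struct bump, bp),			\
		       "offset mismatch: " #pg)

BUMP_OFFSET_ASSERT(flags, _page_flags);
BUMP_OFFSET_ASSERT(pp_magic, bump_magic);
BUMP_OFFSET_ASSERT(pp, bump_pp);
BUMP_OFFSET_ASSERT(dma_addr, dma_addr);
BUMP_OFFSET_ASSERT(pp_ref_count, bump_ref_count);
```

With that in place, the page pool could cast between the two views
until struct page no longer carries these fields at all.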
> > struct { /* page_pool used by netstack */
> > /**
> > * @pp_magic: magic value to avoid recycling non
> > * page_pool allocated pages.
> > */
> > unsigned long pp_magic;
> > struct page_pool *pp;
> > unsigned long _pp_mapping_pad;
> > unsigned long dma_addr;
> > atomic_long_t pp_ref_count;
> > };
> >
> > Fortunately, much of the prerequisite work has been done by Mina, but
> > I guess it was done for a purpose other than 'shrinking struct page'.
> >
>
> Yeah, we did it to support non-page memory in the net stack, which is
> quite orthogonal to what you're trying to do AFAICT so far. Looks like
> maybe some implementation details are shared by luck?
Oh.
> > I'd like to just finalize the work so that the fields above can be
> > removed from struct page. However, I need to resolve a curiosity
> > before starting.
> >
> > Network guys already introduced a separate struct, struct net_iov,
> > to overlay the interesting fields. However, another separate struct
> > for system memory might be also needed e.g. struct bump so that
> > struct net_iov and struct bump can be overlayed depending on the
> > source:
> >
> > struct bump {
> > unsigned long _page_flags;
> > unsigned long bump_magic;
> > struct page_pool *bump_pp;
> > unsigned long _pp_mapping_pad;
> > unsigned long dma_addr;
> > atomic_long_t bump_ref_count;
> > unsigned int _page_type;
> > atomic_t _refcount;
> > };
> >
> > To network guys, any thoughts on it?
>
> Need more details. What does struct bump represent? If it's meant to
'bump' comes from how the page pool works: it is a bump (region-based)
allocator. See the following link:
https://en.wikipedia.org/wiki/Region-based_memory_management
However, any better name suggestion from the network guys would be
appreciated.
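For anyone unfamiliar with the term, a bump (region) allocator just
advances a cursor over a preallocated arena and reclaims everything at
once by resetting the cursor. A minimal userspace sketch of the concept
(not page pool code, just an illustration with made-up names):

```c
#include <stddef.h>

/* A bump region: a cursor over a fixed arena. Allocation advances the
 * cursor; memory is reclaimed only by resetting the whole region, which
 * is what makes allocation so cheap. */
struct bump_region {
	char *base;
	size_t size;
	size_t used;
};

static void *bump_alloc(struct bump_region *r, size_t n)
{
	/* Round the request up to pointer alignment. */
	n = (n + sizeof(void *) - 1) & ~(sizeof(void *) - 1);
	if (r->used + n > r->size)
		return NULL;	/* region exhausted */
	void *p = r->base + r->used;
	r->used += n;
	return p;
}

static void bump_reset(struct bump_region *r)
{
	r->used = 0;	/* "frees" every allocation at once */
}
```

The page pool's recycling of whole pages back into the pool plays the
role of the reset step here.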
> replace the fields used by the page_pool referenced above, then it
> should not have _page_flags, bump_ref_count should be pp_ref_count,
> and should not have _page_type or _refcount.
These are placeholders that might be needed for now but should be
removed later.
> > To Willy, do I understand correctly your direction?
> >
> > Plus, it's quite another issue, but I'm curious: what do you guys
> > think about moving the bump allocator (= page pool) code from
> > network to mm? I'd like to start on that work once I've gathered
> > opinions from both Willy and the network guys.
> >
>
> What is the terminology "bump"? Are you wanting to rename page_pool to
> "bump"? What does the new name mean?
I hope the link above explains it.
Byungchul
>
> --
> Thanks,
> Mina