From: Jakub Kicinski <kuba@kernel.org>
To: Byungchul Park <byungchul@sk.com>
Cc: willy@infradead.org, ilias.apalodimas@linaro.org,
almasrymina@google.com, kernel_team@skhynix.com,
42.hyeyoo@gmail.com, linux-mm@kvack.org, hawk@kernel.org,
netdev@vger.kernel.org
Subject: Re: [RFC] shrinking struct page (part of page pool)
Date: Mon, 14 Apr 2025 16:30:02 -0700 [thread overview]
Message-ID: <20250414163002.166d1a36@kernel.org> (raw)
In-Reply-To: <20250414015207.GA50437@system.software.com>
On Mon, 14 Apr 2025 10:52:07 +0900 Byungchul Park wrote:
> > Fortunately, many of the prerequisite works have already been done by
> > Mina, though I guess they were done for purposes other than shrinking
> > struct page.
> >
> > I'd like to finalize the work so that the fields above can be removed
> > from struct page. However, I need to resolve a curiosity before
> > starting.
I don't understand what the question is but FWIW from my perspective
the ZC APIs are fairly contained, or at least we tried to make sure
that net_iov pages cannot reach random parts of the stack.
Replacing all uses of struct page would require converting much more
of the stack, AFAIU. But that's best discussed over posted patches.
> > Network guys already introduced a separate struct, struct net_iov,
> > to overlay the interesting fields. However, another separate struct
> > for system memory might also be needed, e.g. struct bump, so that
> > struct net_iov and struct bump can be overlaid depending on the
> > source:
> >
> > struct bump {
> > 	unsigned long _page_flags;
> > 	unsigned long bump_magic;
> > 	struct page_pool *bump_pp;
> > 	unsigned long _pp_mapping_pad;
> > 	unsigned long dma_addr;
> > 	atomic_long_t bump_ref_count;
> > 	unsigned int _page_type;
> > 	atomic_t _refcount;
> > };
> >
> > To network guys, any thoughts on it?
> > To Willy, do I understand your direction correctly?
> >
> > Plus, it's quite a separate issue, but I'm curious what you guys
> > think about moving the bump allocator (= page pool) code from
> > network to mm. I'd like to start on that work once I've gathered
> > opinions from both Willy and the network guys.
I don't see any benefit from moving page pool to MM. It is quite
networking-specific. But we can discuss this later. Moving code
is trivial; it should not be the initial focus.
Thread overview: 13+ messages
2025-04-14 1:36 Byungchul Park
2025-04-14 1:52 ` Byungchul Park
2025-04-14 23:30 ` Jakub Kicinski [this message]
2025-04-16 10:20 ` Byungchul Park
2025-05-10 7:02 ` Ilias Apalodimas
2025-05-10 13:53 ` Andrew Lunn
2025-05-12 4:24 ` Christoph Hellwig
2025-05-19 5:38 ` Ilias Apalodimas
2025-04-15 15:39 ` Mina Almasry
2025-04-16 5:24 ` Byungchul Park
2025-04-16 16:02 ` Mina Almasry
2025-04-15 23:22 ` Vishal Moola (Oracle)
2025-04-16 5:25 ` Byungchul Park