From: Byungchul Park <byungchul@sk.com>
To: willy@infradead.org, ilias.apalodimas@linaro.org, almasrymina@google.com
Cc: kernel_team@skhynix.com, 42.hyeyoo@gmail.com, linux-mm@kvack.org
Subject: [RFC] shrinking struct page (part of page pool)
Date: Mon, 14 Apr 2025 10:36:27 +0900
Message-ID: <20250414013627.GA9161@system.software.com>

Hi guys,
I'm looking at the network page pool code to help with the 'shrinking
struct page' project by Matthew Wilcox. See the following link:

https://kernelnewbies.org/MatthewWilcox/Memdescs/Path

My first goal is to remove the page pool fields from struct page:
	struct {	/* page_pool used by netstack */
		/**
		 * @pp_magic: magic value to avoid recycling non
		 * page_pool allocated pages.
		 */
		unsigned long pp_magic;
		struct page_pool *pp;
		unsigned long _pp_mapping_pad;
		unsigned long dma_addr;
		atomic_long_t pp_ref_count;
	};
Fortunately, much of the prerequisite work has already been done by
Mina, though I guess it was done for purposes other than 'shrinking
struct page'. I'd like to finalize that work so that the fields above
can be removed from struct page. However, I need to resolve a question
before starting.

The network folks have already introduced a separate struct, struct
net_iov, to overlay the interesting fields. However, another separate
struct might also be needed for system memory, e.g. struct bump, so
that struct net_iov and struct bump can be overlaid depending on the
memory source:
	struct bump {
		unsigned long _page_flags;
		unsigned long bump_magic;
		struct page_pool *bump_pp;
		unsigned long _pp_mapping_pad;
		unsigned long dma_addr;
		atomic_long_t bump_ref_count;
		unsigned int _page_type;
		atomic_t _refcount;
	};
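
Just to illustrate what I have in mind (struct bump and its field
names above are only a strawman), the overlay could be kept in sync
with struct page at build time, in the same spirit as the offset
asserts struct net_iov already relies on. A minimal sketch:

	/*
	 * Minimal sketch, not in the tree: assert that the hypothetical
	 * struct bump fields line up with the corresponding struct page
	 * fields, so layout drift in struct page is caught at build time.
	 */
	#include <linux/build_bug.h>
	#include <linux/stddef.h>

	#define BUMP_ASSERT_OFFSET(pg, bp)				\
		static_assert(offsetof(struct page, pg) ==		\
			      offsetof(struct bump, bp))

	BUMP_ASSERT_OFFSET(flags,		_page_flags);
	BUMP_ASSERT_OFFSET(pp_magic,		bump_magic);
	BUMP_ASSERT_OFFSET(pp,			bump_pp);
	BUMP_ASSERT_OFFSET(_pp_mapping_pad,	_pp_mapping_pad);
	BUMP_ASSERT_OFFSET(dma_addr,		dma_addr);
	BUMP_ASSERT_OFFSET(pp_ref_count,	bump_ref_count);

	#undef BUMP_ASSERT_OFFSET
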
To the network folks: any thoughts on this?

To Willy: do I understand your direction correctly?

Also, it's quite a separate issue, but I'm curious: what do you think
about moving the bump allocator (i.e. page pool) code from networking
to mm? I'd like to start on that work once I've gathered opinions from
both Willy and the network folks.

Byungchul