From: Jesper Dangaard Brouer <jbrouer@redhat.com>
To: Matthew Wilcox <willy@infradead.org>,
	Yunsheng Lin <linyunsheng@huawei.com>
Cc: brouer@redhat.com, Jesper Dangaard Brouer <hawk@kernel.org>,
	Ilias Apalodimas <ilias.apalodimas@linaro.org>,
	netdev@vger.kernel.org, linux-mm@kvack.org,
	Shakeel Butt <shakeelb@google.com>
Subject: Re: [PATCH v3 00/26] Split netmem from struct page
Date: Thu, 12 Jan 2023 11:15:55 +0100
Message-ID: <9cdc89f3-8c00-3673-5fdb-4f5bebd95d7a@redhat.com>
In-Reply-To: <Y763vcTFUZvWNgYv@casper.infradead.org>


On 11/01/2023 14.21, Matthew Wilcox wrote:
> On Wed, Jan 11, 2023 at 04:25:46PM +0800, Yunsheng Lin wrote:
>> On 2023/1/11 12:21, Matthew Wilcox (Oracle) wrote:
>>> The MM subsystem is trying to reduce struct page to a single pointer.
>>> The first step towards that is splitting struct page by its individual
>>> users, as has already been done with folio and slab.  This patchset does
>>> that for netmem, which is used for page pools.
>> As the page pool is only used for the rx side of the net stack (depending
>> on the driver), a lot more of the net stack's memory comes from
>> page_frag_alloc_align(), kmem caches, etc.
>> Naming it netmem seems a little overkill; perhaps a more specific name
>> for the page pool, such as pp_cache?
>>
>> @Jesper & Ilias
>> Any better idea?

I like the 'netmem' name.
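
For reference, patch 01/26 introduces the dedicated type.  Roughly along
these lines (a simplified sketch from my reading of the series, not the
exact layout -- see the patch itself):

/* Sketch: a dedicated type that, for now, overlays the page-pool
 * fields of struct page, so it can later become its own allocation
 * when struct page shrinks to a single pointer.
 */
struct netmem {
	unsigned long flags;		/* must alias page->flags */
	unsigned long pp_magic;
	struct page_pool *pp;
	unsigned long _pp_mapping_pad;
	unsigned long dma_addr;
	union {
		unsigned long dma_addr_upper;
		atomic_long_t pp_frag_count;
	};
	atomic_t _mapcount;
	atomic_t _refcount;
};

static inline struct netmem *page_netmem(struct page *page)
{
	return (struct netmem *)page;
}

static inline struct page *netmem_page(struct netmem *nmem)
{
	return (struct page *)nmem;
}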

>> And it seems some APIs may need changing too, as we are not pooling
>> 'pages' now.

IMHO it would be overkill to rename the page_pool to e.g. netmem_pool,
as it would generate too much churn and be hard to follow in git, since
the source file page_pool.c would also have to be renamed.
I guess we keep page_pool for historical reasons ;-)
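
Keeping the page-based entry points as thin wrappers around the netmem
ones also makes the transition cheap for drivers.  Something along
these lines (sketch, assuming the netmem_page() cast helper from the
series):

/* Legacy page-based API as a trivial wrapper; callers can convert
 * to the netmem API at their own pace.
 */
static inline struct page *page_pool_alloc_pages(struct page_pool *pool,
						 gfp_t gfp)
{
	return netmem_page(page_pool_alloc_netmem(pool, gfp));
}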

> I raised the question of naming in v1, six weeks ago, and nobody had
> any better names.  Seems a little unfair to ignore the question at first
> and then bring it up now.  I'd hate to miss the merge window because of
> a late-breaking major request like this.
> 
> https://lore.kernel.org/netdev/20221130220803.3657490-1-willy@infradead.org/
> 
> I'd like to understand what we think we'll do in networking when we trim
> struct page down to a single pointer.  All these usages that aren't from
> page_pool -- what information does networking need to track per-allocation?
> Would it make sense for the netmem to describe all memory used by the
> networking stack, and have allocators other than page_pool also return
> netmem, 

This is also how I see the future: other netstack "allocators" can
return and work with 'netmem' objects.  IMHO we are already cramming
too many use-cases into page_pool (like the frag support Yunsheng
added).  There is room for other netstack "allocators" that can
utilize netmem.  The page_pool is optimized for RX-NAPI workloads;
using it for other purposes is a mistake IMHO.  People should create
other netstack "allocators" that solve their specific use-cases.
E.g. the TX path likely needs another "allocator" optimized for that
TX use-case.
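
To make that concrete, a hypothetical TX-side interface (all names
invented here for illustration; no such allocator exists today) could
still hand out netmem:

/* Hypothetical sketch only: a TX-optimized allocator sharing the
 * netmem type without page_pool's RX-NAPI recycling machinery.
 */
struct netmem_tx_pool;	/* invented name: e.g. a per-CPU TX cache */

struct netmem *netmem_tx_alloc(struct netmem_tx_pool *pool, gfp_t gfp);
void netmem_tx_free(struct netmem_tx_pool *pool, struct netmem *nmem);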

> or does the normal usage of memory in the net stack not need to
> track that information?

The page refcnt is (obviously) used by the netstack as tracked
information.  I have also seen drivers that use the DMA mapping stored
directly in the page/'netmem', instead of having to store it separately
in the driver.
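
E.g. with the accessors from patch 03/26, an RX refill path can read
the mapping straight out of the netmem instead of a driver-private
shadow ring.  A sketch (my_rx_ring and friends are made up here, error
handling trimmed):

static int my_driver_rx_refill(struct my_rx_ring *ring)
{
	struct netmem *nmem;

	nmem = page_pool_alloc_netmem(ring->pool, GFP_ATOMIC);
	if (!nmem)
		return -ENOMEM;

	/* The DMA address lives in the netmem itself */
	ring->desc[ring->idx].addr = netmem_get_dma_addr(nmem);
	ring->nmem[ring->idx] = nmem;
	return 0;
}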

--Jesper



