linux-mm.kvack.org archive mirror
From: Jesper Dangaard Brouer <jbrouer@redhat.com>
To: Matthew Wilcox <willy@infradead.org>,
	Jesper Dangaard Brouer <jbrouer@redhat.com>
Cc: brouer@redhat.com, Jesper Dangaard Brouer <hawk@kernel.org>,
	Ilias Apalodimas <ilias.apalodimas@linaro.org>,
	netdev@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH 00/24] Split page pools from struct page
Date: Thu, 8 Dec 2022 16:33:02 +0100	[thread overview]
Message-ID: <9e9af4ec-9bd2-8b10-c95a-4272442cb926@redhat.com> (raw)
In-Reply-To: <Y49o8e6F5SP4h+wF@casper.infradead.org>


On 06/12/2022 17.08, Matthew Wilcox wrote:
> On Tue, Dec 06, 2022 at 10:43:05AM +0100, Jesper Dangaard Brouer wrote:
>>
>> On 05/12/2022 17.31, Matthew Wilcox wrote:
>>> On Mon, Dec 05, 2022 at 04:34:10PM +0100, Jesper Dangaard Brouer wrote:
>>>> I have a micro-benchmark [1][2], that I want to run on this patchset.
>>>> Reducing the asm code 'text' size is less likely to improve a
>>>> microbenchmark. The 100Gbit mlx5 driver uses page_pool, so perhaps I can
>>>> run a packet benchmark that can show the (expected) performance improvement.
>>>>
>>>> [1] https://github.com/netoptimizer/prototype-kernel/blob/master/kernel/lib/bench_page_pool_simple.c
>>>> [2] https://github.com/netoptimizer/prototype-kernel/blob/master/kernel/lib/bench_page_pool_cross_cpu.c
>>>
>>> Appreciate it!  I'm not expecting any performance change outside noise,
>>> but things do surprise me.  I'd appreciate it if you'd test with a
>>> "distro" config, ie enabling CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP so
>>> we show the most expensive case.

I've tested with [1] and [2], and the performance numbers are the same.

Microbench [1] is easiest to compare; the numbers below were basically
the same both with and without the patchset.

  Type:tasklet_page_pool01_fast_path Per elem: 16 cycles(tsc) 4.484 ns
  Type:tasklet_page_pool02_ptr_ring Per elem: 47 cycles(tsc) 13.147 ns
  Type:tasklet_page_pool03_slow Per elem: 173 cycles(tsc) 48.278 ns

The last line (with 173 cycles) is when pages are not recycled, but
instead returned to the system's page allocator.  To relate this to
something: allocating order-0 pages via the normal page allocator API
costs approx 282 cycles(tsc) 78.385 ns on this system (with this
.config).  I believe page_pool is faster because we leverage the bulk
page allocator.
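The fast-path/slow-path split behind those numbers can be illustrated with
a toy userspace model (all names here are hypothetical; the real page_pool
lives in net/core/page_pool.c and uses struct page, a ptr_ring, and the
bulk page allocator): a recycled buffer comes from a small per-pool cache,
and only a cache miss falls through to the underlying allocator.

```c
#include <stddef.h>
#include <stdlib.h>

#define POOL_CACHE 64

/* Toy model of a page_pool-style recycle cache, illustrative only.
 * A cache hit corresponds to the ~16-cycle fast path measured above;
 * a miss is the slow path that goes to the real allocator. */
struct toy_pool {
    void *cache[POOL_CACHE];
    int count;
};

static void *toy_alloc(struct toy_pool *pp)
{
    if (pp->count > 0)
        return pp->cache[--pp->count];  /* fast path: reuse recycled buffer */
    return malloc(4096);                /* slow path: hit the allocator */
}

static void toy_put(struct toy_pool *pp, void *buf)
{
    if (pp->count < POOL_CACHE)
        pp->cache[pp->count++] = buf;   /* recycle for the next alloc */
    else
        free(buf);                      /* cache full: return to allocator */
}
```

The real slow path additionally refills the cache in one go via the bulk
page allocator, which is where the 173-vs-282-cycle gap above comes from.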

--Jesper



Thread overview: 36+ messages
2022-11-30 22:07 Matthew Wilcox (Oracle)
2022-11-30 22:07 ` [PATCH 01/24] netmem: Create new type Matthew Wilcox (Oracle)
2022-12-05 14:42   ` Jesper Dangaard Brouer
2022-11-30 22:07 ` [PATCH 02/24] netmem: Add utility functions Matthew Wilcox (Oracle)
2022-11-30 22:07 ` [PATCH 03/24] page_pool: Add netmem_set_dma_addr() and netmem_get_dma_addr() Matthew Wilcox (Oracle)
2022-11-30 22:07 ` [PATCH 04/24] page_pool: Convert page_pool_release_page() to page_pool_release_netmem() Matthew Wilcox (Oracle)
2022-11-30 22:07 ` [PATCH 05/24] page_pool: Start using netmem in allocation path Matthew Wilcox (Oracle)
2022-11-30 22:07 ` [PATCH 06/24] page_pool: Convert page_pool_return_page() to page_pool_return_netmem() Matthew Wilcox (Oracle)
2022-11-30 22:07 ` [PATCH 07/24] page_pool: Convert __page_pool_put_page() to __page_pool_put_netmem() Matthew Wilcox (Oracle)
2022-11-30 22:07 ` [PATCH 08/24] page_pool: Convert pp_alloc_cache to contain netmem Matthew Wilcox (Oracle)
2022-11-30 22:07 ` [PATCH 09/24] page_pool: Convert page_pool_defrag_page() to page_pool_defrag_netmem() Matthew Wilcox (Oracle)
2022-11-30 22:07 ` [PATCH 10/24] page_pool: Convert page_pool_put_defragged_page() to netmem Matthew Wilcox (Oracle)
2022-11-30 22:07 ` [PATCH 11/24] page_pool: Convert page_pool_empty_ring() to use netmem Matthew Wilcox (Oracle)
2022-12-02 21:25   ` Alexander H Duyck
2022-11-30 22:07 ` [PATCH 12/24] page_pool: Convert page_pool_alloc_pages() to page_pool_alloc_netmem() Matthew Wilcox (Oracle)
2022-11-30 22:07 ` [PATCH 13/24] page_pool: Convert page_pool_dma_sync_for_device() to take a netmem Matthew Wilcox (Oracle)
2022-11-30 22:07 ` [PATCH 14/24] page_pool: Convert page_pool_recycle_in_cache() to netmem Matthew Wilcox (Oracle)
2022-11-30 22:07 ` [PATCH 15/24] page_pool: Remove page_pool_defrag_page() Matthew Wilcox (Oracle)
2022-11-30 22:07 ` [PATCH 16/24] page_pool: Use netmem in page_pool_drain_frag() Matthew Wilcox (Oracle)
2022-11-30 22:07 ` [PATCH 17/24] page_pool: Convert page_pool_return_skb_page() to use netmem Matthew Wilcox (Oracle)
2022-11-30 22:07 ` [PATCH 18/24] page_pool: Convert frag_page to frag_nmem Matthew Wilcox (Oracle)
2022-11-30 22:07 ` [PATCH 19/24] xdp: Convert to netmem Matthew Wilcox (Oracle)
2022-11-30 22:07 ` [PATCH 20/24] mm: Remove page pool members from struct page Matthew Wilcox (Oracle)
2022-11-30 22:08 ` [PATCH 21/24] netmem_to_virt Matthew Wilcox (Oracle)
2022-11-30 22:08 ` [PATCH 22/24] page_pool: Pass a netmem to init_callback() Matthew Wilcox (Oracle)
2022-11-30 22:08 ` [PATCH 23/24] net: Add support for netmem in skb_frag Matthew Wilcox (Oracle)
2022-11-30 22:08 ` [PATCH 24/24] mvneta: Convert to netmem Matthew Wilcox (Oracle)
2022-12-05 15:34 ` [PATCH 00/24] Split page pools from struct page Jesper Dangaard Brouer
2022-12-05 15:44   ` Ilias Apalodimas
2022-12-05 16:31   ` Matthew Wilcox
2022-12-06  9:43     ` Jesper Dangaard Brouer
2022-12-06 16:08       ` Matthew Wilcox
2022-12-08 15:33         ` Jesper Dangaard Brouer [this message]
2022-12-06 16:05 ` [PATCH 25/26] netpool: Additional utility functions Matthew Wilcox (Oracle)
2022-12-06 16:05 ` [PATCH 26/26] mlx5: Convert to netmem Matthew Wilcox (Oracle)
2022-12-08 15:10   ` Jesper Dangaard Brouer
