From: Alexander Duyck <alexander.duyck@gmail.com>
To: Yunsheng Lin <linyunsheng@huawei.com>
Cc: davem@davemloft.net, kuba@kernel.org, pabeni@redhat.com,
	 netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	 Shuah Khan <skhan@linuxfoundation.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	 Linux-MM <linux-mm@kvack.org>
Subject: Re: [PATCH net-next v23 0/7] Replace page_frag with page_frag_cache (Part-1)
Date: Tue, 29 Oct 2024 08:45:11 -0700
Message-ID: <CAKgT0UdzFYyWjku=RfD7QXjTGeBFiBKQcKPXJW-Jx8YYuxePxA@mail.gmail.com>
In-Reply-To: <472a7a09-387f-480d-b66c-761e0b6192ef@huawei.com>

On Tue, Oct 29, 2024 at 2:36 AM Yunsheng Lin <linyunsheng@huawei.com> wrote:
>
> On 2024/10/28 23:30, Alexander Duyck wrote:
>
> ...
>
> >>
> >>
> >
> > Are these actually the numbers for this patch set? Seems like you have
> > been using the same numbers for the last several releases. I can
>
> Yes, as the recent refactoring didn't seem significant enough to affect the
> numbers, the perf data has been reused for the last several releases.
>
> > understand the "before" being mostly the same, but since we have
>
> As there has been a rebase onto the latest net-next tree, even the 'before'
> numbers might not be the same, since the testing seems sensitive to other
> changes, such as binary size and page allocator changes between versions.
>
> So 'before' and 'after' might both need to be measured with the same kernel
> and config.
>
> > factored out the refactor portion of it, the numbers for the "after"
> > should have deviated, as I find it highly unlikely the numbers are
> > exactly the same down to the nanosecond from the previous patch set.
> Below is the performance data for Part-1 with the latest net-next:
>
> Before this patchset:
>  Performance counter stats for 'insmod ./page_frag_test.ko test_push_cpu=16 test_pop_cpu=17 test_alloc_len=12 nr_test=51200000' (200 runs):
>
>          17.990790      task-clock (msec)         #    0.003 CPUs utilized            ( +-  0.19% )
>                  8      context-switches          #    0.444 K/sec                    ( +-  0.09% )
>                  0      cpu-migrations            #    0.000 K/sec                    ( +-100.00% )
>                 81      page-faults               #    0.004 M/sec                    ( +-  0.09% )
>           46712295      cycles                    #    2.596 GHz                      ( +-  0.19% )
>           34466157      instructions              #    0.74  insn per cycle           ( +-  0.01% )
>            8011755      branches                  #  445.325 M/sec                    ( +-  0.01% )
>              39913      branch-misses             #    0.50% of all branches          ( +-  0.07% )
>
>        6.382252558 seconds time elapsed                                          ( +-  0.07% )
>
>  Performance counter stats for 'insmod ./page_frag_test.ko test_push_cpu=16 test_pop_cpu=17 test_alloc_len=12 nr_test=51200000 test_align=1' (200 runs):
>
>          17.638466      task-clock (msec)         #    0.003 CPUs utilized            ( +-  0.01% )
>                  8      context-switches          #    0.451 K/sec                    ( +-  0.20% )
>                  0      cpu-migrations            #    0.001 K/sec                    ( +- 70.53% )
>                 81      page-faults               #    0.005 M/sec                    ( +-  0.08% )
>           45794305      cycles                    #    2.596 GHz                      ( +-  0.01% )
>           34435077      instructions              #    0.75  insn per cycle           ( +-  0.00% )
>            8004416      branches                  #  453.805 M/sec                    ( +-  0.00% )
>              39758      branch-misses             #    0.50% of all branches          ( +-  0.06% )
>
>        5.328976590 seconds time elapsed                                          ( +-  0.60% )
>
>
> After this patchset:
> Performance counter stats for 'insmod ./page_frag_test.ko test_push_cpu=16 test_pop_cpu=17 test_alloc_len=12 nr_test=51200000' (200 runs):
>
>          18.647432      task-clock (msec)         #    0.003 CPUs utilized            ( +-  1.11% )
>                  8      context-switches          #    0.422 K/sec                    ( +-  0.36% )
>                  0      cpu-migrations            #    0.005 K/sec                    ( +- 22.54% )
>                 81      page-faults               #    0.004 M/sec                    ( +-  0.08% )
>           48418108      cycles                    #    2.597 GHz                      ( +-  1.11% )
>           35889299      instructions              #    0.74  insn per cycle           ( +-  0.11% )
>            8318363      branches                  #  446.086 M/sec                    ( +-  0.11% )
>              19263      branch-misses             #    0.23% of all branches          ( +-  0.13% )
>
>        5.624666079 seconds time elapsed                                          ( +-  0.07% )
>
>
>  Performance counter stats for 'insmod ./page_frag_test.ko test_push_cpu=16 test_pop_cpu=17 test_alloc_len=12 nr_test=51200000 test_align=1' (200 runs):
>
>          18.466768      task-clock (msec)         #    0.007 CPUs utilized            ( +-  1.23% )
>                  8      context-switches          #    0.428 K/sec                    ( +-  0.26% )
>                  0      cpu-migrations            #    0.002 K/sec                    ( +- 34.73% )
>                 81      page-faults               #    0.004 M/sec                    ( +-  0.09% )
>           47949220      cycles                    #    2.597 GHz                      ( +-  1.23% )
>           35859039      instructions              #    0.75  insn per cycle           ( +-  0.12% )
>            8309086      branches                  #  449.948 M/sec                    ( +-  0.11% )
>              19246      branch-misses             #    0.23% of all branches          ( +-  0.08% )
>
>        2.573546035 seconds time elapsed                                          ( +-  0.04% )
>

Interesting. It doesn't look like much changed across most of the
metrics, other than the fact that the number of branch misses was
reduced by just over half.
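
For reference, taking the branch counters from the first pair of runs
above (the ones without test_align):

  before:  39913 misses / 8011755 branches    (~0.50%)
  after:   19263 misses / 8318363 branches    (~0.23%)
  change:  (39913 - 19263) / 39913            = ~52% fewer misses

The test_align=1 runs show the same halving (39758 -> 19246).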

> >
> > Also it wouldn't hurt to have an explanation for the 3.4->0.9 second
> > performance change, as it seems like the samples don't match up with
> > the elapsed time data.
>
> As there is also a 4.6->3.4 second performance change for the 'before'
> part, I was not reading too much into that.
>
> I am guessing some timing in the ptr_ring implementation or the CPU cache
> causes the above performance change?
>
> When I use the same CPU for both the push and pop threads, the performance
> change no longer shows up, and neither does the performance improvement:
>
> After this patchset:
>  Performance counter stats for 'insmod ./page_frag_test.ko test_push_cpu=0 test_pop_cpu=0 test_alloc_len=12 nr_test=512000' (10 runs):
>
>          13.293402      task-clock (msec)         #    0.002 CPUs utilized            ( +-  5.05% )
>                  7      context-switches          #    0.534 K/sec                    ( +-  1.41% )
>                  0      cpu-migrations            #    0.015 K/sec                    ( +-100.00% )
>                 80      page-faults               #    0.006 M/sec                    ( +-  0.38% )
>           34494793      cycles                    #    2.595 GHz                      ( +-  5.05% )
>            9663299      instructions              #    0.28  insn per cycle           ( +-  1.45% )
>            1767284      branches                  #  132.944 M/sec                    ( +-  1.70% )
>              19798      branch-misses             #    1.12% of all branches          ( +-  1.18% )
>
>        8.119681413 seconds time elapsed                                          ( +-  0.01% )
>
>  Performance counter stats for 'insmod ./page_frag_test.ko test_push_cpu=0 test_pop_cpu=0 test_alloc_len=12 nr_test=512000 test_align=1' (10 runs):
>
>          12.289096      task-clock (msec)         #    0.002 CPUs utilized            ( +-  0.07% )
>                  7      context-switches          #    0.570 K/sec                    ( +-  2.13% )
>                  0      cpu-migrations            #    0.033 K/sec                    ( +- 66.67% )
>                 81      page-faults               #    0.007 M/sec                    ( +-  0.43% )
>           31886319      cycles                    #    2.595 GHz                      ( +-  0.07% )
>            9468850      instructions              #    0.30  insn per cycle           ( +-  0.06% )
>            1723487      branches                  #  140.245 M/sec                    ( +-  0.05% )
>              19263      branch-misses             #    1.12% of all branches          ( +-  0.47% )
>
>        8.119686950 seconds time elapsed                                          ( +-  0.01% )
>
> Before this patchset:
>  Performance counter stats for 'insmod ./page_frag_test.ko test_push_cpu=0 test_pop_cpu=0 test_alloc_len=12 nr_test=512000' (10 runs):
>
>          13.320328      task-clock (msec)         #    0.002 CPUs utilized            ( +-  5.00% )
>                  7      context-switches          #    0.541 K/sec                    ( +-  1.85% )
>                  0      cpu-migrations            #    0.008 K/sec                    ( +-100.00% )
>                 80      page-faults               #    0.006 M/sec                    ( +-  0.36% )
>           34572091      cycles                    #    2.595 GHz                      ( +-  5.01% )
>            9664910      instructions              #    0.28  insn per cycle           ( +-  1.51% )
>            1768276      branches                  #  132.750 M/sec                    ( +-  1.80% )
>              19592      branch-misses             #    1.11% of all branches          ( +-  1.33% )
>
>        8.119686381 seconds time elapsed                                          ( +-  0.01% )
>
>  Performance counter stats for 'insmod ./page_frag_test.ko test_push_cpu=0 test_pop_cpu=0 test_alloc_len=12 nr_test=512000 test_align=1' (10 runs):
>
>          12.306471      task-clock (msec)         #    0.002 CPUs utilized            ( +-  0.08% )
>                  7      context-switches          #    0.585 K/sec                    ( +-  1.85% )
>                  0      cpu-migrations            #    0.000 K/sec
>                 80      page-faults               #    0.007 M/sec                    ( +-  0.28% )
>           31937686      cycles                    #    2.595 GHz                      ( +-  0.08% )
>            9462218      instructions              #    0.30  insn per cycle           ( +-  0.08% )
>            1721989      branches                  #  139.925 M/sec                    ( +-  0.07% )
>              19114      branch-misses             #    1.11% of all branches          ( +-  0.31% )
>
>        8.118897296 seconds time elapsed                                          ( +-  0.00% )

That isn't too surprising. Most likely you are at the mercy of the
scheduler and are just waiting for it to cycle back and forth between
producer and consumer in order to complete the test.
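
To make that concrete, here is a rough sketch of the push/pop shape such
a test presumably has (this is not the actual test module from this
series; the thread functions, ring handling and constants below are my
own illustration). When test_push_cpu and test_pop_cpu name the same
CPU, the producer can only run until the ring fills and then has to
yield so the consumer can drain it on that same CPU, so the elapsed time
ends up measuring scheduler round trips rather than
page_frag_alloc()/page_frag_free() cost:

/* Hypothetical sketch only -- not the page_frag_test.ko source. */
#include <linux/kthread.h>
#include <linux/ptr_ring.h>
#include <linux/mm_types.h>	/* struct page_frag_cache */
#include <linux/gfp.h>		/* page_frag_alloc()/page_frag_free() */

static struct ptr_ring ring;			/* ptr_ring_init()'d at module load */
static struct page_frag_cache frag_cache;	/* static zero == empty cache */

/* Producer, bound to test_push_cpu with kthread_bind(). */
static int push_thread(void *arg)
{
	int i;

	for (i = 0; i < 512000; i++) {
		void *frag = page_frag_alloc(&frag_cache, 12, GFP_KERNEL);

		if (!frag)
			break;
		/* Ring full: give the CPU back so the consumer can drain it. */
		while (ptr_ring_produce(&ring, frag))
			cond_resched();
	}
	return 0;
}

/* Consumer, bound to test_pop_cpu with kthread_bind(). */
static int pop_thread(void *arg)
{
	int freed = 0;

	while (freed < 512000) {
		void *frag = ptr_ring_consume(&ring);

		if (!frag) {
			/* Nothing queued yet: let the producer run. */
			cond_resched();
			continue;
		}
		page_frag_free(frag);
		freed++;
	}
	return 0;
}

With both kthreads pinned to one CPU, every ring full/empty transition
costs a trip through the scheduler, which would explain why the elapsed
time sits at ~8.1 seconds in all four single-CPU runs regardless of
which allocator version is underneath.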

