From: jane.chu@oracle.com
To: logang@deltatee.com, hch@lst.de, gregkh@linuxfoundation.org,
jgg@ziepe.ca, willy@infradead.org, kch@nvidia.com,
axboe@kernel.dk, linux-kernel@vger.kernel.org,
linux-mm@kvack.org, linux-pci@vger.kernel.org,
linux-nvme@lists.infradead.org, linux-block@vger.kernel.org
Subject: Resend: Report: Performance regression from ib_umem_get on zone device pages
Date: Wed, 23 Apr 2025 12:34:55 -0700
Message-ID: <a59b447b-3c42-4a50-9b1a-cb7044ecfa5a@oracle.com>
In-Reply-To: <fe761ea8-650a-4118-bd53-e1e4408fea9c@oracle.com>
Resend due to a serious typo.
On 4/23/2025 12:21 PM, jane.chu@oracle.com wrote:
> Hi,
>
> I recently looked into an mr registration regression that shows up
> when the mr memory is backed by device-dax, but not when it is backed
> by system RAM.
>
> It boils down to commit 1567b49d1a40 ("lib/scatterlist: add check when
> merging zone device pages"), which went into v6.2-rc1:
>
> [PATCH v11 5/9] lib/scatterlist: add check when merging zone device pages
> https://lore.kernel.org/all/20221021174116.7200-6-logang@deltatee.com/
>
> The line that introduced the regression is reached via:
>
>   ib_uverbs_reg_mr
>     mlx5_ib_reg_user_mr
>       ib_umem_get
>         sg_alloc_append_table_from_pages
>           pages_are_mergeable
>             zone_device_pages_have_same_pgmap(a, b)
>               return a->pgmap == b->pgmap    <-------
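>
> For reference, the helper that line lives in looks roughly like this
> (paraphrased from include/linux/memremap.h around v6.2; a sketch, not
> a verbatim copy):
>
>     static inline bool
>     zone_device_pages_have_same_pgmap(const struct page *a,
>                                       const struct page *b)
>     {
>             /* pages of mixed zone-device-ness never merge */
>             if (is_zone_device_page(a) != is_zone_device_page(b))
>                     return false;
>
>             /* neither page is zone-device: nothing more to check */
>             if (!is_zone_device_page(a))
>                     return true;
>
>             /* both zone-device: only merge within the same pgmap */
>             return a->pgmap == b->pgmap;
>     }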
>
> Sub "return a->pgmap == b->pgmap" with "return true" purely as an
> experiment and the regression reliably went away.
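>
> In other words, the experiment (emphatically not a proposed fix)
> amounted to:
>
>     -	return a->pgmap == b->pgmap;
>     +	return true;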
>
> So this looks like a case of CPU cache thrashing, but I don't know how
> to fix it. Could someone help address the issue? I'd be happy to help
> verify a fix.
>
> My test system is a two-socket bare-metal Intel(R) Xeon(R) Platinum
> 8352Y with 12 Intel NVDIMMs installed.
>
> # lscpu
> Architecture:          x86_64
> CPU op-mode(s):        32-bit, 64-bit
> Model name:            Intel(R) Xeon(R) Platinum 8352Y CPU @ 2.20GHz
> L1d cache:             48K      <----
> L1i cache:             32K
> L2 cache:              1280K
> L3 cache:              49152K
> NUMA node0 CPU(s):     0-31,64-95
> NUMA node1 CPU(s):     32-63,96-127
>
> # cat /proc/meminfo
> MemTotal:        263744088 kB
> MemFree:         252151828 kB
> MemAvailable:    251806008 kB
>
> There are 12 device-dax instances, all configured identically:
> # ndctl list -m devdax | egrep -m 1 'map'
> "map":"mem",
> # ndctl list -m devdax | egrep -c 'map'
> 12
> # ndctl list -m devdax
> [
>   {
>     "dev":"namespace1.0",
>     "mode":"devdax",
>     "map":"mem",
>     "size":135289372672,
>     "uuid":"a67deda8-e5b3-4a6e-bea2-c1ebdc0fd996",
>     "chardev":"dax1.0",
>     "align":2097152
>   },
> [..]
>
> The system is idle except when running the mr registration test. The
> test attempts to register 61440 mrs from 64 threads in parallel; each
> mr is 2MB and is backed by device-dax memory.
>
> The flow of a single test run:
> 1. reserve virtual address space for (61440 * 2MB) via mmap with
>    PROT_NONE and MAP_ANONYMOUS | MAP_NORESERVE | MAP_PRIVATE
> 2. mmap ((61440 * 2MB) / 12) from each of the 12 device-dax instances
>    into the reserved virtual address space sequentially to form a
>    contiguous VA space
> 3. touch the entire mapped memory, page by page
> 4. take a timestamp,
>    create 40 pthreads, each thread registers (61440 / 40) mrs via
>    ibv_reg_mr(),
>    take another timestamp after pthread_join
> 5. wait 10 seconds
> 6. repeat step 4, except deregistering via ibv_dereg_mr()
> 7. tear down everything
>
> I hope the above description is helpful, as I am not at liberty to
> share the actual test code; the sketch below is illustrative only.
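>
> For illustration, a minimal skeleton of steps 1-4 might look like the
> following. The device path, the single-device simplification, and the
> ibv_* setup (a pd from ibv_alloc_pd(), device-dax alignment handling)
> are assumptions, not the real test:
>
>     #include <fcntl.h>
>     #include <stddef.h>
>     #include <sys/mman.h>
>     #include <infiniband/verbs.h>
>
>     #define MR_SIZE (2UL << 20)             /* each mr is 2MB */
>     #define NR_MRS  61440UL
>
>     /* sketch only: error handling and dax alignment omitted */
>     static void run_once(struct ibv_pd *pd)
>     {
>             /* 1. reserve VA space for all mrs */
>             char *base = mmap(NULL, NR_MRS * MR_SIZE, PROT_NONE,
>                               MAP_ANONYMOUS | MAP_NORESERVE | MAP_PRIVATE,
>                               -1, 0);
>
>             /* 2. map one device-dax chunk over the reservation
>              *    (the real test repeats this for all 12 devices) */
>             int fd = open("/dev/dax1.0", O_RDWR);  /* hypothetical path */
>             mmap(base, (NR_MRS / 12) * MR_SIZE, PROT_READ | PROT_WRITE,
>                  MAP_SHARED | MAP_FIXED, fd, 0);
>
>             /* 3. touch the mapped memory page by page */
>             for (size_t off = 0; off < (NR_MRS / 12) * MR_SIZE; off += 4096)
>                     ((volatile char *)base)[off] = 0;
>
>             /* 4. register one 2MB mr per chunk (one thread shown) */
>             for (size_t i = 0; i < NR_MRS / 12; i++)
>                     ibv_reg_mr(pd, base + i * MR_SIZE, MR_SIZE,
>                                IBV_ACCESS_LOCAL_WRITE);
>     }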
>
> Here is the highlight from perf diff, comparing the culprit (PATCH 5/9)
> against the baseline (PATCH 4/9):
>
> baseline = 49580e690755 block: add check when merging zone device pages
> culprit  = 1567b49d1a40 lib/scatterlist: add check when merging zone
>            device pages
>
> # Baseline  Delta Abs  Shared Object      Symbol
> # ........  .........  .................  ....................................
> #
>   26.53%    -19.46%    [kernel.kallsyms]  [k] follow_page_mask
>   49.15%    +11.56%    [kernel.kallsyms]  [k] native_queued_spin_lock_slowpath
>             +1.38%     [kernel.kallsyms]  [k] pages_are_mergeable   <----
>             +0.82%     [kernel.kallsyms]  [k] __rdma_block_iter_next
>   0.74%     +0.68%     [kernel.kallsyms]  [k] osq_lock
>             +0.56%     [kernel.kallsyms]  [k] mlx5r_umr_update_mr_pas
>   2.25%     +0.49%     [kernel.kallsyms]  [k] follow_pmd_mask.isra.0
>   1.92%     +0.37%     [kernel.kallsyms]  [k] _raw_spin_lock
>   1.13%     +0.35%     [kernel.kallsyms]  [k] __get_user_pages
>
> With the baseline, each mr registration takes ~2950 nanoseconds
> (+-50ns); with the culprit, ~6850 nanoseconds (+-50ns), i.e. roughly
> 2.3x slower.
>
> Regards,
> -jane