linux-mm.kvack.org archive mirror
From: Alex Williamson <alex.williamson@redhat.com>
To: lizhe.67@bytedance.com
Cc: david@redhat.com, jgg@nvidia.com, torvalds@linux-foundation.org,
	kvm@vger.kernel.org, linux-mm@kvack.org, farman@linux.ibm.com
Subject: Re: [PATCH v5 0/5] vfio/type1: optimize vfio_pin_pages_remote() and vfio_unpin_pages_remote()
Date: Mon, 6 Oct 2025 13:44:03 -0600	[thread overview]
Message-ID: <20251006134403.4fc77b97.alex.williamson@redhat.com> (raw)
In-Reply-To: <20250814064714.56485-1-lizhe.67@bytedance.com>

On Thu, 14 Aug 2025 14:47:09 +0800
lizhe.67@bytedance.com wrote:

> From: Li Zhe <lizhe.67@bytedance.com>
> 
> This patchset is an integration of the two previous patchsets[1][2].
> 
> When vfio_pin_pages_remote() is called with a range of addresses that
> includes large folios, the function currently performs individual
> statistics counting operations for each page. This can lead to significant
> performance overheads, especially when dealing with large ranges of pages.
> 
> The function vfio_unpin_pages_remote() has a similar issue: calling
> put_pfn() for each pfn incurs considerable overhead.
> 
> This patchset primarily optimizes the performance of the relevant functions
> by batching the less efficient operations mentioned before.
> 
> The first two patches optimize the performance of the function
> vfio_pin_pages_remote(), while the remaining patches optimize the
> performance of the function vfio_unpin_pages_remote().
> 
> The performance test results for completing the 16G VFIO MAP/UNMAP
> DMA, based on v6.16 and obtained through the unit test[3] with slight
> modifications[4], are as follows.
> 
> Base(6.16):
> ------- AVERAGE (MADV_HUGEPAGE) --------
> VFIO MAP DMA in 0.049 s (328.5 GB/s)
> VFIO UNMAP DMA in 0.141 s (113.7 GB/s)
> ------- AVERAGE (MAP_POPULATE) --------
> VFIO MAP DMA in 0.268 s (59.6 GB/s)
> VFIO UNMAP DMA in 0.307 s (52.2 GB/s)
> ------- AVERAGE (HUGETLBFS) --------
> VFIO MAP DMA in 0.051 s (310.9 GB/s)
> VFIO UNMAP DMA in 0.135 s (118.6 GB/s)
> 
> With this patchset:
> ------- AVERAGE (MADV_HUGEPAGE) --------
> VFIO MAP DMA in 0.025 s (633.1 GB/s)
> VFIO UNMAP DMA in 0.044 s (363.2 GB/s)
> ------- AVERAGE (MAP_POPULATE) --------
> VFIO MAP DMA in 0.249 s (64.2 GB/s)
> VFIO UNMAP DMA in 0.289 s (55.3 GB/s)
> ------- AVERAGE (HUGETLBFS) --------
> VFIO MAP DMA in 0.030 s (533.2 GB/s)
> VFIO UNMAP DMA in 0.044 s (361.3 GB/s)
> 
> For large folios, we achieve an over 40% performance improvement for
> VFIO MAP DMA and an over 67% performance improvement for VFIO UNMAP
> DMA. For small folios, the test results show a slight improvement over
> the performance before optimization.
> 
> [1]: https://lore.kernel.org/all/20250529064947.38433-1-lizhe.67@bytedance.com/
> [2]: https://lore.kernel.org/all/20250620032344.13382-1-lizhe.67@bytedance.com/#t
> [3]: https://github.com/awilliam/tests/blob/vfio-pci-mem-dma-map/vfio-pci-mem-dma-map.c
> [4]: https://lore.kernel.org/all/20250610031013.98556-1-lizhe.67@bytedance.com/
> 
> Li Zhe (5):
>   mm: introduce num_pages_contiguous()
>   vfio/type1: optimize vfio_pin_pages_remote()
>   vfio/type1: batch vfio_find_vpfn() in function
>     vfio_unpin_pages_remote()
>   vfio/type1: introduce a new member has_rsvd for struct vfio_dma
>   vfio/type1: optimize vfio_unpin_pages_remote()
> 
>  drivers/vfio/vfio_iommu_type1.c | 112 ++++++++++++++++++++++++++------
>  include/linux/mm.h              |   7 +-
>  include/linux/mm_inline.h       |  35 ++++++++++
>  3 files changed, 132 insertions(+), 22 deletions(-)

I've added this to the vfio next branch, please test.  As described
previously, barring objections I'll try to get this into the current
merge window: it almost made v6.17, but was dropped due to
disagreements in the mm space and then blocked by merge conflicts.  Thanks,

Alex



      parent reply	other threads:[~2025-10-06 19:44 UTC|newest]

Thread overview: 14+ messages
2025-08-14  6:47 lizhe.67
2025-08-14  6:47 ` [PATCH v5 1/5] mm: introduce num_pages_contiguous() lizhe.67
2025-08-14  6:54   ` David Hildenbrand
2025-08-14  7:58     ` lizhe.67
2025-08-27 18:10   ` Alex Williamson
2025-09-01  3:25     ` lizhe.67
2025-09-29  3:21       ` lizhe.67
2025-09-29 20:19         ` Alex Williamson
2025-09-30  3:36           ` lizhe.67
2025-08-14  6:47 ` [PATCH v5 2/5] vfio/type1: optimize vfio_pin_pages_remote() lizhe.67
2025-08-14  6:47 ` [PATCH v5 3/5] vfio/type1: batch vfio_find_vpfn() in function vfio_unpin_pages_remote() lizhe.67
2025-08-14  6:47 ` [PATCH v5 4/5] vfio/type1: introduce a new member has_rsvd for struct vfio_dma lizhe.67
2025-08-14  6:47 ` [PATCH v5 5/5] vfio/type1: optimize vfio_unpin_pages_remote() lizhe.67
2025-10-06 19:44 ` Alex Williamson [this message]

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=20251006134403.4fc77b97.alex.williamson@redhat.com \
    --to=alex.williamson@redhat.com \
    --cc=david@redhat.com \
    --cc=farman@linux.ibm.com \
    --cc=jgg@nvidia.com \
    --cc=kvm@vger.kernel.org \
    --cc=linux-mm@kvack.org \
    --cc=lizhe.67@bytedance.com \
    --cc=torvalds@linux-foundation.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox