From: David Hildenbrand <david@redhat.com>
To: lizhe.67@bytedance.com
Cc: akpm@linux-foundation.org, alex.williamson@redhat.com,
kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
linux-mm@kvack.org, peterx@redhat.com
Subject: Re: [PATCH v4 2/3] gup: introduce unpin_user_folio_dirty_locked()
Date: Tue, 17 Jun 2025 11:27:50 +0200 [thread overview]
Message-ID: <e0c741a0-450a-4512-8796-bd83a5618409@redhat.com> (raw)
In-Reply-To: <20250617092117.10772-1-lizhe.67@bytedance.com>
On 17.06.25 11:21, lizhe.67@bytedance.com wrote:
> On Tue, 17 Jun 2025 09:43:56 +0200, david@redhat.com wrote:
>
>> On 17.06.25 06:18, lizhe.67@bytedance.com wrote:
>>> From: Li Zhe <lizhe.67@bytedance.com>
>>>
>>> When vfio_unpin_pages_remote() is called with a range of addresses that
>>> includes large folios, the function currently performs individual
>>> put_pfn() operations for each page. This can lead to significant
>>> performance overheads, especially when dealing with large ranges of pages.
>>>
>>> This patch optimizes this process by batching the put_pfn() operations.
>>>
>>> The performance test results for completing the 16G VFIO IOMMU DMA
>>> unmapping, based on v6.15 and obtained with the unit test[1] (slightly
>>> modified[2]), are as follows.
>>>
>>> Base(v6.15):
>>> ./vfio-pci-mem-dma-map 0000:03:00.0 16
>>> ------- AVERAGE (MADV_HUGEPAGE) --------
>>> VFIO MAP DMA in 0.047 s (338.6 GB/s)
>>> VFIO UNMAP DMA in 0.138 s (116.2 GB/s)
>>> ------- AVERAGE (MAP_POPULATE) --------
>>> VFIO MAP DMA in 0.280 s (57.2 GB/s)
>>> VFIO UNMAP DMA in 0.312 s (51.3 GB/s)
>>> ------- AVERAGE (HUGETLBFS) --------
>>> VFIO MAP DMA in 0.052 s (308.3 GB/s)
>>> VFIO UNMAP DMA in 0.139 s (115.1 GB/s)
>>>
>>> Map[3] + This patchset:
>>> ------- AVERAGE (MADV_HUGEPAGE) --------
>>> VFIO MAP DMA in 0.028 s (563.9 GB/s)
>>> VFIO UNMAP DMA in 0.049 s (325.1 GB/s)
>>> ------- AVERAGE (MAP_POPULATE) --------
>>> VFIO MAP DMA in 0.294 s (54.4 GB/s)
>>> VFIO UNMAP DMA in 0.296 s (54.1 GB/s)
>>> ------- AVERAGE (HUGETLBFS) --------
>>> VFIO MAP DMA in 0.033 s (485.1 GB/s)
>>> VFIO UNMAP DMA in 0.049 s (324.4 GB/s)
>>>
>>> For large folios, we achieve an approximately 64% performance improvement
>>> for VFIO UNMAP DMA. For small folios, the performance test results show
>>> no significant changes.
>>>
>>> [1]: https://github.com/awilliam/tests/blob/vfio-pci-mem-dma-map/vfio-pci-mem-dma-map.c
>>> [2]: https://lore.kernel.org/all/20250610031013.98556-1-lizhe.67@bytedance.com/
>>> [3]: https://lore.kernel.org/all/20250529064947.38433-1-lizhe.67@bytedance.com/
>>>
>>> Signed-off-by: Li Zhe <lizhe.67@bytedance.com>
>>> ---
>>> drivers/vfio/vfio_iommu_type1.c | 35 +++++++++++++++++++++++++++++----
>>> 1 file changed, 31 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
>>> index e952bf8bdfab..159ba80082a8 100644
>>> --- a/drivers/vfio/vfio_iommu_type1.c
>>> +++ b/drivers/vfio/vfio_iommu_type1.c
>>> @@ -806,11 +806,38 @@ static long vfio_unpin_pages_remote(struct vfio_dma *dma, dma_addr_t iova,
>>>  				    bool do_accounting)
>>>  {
>>>  	long unlocked = 0, locked = vpfn_pages(dma, iova, npage);
>>> -	long i;
>>>
>>> -	for (i = 0; i < npage; i++)
>>> -		if (put_pfn(pfn++, dma->prot))
>>> -			unlocked++;
>>> +	while (npage) {
>>> +		long nr_pages = 1;
>>> +
>>> +		if (!is_invalid_reserved_pfn(pfn)) {
>>> +			struct page *page = pfn_to_page(pfn);
>>> +			struct folio *folio = page_folio(page);
>>> +			long folio_pages_num = folio_nr_pages(folio);
>>> +
>>> +			/*
>>> +			 * A folio represents a physically contiguous
>>> +			 * set of pages, and all of its pages share the
>>> +			 * same invalid/reserved state.
>>> +			 *
>>> +			 * The PFNs here are contiguous, so if the
>>> +			 * current PFN belongs to a large folio, we can
>>> +			 * batch the operations for the next nr_pages
>>> +			 * PFNs.
>>> +			 */
>>> +			if (folio_pages_num > 1)
>>> +				nr_pages = min_t(long, npage,
>>> +						 folio_pages_num -
>>> +						 folio_page_idx(folio, page));
>>> +
>>
>> (I know I can be a pain :) )
>
> No, not at all! I really appreciate you taking the time to review my
> patch.
>
>> But the long comment indicates that this is confusing.
>>
>>
>> That is essentially the logic in gup_folio_range_next().
>>
>> What about factoring that out into a helper like
>>
>> /*
>>  * TODO, returned number includes the provided current page.
>>  */
>> unsigned long folio_remaining_pages(struct folio *folio,
>> 		struct page *page, unsigned long max_pages)
>> {
>> 	if (!folio_test_large(folio))
>> 		return 1;
>> 	return min_t(unsigned long, max_pages,
>> 		     folio_nr_pages(folio) - folio_page_idx(folio, page));
>> }
>>
>>
>> Then here you would do
>>
>> 	if (!is_invalid_reserved_pfn(pfn)) {
>> 		struct page *page = pfn_to_page(pfn);
>> 		struct folio *folio = page_folio(page);
>>
>> 		/* We can batch-process pages belonging to the same folio. */
>> 		nr_pages = folio_remaining_pages(folio, page, npage);
>>
>> 		unpin_user_folio_dirty_locked(folio, nr_pages,
>> 					      dma->prot & IOMMU_WRITE);
>> 		unlocked += nr_pages;
>> 	}
>
> Yes, this indeed makes the code much more comprehensible. Does the
> following implementation of the patch look viable to you? I have added
> some brief comments on top of your work to explain why we can
> batch-process pages belonging to the same folio, as suggested by
> Alex[1].
>
> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> index e952bf8bdfab..d7653f4c10d5 100644
> --- a/drivers/vfio/vfio_iommu_type1.c
> +++ b/drivers/vfio/vfio_iommu_type1.c
> @@ -801,16 +801,43 @@ static long vfio_pin_pages_remote(struct vfio_dma *dma, unsigned long vaddr,
>  	return pinned;
>  }
>
> +/* Returned number includes the provided current page. */
> +static inline unsigned long folio_remaining_pages(struct folio *folio,
> +		struct page *page, unsigned long max_pages)
> +{
> +	if (!folio_test_large(folio))
> +		return 1;
> +	return min_t(unsigned long, max_pages,
> +		     folio_nr_pages(folio) - folio_page_idx(folio, page));
> +}
Note that I think that helper should go somewhere into mm.h and also get
used by GUP: factor it out of GUP first, then use it here.
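
Something like the following (a completely untested sketch; the GUP-side
signature is from memory and may differ from the current tree) is what I
have in mind, assuming folio_remaining_pages() ends up in
include/linux/mm.h:

/*
 * Rough sketch only: assumes folio_remaining_pages() lives in
 * include/linux/mm.h with the signature suggested above.
 */
static inline struct folio *gup_folio_range_next(struct page *start,
		unsigned long npages, unsigned long i, unsigned long *ntails)
{
	struct page *next = nth_page(start, i);
	struct folio *folio = page_folio(next);

	/* Batch all remaining pages of this folio, capped at npages - i. */
	*ntails = folio_remaining_pages(folio, next, npages - i);
	return folio;
}

That way GUP and vfio share the same "how many pages of this folio can we
batch" logic.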
--
Cheers,
David / dhildenb