linux-mm.kvack.org archive mirror
From: Alex Williamson <alex.williamson@redhat.com>
To: "Jason Cai (Xiang Feng)" <jason.cai@linux.alibaba.com>
Cc: pbonzini@redhat.com, kvm@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	gnehzuil@linux.alibaba.com
Subject: Re: [RFC v2] vfio iommu type1: improve memory pinning process for raw PFN mapping
Date: Mon, 12 Mar 2018 16:06:44 -0600	[thread overview]
Message-ID: <20180312160644.0de6f96b@w520.home> (raw)
In-Reply-To: <25959294-E232-43EB-9CE2-E558A8D62F57@linux.alibaba.com>

On Sat, 3 Mar 2018 20:10:33 +0800
"Jason Cai (Xiang Feng)" <jason.cai@linux.alibaba.com> wrote:

> When using vfio to pass through a PCIe device (e.g. a GPU card) that
> has a huge BAR (e.g. 16GB), a lot of cycles are wasted on memory
> pinning because the PFNs of the PCI BAR are not backed by struct
> page, and the corresponding VMA has the VM_PFNMAP flag set.
> 
> With this change, when pinning a region that is a raw PFN mapping,
> the unnecessary user-memory pinning process can be skipped. This
> significantly improves a VM's boot-up time when devices are passed
> through via VFIO.
> 
> Signed-off-by: Jason Cai (Xiang Feng) <jason.cai@linux.alibaba.com>
> ---
>  drivers/vfio/vfio_iommu_type1.c | 24 ++++++++++++++----------
>  1 file changed, 14 insertions(+), 10 deletions(-)


It looks reasonable to me; is this still really an RFC?  It would also
be interesting to include performance data in the commit log: how much
faster is it to map that 16GB BAR with this change?  Thanks,

Alex

 
> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> index e30e29ae4819..82ccfa350315 100644
> --- a/drivers/vfio/vfio_iommu_type1.c
> +++ b/drivers/vfio/vfio_iommu_type1.c
> @@ -385,7 +385,6 @@ static long vfio_pin_pages_remote(struct vfio_dma *dma, unsigned long vaddr,
>  {
>         unsigned long pfn = 0;
>         long ret, pinned = 0, lock_acct = 0;
> -       bool rsvd;
>         dma_addr_t iova = vaddr - dma->vaddr + dma->iova;
> 
>         /* This code path is only user initiated */
> @@ -396,14 +395,22 @@ static long vfio_pin_pages_remote(struct vfio_dma *dma, unsigned long vaddr,
>         if (ret)
>                 return ret;
> 
> +       if (is_invalid_reserved_pfn(*pfn_base)) {
> +               struct vm_area_struct *vma;
> +               down_read(&current->mm->mmap_sem);
> +               vma = find_vma_intersection(current->mm, vaddr, vaddr + 1);
> +               pinned = min(npage, (long)vma_pages(vma));
> +               up_read(&current->mm->mmap_sem);
> +               return pinned;
> +       }
> +
>         pinned++;
> -       rsvd = is_invalid_reserved_pfn(*pfn_base);
> 
>         /*
>          * Reserved pages aren't counted against the user, externally pinned
>          * pages are already counted against the user.
>          */
> -       if (!rsvd && !vfio_find_vpfn(dma, iova)) {
> +       if (!vfio_find_vpfn(dma, iova)) {
>                 if (!lock_cap && current->mm->locked_vm + 1 > limit) {
>                         put_pfn(*pfn_base, dma->prot);
>                         pr_warn("%s: RLIMIT_MEMLOCK (%ld) exceeded\n", __func__,
> @@ -423,13 +430,12 @@ static long vfio_pin_pages_remote(struct vfio_dma *dma, unsigned long vaddr,
>                 if (ret)
>                         break;
> 
> -               if (pfn != *pfn_base + pinned ||
> -                   rsvd != is_invalid_reserved_pfn(pfn)) {
> +               if (pfn != *pfn_base + pinned) {
>                         put_pfn(pfn, dma->prot);
>                         break;
>                 }
> 
> -               if (!rsvd && !vfio_find_vpfn(dma, iova)) {
> +               if (!vfio_find_vpfn(dma, iova)) {
>                         if (!lock_cap &&
>                             current->mm->locked_vm + lock_acct + 1 > limit) {
>                                 put_pfn(pfn, dma->prot);
> @@ -447,10 +453,8 @@ static long vfio_pin_pages_remote(struct vfio_dma *dma, unsigned long vaddr,
> 
>  unpin_out:
>         if (ret) {
> -               if (!rsvd) {
> -                       for (pfn = *pfn_base ; pinned ; pfn++, pinned--)
> -                               put_pfn(pfn, dma->prot);
> -               }
> +               for (pfn = *pfn_base ; pinned ; pfn++, pinned--)
> +                       put_pfn(pfn, dma->prot);
> 
>                 return ret;
>         }
> --
> 2.13.6


Thread overview: 5+ messages
2018-02-24  5:44 [RFC] " jason
2018-02-26 19:19 ` Alex Williamson
2018-02-27  7:44   ` Jason Cai (Xiang Feng)
2018-03-03 12:10   ` [RFC v2] " Jason Cai (Xiang Feng)
2018-03-12 22:06     ` Alex Williamson [this message]
