From: Jason Gunthorpe <jgg@ziepe.ca>
To: Christoph Hellwig <hch@infradead.org>
Cc: Ming Mao <maoming.maoming@huawei.com>,
linux-kernel@vger.kernel.org, kvm@vger.kernel.org,
linux-mm@kvack.org, alex.williamson@redhat.com,
akpm@linux-foundation.org, cohuck@redhat.com,
jianjay.zhou@huawei.com, weidong.huang@huawei.com,
peterx@redhat.com, aarcange@redhat.com, wangyunjian@huawei.com,
willy@infradead.org, jhubbard@nvidia.com
Subject: Re: [PATCH V4 1/2] vfio dma_map/unmap: optimized for hugetlbfs pages
Date: Wed, 9 Sep 2020 10:05:18 -0300
Message-ID: <20200909130518.GE87483@ziepe.ca>
In-Reply-To: <20200909080114.GA8321@infradead.org>
On Wed, Sep 09, 2020 at 09:01:14AM +0100, Christoph Hellwig wrote:
> I really don't think this approach is any good. You workaround
> a deficiency in the pin_user_pages API in one particular caller for
> one particular use case.
RDMA has the same basic issues, this should not be solved with
workarounds in VFIO - a common API would be good
> I think you'd rather want either:
>
> (1) a FOLL_HUGEPAGE flag for the pin_user_pages API family that returns
> a single struct page for any kind of huge page, which would also
> benefit all kinds of other users rather than adding these kinds of
> hacks to vfio.
How would this be used? The VMAs can have mixed page sizes, so the
caller would have to somehow detect the size and call twice? Not sure
this is faster.
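
To make that concrete, the vfio caller would presumably end up with
something like this (FOLL_HUGEPAGE and the fallback logic are
hypothetical, nothing like this exists today):

/*
 * Sketch only: FOLL_HUGEPAGE is the proposed flag, and the retry is
 * what a caller would need once it hits a part of the VMA that is
 * not backed by a huge page.
 */
static long pin_range(unsigned long vaddr, unsigned long npages,
		      struct page **pages)
{
	long ret;

	/* Ask for a single head page covering the whole huge page */
	ret = pin_user_pages(vaddr, 1, FOLL_WRITE | FOLL_HUGEPAGE,
			     pages, NULL);
	if (ret == 1)
		return ret;

	/* Not huge here, fall back to pinning base pages */
	return pin_user_pages(vaddr, npages, FOLL_WRITE, pages, NULL);
}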
> (2) add a bvec version of the API that returns a variable size
> "extent"
This is the best one, I think.. The IOMMU setup can use multiple page
sizes, so having the largest contiguous blocks pre-computed should
speed that up.
For vfio it should be a win to use an sgl rather than a page list?
Especially if we can also reduce the number of pages pinned by
pinning only head pages..
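
Roughly what I'm imagining (the signature is invented for
illustration, and the error unwind is elided):

/*
 * Hypothetical API for option (2): each bio_vec covers the largest
 * physically contiguous extent GUP could find.
 */
long pin_user_pages_bvec(unsigned long start, unsigned long nr_pages,
			 unsigned int gup_flags, struct bio_vec *bvecs,
			 unsigned int max_bvecs);

/* vfio's map path could then hand each extent to the IOMMU in one
 * call and let iommu_map() pick the largest supported page size:
 */
static int map_bvecs(struct iommu_domain *domain, unsigned long iova,
		     struct bio_vec *bvecs, unsigned int nr)
{
	unsigned int i;
	int ret;

	for (i = 0; i != nr; i++) {
		ret = iommu_map(domain, iova,
				page_to_phys(bvecs[i].bv_page) +
					bvecs[i].bv_offset,
				bvecs[i].bv_len,
				IOMMU_READ | IOMMU_WRITE);
		if (ret)
			return ret; /* real code would unmap what was done */
		iova += bvecs[i].bv_len;
	}
	return 0;
}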
> I had started on (2) a while ago, and here is branch with my code (which
> is broken and fails test, but might be a start):
>
> http://git.infradead.org/users/hch/misc.git/shortlog/refs/heads/gup-bvec
>
> But for now I wonder if (1) is the better start, which could still be
> reused to (2) later.
What about some 'pin_user_page_sgl' as a stepping stone?
Switching from that point to bvec seems like a smaller step?
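
Ie something along these lines (signature invented; each entry would
be built with sg_set_page() to hold the largest physically contiguous
run):

/*
 * Hypothetical stepping stone, does not exist today. Converting a
 * caller from this to a bvec-returning variant later should be
 * mostly mechanical since both describe (page, offset, len) extents.
 */
long pin_user_page_sgl(unsigned long start, unsigned long nr_pages,
		       unsigned int gup_flags, struct scatterlist *sgl,
		       unsigned int max_ents);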
Jason